Deep Reinforcement Learning | AISV.802

In this advanced AI course, students get hands-on experience with a variety of reinforcement learning (RL) and deep reinforcement learning (DRL) tools used to teach machines to make human-like decisions by observing and interpreting their surrounding environments. DRL algorithms have driven dramatic advances in games such as Go and in highly sophisticated multi-player games such as StarCraft and Dota, as well as in control systems, natural language processing, self-driving cars, and robotics.

After a quick review of deep learning building blocks and of RL and DRL fundamentals, we will dive into promising, readily available DRL algorithms, illustrating them with concrete examples and simulation environments. Students will learn to solve everyday RL tasks in well-known simulations such as CartPole, MountainCar, and MuJoCo.
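To give a flavor of how such simulation tasks are driven programmatically, here is a minimal sketch of the classic Gym-style `reset()`/`step()` interaction loop. The `Corridor` environment below is a made-up stand-in for illustration only; in the course you would instead create a real environment such as CartPole through OpenAI Gym.

```python
import random

# A stand-in environment exposing the classic Gym-style reset()/step() API.
# (Illustrative only -- in practice you would use a real Gym environment.)
class Corridor:
    """Agent starts at cell 0 and must reach cell 4; actions: 0 = left, 1 = right."""
    def __init__(self):
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        self.pos = max(0, min(4, self.pos + (1 if action == 1 else -1)))
        done = self.pos == 4
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}  # observation, reward, done, info

env = Corridor()
obs, total_reward, done = env.reset(), 0.0, False
while not done:  # roll out one episode with a random policy
    obs, reward, done, _ = env.step(random.choice([0, 1]))
    total_reward += reward
print("episode return:", total_reward)
```

The same loop shape (reset, then step until `done`) carries over unchanged to CartPole, MountainCar, and the other simulators covered in the course.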

You will learn Markov decision process (MDP) formulation and an extensive collection of DRL algorithms: deep Q-learning (DQN, DDQN, PER), policy gradient methods (A2C, A3C, TRPO, PPO, ACER, ACKTR, SAC), deterministic policy gradient methods (DPG, DDPG, TD3), and inverse reinforcement learning. To implement these algorithms, students will code in Python 3 with OpenAI Gym, tf2.keras, and TensorFlow-Agents. We will also review other popular DRL libraries, such as Google Dopamine, Keras-RL, and Facebook Horizon.
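Before any deep networks enter the picture, the MDP formulation alone is enough to compute optimal values. The sketch below runs value iteration on a toy 5-state chain (states, rewards, and the discount factor are invented for this example) using the Bellman optimality backup V(s) ← max_a [R(s, a) + γ V(s′)]:

```python
# Value iteration on a toy 5-state chain MDP (illustrative example only).
GAMMA = 0.9
STATES = range(5)       # state 4 is terminal (the goal)
ACTIONS = (-1, +1)      # deterministic moves: left / right

def transition(s, a):
    """Return (next_state, reward); reward 1.0 for reaching the goal."""
    s2 = max(0, min(4, s + a))
    return s2, (1.0 if s2 == 4 and s != 4 else 0.0)

V = [0.0] * 5
for _ in range(100):    # sweep until (effectively) converged
    for s in STATES:
        if s == 4:
            continue    # terminal state keeps V = 0
        # Bellman optimality backup over both actions
        V[s] = max(r + GAMMA * V[s2]
                   for s2, r in (transition(s, a) for a in ACTIONS))
print([round(v, 3) for v in V])  # values decay by gamma per step from the goal
```

Because transitions here are deterministic, the backup needs no expectation over next states; the general stochastic form sums over P(s′ | s, a), which is where model-based and model-free methods begin to diverge.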

Learning Outcomes
At the conclusion of the course, you should be able to:

  • Formulate an MDP
  • Describe value functions, models, and policies
  • Explain the purpose of the Bellman equation
  • Discuss the advantages and disadvantages of RL
  • Explain how the epsilon-greedy algorithm differs from a pure greedy algorithm
  • Explain the difference between model-based and model-free RL
  • Discuss how DL enhances RL
  • Discuss and implement value-based and policy-based RL
  • Use and create RL environments with OpenAI Gym and TF-Agents
  • Apply learned RL algorithms to a few popular simulators
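One of the outcomes above, the difference between epsilon-greedy and pure greedy action selection, can be seen directly on a toy multi-armed bandit. The arm probabilities and the epsilon value below are made up for this sketch:

```python
import random

# Epsilon-greedy vs. pure greedy on a 3-armed Bernoulli bandit
# (arm probabilities and hyperparameters invented for illustration).
random.seed(0)
TRUE_P = [0.2, 0.5, 0.8]  # arm 2 is the best arm

def run(epsilon, steps=5000):
    """Return the average reward per step for the given exploration rate."""
    counts, values = [0, 0, 0], [0.0, 0.0, 0.0]
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            a = random.randrange(3)                       # explore: random arm
        else:
            a = max(range(3), key=lambda i: values[i])    # exploit: best estimate
        r = 1.0 if random.random() < TRUE_P[a] else 0.0
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]          # incremental mean
        total += r
    return total / steps

print("greedy        :", run(epsilon=0.0))  # tends to lock onto the first arm
print("epsilon-greedy:", run(epsilon=0.1))  # keeps exploring, discovers arm 2
```

With epsilon = 0 the agent never revises its first choice once its estimate is the (tied) maximum, so it can stay on a suboptimal arm indefinitely; a small epsilon guarantees every arm keeps being sampled.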

Topics Include

  • Deep learning building blocks
  • Markov decision processes
  • Reinforcement and deep reinforcement learning
  • Value-based, model-based, model-free algorithms
  • Policy gradient-based algorithms
  • Proximal policy optimization
  • Various actor/critic algorithms
  • Deep RL libraries
  • Term project
Have a question about this course?
Speak to a student services representative.
Call (408) 861-3860
Sections Open for Enrollment:

Open Sections and Schedule
Start / End Date: 04-05-2023 to 06-07-2023
Quarter Units: 3.0 CEUs
Cost: $980
Instructor: Ajay K Baranwal


Final Date To Enroll: 04-05-2023


Date | Start Time | End Time | Meeting Type | Location
Wed, 04-05-2023 6:30 p.m. 9:30 p.m. Live-Online REMOTE
Wed, 04-12-2023 6:30 p.m. 9:30 p.m. Live-Online REMOTE
Wed, 04-19-2023 6:30 p.m. 9:30 p.m. Live-Online REMOTE
Wed, 04-26-2023 6:30 p.m. 9:30 p.m. Live-Online REMOTE
Wed, 05-03-2023 6:30 p.m. 7:30 p.m. Live-Online REMOTE
Wed, 05-10-2023 6:30 p.m. 9:30 p.m. Live-Online REMOTE
Wed, 05-17-2023 6:30 p.m. 9:30 p.m. Live-Online REMOTE
Wed, 05-24-2023 6:30 p.m. 9:30 p.m. Live-Online REMOTE
Wed, 05-31-2023 6:30 p.m. 9:30 p.m. Live-Online REMOTE
Wed, 06-07-2023 6:30 p.m. 9:30 p.m. Live-Online REMOTE