Modern Reinforcement-learning using Deep Learning

Be able to start Deep reinforcement-learning research

Be able to start a Deep reinforcement-learning engineering role

Understand modern, state-of-the-art Deep reinforcement-learning

Understand the fundamentals of Deep reinforcement-learning

Requirements

  • Interest in Deep reinforcement-learning

Description

Hello, I am Nitsan Soffair, a Deep RL researcher at BGU.

In my Deep reinforcement-learning course you will learn the latest, state-of-the-art material in Deep reinforcement-learning.

You will do the following:

  1. Get state-of-the-art knowledge regarding
    1. Model types
    2. Algorithms and approaches
    3. Function approximation
    4. Deep reinforcement-learning
    5. Deep Multi-agent Reinforcement-learning
  2. Validate your knowledge by answering the short and very short quizzes for each lecture.
  3. Be able to complete the course in about 2 hours.

Syllabus

  1. Model types
    1. Markov decision process (MDP): A discrete-time stochastic control process.
    2. Partially observable Markov decision process (POMDP): A generalization of the MDP in which the agent cannot directly observe the state.
    3. Decentralized partially observable Markov decision process (Dec-POMDP): A generalization of the POMDP to multiple decentralized agents.
  2. Algorithms and approaches
    1. Bellman equations: A necessary condition for optimality in dynamic programming.
    2. Model-free: A model-free algorithm does not use a model of the MDP's transition dynamics or reward function.
    3. Off-policy: An off-policy algorithm learns about one policy (the target policy) while acting in the environment with another (the behavior policy).
    4. Exploration-exploitation: The trade-off in Reinforcement-learning between exploring new actions and exploiting what is already known.
    5. Value-iteration: An iterative algorithm that applies the Bellman optimality backup (a minimal sketch follows the syllabus).
    6. SARSA: An on-policy algorithm for learning a Markov decision process policy.
    7. Q-learning: A model-free reinforcement-learning algorithm that learns the value of an action in a particular state (see the sketch after the syllabus).
  3. Function approximation
    1. Function approximators: The problem of selecting a function from a well-defined class that closely matches ("approximates") a target function in a task-specific way.
    2. Policy-gradient: Covers value-based, policy-based, and actor-critic methods, the policy gradient, and the softmax policy.
    3. REINFORCE: A policy-gradient algorithm (see the sketch after the syllabus).
  4. Deep reinforcement-learning
    1. Deep Q-Network (DQN): A deep reinforcement-learning algorithm using experience replay and fixed Q-targets (see the sketch after the syllabus).
    2. Deep Recurrent Q-Learning (DRQN): A deep reinforcement-learning algorithm for POMDPs that extends DQN with an LSTM.
    3. Optimistic Exploration with Pessimistic Initialization (OPIQ): A deep reinforcement-learning algorithm for MDPs based on DQN.
    4. Value Decomposition Networks (VDN): A multi-agent deep reinforcement-learning algorithm for Dec-POMDPs (see the sketch after the syllabus).
    5. QMIX: A multi-agent deep reinforcement-learning algorithm for Dec-POMDPs.
    6. QTRAN: A multi-agent deep reinforcement-learning algorithm for Dec-POMDPs.
    7. Weighted QMIX: A deep multi-agent reinforcement-learning algorithm for Dec-POMDPs.
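
For the Bellman-equation and value-iteration entries, here is a minimal tabular value-iteration sketch. It is not course material: it assumes the MDP is fully known and given as plain NumPy arrays P[s, a, s'] (transition probabilities) and R[s, a] (rewards), which is an illustrative representation, not a library format.

  import numpy as np

  def value_iteration(P, R, gamma=0.99, tol=1e-6):
      """P has shape (S, A, S); R has shape (S, A)."""
      V = np.zeros(P.shape[0])
      while True:
          # Bellman optimality backup:
          # V(s) <- max_a [ R(s, a) + gamma * sum_s' P(s' | s, a) V(s') ]
          Q = R + gamma * (P @ V)              # shape (S, A)
          V_new = Q.max(axis=1)
          if np.max(np.abs(V_new - V)) < tol:
              return V_new, Q.argmax(axis=1)   # optimal values and a greedy policy
          V = V_new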
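
The off-policy, exploration-exploitation, and Q-learning entries fit together in one short sketch. This is a minimal tabular version, again not course code; it assumes a small Gym-style environment where env.reset() returns a state index and env.step(action) returns (next_state, reward, done, info).

  import numpy as np

  def q_learning(env, n_states, n_actions, episodes=500,
                 alpha=0.1, gamma=0.99, epsilon=0.1):
      Q = np.zeros((n_states, n_actions))
      for _ in range(episodes):
          state, done = env.reset(), False
          while not done:
              # Exploration-exploitation: epsilon-greedy behavior policy.
              if np.random.rand() < epsilon:
                  action = np.random.randint(n_actions)
              else:
                  action = int(np.argmax(Q[state]))
              next_state, reward, done, _ = env.step(action)
              # Off-policy Bellman optimality backup: the target uses the max
              # over next actions (greedy target policy), not the action taken.
              target = reward + gamma * np.max(Q[next_state]) * (not done)
              Q[state, action] += alpha * (target - Q[state, action])
              state = next_state
      return Q

SARSA differs only in the target: it uses the value of the action the behavior policy actually takes next, which is what makes it on-policy.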
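
For the policy-gradient and REINFORCE entries, the sketch below uses a linear softmax policy over hand-coded features. It is an illustration under the same Gym-style environment assumption; phi is an assumed feature function returning a fixed-length NumPy vector, not something defined in the course.

  import numpy as np

  def softmax(x):
      z = np.exp(x - np.max(x))
      return z / z.sum()

  def reinforce(env, phi, n_features, n_actions, episodes=500,
                alpha=0.01, gamma=0.99):
      theta = np.zeros((n_actions, n_features))      # softmax policy parameters
      for _ in range(episodes):
          # Sample one episode with the current policy.
          states, actions, rewards = [], [], []
          state, done = env.reset(), False
          while not done:
              probs = softmax(theta @ phi(state))
              action = np.random.choice(n_actions, p=probs)
              next_state, reward, done, _ = env.step(action)
              states.append(state)
              actions.append(action)
              rewards.append(reward)
              state = next_state
          # Monte-Carlo returns G_t, then the REINFORCE policy-gradient update.
          G, returns = 0.0, []
          for r in reversed(rewards):
              G = r + gamma * G
              returns.append(G)
          returns.reverse()
          for s, a, G_t in zip(states, actions, returns):
              feats = phi(s)
              probs = softmax(theta @ feats)
              grad_log_pi = -np.outer(probs, feats)  # grad log pi(a|s), softmax
              grad_log_pi[a] += feats
              theta += alpha * G_t * grad_log_pi
      return theta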
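
The DQN entry names two ideas, experience replay and fixed Q-targets, and the sketch below shows both in compact form. It is not the course's code: PyTorch is assumed, along with the same Gym-style environment interface, a flat observation vector, and a small fully connected Q-network.

  import random
  from collections import deque

  import torch
  import torch.nn as nn

  class QNet(nn.Module):
      def __init__(self, obs_dim, n_actions):
          super().__init__()
          self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                   nn.Linear(64, n_actions))

      def forward(self, x):
          return self.net(x)

  def dqn(env, obs_dim, n_actions, steps=10_000, gamma=0.99, epsilon=0.1,
          batch_size=32, buffer_size=10_000, target_update=500):
      q_net = QNet(obs_dim, n_actions)
      target_net = QNet(obs_dim, n_actions)
      target_net.load_state_dict(q_net.state_dict())      # fixed Q-target copy
      optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
      buffer = deque(maxlen=buffer_size)                   # experience replay
      state = env.reset()
      for t in range(steps):
          # Epsilon-greedy action selection from the online network.
          if random.random() < epsilon:
              action = random.randrange(n_actions)
          else:
              with torch.no_grad():
                  qs = q_net(torch.as_tensor(state, dtype=torch.float32))
              action = int(qs.argmax().item())
          next_state, reward, done, _ = env.step(action)
          buffer.append((state, action, reward, next_state, done))
          state = env.reset() if done else next_state
          if len(buffer) >= batch_size:
              # Sample a minibatch of past transitions (experience replay).
              batch = random.sample(list(buffer), batch_size)
              s, a, r, s2, d = zip(*batch)
              s = torch.as_tensor(s, dtype=torch.float32)
              a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
              r = torch.as_tensor(r, dtype=torch.float32)
              s2 = torch.as_tensor(s2, dtype=torch.float32)
              d = torch.as_tensor(d, dtype=torch.float32)
              q = q_net(s).gather(1, a).squeeze(1)
              with torch.no_grad():
                  # Targets come from the frozen network (fixed Q-targets).
                  target = r + gamma * target_net(s2).max(dim=1).values * (1 - d)
              loss = nn.functional.mse_loss(q, target)
              optimizer.zero_grad()
              loss.backward()
              optimizer.step()
          if t % target_update == 0:
              target_net.load_state_dict(q_net.state_dict())
      return q_net

DRQN keeps the same training loop but replaces the feedforward network with a recurrent (LSTM) one and trains on sequences, which is how it copes with partial observability.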
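
For the multi-agent entries, the core of VDN can be stated in a few lines: each agent has its own Q-network over its local observation, and the joint action-value is the sum of the chosen per-agent values, trained end-to-end on the team reward. The sketch below is an illustration in PyTorch under those assumptions, not the papers' reference code.

  import torch
  import torch.nn as nn

  class AgentQNet(nn.Module):
      """Per-agent Q-network over that agent's local observation."""
      def __init__(self, obs_dim, n_actions):
          super().__init__()
          self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                   nn.Linear(64, n_actions))

      def forward(self, obs):
          return self.net(obs)

  def vdn_q_tot(agent_nets, observations, actions):
      """Q_tot = sum_i Q_i(o_i, a_i).

      agent_nets:   one AgentQNet per agent
      observations: per-agent tensors of shape (batch, obs_dim)
      actions:      per-agent int64 tensors of shape (batch, 1)
      """
      chosen = [net(obs).gather(1, act)
                for net, obs, act in zip(agent_nets, observations, actions)]
      return torch.stack(chosen, dim=0).sum(dim=0).squeeze(1)   # shape (batch,)

QMIX keeps the per-agent networks but replaces the plain sum with a state-conditioned, monotonic mixing network; QTRAN and Weighted QMIX relax or reweight that factorization so the optimal joint action is better represented.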

Resources

  • Wikipedia
  • David Silver’s Reinforcement-learning course

Who this course is for:

  • Anyone who is interested in Deep reinforcement-learning
