In this course, you will learn how to solve problems with large, high-dimensional, and potentially infinite state spaces. You will see that estimating value functions can be cast as a supervised learning problem, known as function approximation, allowing you to build agents that carefully balance generalization and discrimination in order to maximize reward. We will begin this journey by investigating how policy evaluation (prediction) methods such as Monte Carlo and TD can be extended to the function approximation setting. You will learn about feature construction techniques for RL, and about representation learning via neural networks and backpropagation. We conclude the course with a deep dive into policy gradient methods: a way to learn policies directly, without learning a value function. Along the way, you will solve two continuous-state control tasks and investigate the benefits of policy gradient methods in a continuous-action environment.
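To give a flavor of the prediction methods covered, here is a minimal sketch of semi-gradient TD(0) with a linear value-function approximator, one of the core algorithms in the function approximation setting. The environment interface (`env.reset()`, `env.step(action)`), the fixed `policy`, and the `features` function are assumed placeholders for illustration, not part of the course materials.

```python
import numpy as np

def semi_gradient_td0(env, policy, features, num_features,
                      alpha=0.01, gamma=0.99, num_episodes=100):
    """Estimate v_pi with a linear approximator v_hat(s, w) = w . x(s).

    Assumed placeholder interfaces:
      - env.reset() -> state; env.step(a) -> (next_state, reward, done)
      - policy(state) -> action
      - features(state) -> 1-D feature vector of length num_features
    """
    w = np.zeros(num_features)  # weights of the linear value function
    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            x = features(state)
            # Bootstrapped target; the value of a terminal state is 0.
            v_next = 0.0 if done else np.dot(w, features(next_state))
            td_error = reward + gamma * v_next - np.dot(w, x)
            # Semi-gradient update: differentiate only through v_hat(state, w).
            w += alpha * td_error * x
            state = next_state
    return w
```

The same update structure reappears throughout the course; feature construction (e.g., tile coding) and neural networks change only how `features` or the value estimate is computed.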
This course is part of the Reinforcement Learning Specialization
Offered by the University of Alberta
About this Course
Prerequisites: probabilities and expectations, basic linear algebra, basic calculus, Python 3 (at least 1 year of experience), and implementing algorithms from pseudocode.
Skills you will gain
- Artificial Intelligence (AI)
- Machine Learning
- Reinforcement Learning
- Function Approximation
- Intelligent Systems
Syllabus - What you will learn from this course
Welcome to the Course!
On-policy Prediction with Approximation
Constructing Features for Prediction
Control with Approximation
Policy Gradient
Reviews
- 5 stars: 84.16%
- 4 stars: 12.92%
- 3 stars: 1.97%
- 2 stars: 0.65%
- 1 star: 0.26%
TOP REVIEWS FROM PREDICTION AND CONTROL WITH FUNCTION APPROXIMATION
Martha and Adam are excellent instructors. This course is so well organized and presented. I have learned a lot! Thanks very much!
The course was a really good one, with quizzes to help us remember the important lesson items and well-polished assignments of a kind I haven't seen before on Coursera.
Solid intro course. Wish we covered more using neural nets. The neural net equations used very non-standard notation. Wish the assignments were a little more creative. Too much grid world.
Well paced and thoughtfully explained course. Highly recommended for anyone willing to build a solid grounding in Reinforcement Learning. Thank you Coursera and Univ. of Alberta for the masterclass.
About the Reinforcement Learning Specialization

Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?