University of Alberta

Sample-based Learning Methods

In this course, you will learn about several algorithms that can learn near-optimal policies through trial-and-error interaction with the environment, learning from the agent's own experience. Learning from actual experience is striking because it requires no prior knowledge of the environment's dynamics, yet can still attain optimal behavior. We will cover intuitively simple but powerful Monte Carlo methods, and temporal-difference learning methods including Q-learning. We will wrap up this course by investigating how we can get the best of both worlds: algorithms that combine model-based planning (similar to dynamic programming) with temporal-difference updates to radically accelerate learning.

By the end of this course you will be able to:
- Understand Temporal-Difference learning and Monte Carlo as two strategies for estimating value functions from sampled experience
- Understand the importance of exploration when using sampled experience rather than dynamic programming sweeps within a model
- Understand the connections between Monte Carlo, Dynamic Programming, and TD
- Implement and apply the TD algorithm for estimating value functions
- Implement and apply Expected Sarsa and Q-learning (two TD methods for control)
- Understand the difference between on-policy and off-policy control
- Understand planning with simulated experience (as opposed to classic planning strategies)
- Implement a model-based approach to RL, called Dyna, which uses simulated experience
- Conduct an empirical study to see the improvements in sample efficiency when using Dyna
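To make the flavor of these methods concrete, here is a minimal sketch of tabular Q-learning, one of the TD control methods the course covers. The toy chain environment, reward scheme, and hyperparameters below are illustrative assumptions for this example, not part of the course materials.

```python
import random

# Sketch of tabular Q-learning (an off-policy TD control method) on a
# toy 5-state chain. Environment and hyperparameters are assumptions
# made for illustration only.

N_STATES = 5          # states 0..4; state 4 is terminal
ACTIONS = (-1, +1)    # step left or step right

def step(state, action):
    """Deterministic chain dynamics: cost -1 per move, 0 on reaching the goal."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = (next_state == N_STATES - 1)
    reward = 0.0 if done else -1.0
    return next_state, reward, done

def q_learning(episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy behavior policy: mostly greedy, sometimes random
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state, reward, done = step(state, action)
            # Q-learning update: bootstrap from the best next action,
            # regardless of what the behavior policy will do (off-policy)
            target = reward + gamma * max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = next_state
    return Q

Q = q_learning()
# Greedy policy for each non-terminal state (+1 means "move right")
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Because the update bootstraps from the greedy action in the next state while behavior follows an epsilon-greedy policy, this is off-policy control; replacing the `max` in the target with an expectation over the behavior policy's action probabilities would turn the same loop into Expected Sarsa.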

Skills: Artificial Intelligence and Machine Learning (AI/ML) · Simulations
Intermediate · Course · 22 hours

Featured reviews

ST

4.0 · Reviewed Feb 27, 2020

It was good in substance, but there are plenty of issues with the automated grader. You spend most of your time dealing with the latter, not on actual learning of the matter.

DP

5.0 · Reviewed Feb 14, 2021

Excellent course that naturally extends the first specialization course. The application examples in programming are very good and I loved how RL gets closer and closer to how a living being thinks.

GC

5.0 · Reviewed Feb 14, 2020

The course is intermediate in difficulty, but it explains the concepts clearly enough for me to understand the differences between the various sample-based learning methods.

NG

4.0 · Reviewed Jun 25, 2020

It's an important course for understanding the workings of reinforcement learning, although some important and complex topics mentioned in the textbook are not explored in this course.

AS

4.0 · Reviewed Jul 15, 2023

It was a good course, but I was expecting more explanation of the subjects in the book. For example, Prioritized Sweeping was missing, and the videos are not instructive enough.

DA

5.0 · Reviewed Jul 3, 2022

Excellent, well-paced course that helped me understand sample-based methods. Assignments were thoroughly built to practically utilize these concepts.

DC

5.0 · Reviewed Aug 23, 2020

The material discussed is very clear, and the graded quizzes and programming assignments force you to really understand what you have just heard. I enjoyed this course a lot, and learned even more.

KD

5.0 · Reviewed Oct 19, 2020

Excellent course. Really well taught. Good pace of videos and assignments, with the support of perfect reading material. Thank you to the teachers.

FF

5.0 · Reviewed Dec 19, 2024

I love the course authors' approach to teaching RL, building up each algorithm step by step. It's a bit of hard work to get the assignments correct, but it's well worth the effort.

BL

4.0 · Reviewed May 21, 2020

The lectures and quiz tests are perfect. The Jupyter programming exercises can be a little confusing sometimes but are also great. A great course, overall.

KM

5.0 · Reviewed Jan 9, 2020

Really great resource to follow along with the RL book. Important suggestion: do not skip the reading assignments; they are really helpful, and following the videos and assignments becomes easy.

MC

4.0 · Reviewed Jun 29, 2020

This course is excellent; my only complaint is that there is a 5-attempt limit and a 4-month wait to retry. It seems excessive to me and adds extra pressure when taking on assignments.

All reviews

Showing 20 of 244

P-51 D · 4.0 · Reviewed Sep 22, 2019
Kaiwen Yang · 2.0 · Reviewed Oct 2, 2019
hope · 3.0 · Reviewed Jan 25, 2020
Juan Carlos Esquivel · 1.0 · Reviewed Mar 7, 2020
Maxim Volgin · 4.0 · Reviewed Jan 12, 2020
Bernard Chan · 3.0 · Reviewed Mar 22, 2020
Mukund Chavan · 5.0 · Reviewed Mar 17, 2020
Kinal Mehta · 5.0 · Reviewed Jan 10, 2020
Andrew Gnias · 3.0 · Reviewed Dec 24, 2019
Maximiliano Beber · 5.0 · Reviewed Feb 23, 2020
Jonathan Bechtel · 5.0 · Reviewed May 9, 2020
Rishi Rao · 5.0 · Reviewed Aug 3, 2020
Benjamin Alsbury-Nealy · 5.0 · Reviewed Oct 3, 2019
Ivan Sanchez Fernandez · 5.0 · Reviewed Sep 29, 2019
Manuel Bolívar · 5.0 · Reviewed Nov 28, 2019
Amit Joshi · 4.0 · Reviewed Feb 27, 2021
Manuel Velarde · 4.0 · Reviewed Oct 4, 2019
Stevie Weiss · 5.0 · Reviewed May 11, 2021
Renato Cesar Menendes Cruz · 5.0 · Reviewed Sep 17, 2023
Sandesh Jain · 5.0 · Reviewed Jun 8, 2020