Learner Reviews & Feedback for Prediction and Control with Function Approximation by University of Alberta

4.8 stars · 803 ratings

About the Course

In this course, you will learn how to solve problems with large, high-dimensional, and potentially infinite state spaces. You will see that estimating value functions can be cast as a supervised learning problem---function approximation---allowing you to build agents that carefully balance generalization and discrimination in order to maximize reward. We will begin this journey by investigating how our policy evaluation or prediction methods, like Monte Carlo and TD, can be extended to the function approximation setting. You will learn about feature construction techniques for RL, and representation learning via neural networks and backprop. We conclude this course with a deep dive into policy gradient methods: a way to learn policies directly without learning a value function. In this course you will solve two continuous-state control tasks and investigate the benefits of policy gradient methods in a continuous-action environment.

Prerequisites: This course strongly builds on the fundamentals of Courses 1 and 2, and learners should have completed these before starting this course. Learners should also be comfortable with probabilities & expectations, basic linear algebra, basic calculus, Python 3.0 (at least 1 year), and implementing algorithms from pseudocode.

By the end of this course, you will be able to:

- Understand how to use supervised learning approaches to approximate value functions
- Understand objectives for prediction (value estimation) under function approximation
- Implement TD with function approximation (state aggregation) on an environment with an infinite (continuous) state space (see the sketch after this list)
- Understand fixed basis and neural network approaches to feature construction
- Implement TD with neural network function approximation in a continuous state environment
- Understand new difficulties in exploration when moving to function approximation
- Contrast discounted problem formulations for control versus an average reward problem formulation
- Implement Expected Sarsa and Q-learning with function approximation on a continuous state control task
- Understand objectives for directly estimating policies (policy gradient objectives)
- Implement a policy gradient method (called Actor-Critic) on a discrete state environment...
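To give a flavor of the first of these implementation objectives, here is a minimal sketch of semi-gradient TD(0) prediction with state aggregation on a continuous-state random walk. The environment, number of groups, step size, and episode count are illustrative assumptions for this sketch, not the course's actual assignment code.

# A minimal sketch of semi-gradient TD(0) with state aggregation, one of the
# techniques named in the objectives above. The random-walk environment, the
# number of groups, and the step size are illustrative assumptions only.
import numpy as np

N_GROUPS = 10   # number of aggregation bins covering the state space [0, 1)
ALPHA = 0.1     # step size (assumed value)
GAMMA = 1.0     # undiscounted episodic task (assumed)

def group(state):
    # Map a continuous state in [0, 1) to its aggregation-group index.
    return min(int(state * N_GROUPS), N_GROUPS - 1)

def run_episode(weights, rng):
    # One episode of a simple continuous random walk on (0, 1): the agent
    # drifts left or right and terminates at either boundary with reward -1/+1.
    state = 0.5
    while True:
        next_state = state + rng.uniform(-0.2, 0.2)
        if next_state <= 0.0 or next_state >= 1.0:
            reward = 1.0 if next_state >= 1.0 else -1.0
            # Terminal transition: the terminal state's value is defined as 0.
            weights[group(state)] += ALPHA * (reward - weights[group(state)])
            return
        # Semi-gradient TD(0) update. With state aggregation the gradient of
        # the value estimate is a one-hot vector, so only one weight changes.
        # The per-step reward is 0 in this walk; only termination is rewarded.
        td_error = 0.0 + GAMMA * weights[group(next_state)] - weights[group(state)]
        weights[group(state)] += ALPHA * td_error
        state = next_state

rng = np.random.default_rng(0)
w = np.zeros(N_GROUPS)        # one learned value per group of states
for _ in range(5000):
    run_episode(w, rng)
print(np.round(w, 2))         # estimated values should increase left to right

The same skeleton extends to the richer feature constructions covered later in the course: replace the one-hot group index with a feature vector (tile coding or a neural network) and the single-weight update with the corresponding gradient step.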

Top reviews

WP • Apr 11, 2020

Difficult but excellent and impressive. Human beings are incredible for creating such ideas. This course shows a way toward a state where all such ingenious ideas will be created by self-learning algorithms.

AC • Dec 1, 2019

Well-paced and thoughtfully explained course. Highly recommended for anyone looking to build a solid grounding in Reinforcement Learning. Thank you Coursera and Univ. of Alberta for the masterclass.

26 - 50 of 143 Reviews for Prediction and Control with Function Approximation

By Cesar S • Aug 8, 2021

Awesome course that complements the first two courses on RL. Excellent chapter selection from the book, showing just the necessary and sufficient information to get a good grasp of many of the concepts of function approximation. Highly recommended.

By Lim G • May 10, 2020

I enjoyed the course because the content delivery was clear and concise. The hands-on assignments helped me better understand the concepts that were taught. I was able to draw connections between the textbook and the hands-on experience.

By Thomas G • Apr 21, 2020

A very ambitious course where you have to invest a lot in reading the book, but in return you also learn a lot. I would love to see more advanced courses like this on Coursera.

The course is a very good complement to Sutton's book.

By Mateusz K • Oct 29, 2019

It's got a great variety of very applicable examples, use cases, and assignments. It may be tough if you don't quite understand how neural networks work, so I suggest having a basic understanding of NNs for parts of this course.

By Steven H • Jul 9, 2020

Excellent course! The assignments could be improved by adding input checking to methods that take a one-hot encoding of the state as input; I forgot to use the one-hot encoding and spent a lot of time debugging.

By Fred A • Jun 9, 2020

This series of courses provides some of the best material for an introduction to reinforcement learning and optimal control. If you are motivated to learn and challenge yourself with RL, don't look elsewhere.

By Chamani S • Feb 2, 2021

Thank you so much for this invaluable gift! I am a knowledge seeker in Reinforcement Learning, and UOA is the dream place where I hope to pursue my Ph.D. This was a good guide to receive. Many thanks!

By Wojtek P • Apr 12, 2020

Difficult but excellent and impressive. Human beings are incredible for creating such ideas. This course shows a way toward a state where all such ingenious ideas will be created by self-learning algorithms.

By Rafael B M • Sep 1, 2020

The course extends the foundations of Reinforcement Learning to function approximation, which allows the previously learned methods to be applied to more complex, real-world problems.

By Antonio C • Dec 2, 2019

Well-paced and thoughtfully explained course. Highly recommended for anyone looking to build a solid grounding in Reinforcement Learning. Thank you Coursera and Univ. of Alberta for the masterclass.

By Sandesh J • Jun 25, 2020

Surely a level-up from the previous courses. This course adds to and extends what has been learned in courses 1 & 2 to a greater sphere of real-world problems. Great job Prof. Adam and Martha!

By Jose M R F • Aug 14, 2020

Adam & Martha really make the walk through Sutton & Barto's book a real pleasure and easy to understand. The notebooks and the practice quizzes greatly help to consolidate the material.

By ding l • Jun 1, 2020

I had been reading the book Reinforcement Learning: An Introduction on my own. This class helped me finish that study in a great learning environment. Thank you, Martha and Adam!

By Kouassi A J • Apr 30, 2023

I recommend this course to all students or professionals who want to learn more deeply about reinforcement learning. Thanks to the whole team that participated in creating this course.

By Akash B • Nov 5, 2019

Great learning; the best part was the Actor-Critic algorithm for a small pendulum swing-up task, built from scratch using the RLGlue library. I loved learning how experimentation in RL works.

By Niju M N • Oct 24, 2020

The course was a really good one, with quizzes to help us remember the important lesson items and well-polished assignments of a kind I haven't seen before on Coursera.

By Christos P • Jan 19, 2020

Good course with a lot of technical information. I would add another assignment or make the current ones a little more extensive, as there are many concepts to learn.

By Jau-Jie Y • Jul 7, 2021

Prof. Satinder Singh's lecture on "Where the rewards come from in RL" was a very pleasant surprise.

Thanks to Prof. Martha White and Prof. Adam White for their lectures and course management.

By Eric B • Nov 14, 2021

Super interesting and challenging, but the videos are very helpful in complementing an understanding of the Sutton and Barto RL book. Thanks to the Univ. of Alberta team!

By Roberto M • Mar 29, 2020

I found the course quite tough but really interesting. I would say that reading the book's chapters more than once is necessary to optimally grasp the concepts.

By John J • Apr 28, 2020

This is the third instalment in reinforcement learning. So far, so good. Yes, you can get stuck sometimes, but that's okay; you can make it through.

By Sandro A • Jul 29, 2020

I think the professors explain the main concepts of RL in an accessible way and communicate effectively and concisely in the course videos.

By Doug • May 21, 2021

This specialization is a gift to humanity. It should have been inscribed on the Voyager Golden Record and shared with the aliens.

By Casey S S • Feb 11, 2021

This course bridged the gap to Deep Learning, the most exciting direction in RL. I would like a sequel dedicated to this from U Alberta.

By Bhooshan V • Sep 3, 2021

Really enjoyed every part of the course. The programming assignments are helpful in confirming the theoretical understanding of the subject.