University of Alberta

Prediction and Control with Function Approximation

This course is part of the Reinforcement Learning Specialization

Taught in English


Instructors: Martha White, Adam White

24,096 already enrolled

Included with Coursera Plus

Course

Gain insight into a topic and learn the fundamentals

4.8 (803 reviews)
Intermediate level
21 hours (approximately)
Flexible schedule: learn at your own pace

Details to know

Shareable certificate: add to your LinkedIn profile
Assessments: 4 quizzes



Build your subject-matter expertise

This course is part of the Reinforcement Learning Specialization
When you enroll in this course, you'll also be enrolled in this Specialization.
  • Learn new concepts from industry experts
  • Gain a foundational understanding of a subject or tool
  • Develop job-relevant skills with hands-on projects
  • Earn a shareable career certificate

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV

Share it on social media and in your performance review


There are 5 modules in this course

Welcome to the third course in the Reinforcement Learning Specialization: Prediction and Control with Function Approximation, brought to you by the University of Alberta, Onlea, and Coursera. In this pre-course module, you'll be introduced to your instructors, and get a flavour of what the course has in store for you. Make sure to introduce yourself to your classmates in the "Meet and Greet" section!

What's included

2 videos, 2 readings, 1 discussion prompt

This week you will learn how to estimate a value function for a given policy when the number of states is much larger than the memory available to the agent. You will learn how to specify a parametric form of the value function, how to specify an objective function, and how gradient descent can be used to estimate values from interaction with the world.
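To make this concrete, below is a minimal sketch (not the course's own code) of semi-gradient TD(0) prediction with a linear value function, v̂(s, w) = wᵀx(s). The names env, policy, and feature_fn are hypothetical stand-ins for an environment, a fixed policy, and a feature constructor, and env.step is assumed to return a (next state, reward, done) triple.

    import numpy as np

    def semi_gradient_td0(env, policy, feature_fn, num_features,
                          alpha=0.01, gamma=0.99, num_episodes=500):
        w = np.zeros(num_features)              # weights of the linear value function
        for _ in range(num_episodes):
            s, done = env.reset(), False
            while not done:
                s_next, r, done = env.step(policy(s))
                x = feature_fn(s)               # feature vector x(S)
                v_next = 0.0 if done else w @ feature_fn(s_next)
                td_error = r + gamma * v_next - w @ x
                w += alpha * td_error * x       # semi-gradient update of w
                s = s_next
        return w

The update follows the gradient of the value estimate only (not of the bootstrapped target), which is what makes it "semi-gradient".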

What's included

13 videos, 2 readings, 1 quiz, 1 programming assignment, 1 discussion prompt

The features used to construct the agent’s value estimates are perhaps the most crucial part of a successful learning system. In this module we discuss two basic strategies for constructing features: (1) fixed bases that form an exhaustive partition of the input, and (2) adapting the features while the agent interacts with the world, via neural networks and backpropagation. In this week’s graded assessment you will solve a simple but infinite-state prediction task with a neural network and TD learning.
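As an illustration of the first strategy, here is a minimal sketch of state aggregation, one simple way to build a fixed, exhaustive partition of a one-dimensional state range; the bounds and bin count are illustrative assumptions, not taken from the course.

    import numpy as np

    def state_aggregation_features(state, low=0.0, high=1.0, num_bins=10):
        # Partition [low, high) into num_bins equal intervals and return a
        # one-hot feature vector with a 1 in the bin containing `state`.
        x = np.zeros(num_bins)
        idx = int((state - low) / (high - low) * num_bins)
        x[min(max(idx, 0), num_bins - 1)] = 1.0  # clip to a valid bin
        return x

Each state activates exactly one feature, so all states in the same bin share a single learned value.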

What's included

11 videos, 2 readings, 1 quiz, 1 programming assignment, 1 discussion prompt

This week, you will see that the concepts and tools introduced in modules two and three allow straightforward extension of classic TD control methods to the function approximation setting. In particular, you will learn how to find the optimal policy in infinite-state MDPs by simply combining semi-gradient TD methods with generalized policy iteration, yielding classic control methods like Q-learning and Sarsa. We conclude with a discussion of a new problem formulation for RL, average reward, which will undoubtedly be used in many applications of RL in the future.
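As a rough sketch of how these pieces combine, the following is one plausible episodic semi-gradient Sarsa implementation with linear action values, q̂(s, a, w) = w[a]ᵀx(s). As before, env and feature_fn are assumed helpers, and the hyperparameters are illustrative.

    import numpy as np

    def semi_gradient_sarsa(env, feature_fn, num_features, num_actions,
                            alpha=0.1, gamma=1.0, epsilon=0.1, num_episodes=500):
        w = np.zeros((num_actions, num_features))   # one weight vector per action

        def q(s, a):
            return w[a] @ feature_fn(s)

        def epsilon_greedy(s):
            if np.random.rand() < epsilon:
                return np.random.randint(num_actions)
            return int(np.argmax([q(s, a) for a in range(num_actions)]))

        for _ in range(num_episodes):
            s, done = env.reset(), False
            a = epsilon_greedy(s)
            while not done:
                s_next, r, done = env.step(a)
                if done:
                    target = r                       # no bootstrapping at terminal states
                else:
                    a_next = epsilon_greedy(s_next)
                    target = r + gamma * q(s_next, a_next)
                w[a] += alpha * (target - q(s, a)) * feature_fn(s)
                if not done:
                    s, a = s_next, a_next
        return w

Swapping the Sarsa target for max over next actions would give the corresponding semi-gradient Q-learning variant.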

What's included

7 videos, 2 readings, 1 quiz, 1 programming assignment, 2 discussion prompts

Every algorithm you have learned about so far estimates a value function as an intermediate step towards the goal of finding an optimal policy. An alternative strategy is to directly learn the parameters of the policy. This week you will learn about these policy gradient methods and their advantages over value-function-based methods. You will also learn how policy gradient methods can be used to find the optimal policy in tasks with both continuous state and action spaces.
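For a flavour of what directly learning the policy parameters looks like, below is a minimal REINFORCE sketch with a softmax policy over linear action preferences (a discrete-action example; continuous actions would instead use, for instance, a parameterized Gaussian policy). All names and hyperparameters are illustrative assumptions.

    import numpy as np

    def softmax_probs(theta, x):
        prefs = theta @ x                    # linear action preferences h(s, a, theta)
        prefs -= prefs.max()                 # shift for numerical stability
        e = np.exp(prefs)
        return e / e.sum()

    def reinforce(env, feature_fn, num_features, num_actions,
                  alpha=0.001, gamma=0.99, num_episodes=1000):
        theta = np.zeros((num_actions, num_features))
        for _ in range(num_episodes):
            # Generate one full episode under the current policy.
            episode, s, done = [], env.reset(), False
            while not done:
                x = feature_fn(s)
                probs = softmax_probs(theta, x)
                a = np.random.choice(num_actions, p=probs)
                s, r, done = env.step(a)
                episode.append((x, a, r, probs))
            # Monte Carlo updates, working backwards to accumulate returns G_t.
            G = 0.0
            for x, a, r, probs in reversed(episode):
                G = r + gamma * G
                grad_log = -np.outer(probs, x)   # gradient of log pi(a|s, theta)
                grad_log[a] += x
                theta += alpha * G * grad_log    # gamma**t factor omitted, as is common
        return theta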

What's included

11 videos, 2 readings, 1 quiz, 1 programming assignment, 1 discussion prompt

Instructors

Instructor ratings: 4.8 (107 ratings)

Martha White, University of Alberta (4 courses, 90,722 learners)
Adam White, University of Alberta (4 courses, 90,722 learners)



Learner reviews

4.8 (803 reviews)

  • 5 stars: 84.59%
  • 4 stars: 12.54%
  • 3 stars: 1.86%
  • 2 stars: 0.62%
  • 1 star: 0.37%

