About this Course
Probabilistic graphical models (PGMs) are a rich framework for encoding probability distributions over complex domains: joint (multivariate) distributions over large numbers of random variables that interact with each other. These representations sit at the intersection of statistics and computer science, relying on concepts from probability theory, graph algorithms, machine learning, and more. They are the basis for state-of-the-art methods in a wide variety of applications, such as medical diagnosis, image understanding, speech recognition, natural language processing, and many others. They are also a foundational tool in formulating many machine learning problems.
This course is part of the Probabilistic Graphical Models Specialization.
Skills you will gain
- Algorithms
- Expectation–Maximization (EM) Algorithm
- Graphical Model
- Markov Random Field
Offered by

Stanford University
The Leland Stanford Junior University, commonly referred to as Stanford University or Stanford, is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto.
Syllabus - What you will learn from this course
Learning: Overview
This module presents some of the learning tasks for probabilistic graphical models that we will tackle in this course.
Review of Machine Learning Concepts from Prof. Andrew Ng's Machine Learning Class (Optional)
This module contains some basic concepts from the general framework of machine learning, taken from Professor Andrew Ng's Stanford class offered on Coursera. Many of these concepts are highly relevant to the problems we'll tackle in this course.
Parameter Estimation in Bayesian Networks
This module discusses the simplest and most basic of the learning problems in probabilistic graphical models: parameter estimation in a Bayesian network. We discuss maximum likelihood estimation and the issues with it. We then discuss Bayesian estimation and how it can ameliorate these problems.
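To make the contrast concrete, here is a minimal sketch (my own illustration, not the course's code; the function name, `alpha` parameter, and toy counts are assumptions) of both estimators for a single discrete CPD P(X | Pa) in a Bayesian network:

```python
import numpy as np

def estimate_cpd(counts, alpha=0.0):
    """Estimate P(X | Pa) from a counts matrix of shape (parent values, X values).

    alpha = 0  -> maximum likelihood estimate, which can assign probability 0
                  to events that simply never appeared in the training data
    alpha > 0  -> Bayesian estimate (posterior mean under a Dirichlet prior
                  with pseudo-count alpha), which smooths those zeros away
    """
    smoothed = counts + alpha
    return smoothed / smoothed.sum(axis=1, keepdims=True)

# Toy counts: rows index the parent's value, columns index X's value.
counts = np.array([[8.0, 0.0],   # X = 1 was never observed when Pa = 0
                   [3.0, 5.0]])
print(estimate_cpd(counts))            # MLE: a hard zero in the first row
print(estimate_cpd(counts, alpha=1.0)) # Bayesian: zero replaced by a small mass
```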
Learning Undirected Models
In this module, we discuss the parameter estimation problem for Markov networks (undirected graphical models). This task is considerably more complex, both conceptually and computationally, than parameter estimation for Bayesian networks, due to the issues presented by the global partition function.
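To see why the partition function is the bottleneck, here is a deliberately brute-force sketch (illustrative, not the course's code; the edge layout and potentials are made up) that computes log Z for a tiny binary pairwise Markov network by enumerating every joint assignment, which is exactly what becomes intractable as the network grows:

```python
import itertools
import numpy as np

def log_partition(n_vars, pairwise_logpot, edges):
    """Brute-force log Z for a binary pairwise Markov network.

    pairwise_logpot[(i, j)] is a 2x2 table of log-potentials for edge (i, j).
    The enumeration below is exponential in n_vars: even evaluating the
    likelihood of the data requires this global normalization constant.
    """
    scores = []
    for x in itertools.product([0, 1], repeat=n_vars):
        scores.append(sum(pairwise_logpot[(i, j)][x[i], x[j]] for (i, j) in edges))
    return np.logaddexp.reduce(scores)

edges = [(0, 1), (1, 2)]  # a 3-node chain with attractive potentials
logpot = {e: np.log(np.array([[2.0, 1.0], [1.0, 2.0]])) for e in edges}
print(log_partition(3, logpot, edges))
```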
Learning BN Structure
This module discusses the problem of learning the structure of Bayesian networks. We first discuss how this problem can be formulated as an optimization problem over a space of graph structures, and what makes a good scoring function for trading off fit to the data against model complexity. We then talk about how the optimization problem can be solved: exactly in a few cases, approximately in most others.
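As a concrete example of a decomposable score of the kind the module describes, here is a sketch (my own illustration; the function name, data layout, and random toy data are assumptions) of the BIC family score for one variable given a candidate parent set, where the log-likelihood term rewards fit and the penalty term charges for parameters:

```python
import numpy as np

def bic_family_score(data, x, parents, cards):
    """BIC contribution of variable x with the given candidate parents.

    data: (n_samples, n_vars) integer array; cards[i] = cardinality of variable i.
    Higher is better; a structure-search procedure (e.g. greedy hill climbing)
    would compare such scores across candidate parent sets.
    """
    n = data.shape[0]
    parent_card = int(np.prod([cards[p] for p in parents])) if parents else 1
    counts = np.zeros((parent_card, cards[x]))
    for row in data:  # count N(x value, parent configuration)
        idx = 0
        for p in parents:
            idx = idx * cards[p] + row[p]
        counts[idx, row[x]] += 1
    row_tot = counts.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        ll = np.sum(counts * np.log(np.where(counts > 0, counts / row_tot, 1.0)))
    n_params = parent_card * (cards[x] - 1)
    return ll - 0.5 * np.log(n) * n_params  # fit minus complexity penalty

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(200, 3))
print(bic_family_score(data, x=2, parents=[0], cards=[2, 2, 2]))
print(bic_family_score(data, x=2, parents=[0, 1], cards=[2, 2, 2]))
```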
Learning BNs with Incomplete Data
In this module, we discuss the problem of learning models when, in some of the data cases, some of the variables are not fully observed. We discuss why this situation is considerably more complex than the fully observable case. We then present the Expectation Maximization (EM) algorithm, which is used in a wide variety of problems.
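Here is the classic two-coin illustration of EM (a toy example of my own, not the course's assignment; the coin setup and numbers are assumptions): each batch of flips comes from one of two biased coins, and the identity of the coin behind each batch is the unobserved variable:

```python
import numpy as np

def em_two_coins(heads, flips, theta=(0.6, 0.5), n_iters=50):
    tA, tB = theta
    for _ in range(n_iters):
        # E-step: posterior responsibility of coin A for each batch,
        # given the current parameter estimates (uniform prior over coins).
        lA = heads * np.log(tA) + (flips - heads) * np.log(1 - tA)
        lB = heads * np.log(tB) + (flips - heads) * np.log(1 - tB)
        rA = 1.0 / (1.0 + np.exp(lB - lA))
        # M-step: re-estimate each coin's bias from expected counts.
        tA = (rA * heads).sum() / (rA * flips).sum()
        tB = ((1 - rA) * heads).sum() / ((1 - rA) * flips).sum()
    return tA, tB

heads = np.array([5, 9, 8, 4, 7])      # heads observed in each batch
flips = np.array([10, 10, 10, 10, 10]) # flips per batch
print(em_two_coins(heads, flips))      # converges to one high- and one low-bias coin
```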
Reviews
- 5 stars: 71.42%
- 4 stars: 19.72%
- 3 stars: 5.44%
- 2 stars: 2.72%
- 1 star: 0.68%
TOP REVIEWS FROM PROBABILISTIC GRAPHICAL MODELS 3: LEARNING
Great course, though with the progress of ML/DL, the content seems a touch outdated.
Tougher course than the 2 preceding ones, but definitely worthwhile.
An amazing course! The assignments and quizzes can be insanely difficult, especially towards the conclusion. Requires textbook reading and re-listening to lectures to gather the nuances.
Awesome course... builds intuitive thinking for developing intelligent algorithms...

Frequently Asked Questions
- When will I have access to the lectures and assignments?
- What will I get if I subscribe to this Specialization?
- Is financial aid available?
More questions? Visit the Learner Help Center.