Introduction to Machine Learning: Supervised Learning offers a clear, practical introduction to how machines learn from labeled data to make predictions and decisions. You’ll build a strong foundation in regression and classification, starting with linear and logistic regression and progressing to resampling, regularization, and tree-based ensemble methods. Along the way, you’ll learn how to evaluate models, manage bias–variance trade-offs, and balance interpretability with predictive power, all while working hands-on in Python. By the end of the course, you’ll have the skills and intuition needed to confidently apply supervised learning techniques to real-world problems.

Introduction to Machine Learning: Supervised Learning

This course is part of Machine Learning: Theory and Hands-on Practice with Python Specialization

Instructor: Daniel E. Acuna
What you'll learn
- Explain and apply the core concepts of supervised learning.
- Build, interpret, and evaluate predictive models for regression and classification.
- Assess model reliability and improve generalization using validation and regularization techniques.
- Apply tree-based and ensemble methods to capture complex relationships in data.
Details to know

6 assignments
January 2026

Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 5 modules in this course
Welcome to Introduction to Machine Learning: Supervised Learning. In this first module, you will begin your journey into supervised learning by exploring how machines learn from labeled data to make predictions. You will learn to distinguish between supervised and unsupervised learning, and understand the key differences between regression and classification tasks. You will also gain insight into the broader machine learning workflow, including the roles of predictors, response variables, and the importance of training versus testing data. By the end of this module, you will have a solid foundation in the goals and mechanics of supervised learning.
What's included
12 videos · 7 readings · 2 assignments · 1 programming assignment · 1 discussion prompt
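The training-versus-testing distinction described above can be sketched in plain NumPy (synthetic data and a simple line fit, not the course's own exercises): fit on one portion of the data, then measure error on rows the model never saw.

```python
import numpy as np

# Hypothetical illustration: a single predictor x and a noisy linear response y.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1.0, size=100)

# Shuffle the row indices, then hold out 20% of the rows as a test set.
idx = rng.permutation(len(x))
split = int(0.8 * len(x))
train, test = idx[:split], idx[split:]

# Fit a degree-1 polynomial (a line) on the training data only.
slope, intercept = np.polyfit(x[train], y[train], deg=1)

def predict(x_new):
    return slope * x_new + intercept

# Compare mean squared error on training vs held-out data.
mse_train = np.mean((predict(x[train]) - y[train]) ** 2)
mse_test = np.mean((predict(x[test]) - y[test]) ** 2)
print(f"train MSE: {mse_train:.2f}, test MSE: {mse_test:.2f}")
```

Because the model is evaluated on rows it never trained on, the test MSE is an honest estimate of how it would do on new data; a test error far above the training error is a first sign of overfitting.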
In this module, you will expand your understanding of linear models by incorporating multiple predictors, including categorical variables and interaction terms. You will learn how to interpret partial regression coefficients and assess the fit of your models using metrics like R² and RMSE. As you build more complex models, you will also explore the risks of overfitting and the importance of model validation. By the end of this module, you will be equipped to build and evaluate multiple linear regression models with confidence.
What's included
7 videos · 1 reading · 1 assignment · 1 programming assignment
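A minimal NumPy sketch of the ideas in this module, using hypothetical data: a multiple regression with an interaction term, fit by least squares, then scored with R² and RMSE.

```python
import numpy as np

# Hypothetical data: two predictors and their interaction, with a known linear signal.
rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.5 + 2.0 * x1 - 1.0 * x2 + 0.5 * x1 * x2 + rng.normal(0, 0.3, size=n)

# Design matrix: intercept, x1, x2, and the x1*x2 interaction term.
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])

# Least-squares fit (equivalent to solving the normal equations).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# R^2: fraction of the variance in y explained by the model.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# RMSE: typical size of a prediction error, in the units of y.
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(f"coefficients: {np.round(beta, 2)}, R^2: {r2:.3f}, RMSE: {rmse:.3f}")
```

Each fitted coefficient is a partial regression coefficient: the expected change in y for a one-unit change in that column, holding the other columns fixed.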
In this module, you will transition from predicting continuous outcomes to modeling categorical ones. You will learn how logistic regression models binary outcomes, like whether a customer will default on a loan, using probabilities and odds, and how to interpret the results. You will also explore k-Nearest Neighbors, a flexible, non-parametric method that classifies observations based on their proximity to others in the dataset. To evaluate your models, you will use tools like confusion matrices, accuracy, and precision/recall, gaining insight into how well your classifiers perform. This module lays the groundwork for tackling real-world classification problems with confidence and clarity.
What's included
13 videos · 1 reading · 1 assignment · 1 programming assignment
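The logistic-regression and evaluation ideas above can be sketched in NumPy (synthetic data; a plain gradient-descent fit rather than the course's own tooling), ending with the confusion-matrix counts that accuracy, precision, and recall are built from.

```python
import numpy as np

# Hypothetical binary-classification data: larger x makes class 1 more likely.
rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(2.0 * x - 0.5)))      # true class-1 probabilities
y = (rng.uniform(size=n) < p_true).astype(int)

# Fit logistic regression by gradient descent on the average log-loss.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta -= 0.1 * X.T @ (p - y) / n

# Classify with a 0.5 probability threshold, then tally the confusion matrix.
pred = (1 / (1 + np.exp(-X @ beta)) >= 0.5).astype(int)
tp = np.sum((pred == 1) & (y == 1))   # true positives
tn = np.sum((pred == 0) & (y == 0))   # true negatives
fp = np.sum((pred == 1) & (y == 0))   # false positives
fn = np.sum((pred == 0) & (y == 1))   # false negatives

accuracy = (tp + tn) / n
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"accuracy: {accuracy:.2f}, precision: {precision:.2f}, recall: {recall:.2f}")
```

Precision asks "of the observations I flagged as positive, how many were right?", while recall asks "of the actual positives, how many did I catch?" — the two often trade off as the threshold moves.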
In this module, you will learn how to evaluate your models more reliably and improve their generalization to new data. You will explore resampling methods such as k-fold cross-validation and the bootstrap, which estimate test performance without requiring a separate test set. You will also be introduced to two regularization techniques, Ridge and Lasso, which reduce overfitting by constraining model complexity. Using cross-validation, you will learn to select the optimal regularization strength, balancing predictive accuracy against model simplicity. These tools are essential for building models that perform well not just in theory, but in practice.
What's included
10 videos · 2 readings · 1 assignment · 1 programming assignment
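A NumPy sketch of the workflow this module describes, under hypothetical data and an arbitrary grid of penalties: ridge regression in closed form, with k-fold cross-validation used to pick the regularization strength.

```python
import numpy as np

# Hypothetical setup: 20 predictors, but only the first 3 truly matter.
rng = np.random.default_rng(3)
n, d = 120, 20
X = rng.normal(size=(n, d))
true_beta = np.zeros(d)
true_beta[:3] = [2.0, -1.5, 1.0]
y = X @ true_beta + rng.normal(0, 1.0, size=n)

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: (X'X + lam*I)^(-1) X'y.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_mse(X, y, lam, k=5):
    # k-fold cross-validation: average MSE over the k held-out folds.
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for fold in folds:
        train = np.setdiff1d(np.arange(len(y)), fold)
        beta = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[fold] @ beta - y[fold]) ** 2))
    return np.mean(errs)

# Pick the regularization strength with the lowest cross-validated error.
lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]
scores = {lam: cv_mse(X, y, lam) for lam in lambdas}
best = min(scores, key=scores.get)
for lam in lambdas:
    print(f"lambda={lam}: CV MSE {scores[lam]:.2f}")
print(f"best lambda: {best}")
```

Larger penalties shrink the coefficients toward zero (trading a little bias for less variance); cross-validation finds the penalty where that trade-off minimizes estimated test error. Lasso differs only in using an L1 penalty, which can shrink coefficients exactly to zero.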
This module introduces you to one of the most intuitive and interpretable machine learning models: decision trees. You will explore how trees split the feature space into regions, how to read their structure, and why they are prone to overfitting if left unchecked. Trees are just the beginning; this module also introduces ensemble techniques that elevate predictive accuracy by combining many models. You will get a first look at methods like bagging, random forests, and boosting, and see how they compare to the models you have already studied. By the end, you will understand when and why tree-based models can outperform simpler approaches, especially in capturing complex, non-linear relationships.
What's included
8 videos · 1 reading · 1 assignment · 1 programming assignment
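The bagging idea introduced above can be sketched in NumPy with the smallest possible tree — a one-split "decision stump" — fit on bootstrap resamples and combined by majority vote (hypothetical data; real random forests grow full trees and also subsample features at each split).

```python
import numpy as np

# Hypothetical data: the class depends on a noisy threshold in feature 0.
rng = np.random.default_rng(4)
n = 300
X = rng.uniform(-1, 1, size=(n, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

def fit_stump(X, y):
    # A one-split tree: try every (feature, threshold) pair and keep the
    # split with the lowest misclassification rate.
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            raw_err = np.mean((X[:, j] > t).astype(int) != y)
            err = min(raw_err, 1 - raw_err)   # allow flipping the split's sign
            if best is None or err < best[0]:
                best = (err, j, t, raw_err > 0.5)
    return best[1:]   # (feature, threshold, flip)

def stump_predict(stump, X):
    j, t, flip = stump
    pred = (X[:, j] > t).astype(int)
    return 1 - pred if flip else pred

# Bagging: fit each stump on a bootstrap resample, then majority-vote.
stumps = []
for _ in range(25):
    idx = rng.integers(0, n, size=n)   # sample n rows with replacement
    stumps.append(fit_stump(X[idx], y[idx]))

votes = np.mean([stump_predict(s, X) for s in stumps], axis=0)
ensemble_pred = (votes >= 0.5).astype(int)
print(f"bagged accuracy: {np.mean(ensemble_pred == y):.2f}")
```

Each resampled stump is a high-variance learner; averaging many of them cancels much of that variance, which is the core reason bagging and random forests outperform a single tree. Boosting instead fits its learners sequentially, each one focusing on the previous ones' mistakes.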
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
