This course introduces you to one of the main families of supervised Machine Learning models: Classification. You will learn how to train predictive models to classify categorical outcomes and how to use error metrics to compare different models. The hands-on section of this course focuses on best practices for classification, including train and test splits and handling data sets with imbalanced classes.
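As a minimal sketch of those best practices (the synthetic data set, the 80/20 split, and the class_weight setting below are illustrative assumptions, not course code), a stratified train/test split combined with class weighting might look like this:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Imbalanced two-class problem: roughly 90% of samples belong to class 0.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)

# Stratify so both splits keep the original class proportions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# class_weight="balanced" re-weights the loss to compensate for the imbalance.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```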
About this Course
Skills you will gain
- Decision Tree
- Ensemble Learning
- Classification Algorithms
- Supervised Learning
- Machine Learning (ML) Algorithms
Offered by

IBM
IBM is the global leader in business transformation through an open hybrid cloud platform and AI, serving clients in more than 170 countries around the world. Today 47 of the Fortune 50 Companies rely on the IBM Cloud to run their business, and IBM Watson enterprise AI is hard at work in more than 30,000 engagements. IBM is also one of the world’s most vital corporate research organizations, with 28 consecutive years of patent leadership. Above all, guided by principles for trust and transparency and support for a more inclusive society, IBM is committed to being a responsible technology innovator and a force for good in the world.
Syllabus - What you will learn from this course
Logistic Regression
Logistic regression is one of the most studied and widely used classification algorithms, in part because of its popularity in regulated industries and financial settings. Although more modern classifiers may achieve higher accuracy, logistic regression remains a great baseline model thanks to its high interpretability and parametric nature. This module walks you through extending a linear regression example into a logistic regression, as well as the most common error metrics you might use to compare several classifiers and select the one that best suits your business problem.
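As a hedged sketch of what such a comparison might look like in code (the breast cancer data set and default hyperparameters are illustrative assumptions, not course materials), the snippet below fits a logistic regression and reports the error metrics most commonly used to compare classifiers:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Scaling the features helps the solver converge; the logistic regression
# itself stays interpretable through its fitted coefficients.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]  # probability of the positive class

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("f1 score :", f1_score(y_test, y_pred))
print("roc auc  :", roc_auc_score(y_test, y_prob))
print("confusion matrix:\n", confusion_matrix(y_test, y_pred))
```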
K Nearest Neighbors
K nearest neighbors is a popular classification method because it is computationally simple and easy to interpret. This module walks you through the theory behind k nearest neighbors, as well as a demo in which you practice building k nearest neighbors models with sklearn.
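A minimal sketch of such a model, assuming scikit-learn's KNeighborsClassifier on the iris data set (both illustrative choices, not taken from the demo):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Scaling matters for k-NN because predictions depend on raw distances
# between points; k=5 is an illustrative default.
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_train, y_train)

print("test accuracy:", knn.score(X_test, y_test))
```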
Support Vector Machines
This module will walk you through the main idea of how support vector machines construct hyperplanes that divide your data into regions, each concentrating a majority of the data points of a single class. Although support vector machines are widely used for regression, outlier detection, and classification, this module will focus on classification.
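A minimal sketch of a support vector classifier, assuming scikit-learn's SVC; the synthetic data set, RBF kernel, and C value are illustrative assumptions rather than course settings:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=1)

# SVC fits a maximum-margin hyperplane in the (kernel-induced) feature space
# that separates the classes; scaling keeps the features comparable.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
svm.fit(X_train, y_train)

print("test accuracy:", svm.score(X_test, y_test))
```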
Decision Trees
Decision tree methods are a common baseline model for classification tasks due to their visual appeal and high interpretability. This module walks you through the theory behind decision trees and a few hands-on examples of building decision tree models for classification. You will also learn the main pros and cons of these techniques, background that will be useful when you are presented with decision tree ensembles in the next module.
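A minimal sketch of a decision tree classifier, assuming scikit-learn; printing the fitted tree as rules illustrates the interpretability the module highlights, and the max_depth cap (a simple guard against overfitting) is an illustrative choice, not course code:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, stratify=data.target, random_state=0)

# Limiting depth keeps the tree small and readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
# The fitted tree can be printed as human-readable if/else rules, which is
# what gives decision trees their interpretability.
print(export_text(tree, feature_names=list(data.feature_names)))
```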
Reviews
- 5 stars: 87.87%
- 4 stars: 10.90%
- 3 stars: 0.60%
- 1 star: 0.60%
TOP REVIEWS FROM SUPERVISED MACHINE LEARNING: CLASSIFICATION
Great! Helps me build my career path in Data Science
This course has a detailed explanation of each and every aspect of classification.
Thank you Coursera. Thank you IBM. Thank you to all instructors.
I would like to give special thanks to the instructor (the one in the videos) for his great job. It would be nice to know who he is.
Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Certificate?
More questions? Visit the Learner Help Center.