This course will help us evaluate and compare the models we have developed in previous courses. So far we have developed techniques for regression and classification, but how low should a classifier's error be (for example) before we decide that the classifier is "good enough"? And how do we decide which of two regression algorithms is better?
This course is part of the Python Data Products for Predictive Analytics Specialization
What you will learn
Understand the definitions of simple error measures (e.g., MSE, accuracy, precision/recall).
Evaluate the performance of regressors and classifiers using the above measures (a minimal sketch follows this list).
Understand the difference between training/testing performance and generalizability.
Understand techniques to avoid overfitting and achieve good generalization performance.
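To make the first two objectives concrete, here is a minimal sketch in Python (the language of this specialization) that implements these error measures by hand; the toy labels and predictions are invented for illustration and are not taken from the course materials.

```python
# Minimal, self-contained sketch of the basic error measures named above.
# The example data are invented for illustration only.

def mse(y_true, y_pred):
    """Mean Squared Error, for evaluating a regressor."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def accuracy(y_true, y_pred):
    """Fraction of correct predictions, for evaluating a classifier."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (1 = positive class)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Regression example
print(mse([3.0, 2.5, 4.0], [2.8, 2.9, 3.5]))   # ~0.15

# Classification example
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]
print(accuracy(y_true, y_pred))                 # 0.6
print(precision_recall(y_true, y_pred))         # (0.666..., 0.666...)
```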
Syllabus - What you will learn from this course
Week 1: Diagnostics for Data
Week 2: Codebases, Regularization, and Evaluating a Model
Week 3: Validation and Pipelines (a brief illustrative sketch of regularization and validation follows the syllabus)
Final Project
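To give a flavor of how Weeks 2 and 3 fit together, below is a hedged sketch, not code from the course, of regularization and validation using scikit-learn (one common choice; the course's exact datasets and libraries may differ, and the synthetic data here is made up). The idea is to hold out a test set, fit a regularized (Ridge) regression pipeline for several regularization strengths, choose the strength that performs best on a validation set, and only then report test error.

```python
# Illustrative sketch only: regularization + validation with a pipeline.
# Assumes scikit-learn and NumPy are available; the synthetic data below
# stands in for whatever dataset the course actually uses.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = X[:, 0] * 3.0 - X[:, 1] * 2.0 + rng.normal(scale=0.5, size=300)

# Hold out a test set, then carve a validation set out of the remainder.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

best_alpha, best_mse = None, float("inf")
for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]:
    model = make_pipeline(StandardScaler(), Ridge(alpha=alpha))
    model.fit(X_tr, y_tr)
    val_mse = mean_squared_error(y_val, model.predict(X_val))
    if val_mse < best_mse:
        best_alpha, best_mse = alpha, val_mse

# Refit on train+validation with the chosen strength; report test MSE once.
final = make_pipeline(StandardScaler(), Ridge(alpha=best_alpha))
final.fit(X_train, y_train)
print(best_alpha, mean_squared_error(y_test, final.predict(X_test)))
```

Selecting the regularization strength on the validation set rather than the test set is what keeps the reported test error an honest estimate of generalization performance, which is the point of the third and fourth learning objectives above.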
Reviews
- 5 stars: 58.69%
- 4 stars: 23.91%
- 3 stars: 13.04%
- 2 stars: 4.34%
TOP REVIEWS FROM MEANINGFUL PREDICTIVE MODELING
Excellent content, but presentation is a bit challenging at times.
The course provided a lot of insights into predictive modeling.
About the Python Data Products for Predictive Analytics Specialization

Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?
More questions? Visit the Learner Help Center.