Explaining Tree Based Models Using SHAP

Offered By
Coursera Project Network
In this Guided Project, you will:

Explain the output of tree-based ensemble machine learning models.

Generate global and local explainability plots and interpret them.

Create different SHAP plots, such as the waterfall plot, force plot, and decision plot, and understand the use cases for each.

2 hours
Intermediate
No download needed
Split-screen video
English
Desktop only

In this 2-hour long project-based course, you will learn how to interpret, or explain, the output of tree-based ensemble machine learning models. You will generate Shapley values for all the features of each observation in the dataset. You will then learn to generate global and local explainability plots and interpret them. You will learn how to create different SHAP plots for interpretability, such as the waterfall plot, force plot, and decision plot, and understand the use cases for each of these plots. Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.

Skills you will develop

SHAP
Machine Learning
Machine Learning Interpretability

Learn step-by-step

In a video that plays in a split-screen with your work area, your instructor will walk you through these steps:

  1. Loading and Understanding Dataset

  2. Data Visualization

  3. Data Preparation

  4. Model Building

  5. Model explainability

How Guided Projects work

Your workspace is a cloud desktop right in your browser; no download is required

In a split-screen video, your instructor guides you step-by-step

Frequently asked questions

More questions? Visit the Learner Help Center.