What Is Machine Learning Fairness? What You Need to Know

Written by Coursera • Updated on

Here’s what you need to know about machine learning fairness and incorporating ethics into an increasingly automated world.

Our world is becoming more and more automated. Machine learning, the practice of building computer algorithms that learn and improve from experience and data, is integrated into everyday processes such as job application screening and university admissions. There is an increasing need to make sure this data science, including the tools and systems we use, is ethical and fair. 

When machine learning isn’t fair, the outcome can be detrimental to users and the community. For example, algorithms on social media sites may have sparked political tensions due to skewed or siloed news feeds (and fake news), when the intention was to deliver personalized recommendations for users. 

What is machine learning fairness?

Machine learning is a branch of artificial intelligence (AI) that stems from the idea that computers can learn from data collected to identify patterns and make decisions that mimic those of humans, with minimal human intervention. Machine learning fairness is the process of correcting and eliminating algorithmic bias (of race and ethnicity, gender, sexual orientation, disability, and class) from machine learning models.
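One common way to quantify algorithmic bias is to compare how often a model produces a favorable outcome for different groups, a criterion often called demographic parity. The sketch below illustrates the idea with made-up predictions for two hypothetical groups; it is an assumption-laden toy example, not a complete fairness audit.

```python
# A minimal sketch of one fairness check: demographic parity.
# The group labels and predictions below are hypothetical.

def positive_rate(predictions):
    """Fraction of individuals who received a positive prediction."""
    return sum(predictions) / len(predictions)

# Hypothetical binary predictions (1 = favorable outcome) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 5 of 8 receive a favorable outcome
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 2 of 8 receive a favorable outcome

# Demographic parity difference: the gap in favorable-outcome rates.
# A value near 0 suggests the groups are treated similarly on this
# one metric; it says nothing about other fairness criteria.
gap = positive_rate(group_a) - positive_rate(group_b)
print(round(gap, 3))  # 0.375
```

A large gap like this is a signal to investigate, not proof of unfairness on its own; demographic parity is only one of several competing definitions of fairness.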

Watch this video for an introduction to algorithmic fairness:


Why is it important to address fairness and ethics in machine learning?

“It is up to us as responsible data scientists to make sure that we're using the power of the technology we have to do the right thing,” says H.V. Jagadish, a professor at the University of Michigan. Unintentional discrimination in machine learning algorithms is just one of the reasons why it’s important to address fairness and ethics.

Machine learning is enmeshed in the systems and applications we use to help us buy furniture, find jobs, recruit new hires, apply for universities, listen to music, get loans, find news, search on Google, target ads, and so much more. It has both enhanced humans’ ability to connect with others and streamlined information. But it can have serious consequences if the systems fail to promote fair and equal practices.

To remove these potential biases, data scientists and machine learning experts must look for them in algorithmic models and correct them. The book Fairness and Machine Learning: Limitations and Opportunities, by machine learning researchers and professors Solon Barocas, Moritz Hardt, and Arvind Narayanan, highlights that it is nearly impossible to “hand code a program that exhaustively enumerates all the relevant factors that allow us to recognize objects from every possible perspective or in all their potential visual configurations.” Machine learning sidesteps this problem because it enables a computer to learn by example rather than from explicitly coded instructions [1]. 

The momentum of Diversity, Equity, and Inclusion (DEI) initiatives in recent years likely plays into how much people are thinking about machine learning fairness. Machine learning is used in a variety of industries: in the criminal justice system to predict the risk that a defendant will commit future crimes, informing decisions on bail and sentencing; in business to filter job applicants; and in credit lending and insurance, just to name a few [1]. 

The COMPAS controversy

COMPAS is a decision support tool developed by Northpointe and used by the US court system to assess the likelihood that a defendant will become a repeat offender (recidivist). Its algorithm predicts which defendants are most likely to reoffend, and its quantitative approach to fairness sparked controversy.


Nonprofit news organization ProPublica investigated and found that the algorithm “correctly predicted recidivism for Black and White defendants at roughly the same rate,” but that when it was wrong, Black defendants were almost “twice as likely to be labeled a higher risk but not actually re-offend,” while White defendants were more likely to be labeled lower risk but would go on to commit more crimes [2, 3]. 
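The disparity ProPublica measured can be made concrete with a small calculation: two groups can have similar overall accuracy while their false positive rates (non-reoffenders wrongly flagged as high risk) differ sharply. The numbers below are hypothetical toy data, not the actual COMPAS figures.

```python
# A toy illustration (not COMPAS data) of how equal overall accuracy
# can hide unequal error rates across groups.

def false_positive_rate(labels, predictions):
    """Among people who did NOT reoffend (label 0), the fraction
    flagged as high risk (prediction 1)."""
    flagged = [p for y, p in zip(labels, predictions) if y == 0]
    return sum(flagged) / len(flagged)

# Hypothetical outcomes (1 = reoffended) and risk predictions per group.
labels_a = [0, 0, 0, 0, 1, 1]
preds_a  = [1, 1, 0, 0, 1, 1]   # 2 of 4 non-reoffenders flagged high risk
labels_b = [0, 0, 0, 0, 1, 1]
preds_b  = [1, 0, 0, 0, 1, 1]   # 1 of 4 non-reoffenders flagged high risk

print(false_positive_rate(labels_a, preds_a))  # 0.5
print(false_positive_rate(labels_b, preds_b))  # 0.25
```

Here group A's false positive rate is double group B's even though both groups' predictions get the actual reoffenders right, which is the shape of disparity at the heart of the COMPAS debate.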

How to make machine learning fairer and more ethical

For those working in data science and artificial intelligence with algorithms, there are a few ways to make sure that machine learning is fair and ethical. You can:

  • Examine the algorithms’ ability to influence human behavior and decide whether it is biased. Then, create algorithmic methods that avoid predictive bias.

  • Identify any vulnerabilities or inconsistencies in public data sets, and assess whether there is a privacy violation.

  • Utilize tools that can help prevent and eliminate bias in machine learning.
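The first two steps above amount to auditing a model's outputs group by group so that bias shows up as a numeric gap rather than an anecdote. Here is one minimal way such an audit helper might look, using hypothetical records; real audits would cover more metrics and proper datasets.

```python
# A sketch of a per-group audit: report selection rate and accuracy
# for each group so disparities surface as numbers. Data is made up.

def audit_by_group(records):
    """records: list of (group, true_label, prediction) tuples.
    Returns {group: {'selection_rate': ..., 'accuracy': ...}}."""
    report = {}
    for g in sorted({grp for grp, _, _ in records}):
        rows = [(y, p) for grp, y, p in records if grp == g]
        report[g] = {
            "selection_rate": sum(p for _, p in rows) / len(rows),
            "accuracy": sum(y == p for y, p in rows) / len(rows),
        }
    return report

data = [
    ("A", 1, 1), ("A", 0, 1), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 0, 0), ("B", 1, 1),
]
report = audit_by_group(data)
for group, metrics in report.items():
    print(group, metrics)
```

In this toy run both groups reach the same accuracy, yet group A is selected three times as often as group B, exactly the kind of inconsistency the checklist above asks you to look for.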

You can learn these technical skills with the Ethics in the Age of AI specialization from LearnQuest:


Tools for machine learning fairness

Today, plenty of tools are available at your fingertips to integrate into your organization’s workflow to catch and prevent machine learning malpractice. Here are a few you can check out:

  • IBM’s AI Fairness 360: A Python toolkit of fairness metrics and bias-mitigation algorithms that helps users and researchers detect and reduce discrimination and bias in machine learning models.

  • Google’s What-If Tool: A visualization tool that explores a model’s performance on a data set, assessing against preset definitions of fairness constraints. It supports binary classification, multi-class classification, and regression tasks.

  • Google’s Model Cards and Toolkit: This tool confirms that a given model’s intent matches its use case, and helps users understand the conditions in which their model is safe and appropriate to move forward with. 

  • Microsoft’s Fairlearn: An open-source Python toolkit that assesses and improves fairness in machine learning. With an interactive visualization dashboard and unfairness mitigation algorithms, this tool helps users analyze the trade-offs between fairness and model performance.

  • Deon: An ethics checklist that facilitates responsible data science by evaluating and systematically reviewing applications for potential ethical implications, from early stages of data collection to implementation. 
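The mitigation algorithms these toolkits automate can be surprisingly simple at their core. One classic post-processing idea is per-group thresholding: choose a score cutoff for each group so that selection rates roughly match. The sketch below uses made-up risk scores; note that adjusting thresholds by group is itself an ethically and legally contested choice, which is why the trade-off analysis mentioned above matters.

```python
# A hand-rolled sketch of per-group threshold adjustment, one simple
# post-processing mitigation. Scores and groups are hypothetical.

def selection_rate(scores, threshold):
    """Fraction of scores at or above the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def match_threshold(scores, target_rate):
    """Return the candidate threshold whose selection rate is
    closest to the target rate."""
    candidates = sorted(set(scores))
    return min(candidates,
               key=lambda t: abs(selection_rate(scores, t) - target_rate))

scores_a = [0.9, 0.8, 0.7, 0.4, 0.3]   # group A model scores
scores_b = [0.6, 0.5, 0.4, 0.2, 0.1]   # group B tends to score lower

# With one global cutoff of 0.5, group A is selected far more often.
print(selection_rate(scores_a, 0.5))   # 0.6
print(selection_rate(scores_b, 0.5))   # 0.4

# Pick a group B threshold matching group A's rate at the 0.5 cutoff.
t_b = match_threshold(scores_b, selection_rate(scores_a, 0.5))
print(t_b, selection_rate(scores_b, t_b))  # 0.4 0.6
```

Equalizing selection rates this way can lower overall accuracy, which is precisely the fairness-versus-performance trade-off tools like Fairlearn are built to visualize.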

Fairness in machine learning with Coursera

Contribute to tackling machine learning ethics with the Ethics in the Age of AI specialization from LearnQuest, an award-winning provider of global business and IT technical training for corporations and government agencies. You’ll learn job-ready skills in four months or less.

To learn more about machine learning, consider enrolling in our most popular course Machine Learning, taught by Stanford University professor and Coursera founder Andrew Ng.


Article sources

1. Fair ML Book. “Fairness and Machine Learning: Limitations and Opportunities, https://fairmlbook.org/pdf/fairmlbook.pdf.” Accessed May 2, 2022.

2. ProPublica. “How We Analyzed the COMPAS Recidivism Algorithm, https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.” Accessed May 2, 2022.

3. MIT Technology Review. “Inspecting Algorithms for Bias, https://www.technologyreview.com/2017/06/12/105804/inspecting-algorithms-for-bias.” Accessed May 2, 2022.

