University of Glasgow

Clinical Decision Support Systems - CDSS 4

Instructor: Fani Deligianni

Included with Coursera Plus

Course

Gain insight into a topic and learn the fundamentals

Intermediate level
Some related experience required
8 hours (approximately)
Flexible schedule
Learn at your own pace

What you'll learn

  • Evaluating Clinical Decision Support Systems

  • Bias, Calibration and Fairness in Machine Learning Models

  • Decision Curve Analysis and Human-Centred Clinical Decision Support Systems

  • Privacy concerns in Clinical Decision Support Systems

Details to know

Shareable certificate

Add to your LinkedIn profile

Assessments

5 quizzes

See how employees at top companies are mastering in-demand skills

Build your subject-matter expertise

This course is part of the Informed Clinical Decision Making using Deep Learning Specialization
When you enroll in this course, you'll also be enrolled in this Specialization.
  • Learn new concepts from industry experts
  • Gain a foundational understanding of a subject or tool
  • Develop job-relevant skills with hands-on projects
  • Earn a shareable career certificate

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV

Share it on social media and in your performance review

There are 4 modules in this course

Adopting a machine learning model in a Clinical Decision Support System (CDSS) requires several steps: external validation, bias assessment and calibration, 'fairness' assessment, evaluation of clinical usefulness, the ability to explain the model's decisions, and privacy-aware machine learning models. In this module, we discuss these concepts and provide several examples from state-of-the-art research in the area. External validation and bias assessment have become the norm in clinical prediction models; further work is required to assess and adopt deep learning models under these conditions. On the other hand, 'fairness', human-centred CDSS and the privacy of machine learning models remain areas of active research. The first week covers the difference between reproducibility and generalisability, explores calibration assessment in clinical prediction models, and discusses how different deep learning architectures affect calibration.
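
Calibration can be quantified, for example, with the expected calibration error (ECE), which compares predicted probabilities to observed event rates within probability bins. The sketch below is illustrative only, assuming a binary outcome; the function name and toy data are not from the course materials:

```python
import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Bin predictions by confidence and take the weighted average of the
    |observed event rate - mean predicted probability| gap per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (y_prob > lo) & (y_prob <= hi)
        if mask.any():
            gap = abs(y_true[mask].mean() - y_prob[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Hypothetical outcomes and predicted risks (perfect calibration gives ECE = 0)
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 1])
y_prob = np.array([0.12, 0.23, 0.87, 0.78, 0.65, 0.34, 0.55, 0.88])
print(expected_calibration_error(y_true, y_prob))
```

A lower ECE indicates that predicted risks can be read as trustworthy probabilities, which matters clinically when a risk estimate drives a treatment decision.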

What's included

4 videos · 3 readings · 1 quiz · 1 discussion prompt

Naively, machine learning can be thought of as a way to reach decisions free from prejudice and social bias. However, recent evidence shows that machine learning models learn biases present in historical data and reproduce unfair decisions in similar ways. Detecting biases against subgroups in machine learning models is challenging, not least because these models were never deliberately designed or trained to discriminate. Defining 'fairness' metrics and investigating ways of ensuring that minority groups are not disadvantaged by machine learning models' decisions is an active research area.
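
One commonly discussed 'fairness' metric is demographic parity, which compares the rate of favourable decisions across subgroups. A minimal sketch, with a hypothetical function name and toy data chosen for illustration:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two subgroups.
    A value of 0 means both groups receive favourable decisions equally often."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy binary decisions for two subgroups (0 and 1)
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 3/4 vs 1/4 -> 0.5
```

Demographic parity is only one of several competing definitions (equalised odds and calibration within groups are others), and they generally cannot all be satisfied at once.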

What's included

3 videos · 3 readings · 1 quiz · 1 discussion prompt

Decision curve analysis is used to assess the clinical usefulness of a prediction model by estimating its net benefit, which trades off the benefit of true positives against the harm of false positives at a given threshold probability. Based on this approach, the strategies of 'intervention for all' and 'intervention for none' are compared against the model's net benefit. Decision curve analysis is a human-centred approach to assessing clinical usefulness, since choosing the threshold probability requires experts' opinion. Ethical Artificial Intelligence initiatives indicate that a human-centred approach in clinical decision support systems is required to enable accountability, safety and oversight, while ensuring 'fairness' and transparency.
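
At a threshold probability pt, the net benefit is NB = TP/N − (FP/N) · pt/(1 − pt), where the odds pt/(1 − pt) encode how clinicians weigh the harm of an unnecessary intervention against the benefit of a necessary one. A minimal sketch with toy data (names and values are illustrative):

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit of intervening on patients whose predicted risk exceeds
    `threshold`: NB = TP/N - (FP/N) * (pt / (1 - pt))."""
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    n = len(y_true)
    treat = y_prob >= threshold
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - (fp / n) * (threshold / (1 - threshold))

# 'Intervention for all' corresponds to predicting risk 1.0 for everyone;
# 'intervention for none' always has net benefit 0.
y_true = [1, 0, 1, 1, 0, 0, 0, 1]
y_prob = [0.8, 0.3, 0.6, 0.9, 0.2, 0.4, 0.7, 0.5]
print(net_benefit(y_true, y_prob, 0.5))     # model: 4/8 - (1/8)*1 = 0.375
print(net_benefit(y_true, [1.0] * 8, 0.5))  # treat all: 4/8 - (4/8)*1 = 0.0
```

A decision curve plots net benefit across a range of thresholds; the model is clinically useful where its curve lies above both the 'treat all' and 'treat none' lines.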

What's included

3 videos · 3 readings · 1 quiz · 1 discussion prompt

Deep learning models have a remarkable ability to memorise data even when they do not overfit. In other words, the models themselves can expose information about patients that compromises their privacy. This can result in unintentional data leakage at inference time and also provide opportunities for malicious attacks. We will review common privacy attacks and defences against them. Finally, we will discuss adversarial attacks against deep learning explanations.
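
One simple attack of this kind is loss-threshold membership inference: because deep models tend to memorise training data, unusually low loss on a sample hints that it was in the training set. A toy sketch under that assumption (all names and loss values are hypothetical):

```python
import numpy as np

def loss_threshold_membership_attack(losses, threshold):
    """Guess that samples with per-sample loss below `threshold` were
    members of the training set, exploiting memorisation."""
    return np.asarray(losses) < threshold

# Hypothetical per-sample losses: training members typically score lower.
member_losses     = [0.05, 0.10, 0.02, 0.08]   # seen during training
non_member_losses = [0.90, 1.20, 0.70, 1.10]   # unseen
guesses = loss_threshold_membership_attack(member_losses + non_member_losses, 0.5)
print(guesses)  # True = flagged as a training-set member
```

Defences such as differentially private training aim to shrink exactly this loss gap between members and non-members.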

What's included

3 videos · 3 readings · 2 quizzes · 1 discussion prompt

Instructor

Fani Deligianni
University of Glasgow
5 Courses · 3,865 learners

Offered by

Recommended if you're interested in Machine Learning

Why people choose Coursera for their career

Felipe M.
Learner since 2018
"To be able to take courses at my own pace and rhythm has been an amazing experience. I can learn whenever it fits my schedule and mood."
Jennifer J.
Learner since 2020
"I directly applied the concepts and skills I learned from my courses to an exciting new project at work."
Larry W.
Learner since 2021
"When I need courses on topics that my university doesn't offer, Coursera is one of the best places to go."
Chaitanya A.
"Learning isn't just about being better at your job: it's so much more than that. Coursera allows me to learn without limits."

New to Machine Learning? Start here.

Open new doors with Coursera Plus

Unlimited access to 7,000+ world-class courses, hands-on projects, and job-ready certificate programs - all included in your subscription

Advance your career with an online degree

Earn a degree from world-class universities - 100% online

Join over 3,400 global companies that choose Coursera for Business

Upskill your employees to excel in the digital economy

Frequently asked questions