Master the critical skills needed to secure AI inference endpoints against emerging threats in this comprehensive intermediate-level course. As AI systems become integral to business operations, understanding their unique vulnerabilities is essential for security professionals. Through hands-on labs and real-world scenarios, you'll learn to identify and evaluate AI-specific attack vectors, including prompt injection, model extraction, and data poisoning. You'll design threat models using the STRIDE and MITRE ATLAS frameworks, adapted specifically for machine learning systems; create automated security test suites covering unit tests for input validation, integration tests for end-to-end security, and adversarial robustness testing; and implement these measures within CI/CD pipelines for continuous validation and monitoring. Practical exercises with Python, GitHub Actions, and monitoring tools give you experience securing production AI deployments. The course is ideal for developers, security engineers, and DevOps professionals ready to specialize in the rapidly growing field of AI security.

Secure AI: Threat Model & Test Endpoints

This course is part of multiple programs.


Instructors: Starweaver
What you'll learn
Analyze and evaluate AI inference threat models, identifying attack vectors and vulnerabilities in machine learning systems.
Design and implement comprehensive security test cases for AI systems including unit tests, integration tests, and adversarial robustness testing.
Integrate AI security testing into CI/CD pipelines for continuous security validation and monitoring of production deployments.

December 2025

Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 3 modules in this course
This module introduces learners to the unique security challenges of AI systems, covering attack surfaces specific to machine learning models and inference endpoints. Learners will explore various threat vectors including prompt injection, model extraction, and data poisoning attacks through hands-on analysis and practical examples.
What's included
4 videos, 2 readings, 1 peer review
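The prompt-injection analysis covered in this module can be illustrated with a minimal heuristic input filter. This is a sketch only, not a complete defense: the patterns and function name are hypothetical examples, and production systems typically combine such heuristics with tuned classifiers and allow-listing.

```python
import re

# Illustrative, non-exhaustive patterns that often signal prompt-injection
# attempts (hypothetical examples; regexes alone are not a robust defense).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
]

def looks_like_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches any known injection pattern."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)
```

A request handler would call this check before forwarding input to the model, rejecting or flagging suspicious requests for review.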
This module focuses on designing and implementing comprehensive security test cases for AI endpoints. Learners will create unit tests for input validation, integration tests for end-to-end security, and adversarial tests to evaluate model robustness against real-world attacks.
What's included
3 videos, 1 reading, 1 peer review
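The unit tests for input validation described in this module might look like the following sketch. The validator, its error messages, and the length limit are assumptions made for illustration; the tests are written so they can run under pytest or plain Python.

```python
MAX_PROMPT_LENGTH = 2048  # assumed limit, for illustration only

def validate_inference_request(payload: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the request is acceptable."""
    errors = []
    prompt = payload.get("prompt")
    if not isinstance(prompt, str):
        errors.append("prompt must be a string")
    elif not prompt.strip():
        errors.append("prompt must not be empty")
    elif len(prompt) > MAX_PROMPT_LENGTH:
        errors.append(f"prompt exceeds {MAX_PROMPT_LENGTH} characters")
    return errors

# Unit tests: each asserts one validation rule in isolation.
def test_rejects_missing_prompt():
    assert validate_inference_request({}) == ["prompt must be a string"]

def test_rejects_oversized_prompt():
    assert validate_inference_request({"prompt": "x" * 5000}) == [
        f"prompt exceeds {MAX_PROMPT_LENGTH} characters"
    ]

def test_accepts_normal_prompt():
    assert validate_inference_request({"prompt": "Summarize this text."}) == []
```

Integration tests would then exercise the same rules through the deployed endpoint end to end, rather than calling the validator directly.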
This module covers the integration of AI security testing into CI/CD pipelines. Learners will implement automated security checks, set up monitoring systems, and create feedback loops for continuous security improvement in production environments.
What's included
4 videos, 1 reading, 1 assignment, 2 peer reviews
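The CI/CD integration this module covers usually boils down to a gate: collect the outcomes of the automated security checks and fail the pipeline if any check failed. Below is a minimal sketch of such a gate; the check names and the surrounding script are hypothetical, not part of any specific tool.

```python
# Illustrative CI security gate (hypothetical check names). A GitHub Actions
# step could run a script built around this function; a nonzero exit code
# fails the job and blocks the deployment.

def security_gate(results: dict[str, bool]) -> int:
    """Evaluate named security checks; return a process exit code.

    0 means every check passed; 1 fails the CI job.
    """
    failed = [name for name, passed in results.items() if not passed]
    for name in failed:
        print(f"FAIL: {name}")
    return 1 if failed else 0
```

A wrapper script would gather `results` from earlier pipeline steps (for example, test-suite outcomes and a dependency audit) and call `sys.exit(security_gate(results))` so the workflow keys off the exit code.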
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Offered by
¹ Some assignments in this course are AI-graded. For these assignments, your data will be used in accordance with Coursera's Privacy Notice.

