Identify, analyze, and defend against the security vulnerabilities that arise when Large Language Models (LLMs) are integrated into production applications. This course begins with how LLMs function in applications—tokenization, next-token prediction, and the architectural patterns that determine attack surface—then surveys real-world application types including Application Programming Interface (API)-based services, embedded-model deployments, and multi-model orchestration pipelines. You will examine each architecture's distinct security profile and the trade-offs that shape deployment decisions.

LLM Security and Vulnerabilities

LLM Security and Vulnerabilities
This course is part of AI Tooling Specialization

Instructor: Alfredo Deza
Access provided by INEFOP - Instituto Nacional de Empleo y FormaciĂłn Profesional de Uruguay
Recommended experience
What you'll learn
Analyze how API-based, embedded, and multi-model application architectures create distinct LLM vulnerability surfaces
Apply defense patterns against prompt injection, insecure output handling, model theft, and sensitive information disclosure
Evaluate plugin designs and tool integrations against permission boundary and excessive agency risks
Skills you'll gain
Tools you'll learn
Details to know

Add to your LinkedIn profile
1 assignment
April 2026
See how employees at top companies are mastering in-demand skills

Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 3 modules in this course
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Instructor

Offered by
Why people choose Coursera for their career

Felipe M.

Jennifer J.

Larry W.

Chaitanya A.
Explore more from Computer Science

Pragmatic AI Labs

Pragmatic AI Labs

Pragmatic AI Labs

Pragmatic AI Labs

