Duke University

GenAI and LLMs on AWS

This course teaches you how to deploy and manage large language models (LLMs) in production using AWS services such as Amazon Bedrock. By the end of the course, you will know how to:

- Choose the right LLM architecture and model for your application.
- Optimize the cost, performance, and scalability of LLMs on AWS using auto-scaling groups, spot instances, and container orchestration.
- Monitor and log metrics from your LLM to detect issues and continuously improve quality.
- Build reliable and secure pipelines to train, deploy, and update models using AWS services.
- Comply with regulations when deploying LLMs in production through techniques such as differential privacy and controlled rollouts.

This course is unique in its focus on real-world operationalization of large language models on AWS. You will work through hands-on labs to put concepts into practice as you learn. Whether you are a machine learning engineer, data scientist, or technical leader, you will gain practical skills to run LLMs in production.
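As a taste of what deploying through Amazon Bedrock looks like, here is a minimal sketch of calling a Bedrock-hosted model with boto3. The model ID, region, and request schema below are assumptions for illustration; check the Bedrock model catalog for the models and request formats actually available in your account and region.

```python
import json


def build_request(prompt: str, max_tokens: int = 256) -> str:
    """Build a JSON request body in the Anthropic messages format
    used by Claude models on Bedrock (schema assumed for this sketch)."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })


def invoke(prompt: str) -> str:
    """Send the prompt to a Bedrock model.

    Requires boto3 and configured AWS credentials; the model ID
    here is an example, not a recommendation."""
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        body=build_request(prompt),
    )
    # Bedrock returns the body as a streaming payload; parse the JSON text.
    return json.loads(response["body"].read())["content"][0]["text"]
```

In practice the course layers the production concerns listed above (auto-scaling, monitoring, secure pipelines) around calls like this one.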

Skills: Serverless Computing, Model Deployment
Beginner · Course · 46 hours

Featured reviews

ND

5.0
Reviewed Aug 21, 2024

Great learning resources that will be useful long after completing the course, concise presentations, and clear explanations of all topics

All reviews

Showing: 5 of 5

Nicole D
5.0
Reviewed Aug 22, 2024
Javier Mira Tuñón
5.0
Reviewed Jul 9, 2024
CG - DIAZ SANTOS JONATHAN
5.0
Reviewed Dec 7, 2024
Huy Nguyễn
5.0
Reviewed Nov 5, 2025
Danilo Del Fio
1.0
Reviewed May 5, 2025