Advance your PyTorch skills by building sophisticated deep learning models and preparing them for deployment. You’ll design custom architectures that go beyond Sequential models, exploring Siamese Networks, ResNet, and DenseNet to understand how modern systems handle complex data.


PyTorch: Advanced Architectures and Deployment
This course is part of the PyTorch for Deep Learning Professional Certificate

Instructor: Laurence Moroney
What you'll learn
Design and implement advanced architectures in PyTorch.
Apply advanced techniques in vision, language, and generative modeling—including Transformers and diffusion models.
Prepare, compress, and deploy models for real-world use.
Details to know
- Add to your LinkedIn profile
- 8 assignments
- October 2025

Build your Software Development expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate from DeepLearning.AI

There are 4 modules in this course
This module introduces custom architectures that go beyond Sequential models, showing how PyTorch’s dynamic graphs support multi-input/multi-output design, parameter sharing, conditional execution, and dynamic module creation. You’ll build Siamese Networks, ResNet, and DenseNet to see how architectural choices solve real challenges like similarity comparison, vanishing gradients, and feature reuse.
What's included
5 videos · 3 readings · 2 assignments · 1 programming assignment · 3 ungraded labs
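To preview the parameter-sharing pattern this module builds toward, here is a minimal Siamese-style sketch in plain PyTorch; the layer sizes and the cosine-similarity head are illustrative assumptions, not the course's exact model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A minimal Siamese-style module: both inputs pass through the same
# (parameter-shared) encoder, and forward() returns a similarity score.
class TinySiamese(nn.Module):
    def __init__(self, in_features: int = 128, embed_dim: int = 32):
        super().__init__()
        # One encoder reused for both branches -> shared parameters.
        self.encoder = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        # Multi-input forward: the dynamic graph lets forward() take any
        # signature, not just a single tensor.
        z1 = self.encoder(x1)
        z2 = self.encoder(x2)
        return F.cosine_similarity(z1, z2, dim=-1)

model = TinySiamese()
a, b = torch.randn(4, 128), torch.randn(4, 128)
print(model(a, b).shape)  # torch.Size([4])
```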
This module explores specialized vision approaches in PyTorch, starting with how receptive fields grow in CNNs and moving into interpretability tools like saliency maps and Grad-CAM to reveal what drives model predictions. You’ll then dive into generative models, using diffusion techniques with Hugging Face’s diffusers library and Stable Diffusion to create images while experimenting with parameters that shape the output.
What's included
5 videos · 1 reading · 2 assignments · 1 programming assignment · 3 ungraded labs
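As a small taste of the interpretability tools mentioned above, here is a vanilla saliency-map sketch: the gradient of the top class score with respect to the input pixels. The tiny CNN is a placeholder assumption standing in for whatever classifier the labs use.

```python
import torch
import torch.nn as nn

# Stand-in classifier; any pretrained CNN could be used instead.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# Ask autograd for gradients with respect to the input image itself.
image = torch.randn(1, 3, 64, 64, requires_grad=True)
scores = model(image)
scores[0, scores.argmax()].backward()  # backprop the top class score

# Saliency map = largest absolute gradient across colour channels.
saliency = image.grad.abs().max(dim=1).values
print(saliency.shape)  # torch.Size([1, 64, 64])
```

Grad-CAM follows the same idea but uses gradients flowing into a convolutional feature map rather than the raw pixels.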
This module demystifies transformer architectures by showing how modern NLP models are built from familiar PyTorch components like linear layers, embeddings, and attention. You’ll explore encoder-only, decoder-only, and encoder-decoder designs step by step, learning how attention, positional encoding, and cross-attention make these models so powerful for tasks from classification to translation.
What's included
5 videos · 1 reading · 2 assignments · 1 programming assignment · 3 ungraded labs
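To illustrate how those familiar components fit together, here is a rough sketch of a single encoder-style block; the dimensions are arbitrary assumptions and positional encoding is left out for brevity.

```python
import torch
import torch.nn as nn

# One encoder-style transformer block built from standard PyTorch pieces:
# an embedding table, multi-head self-attention, and position-wise linears.
class MiniEncoderBlock(nn.Module):
    def __init__(self, vocab_size: int = 1000, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)
        # Self-attention: queries, keys, and values all come from x.
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)    # residual connection + layer norm
        x = self.norm2(x + self.ff(x))  # feed-forward sublayer
        return x

block = MiniEncoderBlock()
tokens = torch.randint(0, 1000, (2, 16))  # 2 sequences of 16 token ids
print(block(tokens).shape)                # torch.Size([2, 16, 64])
```

A decoder-only block adds a causal mask to the attention call, and an encoder-decoder model adds a cross-attention sublayer whose keys and values come from the encoder output.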
This module bridges the gap between training models and deploying them in the real world, covering how to save, track, and manage experiments with PyTorch serialization and MLflow. You’ll then make models portable with ONNX and optimize them for production using pruning and quantization techniques that shrink model size and boost speed with minimal loss in accuracy.
What's included
6 videos · 3 readings · 2 assignments · 1 programming assignment · 4 ungraded labs
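Here is a minimal sketch of those deployment steps on a toy model: checkpointing with PyTorch serialization, exporting to ONNX, and dynamic quantization. The file names and layer sizes are placeholders, and the ONNX step assumes the onnx toolchain is installed.

```python
import torch
import torch.nn as nn

# Toy model standing in for a trained network.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

# 1) Serialize the learned weights (a standard PyTorch checkpoint).
torch.save(model.state_dict(), "model.pt")

# 2) Export a portable ONNX graph, traced with an example input.
example = torch.randn(1, 16)
torch.onnx.export(model, example, "model.onnx")

# 3) Dynamic quantization: store Linear weights as int8 for a smaller,
#    faster CPU model.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized(example).shape)  # torch.Size([1, 4])
```

MLflow experiment tracking and pruning slot into the same workflow before the export step.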
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.