This course covers practical algorithms and the theory of machine learning from a variety of perspectives. Topics include supervised learning (generative and discriminative learning, parametric and non-parametric methods, deep neural networks, support vector machines) and unsupervised learning (clustering, dimensionality reduction, kernel methods). The course will also discuss recent applications of machine learning, such as computer vision, data mining, natural language processing, speech recognition, and robotics. Students will implement selected machine learning algorithms in Python and PyTorch.



Statistical Learning for Engineering Part 2

Instructor: Qurat-ul-Ain Azim
Skills you'll gain
- Artificial Neural Networks
- Machine Learning
- Reinforcement Learning
- Applied Machine Learning
- Classification And Regression Tree (CART)
- Statistical Machine Learning
- Machine Learning Algorithms
- Supervised Learning
- Dimensionality Reduction
- Deep Learning
- Machine Learning Software
- Random Forest Algorithm
- Unsupervised Learning
- Generative Model Architectures
- Artificial Intelligence and Machine Learning (AI/ML)
- PyTorch (Machine Learning Library)
Details to know

6 assignments
August 2025

There are 7 modules in this course
This week covers key techniques in machine learning, beginning with the kernel trick, which lets models capture nonlinear structure without explicitly computing high-dimensional feature maps. We will also explore decision trees for both regression and classification tasks, learning to formulate Gini impurity and entropy as measures of impurity within tree splits. Practical exercises focus on tuning tree depth, an essential step to balance model accuracy and prevent overfitting. Additionally, we will introduce ensemble models, demonstrating how combining multiple trees can improve predictive power and robustness. These exercises will give you hands-on experience optimizing decision trees and ensemble methods.
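As a preview of the split-quality measures named above, here is a minimal NumPy sketch of Gini impurity and entropy. The function names `gini` and `entropy` are illustrative, not from the course materials:

```python
import numpy as np

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    """Entropy in bits: -sum(p * log2(p)) over class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# A pure node has impurity 0; a 50/50 binary split is maximally impure.
pure = np.array([1, 1, 1, 1])
mixed = np.array([0, 0, 1, 1])
print(gini(pure))      # 0.0
print(gini(mixed))     # 0.5
print(entropy(mixed))  # 1.0
```

A tree-growing algorithm picks the split that most reduces one of these measures, weighted by child-node sizes.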
What's included
4 videos, 7 readings, 2 assignments
This week’s module explores foundational concepts in classification by comparing discriminative and generative models. You will analyze the mathematical theory behind generative models, gaining insight into how these models capture the underlying data distribution to make predictions. Key focus areas include formulating the Gaussian Discriminant Analysis (GDA) model and deriving mathematical expressions for the Naive Bayes classifier. Through detailed derivations and examples, you will understand how each model functions and the types of data it best serves. By the end of this module, you will be able to apply both GDA and Naive Bayes, choosing the appropriate model based on data characteristics and classification requirements.
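To make the generative idea concrete, here is a minimal NumPy sketch of a Gaussian Naive Bayes classifier: it estimates a per-class prior and per-feature Gaussian, then predicts the class with the highest log-posterior. The helper names (`fit_gnb`, `predict_gnb`) are illustrative assumptions, not course-provided code:

```python
import numpy as np

def fit_gnb(X, y):
    """Estimate per-class priors, feature means, and feature variances."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X),        # class prior P(y = c)
                     Xc.mean(axis=0),         # per-feature mean
                     Xc.var(axis=0) + 1e-9)   # per-feature variance (smoothed)
    return params

def predict_gnb(params, x):
    """Pick the class maximizing log prior + independent Gaussian log-likelihoods."""
    def score(c):
        prior, mu, var = params[c]
        return np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var)
                                            + (x - mu) ** 2 / var)
    return max(params, key=score)

# Two well-separated 2-D clusters.
X = np.array([[0.0, 0.0], [0.1, -0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
params = fit_gnb(X, y)
print(predict_gnb(params, np.array([0.05, 0.0])))  # 0
print(predict_gnb(params, np.array([5.0, 5.0])))   # 1
```

GDA differs in that it fits a full (shared) covariance matrix rather than assuming feature independence.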
What's included
2 videos, 3 readings, 2 assignments
This week’s module introduces neural networks, starting with how to implement linear and logistic regression models. You will explore how neural networks extend beyond linear boundaries to represent complex nonlinear relationships, making them highly adaptable for various data types. Key topics this week include conducting a forward pass through a neural network to understand how data flows and predictions are generated. The week also introduces the essential concept of backpropagation, the mechanism by which neural networks learn from errors to adjust weights and improve accuracy. Hands-on exercises in Python will allow you to implement forward and backward passes, solidifying your understanding of neural network operations and preparing you for more advanced deep learning applications.
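The forward and backward passes described above can be sketched in a few lines of NumPy for a one-hidden-layer network with sigmoid activations and squared-error loss. This is an illustrative sketch (the shapes, learning rate, and function names are assumptions, not the course's assignments):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Forward pass: input -> hidden (sigmoid) -> output (sigmoid)."""
    h = sigmoid(W1 @ x + b1)
    y_hat = sigmoid(W2 @ h + b2)
    return h, y_hat

def backward(x, y, W1, b1, W2, b2, lr=0.5):
    """One gradient step for the loss L = 0.5 * (y_hat - y)^2."""
    h, y_hat = forward(x, W1, b1, W2, b2)
    d_out = (y_hat - y) * y_hat * (1 - y_hat)   # dL/d(output pre-activation)
    d_hid = (W2.T @ d_out) * h * (1 - h)        # chain rule through hidden layer
    W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
    W1 -= lr * np.outer(d_hid, x); b1 -= lr * d_hid

# Training on a single example should steadily reduce the loss.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
x, y = np.array([1.0, -1.0]), np.array([1.0])
loss_before = float(0.5 * (forward(x, W1, b1, W2, b2)[1] - y) ** 2)
for _ in range(100):
    backward(x, y, W1, b1, W2, b2)
loss_after = float(0.5 * (forward(x, W1, b1, W2, b2)[1] - y) ** 2)
```

Each `backward` call is exactly the chain rule applied layer by layer, which is all backpropagation is.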
What's included
1 video, 3 readings, 1 assignment
This week’s module focuses on deep neural networks (DNNs) and their practical applications in machine learning. We will begin by describing the structure and functionality of a deep neural network, exploring how multiple layers enable the model to learn complex patterns. The module includes hands-on exercises to implement full forward and backward passes on DNNs, reinforcing the process of training and error correction. We will also analyze Convolutional Neural Networks (CNNs), understanding their role in image processing and feature extraction. By the end of the module, you will gain proficiency in implementing and training neural networks using PyTorch, preparing you to work with deep learning models in real-world applications.
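To illustrate the feature extraction that CNNs perform, here is a minimal NumPy sketch of the valid-mode 2-D convolution (strictly, cross-correlation) at the heart of a convolutional layer; in the course itself this operation is provided by PyTorch's `nn.Conv2d`. The example kernel and image are assumptions for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds where intensity changes left to right.
image = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]])
kernel = np.array([[-1., 1.],
                   [-1., 1.]])
print(conv2d(image, kernel))  # nonzero only at the 0 -> 1 boundary
```

A CNN learns many such kernels; stacking convolution layers lets later layers detect patterns built from earlier features.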
What's included
2 videos, 3 readings
This week’s module explores advanced clustering and estimation techniques, starting with expectation maximization (EM), a powerful algorithm used for parameter estimation in statistical models. You will formulate the theoretical foundations of k-means clustering, learning how it partitions data into distinct groups based on similarity. We also cover Gaussian mixture models (GMMs), explaining how they model data distributions using a mixture of Gaussian distributions. Additionally, you will derive the convergence properties of the EM algorithm, understanding its behavior and how it iteratively improves estimates. Through practical exercises, you will gain experience implementing these algorithms, which will allow you to apply clustering and estimation techniques to complex datasets in machine learning tasks.
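The k-means partitioning described above can be sketched as Lloyd's algorithm: alternate between assigning each point to its nearest centroid and recomputing centroids as cluster means. This NumPy sketch uses assumed names and a toy dataset, not the course's exercises:

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    """Lloyd's algorithm: alternate assignment and centroid-update steps."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(n_iter):
        # Assignment step: label each point with its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

X = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
centers, labels = kmeans(X, 2)  # the two well-separated pairs get separate labels
```

EM for a Gaussian mixture follows the same alternating pattern, but with soft (probabilistic) assignments and per-component covariances in place of hard labels and means alone.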
What's included
2 videos, 5 readings
This week, we introduce dimensionality reduction techniques, which are essential for simplifying complex data while preserving key features. You will learn to mathematically formulate these techniques using eigenvalue decomposition, gaining insight into how principal components are derived. We will compare three key methods—Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Factor Analysis—highlighting their differences and applications. You will also explore spectral clustering, a powerful method for grouping data based on graph theory. The concept of autoencoders will be demonstrated as a deep learning approach for reducing dimensionality and learning efficient data representations. Hands-on coding exercises will let you implement these techniques, providing practical skills for tackling high-dimensional datasets in machine learning and data analysis.
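The eigenvalue-decomposition view of PCA mentioned above fits in a few lines of NumPy: center the data, eigendecompose the sample covariance, and project onto the top eigenvectors. The function name `pca` and the toy data are assumptions for illustration:

```python
import numpy as np

def pca(X, n_components):
    """PCA via eigendecomposition of the sample covariance matrix."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = Xc.T @ Xc / (len(X) - 1)          # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]       # sort by variance, descending
    components = eigvecs[:, order[:n_components]]
    return Xc @ components, eigvals[order]

# Data lying nearly on the line y = x: one component captures almost everything.
X = np.array([[1., 1.], [2., 2.1], [3., 2.9], [4., 4.]])
Z, variances = pca(X, 1)
```

The eigenvalues measure the variance captured along each principal direction, which is how one chooses how many components to keep.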
What's included
1 video, 4 readings
In this final week of the course, we introduce Markov Decision Processes (MDPs), a foundational framework for decision-making in uncertain environments. You will learn to use MDPs to model problems where outcomes depend on both current states and actions. This week’s module will guide you through developing a mathematical framework to describe MDPs, including key components such as states, actions, and rewards. You will also learn how to implement learning processes using techniques such as value iteration and policy iteration, which are crucial for finding optimal decision strategies. Practical exercises will help you apply these concepts to tackle real-world problems in reinforcement learning and optimal decision-making.
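Value iteration, one of the techniques named above, repeatedly applies the Bellman optimality update V(s) = max_a [R(s, a) + γ·V(s')] until the values stop changing. Here is a minimal sketch on an assumed two-state deterministic MDP (the states, rewards, and names are illustrative, not from the course):

```python
# Tiny MDP: states 0 and 1; in each state, action 0 = "stay", action 1 = "move".
# P[s][a] gives the (deterministic) next state, R[s][a] the immediate reward.
# State 1 is absorbing and pays reward 2 forever.
P = {0: {0: 0, 1: 1}, 1: {0: 1, 1: 1}}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 2.0, 1: 2.0}}
gamma = 0.9  # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality update until values converge."""
    V = {s: 0.0 for s in P}
    while True:
        V_new = {s: max(R[s][a] + gamma * V[P[s][a]] for a in P[s]) for s in P}
        if max(abs(V_new[s] - V[s]) for s in P) < tol:
            return V_new
        V = V_new

V = value_iteration(P, R, gamma)
# Analytically: V(1) = 2 / (1 - 0.9) = 20, and V(0) = 1 + 0.9 * 20 = 19.
```

Policy iteration reaches the same fixed point by alternating full policy evaluation with greedy policy improvement.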
What's included
3 readings, 1 assignment