This course offers a comprehensive exploration of machine learning and deep learning using PyTorch and Scikit-Learn. It provides clear explanations, visualizations, and practical examples to help learners build and deploy machine learning models. Ideal for Python developers, it covers the latest trends in deep learning, including GANs, reinforcement learning, and NLP with transformers.

Machine Learning with PyTorch and Scikit-Learn

Instructor: Packt - Course Instructors
Access provided by IEM UEM Group
Recommended experience
Intermediate level
Developers and data scientists with a solid understanding of Python basics, calculus, and linear algebra who want to master PyTorch and Scikit-Learn.
What you'll learn
Comprehensive coverage of machine learning theory and application.
Modern content on PyTorch, transformers, and graph neural networks.
Intuitive explanations, practical examples, and labs for hands-on learning.
Skills you'll gain
- Data Preprocessing
- Transfer Learning
- Applied Machine Learning
- Feature Engineering
- Model Evaluation
- Artificial Neural Networks
- Natural Language Processing
- Machine Learning Methods
- Machine Learning Algorithms
- Data Processing
- Dimensionality Reduction
- Machine Learning
- Reinforcement Learning
- Deep Learning
- Artificial Intelligence and Machine Learning (AI/ML)
Details to know

Add to your LinkedIn profile
18 assignments
September 2025
There are 19 modules in this course
In this section, we explore the foundational concepts of machine learning, focusing on how algorithms can transform data into knowledge. We delve into the practical applications of supervised and unsupervised learning, equipping you with the skills to implement these techniques using Python tools for effective data analysis and prediction.
What's included
2 videos • 5 readings • 1 assignment
2 videos• Total 2 minutes
- Course Overview• 1 minute
- Module Overview• 1 minute
5 readings• Total 50 minutes
- Introduction• 10 minutes
- Solving Interactive Problems with Reinforcement Learning• 10 minutes
- Introduction to the Basic Terminology and Notations• 10 minutes
- A Roadmap for Building Machine Learning Systems• 10 minutes
- Using Python for Machine Learning• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
In this section, we implement the perceptron algorithm in Python to classify flower species in the Iris dataset, enhancing our understanding of machine learning classification. We also explore adaptive linear neurons to optimize models, using tools like pandas, NumPy, and Matplotlib for data processing and visualization.
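As a rough illustration (not the course's own lab code), the perceptron learning rule covered here can be sketched in a few lines of NumPy; the toy 2-D dataset below is a hypothetical stand-in for two Iris classes:

```python
import numpy as np

def train_perceptron(X, y, eta=0.1, n_epochs=10):
    """Minimal perceptron; labels y must be in {-1, 1}. Returns (weights, bias)."""
    rng = np.random.default_rng(1)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(n_epochs):
        for xi, target in zip(X, y):
            # Perceptron rule: weights change only when the sample is misclassified
            update = eta * (target - np.where(xi @ w + b >= 0.0, 1, -1))
            w += update * xi
            b += update
    return w, b

# Linearly separable toy data (stand-in for two flower classes)
X = np.array([[1.0, 1.0], [2.0, 1.5], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
preds = np.where(X @ w + b >= 0.0, 1, -1)
```

Because the data is linearly separable, the perceptron convergence theorem guarantees the loop stops updating after a handful of mistakes.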
What's included
1 video • 7 readings • 1 assignment • 1 programming assignment • 1 ungraded lab
1 video• Total 1 minute
- Overview• 1 minute
7 readings• Total 70 minutes
- Introduction• 10 minutes
- The Perceptron Learning Rule• 10 minutes
- Implementing a Perceptron Learning Algorithm in Python• 10 minutes
- Training a Perceptron Model on the Iris Dataset• 10 minutes
- Adaptive Linear Neurons and the Convergence of Learning• 10 minutes
- Implementing Adaline in Python• 10 minutes
- Improving Gradient Descent Through Feature Scaling• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
1 programming assignment• Total 20 minutes
- Perceptron Lab Autograder• 20 minutes
1 ungraded lab• Total 60 minutes
- Implementing a Perceptron from Scratch in Python• 60 minutes
In this section, we explore various machine learning classifiers using scikit-learn's Python API, focusing on their implementation and practical applications. We analyze the strengths and weaknesses of classifiers with both linear and nonlinear decision boundaries to enhance our understanding of solving real-world classification problems efficiently.
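As a minimal sketch of the scikit-learn workflow this section builds on (standardize, fit, score), here is a regularized logistic regression on the Iris dataset; the exact hyperparameters are illustrative, not the course's:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

# Pipeline: scale the features, then fit a logistic regression classifier
clf = make_pipeline(StandardScaler(), LogisticRegression(C=100.0, random_state=1))
clf.fit(X_train, y_train)
test_acc = clf.score(X_test, y_test)
```

Swapping `LogisticRegression` for `SVC`, `DecisionTreeClassifier`, or `KNeighborsClassifier` changes only one line, which is the point of the shared estimator API.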
What's included
1 video • 11 readings • 1 assignment • 1 programming assignment • 1 ungraded lab
1 video• Total 1 minute
- Overview• 1 minute
11 readings• Total 110 minutes
- Introduction• 10 minutes
- Modeling Class Probabilities Via Logistic Regression• 10 minutes
- Learning the Model Weights via the Logistic Loss Function• 10 minutes
- Converting an Adaline Implementation Into an Algorithm for Logistic Regression• 10 minutes
- Training a Logistic Regression Model with Scikit-Learn• 10 minutes
- Tackling Overfitting via Regularization• 10 minutes
- Maximum Margin Classification with Support Vector Machines• 10 minutes
- Solving Nonlinear Problems Using a Kernel SVM• 10 minutes
- Decision Tree Learning• 10 minutes
- Building a Decision Tree• 10 minutes
- K-Nearest Neighbours: A Lazy Learning Algorithm• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
1 programming assignment• Total 180 minutes
- Decision Tree Lab• 180 minutes
1 ungraded lab• Total 60 minutes
- Decision Tree Lab• 60 minutes
In this section, we focus on data preprocessing techniques using pandas 2.x to enhance machine learning model performance. We address missing data handling and feature selection to optimize model accuracy and efficiency.
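A small hedged sketch of the two preprocessing steps named above, using a hypothetical toy DataFrame: mean imputation for a missing numeric value and one-hot encoding for a nominal feature:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({
    "size":  [10.0, np.nan, 14.0, 12.0],        # numeric column with a missing value
    "color": ["red", "blue", "red", "green"],   # nominal feature, no order implied
})

# Replace the missing numeric value with the column mean
imputer = SimpleImputer(strategy="mean")
size_filled = imputer.fit_transform(df[["size"]])

# One-hot encode the nominal feature: one binary column per category
color_onehot = pd.get_dummies(df["color"])
```

One-hot encoding avoids imposing a fake ordering (e.g. blue < green < red) that an integer encoding would introduce.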
What's included
1 video • 9 readings • 1 assignment • 1 programming assignment • 1 ungraded lab
1 video• Total 1 minute
- Overview• 1 minute
9 readings• Total 90 minutes
- Introduction• 10 minutes
- Understanding the scikit-learn Estimator API• 10 minutes
- Performing One-Hot Encoding on Nominal Features• 10 minutes
- Partitioning a Dataset Into Separate Training and Test Datasets• 10 minutes
- Bringing Features Onto the Same Scale• 10 minutes
- Selecting Meaningful Features• 10 minutes
- Sparse Solutions With L1 Regularization• 10 minutes
- Sequential Feature Selection Algorithms• 10 minutes
- Assessing Feature Importance with Random Forests• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
1 programming assignment• Total 180 minutes
- Graded Assignment: Random Forests for Feature Importance• 180 minutes
1 ungraded lab• Total 60 minutes
- Hands-On: Random Forests for Feature Importance• 60 minutes
In this section, we explore dimensionality reduction techniques such as PCA and LDA to simplify large datasets while preserving essential information. We also examine t-SNE for effective data visualization, enhancing our ability to manage and interpret complex data efficiently.
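As an illustrative sketch (parameters are assumptions, not the course's), PCA in scikit-learn compresses the four standardized Iris features down to two principal components while retaining most of the variance:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)
X_std = StandardScaler().fit_transform(X)   # PCA is scale-sensitive, so standardize first

# Project the 4-D data onto its top 2 principal components
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X_std)
explained = pca.explained_variance_ratio_.sum()
```

`explained_variance_ratio_` quantifies how much information the compression keeps; for Iris the first two components cover well over 90% of the variance.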
What's included
1 video • 7 readings • 1 assignment
1 video• Total 1 minute
- Overview• 1 minute
7 readings• Total 70 minutes
- Introduction• 10 minutes
- Extracting the Principal Components Step by Step• 10 minutes
- Feature Transformation• 10 minutes
- Principal Component Analysis in scikit-learn• 10 minutes
- Supervised Data Compression via Linear Discriminant Analysis• 10 minutes
- Selecting Linear Discriminants for the New Feature Subspace• 10 minutes
- Nonlinear Dimensionality Reduction and Visualization• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
In this section, we explore best practices for evaluating and refining machine learning models, focusing on techniques like K-Fold Cross-Validation and hyperparameter tuning to enhance model performance. We also diagnose bias and variance issues using learning curves, ensuring models are both accurate and reliable in real-world applications.
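A hedged sketch of the tuning workflow described above, combining stratified k-fold cross-validation with a grid search; the SVM and the small grid are illustrative choices:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
pipe = make_pipeline(StandardScaler(), SVC(random_state=1))

# Stratified 5-fold CV over a small grid of SVM hyperparameters
param_grid = {"svc__C": [0.1, 1.0, 10.0], "svc__gamma": ["scale", 0.1]}
gs = GridSearchCV(pipe, param_grid, cv=StratifiedKFold(n_splits=5),
                  scoring="accuracy")
gs.fit(X, y)
best_score = gs.best_score_   # mean CV accuracy of the best parameter combination
```

Putting the scaler inside the pipeline matters: it is refit on each training fold, so no information leaks from the validation folds into the scaling statistics.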
What's included
1 video • 8 readings • 1 assignment • 1 programming assignment • 1 ungraded lab
1 video• Total 1 minute
- Overview• 1 minute
8 readings• Total 80 minutes
- Introduction• 10 minutes
- Using K-Fold Cross-Validation to Assess Model Performance• 10 minutes
- Estimating Generalization Performance• 10 minutes
- Addressing Over- And Underfitting With Validation Curves• 10 minutes
- More Resource-Efficient Hyperparameter Search With Successive Halving• 10 minutes
- Looking at Different Performance Evaluation Metrics• 10 minutes
- Plotting a Receiver Operating Characteristic• 10 minutes
- Dealing With Class Imbalance• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
1 programming assignment• Total 30 minutes
- Performance Evaluation Metrics graded assignment• 30 minutes
1 ungraded lab• Total 35 minutes
- Hands-on: Performance Evaluation Metrics lab• 35 minutes
In this section, we explore ensemble learning techniques by implementing majority voting, bagging, and boosting to enhance model accuracy and robustness. We focus on practical applications, such as reducing overfitting and improving weak learner performance, to build more reliable predictive models.
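As a minimal sketch of the majority-voting idea (base learners and their settings are illustrative assumptions), three heterogeneous classifiers can be combined with scikit-learn's `VotingClassifier`:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Hard voting: each base classifier casts one vote per sample
ensemble = VotingClassifier(estimators=[
    ("lr", LogisticRegression(max_iter=1000, random_state=1)),
    ("dt", DecisionTreeClassifier(max_depth=3, random_state=1)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
], voting="hard")
scores = cross_val_score(ensemble, X, y, cv=5, scoring="accuracy")
mean_acc = scores.mean()
```

The ensemble tends to beat its weakest member because the base models make partly uncorrelated errors, the same intuition that drives bagging and boosting.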
What's included
1 video • 9 readings • 1 assignment
1 video• Total 1 minute
- Overview• 1 minute
9 readings• Total 90 minutes
- Introduction• 10 minutes
- Combining Classifiers Via Majority Vote• 10 minutes
- Using the Majority Voting Principle to Make Predictions• 10 minutes
- Evaluating and Tuning the Ensemble Classifier• 10 minutes
- Bagging: Building an Ensemble of Classifiers from Bootstrap Samples• 10 minutes
- Leveraging Weak Learners Via Adaptive Boosting• 10 minutes
- Applying AdaBoost Using scikit-learn• 10 minutes
- Gradient Boosting: Training an Ensemble Based on Loss Gradients• 10 minutes
- Explaining the Gradient Boosting Algorithm for Classification• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
In this section, we apply machine learning to sentiment analysis by preparing IMDb movie review data, transforming text into feature vectors, and training a logistic regression model for classification. We also explore out-of-core learning techniques to handle large datasets efficiently, enhancing our ability to derive insights from extensive text data collections.
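A hedged sketch of the text pipeline above, using a tiny made-up corpus in place of the IMDb reviews: tf-idf turns raw text into weighted bag-of-words vectors, which a logistic regression classifier then fits:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny stand-in corpus (the course uses the IMDb movie review dataset)
docs = ["a wonderful, moving film", "loved this movie",
        "great acting and story", "a terrible waste of time",
        "boring and predictable", "awful film, truly bad"]
labels = [1, 1, 1, 0, 0, 0]   # 1 = positive, 0 = negative

# tf-idf: bag-of-words counts, down-weighted for words common across documents
vect = TfidfVectorizer()
X = vect.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)
train_acc = clf.score(X, labels)
```

On real review data the same pipeline scales up via `HashingVectorizer` and `SGDClassifier.partial_fit`, which is the out-of-core variant this section covers.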
What's included
1 video • 7 readings • 1 assignment • 1 programming assignment • 1 ungraded lab
1 video• Total 1 minute
- Overview• 1 minute
7 readings• Total 70 minutes
- Introduction• 10 minutes
- Introducing the Bag-Of-Words Model• 10 minutes
- Assessing Word Relevancy Via Term Frequency-Inverse Document Frequency• 10 minutes
- Cleaning Text Data• 10 minutes
- Training a Logistic Regression Model for Document Classification• 10 minutes
- Working with Bigger Data: Online Algorithms and Out-of-Core Learning• 10 minutes
- Topic Modeling with Latent Dirichlet Allocation• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
1 programming assignment• Total 35 minutes
- Assignment: Cleaning text and building a bag-of-words• 35 minutes
1 ungraded lab• Total 45 minutes
- Hands-on: Cleaning text and building a bag-of-words• 45 minutes
In this section, we explore regression analysis to predict continuous target variables, focusing on implementing linear regression with scikit-learn and designing robust models to handle outliers. We also analyze nonlinear data using polynomial regression, enhancing our ability to interpret complex data patterns and make informed predictions in scientific and industrial contexts.
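As an illustrative sketch of the polynomial regression technique named above (the quadratic toy data is an assumption, not the course's dataset), expanding the inputs into polynomial terms lets ordinary least squares fit a curve:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Noise-free quadratic data: y = 1 + 2x + 3x^2
x = np.linspace(-3, 3, 20).reshape(-1, 1)
y = 1.0 + 2.0 * x.ravel() + 3.0 * x.ravel() ** 2

# Expand x into the columns [1, x, x^2], then fit ordinary least squares
quad = PolynomialFeatures(degree=2)
X_quad = quad.fit_transform(x)
model = LinearRegression().fit(X_quad, y)
r2 = model.score(X_quad, y)   # coefficient of determination, R^2
```

The model stays linear in its parameters, so it is still "linear regression"; only the feature representation is nonlinear.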
What's included
1 video • 6 readings • 1 assignment
1 video• Total 1 minute
- Overview• 1 minute
6 readings• Total 60 minutes
- Introduction• 10 minutes
- Looking at Relationships Using a Correlation Matrix• 10 minutes
- Estimating the Coefficient of a Regression Model via scikit-learn• 10 minutes
- Using Regularized Methods for Regression• 10 minutes
- Dealing With Nonlinear Relationships Using Random Forests• 10 minutes
- Random Forest Regression• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
In this section, we explore clustering analysis to organize unlabeled data into meaningful groups using unsupervised learning techniques. We implement k-means clustering with scikit-learn, design hierarchical clustering trees, and analyze data density with DBSCAN to enhance data analysis and decision-making processes.
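A hedged sketch of k-means plus the elbow method on synthetic blobs (the data and the range of k are illustrative assumptions):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Three well-separated Gaussian blobs as unlabeled data
X, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.5, random_state=0)

# Elbow method: within-cluster SSE (inertia) for k = 1..6
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 7)]

# Inertia always shrinks as k grows; the sharp drop ('elbow') here sits at k = 3
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_
```

Plotting `inertias` against k makes the elbow visible; past it, extra clusters buy only marginal SSE reduction.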
What's included
1 video • 5 readings • 1 assignment
1 video• Total 1 minute
- Overview• 1 minute
5 readings• Total 50 minutes
- Introduction• 10 minutes
- A smarter way of placing the initial cluster centroids using k-means++• 10 minutes
- Using the elbow method to find the optimal number of clusters• 10 minutes
- Grouping clusters in a bottom-up fashion• 10 minutes
- Attaching dendrograms to a heat map• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
In this section, we implement a multilayer neural network from scratch using Python, focusing on the backpropagation algorithm for training. We also evaluate the network's performance on image classification tasks, emphasizing the importance of understanding these foundational concepts for developing advanced deep learning models.
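As a minimal sketch of forward propagation through a one-hidden-layer network (layer sizes and random data are hypothetical, not the course's MNIST setup):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n_features, n_hidden, n_classes = 4, 5, 3
W_h = rng.normal(scale=0.1, size=(n_features, n_hidden))   # input -> hidden weights
b_h = np.zeros(n_hidden)
W_out = rng.normal(scale=0.1, size=(n_hidden, n_classes))  # hidden -> output weights
b_out = np.zeros(n_classes)

X = rng.normal(size=(8, n_features))   # a mini-batch of 8 examples
A_h = sigmoid(X @ W_h + b_h)           # hidden-layer activations
Z_out = A_h @ W_out + b_out            # output-layer net inputs
# Softmax over the class axis turns net inputs into probabilities
P = np.exp(Z_out) / np.exp(Z_out).sum(axis=1, keepdims=True)
```

Backpropagation then runs these same matrix operations in reverse, using the chain rule to push the loss gradient from `P` back to `W_out` and `W_h`.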
What's included
1 video • 8 readings • 1 assignment
1 video• Total 1 minute
- Overview• 1 minute
8 readings• Total 80 minutes
- Introduction• 10 minutes
- Introducing the Multilayer Neural Network Architecture• 10 minutes
- Activating a Neural Network via Forward Propagation• 10 minutes
- Classifying Handwritten Digits• 10 minutes
- Implementing a Multilayer Perceptron• 10 minutes
- Coding the Neural Network Training Loop• 10 minutes
- Evaluating the Neural Network Performance• 10 minutes
- Training Neural Networks Via Backpropagation• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
In this section, we delve into how PyTorch enhances neural network training efficiency by utilizing its Dataset and DataLoader for streamlined input pipelines. We also explore the implementation of neural networks using PyTorch's torch.nn module and analyze various activation functions to optimize artificial neural networks.
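A small sketch of the input-pipeline pattern described above: wrap tensors in a `Dataset`, then let a `DataLoader` handle batching and shuffling (the toy tensors are assumptions):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Wrap feature/label tensors in a Dataset, then batch + shuffle via DataLoader
X = torch.arange(12, dtype=torch.float32).reshape(6, 2)
y = torch.tensor([0, 1, 0, 1, 0, 1])
ds = TensorDataset(X, y)
loader = DataLoader(ds, batch_size=2, shuffle=True)

# Iterating yields (features, labels) mini-batches, reshuffled every epoch
batches = [(xb, yb) for xb, yb in loader]
```

The same `DataLoader` interface works unchanged for `torchvision.datasets`, which is why the training loop code never needs to know where the data comes from.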
What's included
1 video • 9 readings • 1 assignment • 1 programming assignment • 1 ungraded lab
1 video• Total 1 minute
- Overview• 1 minute
9 readings• Total 90 minutes
- Introduction• 10 minutes
- First Steps with PyTorch• 10 minutes
- Split, Stack, And Concatenate Tensors• 10 minutes
- Shuffle, Batch, and Repeat• 10 minutes
- Fetching Available Datasets From the torchvision.datasets Library• 10 minutes
- Building an NN Model in PyTorch• 10 minutes
- Model Training via the torch.nn and torch.optim Modules• 10 minutes
- Saving and Reloading the Trained Model• 10 minutes
- Estimating Class Probabilities in Multiclass Classification via the Softmax Function• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
1 programming assignment• Total 35 minutes
- Assignment: the basics of PyTorch• 35 minutes
1 ungraded lab• Total 60 minutes
- Hands-On: The basics of PyTorch• 60 minutes
In this section, we delve into PyTorch's mechanics, focusing on implementing neural networks using the `torch.nn` module and designing custom layers for research projects. We also analyze computation graphs to enhance model building, equipping you with skills to tackle complex machine learning tasks efficiently.
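As a minimal sketch of the automatic differentiation mechanic this section covers, PyTorch builds a computation graph during the forward pass and fills in gradients on `backward()` (the scalar example is illustrative):

```python
import torch

# Differentiate L = (w*x - y)^2 with respect to w, at w=2, x=3, y=5
w = torch.tensor(2.0, requires_grad=True)
x, y = torch.tensor(3.0), torch.tensor(5.0)

loss = (w * x - y) ** 2   # forward pass records the computation graph
loss.backward()           # dL/dw = 2*(w*x - y)*x = 2*1*3 = 6
grad = w.grad.item()
```

This is exactly what `torch.optim` relies on: after `backward()`, every parameter with `requires_grad=True` carries the gradient the optimizer steps against.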
What's included
1 video • 9 readings • 1 assignment
1 video• Total 1 minute
- Overview• 1 minute
9 readings• Total 90 minutes
- Introduction• 10 minutes
- Computing Gradients via Automatic Differentiation• 10 minutes
- Simplifying Implementations of Common Architectures via the torch.nn Module• 10 minutes
- Solving an XOR Classification Problem• 10 minutes
- Making Model Building More Flexible With nn.Module• 10 minutes
- Project One: Predicting the Fuel Efficiency of a Car• 10 minutes
- Training a DNN Regression Model• 10 minutes
- Higher-Level PyTorch APIs: A Short Introduction to PyTorch Lightning• 10 minutes
- Training the Model Using the PyTorch Lightning Trainer Class• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
In this section, we explore the implementation of convolutional neural networks (CNNs) in PyTorch for image classification tasks, focusing on understanding CNN architectures and enhancing model performance through data augmentation techniques. We also delve into the building blocks of CNNs, including convolution operations and subsampling layers, to equip you with the skills necessary for developing robust image recognition systems.
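To make the convolution building block concrete, here is a hedged NumPy sketch of a valid-mode discrete 2D convolution (technically cross-correlation, as in deep learning libraries); the input and kernel are toy values:

```python
import numpy as np

def conv2d(X, K):
    """Valid-mode 2D cross-correlation: slide kernel K over X, no padding."""
    h, w = K.shape
    out = np.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return out

X = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
K = np.array([[1., 0.],
              [0., -1.]])   # a simple diagonal-difference kernel
out = conv2d(X, K)          # each output entry is a weighted local sum
```

Padding the input before the slide is what keeps the output feature map the same size as the input, the "same" convolution this section discusses.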
What's included
1 video • 10 readings • 1 assignment
1 video• Total 1 minute
- Overview• 1 minute
10 readings• Total 100 minutes
- Introduction• 10 minutes
- Padding inputs to control the size of the output feature maps• 10 minutes
- Performing a discrete convolution in 2D• 10 minutes
- Subsampling layers• 10 minutes
- Working with multiple input or color channels• 10 minutes
- Regularizing an NN with L2 regularization and dropout• 10 minutes
- Loss functions for classification• 10 minutes
- The multilayer CNN architecture• 10 minutes
- Loading the CelebA dataset• 10 minutes
- Training a CNN smile classifier• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
In this section, we explore the implementation of recurrent neural networks (RNNs) for sequence modeling in PyTorch, focusing on their application in sentiment analysis and character-level language modeling. We delve into the intricacies of RNNs, including long short-term memory (LSTM) cells, to enhance our understanding of processing sequential data effectively.
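As a sketch of the core recurrence (sizes and random weights are hypothetical), a vanilla RNN computes each hidden state from the current input and the previous hidden state, reusing the same weights at every time step:

```python
import numpy as np

# One step of a vanilla RNN cell: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h)
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden -> hidden
b_h = np.zeros(n_hidden)

def rnn_step(x_t, h_prev):
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Unroll over a sequence of length 5, starting from a zero hidden state
h = np.zeros(n_hidden)
for x_t in rng.normal(size=(5, n_in)):
    h = rnn_step(x_t, h)
```

Because gradients flow back through repeated multiplications by `W_hh`, they can vanish or explode over long sequences, the problem LSTM cells are designed to mitigate.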
What's included
1 video • 7 readings • 1 assignment
1 video• Total 1 minute
- Overview• 1 minute
7 readings• Total 70 minutes
- Introduction• 10 minutes
- Computing activations in an RNN• 10 minutes
- The challenges of learning long-range interactions• 10 minutes
- Project one - predicting the sentiment of IMDb movie reviews• 10 minutes
- Building an RNN model• 10 minutes
- Project two - character-level language modeling in PyTorch• 10 minutes
- Building a character-level RNN model• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
In this section, we explore how attention mechanisms enhance NLP by improving RNNs and introducing self-attention in transformer models. We also learn to fine-tune BERT for sentiment analysis using PyTorch, advancing language processing applications.
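A minimal NumPy sketch of scaled dot-product self-attention, the core operation of the transformer architecture covered here (the tiny token/embedding sizes are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable row-wise softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Self-attention: queries, keys, and values all come from the same sequence
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))   # 5 tokens, embedding dimension 8
out, attn = scaled_dot_product_attention(X, X, X)
```

In a real transformer, Q, K, and V are learned linear projections of X, and several such heads run in parallel (multi-head attention); masking the upper triangle of `scores` gives the decoder-style causal attention used by GPT.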
What's included
1 video • 14 readings • 1 assignment
1 video• Total 1 minute
- Overview• 1 minute
14 readings• Total 140 minutes
- Introduction• 10 minutes
- Generating Outputs from Context Vectors• 10 minutes
- Introducing the Self-Attention Mechanism• 10 minutes
- Parameterizing the Self-Attention Mechanism: Scaled Dot-Product Attention• 10 minutes
- Attention Is All We Need: Introducing the Original Transformer Architecture• 10 minutes
- Learning a Language Model: Decoder and Masked Multi-Head Attention• 10 minutes
- Building Large-Scale Language Models by Leveraging Unlabeled Data• 10 minutes
- Leveraging Unlabeled Data with GPT• 10 minutes
- Using GPT-2 to Generate New Text• 10 minutes
- Bidirectional Pre-Training with BERT• 10 minutes
- The Best of Both Worlds: BART• 10 minutes
- Fine-Tuning a BERT Model in PyTorch• 10 minutes
- Loading and Fine-Tuning a Pre-Trained BERT Model• 10 minutes
- Fine-Tuning a Transformer More Conveniently Using the Trainer API• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
In this section, we explore generative adversarial networks (GANs) and their application in synthesizing new data samples, focusing on implementing a simple GAN to generate handwritten digits. We also analyze the loss functions for the generator and discriminator, and discuss improvements using convolutional techniques to enhance data generation quality.
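As a numeric sketch of the two adversarial loss functions analyzed here (the discriminator outputs below are made-up values, not from a trained model):

```python
import numpy as np

def bce(p, target):
    """Binary cross-entropy of probabilities p against a constant 0/1 target."""
    eps = 1e-12   # guard against log(0)
    return -np.mean(target * np.log(p + eps) + (1 - target) * np.log(1 - p + eps))

# Hypothetical discriminator outputs: probability that an image is 'real'
d_real = np.array([0.9, 0.8, 0.95])   # scores on real images
d_fake = np.array([0.1, 0.2, 0.05])   # scores on generated images

# Discriminator loss: push d_real toward 1 and d_fake toward 0
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
# Generator loss (non-saturating form): push d_fake toward 1
g_loss = bce(d_fake, 1.0)
```

Training alternates gradient steps on these two losses; the generator improves precisely when it raises `d_fake`, i.e. when it fools the discriminator.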
What's included
1 video • 8 readings • 1 assignment
1 video• Total 1 minute
- Overview• 1 minute
8 readings• Total 80 minutes
- Introduction• 10 minutes
- Generative models for synthesizing new data• 10 minutes
- Training GAN models on Google Colab• 10 minutes
- Defining the training dataset• 10 minutes
- Transposed convolution• 10 minutes
- Implementing the generator and discriminator• 10 minutes
- Dissimilarity measures between two distributions• 10 minutes
- Using EM distance in practice for GANs• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
In this section, we explore the implementation of graph neural networks (GNNs) using PyTorch Geometric, focusing on designing graph convolutions for molecular property prediction. We also analyze how graph data is represented in neural networks to enhance the understanding and application of GNNs in AI tasks such as drug discovery and traffic forecasting.
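A hedged NumPy sketch of one basic graph convolution layer, mean-aggregating each node's neighborhood (the 4-node graph and random projection are illustrative assumptions):

```python
import numpy as np

# Basic graph convolution: H' = D^{-1} (A + I) H W
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # adjacency matrix of a 4-node graph
H = np.eye(4)                                # one-hot node features
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 2))       # learnable projection (random here)

A_hat = A + np.eye(4)                        # add self-loops so a node sees itself
D_inv = np.diag(1.0 / A_hat.sum(axis=1))     # normalize: mean over neighbors
H_next = D_inv @ A_hat @ H @ W               # new 2-D embedding per node
```

Stacking such layers lets information propagate k hops in k layers; PyTorch Geometric packages this pattern (plus pooling for graph-level outputs) behind reusable layer classes.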
What's included
1 video • 7 readings • 1 assignment
1 video• Total 1 minute
- Overview• 1 minute
7 readings• Total 70 minutes
- Introduction• 10 minutes
- Implementing a Basic Graph Convolution• 10 minutes
- Implementing a GNN in PyTorch from Scratch• 10 minutes
- A Batch Is a List of Dictionaries, Each Containing the Representation and Label of a Graph• 10 minutes
- Implementing a GNN Using the PyTorch Geometric Library• 10 minutes
- Other GNN Layers and Recent Developments• 10 minutes
- Pooling• 10 minutes
1 assignment• Total 10 minutes
- Knowledge check• 10 minutes
In this section, we introduce reinforcement learning, covering the theory and implementation of algorithms for training agents to make optimal decisions. We explore key concepts such as Markov decision processes, Q-learning, and deep Q-learning, with practical examples in Python using OpenAI Gym.
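As a small self-contained sketch of tabular Q-learning (the 1-D corridor environment below is hypothetical; the course builds a grid world in OpenAI Gym):

```python
import numpy as np

# Toy 1-D corridor: states 0..4, reward 1 on reaching terminal state 4
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.5     # high exploration rate for this tiny problem
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

for _ in range(300):                  # episodes
    s = 0
    for _ in range(100):              # step cap per episode
        # epsilon-greedy behavior policy
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Off-policy TD update: the target uses the max over next-state actions
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == n_states - 1:         # terminal state reached
            break

greedy_policy = Q.argmax(axis=1)      # should point right (1) in states 0..3
```

Deep Q-learning replaces the table `Q` with a neural network and stabilizes the same update with experience replay and a target network.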
What's included
1 video • 13 readings
1 video• Total 1 minute
- Overview• 1 minute
13 readings• Total 130 minutes
- Introduction• 10 minutes
- Defining the agent-environment interface of a reinforcement learning system• 10 minutes
- Visualization of a Markov process• 10 minutes
- Value Function• 10 minutes
- Dynamic programming using the Bellman equation• 10 minutes
- Dynamic programming• 10 minutes
- Value iteration• 10 minutes
- Temporal difference learning• 10 minutes
- Off-policy TD control (Q-learning)• 10 minutes
- Implementing the grid world environment in OpenAI Gym• 10 minutes
- Solving the grid world problem with Q-learning• 10 minutes
- Training a DQN model according to the Q-learning algorithm• 10 minutes
- Implementing a deep Q-learning algorithm• 10 minutes
Instructor

Offered by

Packt helps tech professionals put software to work by distilling and sharing the working knowledge of their peers. Packt is an established global technical learning content provider, founded in Birmingham, UK, with over twenty years of experience delivering premium, rich content from groundbreaking authors on a wide range of emerging and popular technologies.