You will develop the ability to rigorously formulate learning tasks using probability and statistics, distinguish Bayesian and frequentist perspectives, build linear models for regression and classification, estimate optimal model parameters via Maximum Likelihood Estimation (MLE), and apply neural networks to practical problems. The series progresses from foundational methods to real-world neural network implementation.
By the end of this specialization, learners will be able to:
Express learning tasks with mathematical rigor using ideas from probability and statistics.
Deconstruct Bayesian and frequentist perspectives and use them to approach machine learning tasks with well-reasoned strategies.
Apply maximum likelihood estimation (MLE) to find optimal parameters of a model.
Build linear models for regression and for classification.
Design and implement artificial neural networks tailored to the needs of particular regression and classification tasks, applying the theory of neural networks to build effective models.
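To make the MLE outcome concrete, here is a minimal sketch (not taken from the course materials; the data is synthetic) of the closed-form maximum likelihood estimates for a Gaussian, where maximizing the log-likelihood yields the sample mean and the 1/n sample variance:

```python
import numpy as np

# Synthetic data drawn from a known Gaussian, so we can sanity-check the estimates.
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=2.0, size=10_000)

# Maximizing the Gaussian log-likelihood in closed form gives:
mu_hat = data.mean()                        # MLE of the mean
sigma2_hat = ((data - mu_hat) ** 2).mean()  # MLE of the variance (1/n, not 1/(n-1))
```

With 10,000 samples, both estimates land close to the true parameters (mean 3.0, variance 4.0), illustrating why MLE is a principled default for fitting model parameters.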
Applied Learning Project
Hands-on laboratory exercises let you explore key machine learning model architectures, with an emphasis on experimentation and intuition building. Rather than relying on high-level packages, you will learn to build models from first principles to solve a wide range of broadly applicable regression and classification tasks. You will build linear regression models, implement the perceptron algorithm, develop logistic regression, and implement a variety of neural networks, drawing on the understanding of the underlying mathematics and probability theory cultivated throughout the specialization.
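As a flavor of the "from first principles" approach described above, here is a minimal sketch (an illustration, not the course's own code) of the perceptron algorithm on a toy linearly separable dataset:

```python
import numpy as np

def perceptron(X, y, epochs=20):
    """Train a perceptron from first principles.

    X: (n, d) array of features; y: labels in {-1, +1}.
    Returns weights w and bias b defining the decision rule sign(X @ w + b).
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update only when the point is misclassified (or on the boundary).
            if yi * (xi @ w + b) <= 0:
                w += yi * xi
                b += yi
    return w, b

# Toy linearly separable data: label follows the sign of the first coordinate.
X = np.array([[2.0, 1.0], [1.5, -0.5], [-1.0, 0.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron(X, y)
preds = np.sign(X @ w + b)
```

Because the data is linearly separable, the perceptron convergence theorem guarantees the update rule reaches a separating hyperplane in finitely many mistakes; the same mistake-driven structure reappears when the course moves to logistic regression and neural networks.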