Resampling, Selection and Splines
University of Colorado Boulder

Instructor: Osita Onyejekwe

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
Intermediate level

Recommended experience

15 hours to complete
3 weeks at 5 hours a week
Flexible schedule
Learn at your own pace

What you'll learn

  • Apply resampling methods in order to obtain additional information about fitted models.

  • Optimize fitting procedures to improve prediction accuracy and interpretability.

  • Identify the benefits and approach of non-linear models.

Details to know

Shareable certificate

Add to your LinkedIn profile

Taught in English

See how employees at top companies are mastering in-demand skills


Build your subject-matter expertise

This course is part of the Statistical Learning for Data Science Specialization
When you enroll in this course, you'll also be enrolled in this Specialization.
  • Learn new concepts from industry experts
  • Gain a foundational understanding of a subject or tool
  • Develop job-relevant skills with hands-on projects
  • Earn a shareable career certificate

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV

Share it on social media and in your performance review


There are 5 modules in this course

Welcome to our Resampling, Selection, and Splines class! In this course, we will dive deep into these key topics in statistical learning and explore how they can be applied to data science. This first module provides an overview of the course and introduces the course instructor.

What's included

6 videos, 2 readings, 1 discussion prompt

In this module, we will turn our attention to generalized least squares (GLS). GLS is a statistical method that extends the ordinary least squares (OLS) method to account for heteroscedasticity and serial correlation in the error terms. Heteroscedasticity is the condition where the variance of the errors is not constant across all levels of the predictor variables, while serial correlation is the condition where the errors are correlated across time or space. GLS has many practical applications, such as in finance for modeling asset returns, in econometrics for modeling time series data, and in spatial analysis for modeling spatially correlated data. By the end of this module, you will have a good understanding of how GLS works and when it is appropriate to use it. You will also be able to implement GLS in R using the gls() function in the nlme package.

What's included

1 video, 1 reading, 1 programming assignment, 1 ungraded lab

In this module, we will explore ridge regression, LASSO, and principal component analysis (PCA). These techniques are widely used for regression and dimensionality reduction tasks in machine learning and statistics.
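The contrast between these three techniques can be seen on one small simulated dataset. This is an illustrative scikit-learn sketch (the data, penalty strengths, and coefficient values are assumptions, not course material): ridge shrinks all coefficients, LASSO sets irrelevant ones to exactly zero, and PCA finds low-dimensional directions of maximal variance.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n, p = 100, 10
X = rng.normal(size=(n, p))

# Only the first two predictors carry signal; the other eight are noise.
beta = np.array([3.0, -2.0] + [0.0] * (p - 2))
y = X @ beta + rng.normal(scale=0.5, size=n)

ridge = Ridge(alpha=1.0).fit(X, y)  # L2 penalty: shrinks all coefficients
lasso = Lasso(alpha=0.1).fit(X, y)  # L1 penalty: zeros out irrelevant ones
pca = PCA(n_components=3).fit(X)    # unsupervised dimensionality reduction

print(np.round(ridge.coef_, 2))
print(np.round(lasso.coef_, 2))     # note the exact zeros
print(pca.explained_variance_ratio_)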

What's included

7 videos, 3 readings, 3 programming assignments

This week, we will explore cross-validation, a crucial technique used to evaluate and compare the performance of different statistical learning models. We will cover several cross-validation schemes, including k-fold cross-validation, leave-one-out cross-validation, and stratified cross-validation, and discuss their strengths, weaknesses, and best practices for implementation. Additionally, we will examine how cross-validation can be used for model selection and hyperparameter tuning.
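The k-fold scheme described above can be sketched in a few lines with scikit-learn (an illustrative example on simulated data; the estimator and numbers are assumptions, not course code). Each observation is held out exactly once, and the k held-out scores are averaged to estimate out-of-sample performance:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(scale=0.3, size=100)

# 5-fold CV: split the data into 5 folds, train on 4, evaluate on the 5th,
# and rotate so every fold serves once as the validation set.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2")
print(scores.mean(), scores.std())
```

The same loop, run over a grid of hyperparameter values, is the basis of CV-driven model selection and tuning.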

What's included

1 video, 1 reading, 1 programming assignment

For our final module, we will explore bootstrapping. Bootstrapping is a resampling technique that allows us to gain insights into the variability of statistical estimators and quantify uncertainty in our models. By creating multiple simulated datasets through resampling, we can explore the distribution of sample statistics, construct confidence intervals, and perform hypothesis testing. Bootstrapping is particularly useful when parametric assumptions are hard to meet or when we have limited data. By the end of this week, you will have an understanding of bootstrapping and its practical applications in statistical learning.
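The resampling loop behind the bootstrap is short enough to write out directly. Here is a bare-bones nonparametric bootstrap for the sample median with a percentile confidence interval (an illustrative sketch on simulated skewed data; the sample size and number of replicates are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.exponential(scale=2.0, size=50)  # a skewed sample

B = 2000
boot_medians = np.empty(B)
for b in range(B):
    # Draw a simulated dataset: resample with replacement,
    # same size as the original sample.
    resample = rng.choice(data, size=data.size, replace=True)
    boot_medians[b] = np.median(resample)

# Percentile 95% confidence interval for the median.
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print(np.median(data), (lo, hi))
```

Because the interval comes from the empirical distribution of the resampled statistic, no parametric assumption about the data is needed, which is exactly the setting the module describes.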

What's included

1 video, 1 reading, 1 programming assignment

Instructor

Osita Onyejekwe
University of Colorado Boulder
5 courses · 1,525 learners

Offered by

Recommended if you're interested in Probability and Statistics

Why people choose Coursera for their career

Felipe M.
Learner since 2018
"To be able to take courses at my own pace and rhythm has been an amazing experience. I can learn whenever it fits my schedule and mood."
Jennifer J.
Learner since 2020
"I directly applied the concepts and skills I learned from my courses to an exciting new project at work."
Larry W.
Learner since 2021
"When I need courses on topics that my university doesn't offer, Coursera is one of the best places to go."
Chaitanya A.
"Learning isn't just about being better at your job: it's so much more than that. Coursera allows me to learn without limits."

New to Probability and Statistics? Start here.


Open new doors with Coursera Plus

Unlimited access to 7,000+ world-class courses, hands-on projects, and job-ready certificate programs - all included in your subscription

Advance your career with an online degree

Earn a degree from world-class universities - 100% online

Join over 3,400 global companies that choose Coursera for Business

Upskill your employees to excel in the digital economy

Frequently asked questions