Learner Reviews & Feedback for Machine Learning: Classification by University of Washington

4.7 stars (3,688 ratings)

About the Course

Case Studies: Analyzing Sentiment & Loan Default Prediction

In our case study on analyzing sentiment, you will create models that predict a class (positive/negative sentiment) from input features (text of the reviews, user profile information, ...). In our second case study for this course, loan default prediction, you will tackle financial data and predict when a loan is likely to be risky or safe for the bank. These tasks are examples of classification, one of the most widely used areas of machine learning, with a broad array of applications including ad targeting, spam detection, medical diagnosis, and image classification.

In this course, you will create classifiers that provide state-of-the-art performance on a variety of tasks. You will become familiar with the most successful and most widely used techniques in practice, including logistic regression, decision trees, and boosting. In addition, you will be able to design and implement the underlying algorithms that can learn these models at scale, using stochastic gradient ascent. You will implement these techniques on real-world, large-scale machine learning tasks. You will also address significant issues you will face in real-world applications of ML, including handling missing data and measuring precision and recall to evaluate a classifier. This course is hands-on, action-packed, and full of visualizations and illustrations of how these techniques behave on real data. We've also included optional content in every module, covering advanced topics for those who want to go even deeper!

Learning Objectives: By the end of this course, you will be able to:
-Describe the input and output of a classification model.
-Tackle both binary and multiclass classification problems.
-Implement a logistic regression model for large-scale classification.
-Create a non-linear model using decision trees.
-Improve the performance of any model using boosting.
-Scale your methods with stochastic gradient ascent.
-Describe the underlying decision boundaries.
-Build a classification model to predict sentiment in a product review dataset.
-Analyze financial data to predict loan defaults.
-Use techniques for handling missing data.
-Evaluate your models using precision-recall metrics.
-Implement these techniques in Python (or in the language of your choice, though Python is highly recommended).
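The sentiment case study follows a standard workflow: turn raw review text into features, fit a logistic regression classifier, and judge it with precision and recall. The sketch below is only a rough illustration of that workflow, not the course's own code (the course has learners implement logistic regression and stochastic gradient ascent themselves, and uses a real product-review dataset); the tiny hand-made reviews and the scikit-learn TfidfVectorizer/LogisticRegression pipeline here are stand-ins.

```python
# Minimal sketch (not from the course materials): a logistic regression
# sentiment classifier built with scikit-learn. The toy dataset below is a
# placeholder for the product-review data used in the course assignments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Toy reviews and labels (1 = positive sentiment, 0 = negative).
reviews = [
    "Absolutely love this product, works great",
    "Terrible quality, broke after one day",
    "Best purchase I have made this year",
    "Waste of money, very disappointed",
    "Exceeded my expectations, highly recommend",
    "Awful, do not buy this",
]
labels = [1, 0, 1, 0, 1, 0]

# Turn raw text into TF-IDF features, then fit the classifier.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(reviews)
clf = LogisticRegression()
clf.fit(X, labels)

# Evaluate with precision and recall on a small held-out set.
test_reviews = ["love it, highly recommend", "very disappointed, terrible"]
test_labels = [1, 0]
preds = clf.predict(vectorizer.transform(test_reviews))
print("precision:", precision_score(test_labels, preds))
print("recall:", recall_score(test_labels, preds))
```

In the course itself the weights are learned by (stochastic) gradient ascent on the log likelihood rather than by calling a library fit routine, but the evaluation step (precision versus recall on held-out reviews) is the same idea.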

Top reviews

SM

Jun 14, 2020

A very deep and comprehensive course for learning some of the core fundamentals of Machine Learning. Can get a bit frustrating at times because of numerous assignments :P but a fun thing overall :)

SS

Oct 15, 2016

Hats off to the team who put the course together! Prof Guestrin is a great teacher. The course gave me in-depth knowledge regarding classification and the math and intuition behind it. It was fun!

501 - 525 of 579 Reviews for Machine Learning: Classification

By Deleted A

Aug 26, 2019

A good course to teach the key points.

By Hexuan Z

Oct 6, 2016

could have more challenging homework!!

By Vladislav V

May 13, 2016

It feels like it lacks certain depth.

By Shashwat G

May 22, 2020

Course material can be much better

By Farmer

Aug 12, 2018

Exercises are way too easy.

By Aadesh N

Jun 13, 2016

Great course materials

By Xiaojie Z

Jan 31, 2017

Can be more detailed.

By Ragunandan R M

Sep 17, 2018

Good overall course.

By 2K18/SE/035 A K

Nov 11, 2020

content is complete

By Lim W A

Nov 21, 2016

Learnt new things.

By Mehul P

Aug 17, 2017

Nice explanation.

By gaozhipeng

Jun 30, 2016

good introduction

By Alberto B

Mar 17, 2018

Very good course

By Antonio P L

Apr 30, 2016

Fantastic Course

By Anand B

Aug 7, 2017

Great course!

By PRASAD N

Dec 3, 2020

good course.

By ayshwarya s

Feb 5, 2019

best course

By Alberto J L R

Oct 12, 2017

Good Mooc

By Syamsul B

Aug 31, 2020

Great

By VIGNESHKUMAR R

Aug 23, 2019

good

By Serge B

Jul 2, 2016

good

By IDOWU H A

May 20, 2018

B

By Ole H S

Jun 16, 2016

First: I like these courses a lot. They are pretty close to covering just what you need to actually do machine learning in the real world, without diving too deep into topics that have no practical value.

However:

This course was a bit too thin; the last 4 weeks contained little in-depth information and seemed to brush over a lot of different topics that could have been covered more thoroughly. Although they were important topics, the course could have gone deeper on at least 3 or 4 of them. The last 3 weeks could have been a course on their own if properly explored. However, the concepts are covered well enough to be usable in practice, I believe.

The programming exercises were ridiculously simple. Everything was reduced to filling in one or two lines of a bigger function. I understand that the point was to see how these functions are made, and that it increases our understanding of the algorithms already available in packages like scikit-learn and GraphLab. Also, the content became a bit too repetitive (this actually started in the second course but continues in this one). The time spent on variations of the same topic across different models made it hard to pay attention when the lecture finally came to a new point (my brain fell asleep while waiting for something new).

By Ryan M

Aug 25, 2020

While I feel like I have a good theoretical understanding of the issues involved in classification, including how the algorithms work and how to implement them, this course could have prepared me better to attack an actual problem. Following a real case study all the way through, and showing the steps someone with experience on real problems would take to come up with a good classifier, would have helped.

In particular, while a number of classifiers were presented, there was little to no discussion of the relative advantages and disadvantages of each algorithm. In what cases should I choose logistic regression? A decision tree or a boosted decision tree?

Finally, it seems that random forests and support vector machines are common classifiers, and this course did not cover them. I instead had to learn about random forests (a relatively simple concept that could have been included with the boosted decision tree content) from scikit-learn's web site.

By Ziyue Z

Aug 10, 2016

Compared with the regression course, this course was a slight disappointment. 1. There is less material than in the regression course; maybe this is because classification concepts are more intuitive. 2. The slides are much less polished. Some lessons even re-use earlier slides at the beginning as a "review", much like soap operas re-use scenes from earlier episodes as "memory recall" to fill air time. 3. The math is more hand-wavy than in the regression course. Neither course is supposed to go in depth with proofs, but I felt the regression course was at the right level and this course fell well short of it. Do note that it's very possible I'm biased, because I have seen more of the material from this course than from the regression course.