
Recommender Systems: Evaluation and Metrics, University of Minnesota

131 ratings
20 reviews

About this Course

In this course you will learn how to evaluate recommender systems. You will gain familiarity with several families of metrics, including ones that measure prediction accuracy, rank accuracy, decision support, and other factors such as diversity, product coverage, and serendipity. You will learn how different metrics relate to different user goals and business goals. You will also learn how to rigorously conduct offline evaluations (i.e., how to prepare and sample data, and how to aggregate results) and how to run online (experimental) evaluations. At the completion of this course you will have the tools you need to compare different recommender system alternatives for a wide variety of uses.
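To make the metric families concrete, here is a minimal sketch of two of them: RMSE for prediction accuracy and precision@k for decision support. The data and function names are hypothetical, invented purely for illustration; they are not taken from the course materials.

```python
import math

# Hypothetical sample data: (user, item) -> (true rating, predicted rating).
ratings = {
    ("u1", "i1"): (4.0, 3.5),
    ("u1", "i2"): (2.0, 2.5),
    ("u2", "i1"): (5.0, 4.0),
    ("u2", "i3"): (1.0, 2.0),
}

def rmse(pairs):
    """Prediction accuracy: root mean squared error over (truth, prediction) pairs."""
    errors = [(truth - pred) ** 2 for truth, pred in pairs]
    return math.sqrt(sum(errors) / len(errors))

def precision_at_k(recommended, relevant, k):
    """Decision support: fraction of the top-k recommended items that are relevant."""
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k

print(rmse(ratings.values()))                                # ~0.79
print(precision_at_k(["i1", "i3", "i2"], {"i1", "i2"}, 2))   # 0.5
```

A low RMSE rewards accurate rating predictions everywhere, while precision@k only cares about the quality of the short list a user actually sees; the course's point is that which one matters depends on the user and business goals.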

Top reviews


Jul 19, 2017

Wonderful! They teach a lot that I did not expect!


19 Reviews

By Anish Sah

Feb 23, 2019

If you are new to recommender system evaluation and would like to first understand why we do what we do when evaluating a recommender system, go for this course! Each and every approach is explained in vivid detail, stripped to the bare essentials so you can see the skeleton of that approach. The only shortcoming, in my opinion, was that the code in the honors content in LensKit could have been explained further. But, all in all, a wonderful place to start!


Aug 23, 2018

Confused about some metrics.

By Chris Colinsky

Jul 03, 2018

Not an easy course, specifically the honors track. The information is good, but not presented as well as in the previous two courses. Also, there are errors in the honors assignment that make it unnecessarily difficult, and you spend a lot of time on irrelevant things.

By llraphael

Jun 16, 2018

The computer assignment lacks explanation.

By Dhruv Mittal

Jun 15, 2018

I was working on a cross-domain recommendation system where I would recommend books to a user whose movie ratings had been given. I built the algorithm but didn't have any idea how to evaluate it, and this course helped me through. Thanks!

By Caio Henrique Konyosi Miyashiro

May 18, 2018

The part on offline evaluation is really good and practical as well. However, while I understand that online evaluation is a more complex subject, I felt the course fell a little short on how to put all this knowledge into practice.

By Yury Zelensky

Mar 29, 2018

It is not perfect, but it is the best course of the specialization so far. It is a little philosophical rather than technical and formal, but it exactly met my current personal needs. It cannot be recommended as a first and only introduction to the topic of evaluation and metrics for recommender systems.

P.S. The exercises and quizzes, both main and honors, are somewhat eccentric.

By Keshaw Singh

Feb 22, 2018

My issues with the previous courses in this specialization seem to have been addressed in this one. The assignment at the end is a really good one; the creators of this course have done well to develop a thought-provoking and relevant assignment. The course itself helps one develop the appropriate thought process, which comes in handy when deciding on a metric for the problem at hand.

By zheng dai

Feb 09, 2018

Nice to learn Excel statistics.

By Andrew Waterman

Feb 04, 2018

This course was very helpful in giving me broad exposure to various ways of evaluating recommender systems. Having faced a very similar problem evaluating a recommender system for a legal document search/suggestion engine (like Google News for lawyers), this gave me the proper "bird's-eye" perspective on that problem that I wish I had had before. We faced exactly the same problem you describe of finding the proper tradeoff between precision and recall, or search vs. discovery.

BUT what is lacking here is teaching us how to implement these different evaluation metrics in practice. Sadly, I don't feel any more equipped to go back to that legal search engine client and guide them toward a concrete decision about the right metrics to use. I would just come with a mix of new opinions on metrics they should consider -- but how should they choose? What offline evaluation should we do? What online experiment could we run to decide? If you had run us through problem sets/assignments involving real-world situations like this, where we had to calculate these different metrics (given sample data) and build compelling cases for different metrics to use in evaluation, I would feel otherwise.

That said, thank you for your hard work putting the course/specialization together. I hope my feedback helps constructively; please don't see it as criticism. It's because I am very enthusiastic about what you've been teaching me -- and I plan to implement it for new clients in my data science consulting practice -- that I want the course to be the best it can be for others too.