Let's briefly summarize what we've covered in this lecture. We've talked about online product sites, motivated by eCommerce and specifically by Amazon, the dominant eCommerce company today. We've talked about how these sites can rank product lists and which types of rankings may be more effective than others, which also gave us insight into when we might be able to trust the rankings, or the average ratings, on these sites.

We also saw the wisdom of crowds, which we motivated with a famous experiment reported by Galton in 1907, in which a crowd of people tried to guess the weight of an ox, the true value in that case. Because certain conditions were satisfied (the task was easy, the guesses were independent and unbiased, and the number of people was large), the error in the average guess decreased as the number of people grew: the individual errors tended to cancel each other out, and the average concentrated around the true value. A small simulation of this averaging effect is sketched at the end of this summary.

We then talked about rating and ranking products, and saw two approaches to this. The first is the naive average, which simply adds up the total number of stars, in the case of Amazon, and divides by the total number of reviews. The second is a Bayesian adjusted ranking, which extends the idea of the naive average to also take into account information given by the population as a whole. We explained that using a sliding-ruler picture: the adjusted average sits somewhere between the individual product's average and the overall average across all products. As the number of individual ratings increases, the adjusted average approaches the product's own naive average; as it decreases, it approaches the overall average. So when we have less information, we rely on the overall average as a backup. (A sketch of this computation also appears at the end of this summary.)

Then we turned our attention to Amazon's secret formula, which is not known to the public, and tried to apply some of the techniques we had discussed to see if we could reverse engineer it. We applied first a Bayesian adjusted ranking, and then looked at specific cases involving factors like review quality, reviewer quality, and the timing of reviews.

The main themes we covered are these. First, crowds are wise, which is the overarching idea: in certain circumstances, we can trust the opinions of the masses to come up with a good estimate of some value. Second, opinion aggregation, which extends that idea to the more general setting of aggregating the opinions of others, such as in voting. The third main theme, which is much more general than this lecture and, in a sense, runs through this whole course, is the gap between theory and practice. It is especially prominent here, because many of the techniques we discussed aren't applied in practice exactly as stated; lots of modifications are made in each specific case to account for factors that are very hard to model theoretically.
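As a first illustration, here is a minimal Python simulation of the averaging effect in the Galton example. It is not from the lecture; the true value, noise level, and crowd sizes are all hypothetical. Each guess is the true value plus independent, zero-mean noise, matching the "independent and unbiased" conditions above, and the error of the crowd's mean guess shrinks as the crowd grows.

```python
import random

def mean_crowd_error(true_value, n_people, noise_sd=50.0, trials=2000):
    """Average absolute error of the crowd's mean guess over many trials.

    Each guess = true_value + independent zero-mean Gaussian noise,
    mirroring the 'independent and unbiased' conditions in the lecture.
    """
    total_error = 0.0
    for _ in range(trials):
        guesses = [true_value + random.gauss(0, noise_sd)
                   for _ in range(n_people)]
        crowd_mean = sum(guesses) / n_people
        total_error += abs(crowd_mean - true_value)
    return total_error / trials

# The error of the average guess shrinks roughly as 1/sqrt(n):
for n in (1, 10, 100, 1000):
    print(n, round(mean_crowd_error(1200, n), 2))  # 1200: hypothetical ox weight
```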
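And here is a minimal sketch of the naive versus Bayesian adjusted average described above. The blend (n*r + m*C)/(n + m), where r is the product's own average over its n ratings and C is the overall average, is one common way to implement the "sliding ruler"; this formula and the prior weight m = 10 are assumptions for illustration, not Amazon's actual equation.

```python
def naive_average(ratings):
    """Naive average: total stars divided by the number of reviews."""
    return sum(ratings) / len(ratings)

def adjusted_average(ratings, overall_average, prior_weight=10):
    """Bayesian adjusted average: a blend of the product's own average
    and the overall average across all products (the 'sliding ruler').

    prior_weight is a tuning knob (hypothetical value here): with few
    ratings the result stays near overall_average, the 'backup'; as the
    number of ratings grows, it approaches the product's naive average.
    """
    n = len(ratings)
    if n == 0:
        return overall_average  # no individual information at all
    r = naive_average(ratings)
    return (n * r + prior_weight * overall_average) / (n + prior_weight)

# A product with two 5-star reviews is pulled toward a site-wide average
# of, say, 3.9; one with two hundred 5-star reviews barely moves.
print(adjusted_average([5, 5], 3.9))     # about 4.08, pulled toward 3.9
print(adjusted_average([5] * 200, 3.9))  # about 4.95, close to 5.0
```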