Welcome to week three of our course. Philosophers say that any meaningful proposition or concept should either include its own negation or, alternatively, point to things that are beyond it. Extending this idea, our course would probably be incomplete without talking a bit about things that lie beyond the methods and concepts presented in it. Respectively, I thought it would be a good idea to spend the remaining two weeks of this course on things that are either at the edge of, or maybe beyond, the methods that we studied so far. So in the next and last week of this course, we will overview the current state and applications of reinforcement learning and inverse reinforcement learning. And this week, we will spend looking into approaches that might go beyond the methods of reinforcement learning that we developed so far. These methods, obviously, would also go beyond classical financial models. On the other hand, you will see that these approaches have to do with many concepts and issues that we encountered in this course.

One of them is the importance of regularization in building models. We spoke more than once about regularization in this specialization, and how it is used to improve out-of-sample performance. But the origin of the word regularization, I believe, is in physics. And in physics, it is used in a more dramatic, if you wish, way. Instead of using regularization to improve a model, in physics regularization is sometimes needed long before any improvement of a model: it is needed first just in order for the model to make sense at all.

The other issue that I want to highlight here is a principle known in industry as the GIGO principle, which stands for "garbage in, garbage out." We already mentioned this principle in our previous course. A bit contrary to what its name suggests, your data does not necessarily have to be total garbage for you to see a problem.
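To make the regularization point concrete, here is a minimal synthetic sketch of ridge regularization improving out-of-sample performance. All data, dimensions, and the penalty value are illustrative assumptions, not anything from the course's models; ridge is used here simply as the most familiar example of a regularizer.

```python
import numpy as np

# Hypothetical illustration: a noisy regression with almost as many
# features as samples, where unregularized least squares overfits.
rng = np.random.default_rng(0)

n, p = 40, 30                                 # few samples, many features
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[:3] = [1.0, -2.0, 0.5]                 # only 3 features truly matter
y = X @ true_w + rng.normal(scale=0.5, size=n)

X_test = rng.normal(size=(200, p))            # fresh out-of-sample data
y_test = X_test @ true_w + rng.normal(scale=0.5, size=200)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: w = (X'X + lam*I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

w_ols = ridge_fit(X, y, 0.0)                  # unregularized fit
w_ridge = ridge_fit(X, y, 5.0)                # regularized fit (lam chosen ad hoc)

# OLS wins in-sample by construction, but the regularized fit
# typically generalizes better out of sample.
print("test MSE, OLS:  ", mse(w_ols, X_test, y_test))
print("test MSE, ridge:", mse(w_ridge, X_test, y_test))
```

This is the "improve a model" use of regularization the lecture mentions; the physics use, where the unregularized object is not even well defined, has no such finite-dimensional toy analogue.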
Let's say you want to optimize your portfolio for a one-year-ahead period, and let's say all your data starts in 2009, past the crisis. And let's assume that in doing this, your portfolio optimization uses signals that you analyze using various supervised learning algorithms. In this case, whether you use shallow learning, deep learning, or super deep learning will not matter, in the sense that your final model would be totally unaware of possible market crises and would think that markets are always benign. In other words, your model would severely underestimate the risk in your portfolio.

Granted, what I just described is exactly the reason modelers do try to include periods of market crisis or market turbulence in their data: they want models to be more robust and more realistic. Still, even then, there have been only two major crises in the US economy since 2000: the technology sector crisis of 2001 and the economic crisis of 2007-2008, and these two crises were different in their patterns and dynamics. Therefore, it is far from clear that even if we include both crises in our data set for model training, it would be enough for the model to be accurate at the onset of the next crisis, which may be quite different from the previous two. In most such cases, prior information becomes very important.

So this week, we will talk about modeling market dynamics and how this problem goes beyond both classical financial models and classical reinforcement learning. Let's get started.
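The data-window pitfall above can be sketched with a few lines of synthetic data. The return means, volatilities, and sample sizes below are invented assumptions purely for illustration; the point is only that a sample starting after a crisis can miss a large share of the true risk.

```python
import numpy as np

# Hypothetical daily returns: a long calm regime (the "post-2009" sample)
# plus a short high-volatility crisis stretch that the truncated sample omits.
rng = np.random.default_rng(42)

calm = rng.normal(0.0005, 0.008, size=2000)    # ~8 years of benign markets
crisis = rng.normal(-0.002, 0.035, size=120)   # ~6 months of turbulence
full_history = np.concatenate([crisis, calm])  # crisis precedes the calm sample

# Annualized volatility estimates (252 trading days per year).
vol_post_crisis = calm.std() * np.sqrt(252)         # data starts "in 2009"
vol_full = full_history.std() * np.sqrt(252)        # crisis included

print(f"annualized vol, post-crisis data only: {vol_post_crisis:.1%}")
print(f"annualized vol, crisis included:       {vol_full:.1%}")
```

Any risk measure fed the truncated sample, however sophisticated the model on top of it, inherits this underestimate; that is the GIGO principle at work even though the post-2009 data itself is perfectly clean.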