"Life is short, the craft is long, opportunity is fleeting, experiment is risky, and judgment is difficult." These are the words of the ancient physician Hippocrates, and I think the situation of modern investment management looks very much like that of ancient medicine. In our life, in our craft, experiments are also very risky, so what we usually do is use backtests. Wikipedia says that "backtesting is a term used in oceanography, meteorology, and the financial industry to refer to testing a predictive model using existing historical data." In very broad terms, backtesting means using simulated or historical data to validate a predictive model, a portfolio model, a risk model, or a trading strategy. Let's look at examples. Suppose we want to backtest an algorithmic trading strategy, that is, a set of rules which says that if certain formulas produce particular values, then you have to buy or sell particular instruments. You simply apply these rules to historical or simulated data and see how big or small the return produced by this set of trading rules would have been over that history. Then you perform a general performance analysis: you look at returns and drawdowns, you calculate the profit factor, and you slightly vary the parameters of the strategy to see how stable the result is. If we are speaking about a predictive model, the situation is even simpler. We just want to know how good this model is at predictions, and we use historical data to validate that: how often does the model produce wrong predictions? How often did it predict something that never happened? How often did it miss something that actually happened? If we are speaking about backtests of a value-at-risk model, for example Kupiec's test, backtesting means simply checking how often, over the history, the value of some instrument or portfolio fell below the value predicted by the value-at-risk model.
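To make the trading-strategy case concrete, here is a minimal sketch (not from the lecture): a hypothetical moving-average rule applied to synthetic prices, followed by the performance analysis described above, namely return, drawdown, and profit factor. The rule, the synthetic data, and all parameter choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily prices -- a stand-in for real historical data.
prices = 100 * np.cumprod(1 + rng.normal(0.0005, 0.01, 500))

# Toy rule (an assumption, not the lecture's): long when yesterday's
# price is above its 20-day moving average, flat otherwise.
window = 20
ma = np.convolve(prices, np.ones(window) / window, mode="valid")
signal = (prices[window - 1:-1] > ma[:-1]).astype(float)

daily_ret = np.diff(prices) / prices[:-1]
strat_ret = signal * daily_ret[window - 1:]   # signal applied to next day's return

# General performance analysis: total return, max drawdown, profit factor.
equity = np.cumprod(1 + strat_ret)
total_return = equity[-1] - 1
drawdown = 1 - equity / np.maximum.accumulate(equity)
max_drawdown = drawdown.max()
gains = strat_ret[strat_ret > 0].sum()
losses = -strat_ret[strat_ret < 0].sum()
profit_factor = gains / losses if losses > 0 else np.inf

print(f"total return {total_return:.2%}, max drawdown {max_drawdown:.2%}, "
      f"profit factor {profit_factor:.2f}")
```

Stability of the result could then be checked by re-running the same loop over a grid of window lengths, as the lecture suggests.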
Now let's talk about portfolio models. How should a backtest of a portfolio model be organized? Definitely not like a backtest of an algorithmic trading strategy, because the goal of a trading strategy is usually to achieve the maximum possible result, the absolute return; that is why there we look at historical price performance. If we are backtesting a portfolio, we usually pursue other goals. What goals, and how should we organize the backtest of a portfolio model? Let's look at the blackboard. When backtesting a portfolio, or backtesting asset allocation methods, we usually have some returns information: series of returns for different instruments. But we do not always use this information directly to obtain portfolios. We have to put that information into some asset allocation algorithm, but not always alone: remember that sometimes we also have external information, for example the views of analysts, and the direct use of raw information from the past is very dangerous when we are speaking about MVO (mean-variance optimization). We put all of that into the asset allocation algorithm and obtain some allocation, for example, in the classical case, an efficient frontier. What approaches do we have for backtesting that? First of all, we can take some portfolio on that frontier: maybe the portfolio which maximizes the Sharpe ratio, or simply the minimum-risk, maximum-risk, or mid-risk portfolio. Then we split the data: the period up to the current moment is called the in-sample period, and what follows is called the out-of-sample period. We build the portfolio from the information available at that moment, and then we test how the portfolio behaves in the out-of-sample period. Then we shift this window one step further, build a portfolio based on the information available in that period, and so on and so forth. Now, what can we do with all these portfolios?
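The rolling in-sample/out-of-sample scheme can be sketched as follows. The minimum-variance portfolio used here is just one possible choice of point on the frontier, and the synthetic returns and window length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly returns for 4 instruments (stand-in for real data).
T, n = 120, 4
returns = rng.normal(0.008, 0.04, (T, n))

window = 36  # in-sample length; each step re-estimates, then tests one period ahead

def min_var_weights(sample):
    """Minimum-variance weights from the sample covariance (long/short allowed)."""
    inv = np.linalg.inv(np.cov(sample, rowvar=False))
    w = inv @ np.ones(sample.shape[1])
    return w / w.sum()

oos_returns = []
for t in range(window, T):
    w = min_var_weights(returns[t - window:t])   # built on in-sample data only
    oos_returns.append(w @ returns[t])           # tested on the next, out-of-sample period
oos_returns = np.asarray(oos_returns)

print(f"{len(oos_returns)} out-of-sample periods, "
      f"mean return {oos_returns.mean():.4f}")
```

The loop is exactly the "shift the window one step further" procedure: each out-of-sample return is produced by weights that never saw that period's data.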
Well, first of all, many authors, myself included by the way, follow this method: they simply record the equity curve of each portfolio. The first methodology gave this level, the second methodology looks like that, and so on and so forth. But actually, that is not an ideal way of backtesting, simply because when we build a portfolio we are not trying to achieve maximum return. That approach is good for trading, when we are building something which strives to achieve maximum return and we want to test how this thing, designed to achieve maximum return, performed in the past. Here, instead, we build the asset allocation to give us some exact return, or to maximize some utility, which in many cases equals the portfolio's expected return minus lambda times the portfolio's variance: U = w'μ − λ w'Σw, where w is the vector of portfolio weights. What we can do is record this utility for different portfolios, because we have the weights. The weights are estimated in-sample, from the information available in those periods, but the real returns for the portfolio, this μ, are taken from the next step: these are the realized returns. In that case we can compare two methodologies: here we have the utility of methodology 1, here the utility of methodology 2, and the different points correspond to utilities calculated over different periods of time. That is the first approach. The second approach: we take our target return and count how often the methodology we are testing gave a return higher or lower than the return we require. That is, we select a portfolio which targets a certain return; how often did this methodology give a return below the required return, and how often a return above it?
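Both approaches, recording the realized utility U = w'r − λ w'Σw per period and counting how often the realized return meets a target, can be sketched together. The two methodologies compared here (minimum-variance versus equal-weight), the value of λ, the target return, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 120, 4
returns = rng.normal(0.008, 0.04, (T, n))
window, lam, target = 36, 2.0, 0.005  # assumed parameters

def min_var_w(sample):
    inv = np.linalg.inv(np.cov(sample, rowvar=False))
    w = inv @ np.ones(n)
    return w / w.sum()

def equal_w(sample):
    return np.ones(n) / n

def realized_utility(weight_fn):
    """U_t = w'r_{t+1} - lam * w'Sigma_hat*w: weights and Sigma estimated
    in-sample, r_{t+1} taken from the next, out-of-sample step."""
    utils, hits = [], []
    for t in range(window, T):
        sample = returns[t - window:t]
        w = weight_fn(sample)
        sigma = np.cov(sample, rowvar=False)
        r_next = w @ returns[t]               # realized return, next step
        utils.append(r_next - lam * w @ sigma @ w)
        hits.append(r_next >= target)         # second approach: target hit?
    return np.asarray(utils), np.mean(hits)

u1, hit1 = realized_utility(min_var_w)
u2, hit2 = realized_utility(equal_w)
print(f"mean utility: min-var {u1.mean():.4f}, equal-weight {u2.mean():.4f}")
print(f"share of periods at/above target: {hit1:.0%} vs {hit2:.0%}")
```

The per-period arrays `u1` and `u2` are the "different points" the lecture describes: one utility value per out-of-sample period for each methodology.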
Another interesting methodology tests whether the value added by some asset allocation method is statistically significant. This methodology is called quality control charts. In building a quality control chart, we do the following. We have the average return line, which is actually zero. Then we have some value-added measure, such as active return: active return equals portfolio return minus benchmark return. Then we have confidence lines, calculated using the following formula: a confidence line equals the mean return of the strategy (or of the asset allocation algorithm) plus or minus, depending on whether it is the upper or the lower line, three standard deviations divided by the square root of the number of observations. If the value added goes significantly above or below the confidence lines, then the methodology is probably adding sound value. If not, then the methodology adds value which does not statistically differ from zero.
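Here is a numeric sketch of the quality-control-chart logic, using hypothetical strategy and benchmark returns. The decision rule shown, checking whether the zero line lies outside the band mean ± 3σ/√n, is one way to read "significantly higher or lower than the confidence line"; the data and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical monthly returns of a strategy and its benchmark.
strategy = rng.normal(0.009, 0.03, 60)
benchmark = rng.normal(0.007, 0.03, 60)

active = strategy - benchmark          # value-added measure: active return
n = len(active)
mean_active = active.mean()
band = 3 * active.std(ddof=1) / np.sqrt(n)   # 3 std devs / sqrt(observations)

upper, lower = mean_active + band, mean_active - band
print(f"mean active return {mean_active:.4f}, "
      f"confidence lines [{lower:.4f}, {upper:.4f}]")

# Value added is statistically distinguishable from zero only if the
# zero line falls outside the confidence band.
significant = not (lower <= 0.0 <= upper)
print("value added significant:", significant)
```

With the wide ±3σ/√n band, only a consistently large active return is declared significant, which is exactly the conservative spirit of the chart.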