In this video, we are going to look at triangulation as a method to add robustness to our simulation model. As you recall, in the last video we saw that out of the three models that generated daily revenue, two of them seemed to be more in concert with each other than with the third one. That creates a little bit of uncertainty about which model may be more appropriate to use. Data triangulation will help us identify more characteristics, or check whether the characteristics we derived from the data appropriately represent the business constraints.

There are many ways to do data triangulation. We can adjust our estimates based on data elements that we may not have considered. We can also estimate the parameters based on new data that we might collect. What we are going to do in this particular video is adjust the estimates based on business logic. Again, the reason is that it will add robustness and realism to our model and provide additional detail. As I mentioned, the comparison of revenues creates a little bit of doubt that something may not be right in our current estimates, and about which model to trust more.

So let's look at our coffee shop data, in particular the parts of the data we have already derived. We have seen how to use the mean of the daily volume and, from that, derive a standard deviation, as long as we can justify using a normal distribution. We did a similar thing with daily revenue, where we knew the maximum and used that to create the distributional characteristics. Then, in our fourth model, we created the same kind of information for per-customer spend based on just the mean, which was derived from two particular values: the mean daily revenue and the mean number of customers. In Model 4, we simulated daily volume and revenue separately, and then derived this particular value of 5.18 from the data generated by those two independent processes.

Once we derive that value, we can create a 99 percent interval in which nearly all the values coming from this particular distribution reside. What this tells us is that only 0.5% of values fall below 3.557 and only 0.5% of values fall above 6.8. Now, if you look at it in the context of a coffee shop, this estimate looks a little suspect. Why? Because if you think about what a small coffee at a normal coffee shop might cost, and that probably more than 0.5% of customers buy one, the lower bound would not be $3.55; it would probably be about half of that. For example, a Starbucks small or tall coffee costs about $1.75.

So what if we were to account for that? We can assume that the lower bound is 1.75, and based on that, we can derive a standard deviation for this distribution, which had the estimated mean of 5.125 that came from dividing 2050 by 400. If we then calculate the 99 percent upper bound, it comes out to about 8.5.
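Since this derivation is the heart of the triangulation step, here is a minimal Python sketch of it, along with a re-simulation of daily revenue using the adjusted spend distribution (the model we'll call Model 5 below). The mean spend of 5.125 and the $1.75 lower bound come from the discussion above, and 2.576 is the standard z-value for a central 99% interval; the customer-count standard deviation of 50 is an assumed placeholder, since this section only gives the mean of 400.

```python
import numpy as np

rng = np.random.default_rng(42)

# Central 99% interval: 0.5% in each tail -> z ~ 2.576.
Z_99 = 2.576

# Triangulated spend distribution: keep the analytic mean of 5.125
# (2050 / 400) but anchor the lower bound at $1.75, the price of a
# small coffee.
mean_spend = 2050 / 400                        # 5.125
sigma_spend = (mean_spend - 1.75) / Z_99       # ~1.31
upper_bound = mean_spend + Z_99 * sigma_spend  # ~8.5, as quoted above

# Re-simulate daily revenue: draw a customer count, then sum that many
# per-customer spends. The customer-count std of 50 is an assumption;
# only the mean of 400 is given in this section.
n_days = 10_000
customers = rng.normal(400, 50, n_days).round().clip(min=0).astype(int)
daily_revenue = np.array(
    [rng.normal(mean_spend, sigma_spend, c).sum() for c in customers]
)

print(f"sigma: {sigma_spend:.2f}, 99% upper bound: {upper_bound:.2f}")
print(f"simulated mean daily revenue: {daily_revenue.mean():,.0f}")
```

The same arithmetic also reproduces the earlier interval: (5.18 − 3.557) / 2.576 ≈ 0.63, and 5.18 + 2.576 × 0.63 ≈ 6.8.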
Suppose we did that and re-simulated our model; let's call it Model 5. I'm not going to go through the lab part of it, but just show you the results, because we have already done it and you can easily verify it. We find a new estimate of daily revenue, given by this black line. If you look at that line, you can see that it is actually in closer agreement with Models 1 and 4 than with Model 2, which directly simulated revenue based on the daily revenue estimates.

What this tells us is that aggregate values of revenue are probably not a good basis for simulation; it is better to simulate at the individual level. In fact, if we just wanted to simulate daily revenue, and that is all our questions relate to, then it is more important to simulate the number of customers than the individual valuations. Having individual valuations does provide additional information and the ability to simulate revenues for a given data point, but if that is not the objective, then the simplest simulation we created was good enough.

So, in summary, we can say that Model 1, the simplest model, was good enough for daily revenue prediction. It does underestimate the revenue a bit, but it is still quite a good estimate. The challenge, of course, is that it cannot estimate the revenue distribution for a given number of customers on a given day: if 300 customers come on a given day, Model 1 cannot estimate the range of revenues we could generate from those 300 customers. For that, we need Model 5, the model where we used this triangulation, and that gives us the best estimator. So triangulated models have the potential to provide more analysis flexibility. And after model development, the data can be continuously checked: because we are modelling individual behaviors in some sense, we can test more things against newly generated data. In particular, analysis of things like rainy days can be conducted better, which in fact is what we are going to do next.

Next, we are going to create a lab on modelling special cases. The specific one we are going to look at is low volume, high spend. As I mentioned in one of the previous videos, on rainy days there are fewer customers, but each customer spends more, perhaps because they spend more time in the shop itself. So we will look at that, and of course you can look at other scenarios as well; a sketch of one way to set this up follows below. Then after that, we are going to get an introduction to discrete event simulation, where we will look at issues of process modeling.
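As a preview of that lab, here is a minimal Python sketch of one way the low-volume, high-spend rainy-day scenario could be set up. The rain probability of 0.2 and the 0.6 / 1.3 scaling factors are illustrative assumptions only, not values from the lecture; the spend standard deviation of 1.31 and the customer-count parameters are carried over from the Model 5 sketch above.

```python
import numpy as np

rng = np.random.default_rng(7)
n_days = 10_000

# Fewer customers but higher spend on rainy days. The rain probability
# (0.2) and the 0.6 / 1.3 scaling factors are illustrative assumptions.
rainy = rng.random(n_days) < 0.2
cust_mean = np.where(rainy, 0.6 * 400, 400.0)
spend_mean = np.where(rainy, 1.3 * 5.125, 5.125)
spend_sigma = 1.31  # triangulated value from the Model 5 sketch

# Draw each day's customer count, then sum that many per-customer spends.
customers = rng.normal(cust_mean, 50).round().clip(min=0).astype(int)
daily_revenue = np.array(
    [rng.normal(m, spend_sigma, c).sum() for m, c in zip(spend_mean, customers)]
)

print(f"rainy-day mean revenue: {daily_revenue[rainy].mean():,.0f}")
print(f"clear-day mean revenue: {daily_revenue[~rainy].mean():,.0f}")
```

Comparing the two conditional distributions this way is exactly the kind of check that becomes possible once we model individual behaviors rather than aggregate revenue.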