Last time, we learned what simulation is and why we do it. This time, we'll explore computers simulating computers. This is often called emulation, because one computer emulates, or acts like, a different computer.

Here's one problem we might have. We might be designing a new computer architecture, the way the components of our computer will interact with each other, and we want to explore how that will work before we actually start manufacturing the computer. The problem is that we don't have one yet, and it's hard to explore the behavior of something that doesn't exist. Of course, people build hardware prototypes, and that's a very useful and important part of the process, but it's also helpful to simulate the new computer.

One way we can do this is to build an abstraction, a model, of each component we're going to include in our architecture, connect those models together the way the real components will be connected, and then see how the whole thing behaves. We can observe that behavior in a couple of different ways.

The first way is to write code that we execute on the new architecture and observe its behavior. The downside is that we also need to write a compiler that converts the code we write into the machine language of the new architecture. But let's face it: if we're going to release this architecture, we'll have to be able to compile code for it anyway. It's a job we'll have to do regardless; we just have to pull it forward so we can simulate how the architecture behaves as we execute programs on it.

The other thing we can do, which is less expensive because we don't have to write a compiler, but also potentially less accurate, is to run a set of different usage patterns against the new architecture and see how it behaves under each of them. The danger here is that we might guess wrong about the patterns of usage. We might simulate our architecture, be very excited that it works great, and then discover that the patterns of usage in practice are nothing like what we guessed. If we're replacing an existing architecture, we can gather historical data about its usage patterns and assume they will be approximately the same on the new architecture. That mitigates the danger at least a little, but the approach is still somewhat less accurate.

Another problem we might face is that the computer we want to execute code on, or run some other experiment on, is unavailable or obsolete. It might be unavailable because it no longer exists, because it's too expensive to access, or because it's in use. If you have one server and you're using it to serve requests from your clients, you don't necessarily want to hit that operational server with the experiment you're trying to run. We can use simulation here as well, although in this particular case people usually call it emulation. We set up a different computer to emulate the target system, often in software but sometimes in hardware, and then we run our experiment or test our hypotheses against the emulation rather than against the real system.
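To make the idea of software emulation a little more concrete, here is a minimal sketch in Python of an interpreter for a made-up four-instruction register machine. Everything in it, the instruction set, the register names, and the sample program, is invented purely for illustration; a real emulator or architecture simulator would also model memory, caches, timing, and the rest of the target system.

# Minimal sketch of software emulation: a made-up four-instruction register
# machine, interpreted one instruction at a time. The instruction set,
# register names, and the sample program are all hypothetical; the point is
# only that the host computer running this script acts like a different,
# imagined machine.

def emulate(program):
    registers = {"r0": 0, "r1": 0, "r2": 0, "r3": 0}
    pc = 0  # program counter into the emulated program
    while pc < len(program):
        op, *args = program[pc]
        if op == "load":            # load an immediate value into a register
            registers[args[0]] = args[1]
        elif op == "add":           # add two registers into a destination register
            registers[args[0]] = registers[args[1]] + registers[args[2]]
        elif op == "jnz":           # jump to a target index if a register is nonzero
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        elif op == "halt":
            break
        pc += 1
    return registers

# Hypothetical program: sum 1 + 2 + ... + 5 by counting r1 down to zero.
program = [
    ("load", "r1", 5),          # counter
    ("load", "r2", 0),          # running total
    ("load", "r3", -1),         # decrement value
    ("add", "r2", "r2", "r1"),  # total += counter
    ("add", "r1", "r1", "r3"),  # counter -= 1
    ("jnz", "r1", 3),           # loop while counter is nonzero
    ("halt",),
]

print(emulate(program))  # {'r0': 0, 'r1': 0, 'r2': 15, 'r3': -1}

The same basic structure, a loop that fetches one operation at a time and interprets it in software, is what sits at the heart of most software emulators; the difference is how faithfully the model captures the target machine.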
The final example is more of a "wouldn't it be great if we could" than something people actually do. Say we're trying to provide tech support for a piece of software deployed on PCs; we actually ran into this issue when we were developing games for the PC. Somebody reports a bug, and we can't reproduce it when we run the software on our own machines. The issue is that they may have a different configuration than we do. Think of all the possible configurations of a PC, hardware first: every combination of CPU, RAM, sound card, and graphics card. Now throw in software: every piece of software, or every potential piece of software, that could be installed on the PC, in every combination. This is a hugely complicated problem.

What we know we can't do is fill a room with all those actual hardware devices in all the different software configurations. That's just not going to be possible, economically, and we don't even have the space for it. So wouldn't it be great if we could build a simulation with models of how all these hardware and software components work? Then we could mix up the exact configuration of a user who's having trouble and resolve their problem by running the simulation.

Remember, though, that in abstraction we decide which details matter and which don't when we build the model. So if we happened to abstract away the details that are causing the bug, we're out of luck; we still won't be able to reproduce the error the user is running into. This really is a pie-in-the-sky, wouldn't-it-be-great example, because even building such a simulation is probably not feasible. But if we could, we could build the model, run the simulation, find the bug, fix it, make sure the fix works in the simulation, and then deploy the patch.

To recap, in this lecture we looked at several examples of using computers to simulate other computers.