So, welcome to the last unit of module two, called Perspective. And to remind you, we are using these perspective units to provide some historical context, gossip a little bit about the industry, and explain why the stuff that we do in Nand to Tetris is extremely applicable and practical outside the context of this course. We do this using the questions that we normally get when we teach this course.

And the first question is: how useful is this VM language? Does it really make the writing of compilers easy? Well, the VM language is surprisingly powerful. And in order to appreciate it, you can take a second look at some of the Jack applications that we showed during this module, and compare the Jack code of these applications with the VM code that was generated by the Jack compiler when it translated these Jack programs into the VM language. If you do so, and do some line counting, you will realize that each line of high-level Jack code generates an average of about four lines of compiled VM code, which is quite remarkable. So even without knowing much about compilation, because we haven't discussed compilation yet, you should be impressed, I think, by the compactness and readability of the VM code that the Jack compiler generates.

So the question arises: how does this compiler get away with generating so little code? Well, the answer is that this is a two-tier compiler. And therefore, the compiler can happily assume that there is another agent, a VM implementation, that handles all the translation tasks from the VM level all the way down to machine language. And that's exactly the VM translator that we've built in the last two modules. In fact, if you take a big-picture approach, we have just finished developing the second, back-end part of the overall compilation model.
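To make this four-to-one ratio concrete, here is a hypothetical one-line Jack statement alongside the kind of VM code a Jack compiler typically emits for it (the variable-to-segment mappings shown in the comments are invented for illustration):

```
// Jack source:  let sum = x + y;
push local 1      // x   (assuming x was mapped to local 1)
push local 2      // y   (assuming y was mapped to local 2)
add               // compute x + y on the stack
pop local 0       // sum (assuming sum was mapped to local 0)
```

One line of Jack, four lines of VM code — and the VM code remains quite readable.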
So the good news is that when we get to write the rest of the compiler in modules four and five, we will have far less work to do compared to, say, writing a C++ compiler, which forces you to translate everything all the way from C++ down to machine language, without an intermediate level. And therefore, the hard work that you did in the last two modules will pay off big time later in the course, when you get to develop the front-end part of the compilation model, which we call, simply, the compiler. And that's the part that generates the VM code that we discussed and implemented in this module and the previous one.

So to wrap up, the virtual machine is an extremely powerful and practical idea, and a very important piece of the Java architecture and the C# architecture. And in fact, today Microsoft also has a two-tier compiler for C++. So if you want, you can also translate C++ programs into an intermediate VM layer. But if you do so, you're going to lose a lot of efficiency, which is the hallmark of using C++ in the first place.

So let's see, what is the next question? Well, in fact, it's directly relevant to what I just discussed. What about efficiency? Why is C++ more efficient than Java? Well, obviously any two-stage translation model entails additional overhead, and results in more lines of less efficient machine code. If you're not convinced, try translating something from English to French by first translating from English to Spanish, and then from Spanish to French. Obviously, the second technique will be more cumbersome, and it will yield more cumbersome results. So which is more desirable: a two-tier Java program that eventually generates 1,000 lines of machine code, or an equivalent C++ program that does the same thing with 300 lines of machine code that run three times faster? Well, the pragmatic answer is that today's computers are amazingly swift.
And therefore, in many important applications, a threefold slowdown is not really noticeable. Of course, there are many areas, like real-time signal processing and embedded systems, in which every efficiency gain is cherished and important. And for these applications, languages like C and C++ are indispensable and much more applicable than Java. But in numerous other domains, the elegance and security provided by languages like Java, C#, and Python more than compensate for the loss in efficiency, which, once again, is hardly noticeable in numerous applications out there.

So moving along, let's see, what is the third question? Security. Why is Java more secure than C++, and what does this have to do with our VM architecture? Well, you may know that VM code is sometimes referred to as managed code, and I think that this term was first introduced by Microsoft. Once again, as you may know, the centerpiece of the .NET framework is a virtual machine called the CLR, which stands for Common Language Runtime. This is the equivalent of Java's JVM. So C# compilers generate VM code, which in the world of Microsoft is known as IL, Intermediate Language. And this IL code, similar to bytecode, is designed to be handled further by some CLR implementation. So to distinguish between code that is handled by the CLR, like C#, and code that runs directly on the target hardware platform, like C++, Microsoft calls the former managed code and the latter unmanaged code. And this notion of managed code, which really stuck, is used not only by Microsoft, but by anyone who uses the VM artifact. It comes from the observation that when the code runs on some virtual machine environment like the CLR, or the JVM, or our own VM translator, we can do all sorts of useful things with this code, in addition to just executing it blindly. For example, we can inspect the VM code and look for security breaches.
It's much easier to analyze the semantics of VM programs compared to machine language programs, which are far more elaborate and cryptic. Also, let's not forget that the VM code must be handled by some virtual machine implementation. So we can design this implementation in a way that creates what is sometimes called a sandbox effect. The idea is that during runtime, executing programs can never reach out of this sandbox and mess with toys that they are not allowed to manipulate, like the host RAM. And that's extremely important in a world in which software is routinely downloaded off the Internet to your computer or to your cell phone, without you even knowing it, and this software comes from all sorts of unknown sources. So in that respect, the VM implementation that runs on your PC or on your cell phone can be viewed not only as an enabling technology, but also as a security layer that protects your device from malicious code. And that's another very important advantage of the VM model.

All right, I wish to end this module with a general observation about the virtue of separating abstraction from implementation, which is something that we emphasize in almost every unit and module in our Nand to Tetris courses. Recall that VM functions access the memory segments using commands like push argument 2, pop local 1, push static 7, and so on. Now, in doing so, VM functions have no idea whatsoever how these values are actually represented on the host RAM, or how they are saved and reinstated during the function call-and-return protocol. In other words, the VM code treats memory segments as abstractions, without worrying at all about how this abstraction is actually realized by the VM implementation. Now, this complete separation of abstraction from implementation implies many important things. One of them is that developers of compilers that generate VM code don't have to bother at all about how this VM code will end up running.
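To illustrate this separation, here is a minimal, hypothetical sketch, written in Python rather than the course's actual VM translator, of how a VM implementation might realize segment commands on a host RAM array. The RAM size and segment base addresses are invented for illustration. Notice that the VM commands themselves mention only segment names and indices, never host addresses, and the bounds check hints at the sandbox effect discussed above:

```python
# A toy model of one piece of a VM implementation: mapping the
# "push segment index" / "pop segment index" commands onto host RAM.
RAM = [0] * 32    # hypothetical tiny host memory
stack = []        # the VM's working stack

# Hypothetical base addresses, chosen for illustration only.
SEGMENT_BASE = {"local": 0, "argument": 8, "static": 16}

def push(segment, index):
    """push segment index: copy a value from host RAM onto the stack."""
    addr = SEGMENT_BASE[segment] + index
    # Sandbox-style check: refuse to touch memory outside the allotted RAM.
    assert 0 <= addr < len(RAM), "sandbox: out-of-bounds access refused"
    stack.append(RAM[addr])

def pop(segment, index):
    """pop segment index: move the top of the stack into host RAM."""
    addr = SEGMENT_BASE[segment] + index
    assert 0 <= addr < len(RAM), "sandbox: out-of-bounds access refused"
    RAM[addr] = stack.pop()

# The VM fragment "push constant 7 / pop local 1 / push local 1",
# expressed in terms of this sketch:
stack.append(7)     # push constant 7
pop("local", 1)     # pop local 1   -> lands in RAM[1] here, but VM code never knows that
push("local", 1)    # push local 1
print(stack[-1])    # → 7
```

The VM code on the last three lines would behave identically under any other choice of base addresses, which is exactly the point: the segment abstraction stays fixed while the implementation is free to change.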
And they don't have to bother at all about the host platform; the host platform, for them, is a virtual machine. And that's very important, because compiler developers have enough problems of their own to worry about, as you will soon realize in the next three modules, when you develop your own compiler. But cheer up: you've just completed developing the first half of a compiler for a high-level, object-based programming language. That's incredible. This is a tremendous achievement, and you should feel very proud of yourself. And with this feeling of pride, we are wrapping up module two. I will see you again in module three, in which we present our high-level language.