>> So welcome to the perspective unit of week five, in which we integrated all the chips that we built before into a full-blown computer system. During this week we discussed two different computer architectures: the von Neumann architecture and the Harvard architecture. So perhaps we can start the perspective unit by asking Noam to summarize for us the main architectural differences between von Neumann and Harvard.

>> Well, in our Hack computer we had a separate read-only memory for the program and a separate data memory that was read-write. Separating these two memories is sometimes called the Harvard architecture, which I tend to view as just a flavor, if you wish, of the von Neumann architecture. In a classic von Neumann architecture you have a single memory that contains both the data and the program. As we explained in the second unit of this week, doing it the classical way has a small complication: you need to separately fetch an instruction, that is, access a program instruction, and then separately, at a different time, in the next cycle, execute it, which may require accessing the data memory. In our Harvard architecture the program sits in a separate memory unit, so we could do both in a single cycle, which made things a bit simpler. Now this architecture, as is, is probably very appropriate for what are sometimes called embedded computers, where the computer does a single thing that was pre-programmed into it and simply placed in ROM. For a general-purpose computer, in which you want to keep changing the program that you run, you would probably need the full, real von Neumann architecture. The differences between these two architectures are not that big, as we saw in unit two of this week, and we chose the simpler one, in keeping with the simplicity of our whole approach.

>> So, Noam, if I understood your explanation, one implication of moving from the Harvard to the von Neumann architecture is that in von Neumann we expect the computer to do different things in different cycles, at different points in time. Is there some organized way in computer science to model this kind of behavior?

>> Absolutely. The standard way of doing it is via the formalism of the finite state machine: a specification of the different things that the computer is supposed to do at different time steps, and of how it is supposed to move from one state to another. Let me demonstrate. The idea is that for every possible state the machine can be in, you draw a circle, and you draw arrows between the different circles, between the different states of the machine, that tell you from which state you move to which other state, and why. In each state you write the kinds of things the machine needs to do in that situation. For example, maybe in this state you want to add one to the program counter, and maybe you want a certain register to get a certain value, say the value stored at some memory address. So in every state you can write this information, and this gives you a very clear picture of what needs to be done in every possible state and of which state you move to afterwards. For example, you may move from here to here if the clock cycle advances, or if the current memory location holds zero, or in various other situations determined by the values of the different hardware signals.
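To make the finite-state-machine idea a bit more concrete, here is a minimal Python sketch. It is not the diagram from the lecture: the three states (FETCH, EXECUTE, WRITEBACK), the machine dictionary, and the specific per-state actions are illustrative assumptions. Each state does its work and then picks the next state from the current state and the other machine values.

```python
# Illustrative finite state machine; three states -> a 2-bit state register in hardware.
FETCH, EXECUTE, WRITEBACK = 0, 1, 2

def next_state(state, machine):
    """Do the current state's work, then return the state for the next clock cycle."""
    if state == FETCH:
        # fetch the current instruction from memory and add one to the program counter
        machine["instruction"] = machine["memory"][machine["pc"]]
        machine["pc"] += 1
        return EXECUTE
    elif state == EXECUTE:
        # access the data memory, e.g. load the value at the address held in register a
        machine["d"] = machine["memory"][machine["a"]]
        # the transition itself can depend on hardware values,
        # e.g. on whether the addressed memory location holds zero
        return WRITEBACK if machine["d"] != 0 else FETCH
    else:  # WRITEBACK
        machine["memory"][machine["a"]] = machine["d"]
        return FETCH

def run(machine, cycles):
    state = FETCH                  # in hardware: the contents of the extra state register
    for _ in range(cycles):
        state = next_state(state, machine)

# A toy machine with a single shared memory, in the von Neumann spirit.
run({"memory": [7] * 64, "pc": 0, "a": 10, "d": 0, "instruction": 0}, cycles=6)
```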
So, once you specify the behavior in this kind of finite-state-machine formalism, we can translate it into the ordinary registers and combinational logic that we already know. The trick is simply to add another register, called the state register, which encodes which one of these states we are currently in. In our situation we need 2 bits to encode three possible states, so this register has 2 bits going in and 2 bits going out. Once we have the state register, we can write combinational logic that does everything we need: all the combinational logic we have can accept the state as part of its input, just as it accepts the counter and so on. The different combinational parts of our computer accept various inputs, and now they also get this additional input, the state. And of course they can do different things according to the state, including deciding what the next state is going to be, which is again a function of the current state and of all the other hardware signals in the computer. So the way you translate from the finite-state-machine formalism to an actual implementation is pretty straightforward and, if you wish, completely technical. Once you have coded everything, it is just like the hardware we organized previously. And that is an organized way to design computers that do different things at different times and move between the different states in an organized manner.

>> One question that typically comes up when we teach this part of the course concerns the fact that the Hack computer can interact with a keyboard and a screen. The question is: what does it take to connect the computer to more peripheral devices, in addition to a keyboard and a screen?

>> Indeed, real computers have many peripheral devices: a screen, a keyboard, a mouse, a microphone, a disk, and so on and so forth. And this architecture is scalable; we can add more devices as we please. The question, then, is how you can possibly do it. Well, just as we did with the screen and the keyboard, we can allocate memory space to represent every one of these peripheral devices. And when we want to write something, for instance when we want to output a sound or write something to the disk, we write into memory certain codes that will later be translated into physical signals that actually operate these peripheral devices. But at some point, when you add several such peripheral devices, the CPU becomes extremely overloaded, because the poor CPU has to not only run your program but also manage all these devices. So the typical approach is to offload all this headache from the CPU and use what is known as device controllers. Typically, when you add, for example, a disk, the disk comes equipped with a device controller: a piece of dedicated hardware that knows how to manage the disk, how to translate the operations the CPU wants to perform into actual movements of the disk, and so on. Something like this happens with every peripheral I/O device. Take the screen, for example. In the Hack platform the screen is managed in a very simplistic way: when you want to turn a pixel on or off, you simply turn a bit on or off in memory, and you assume that at some point this manipulation will cause the screen to be refreshed.
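For what memory-mapped I/O looks like from the program's side, here is a small Python sketch. The addresses follow the Hack memory map (the screen occupies RAM[16384..24575], one bit per pixel and 16 pixels per word; the keyboard is the single word at RAM[24576]); the ram list and the helper functions themselves are illustrative stand-ins, not part of the course software.

```python
SCREEN_BASE = 16384   # 0x4000: start of the 8K-word screen memory map
KEYBOARD    = 24576   # 0x6000: scan code of the currently pressed key

ram = [0] * 32768     # stand-in for the Hack RAM address space

def set_pixel(row, col, on=True):
    """Turn a screen pixel on or off by flipping one bit in the memory map."""
    address = SCREEN_BASE + row * 32 + col // 16   # 32 words per 512-pixel screen row
    bit = 1 << (col % 16)
    if on:
        ram[address] |= bit
    else:
        ram[address] &= ~bit

def read_key():
    """Reading the keyboard is just reading a memory location."""
    return ram[KEYBOARD]
```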
So the CPU is in charge of everything: if you want to draw a line, the CPU has to actually write all these points, all these bits, into memory, and the line will get drawn at a certain point in time. In a real computer the screen comes equipped with a graphics card, or some graphics accelerator, which is a dedicated computer that can do all sorts of things internally. So if we want to draw a line from one coordinate to another, we can simply tell the controller to go ahead and do it, and the controller will do everything necessary to compute which pixels have to be turned on and off, and so on (a small sketch of this contrast appears at the end of this section). So, once again, we can add as many I/O devices as we please. In principle it will be very similar to what we did with the screen and the keyboard, but there are numerous details involved, which are specific to all these different devices.
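To see the difference a controller makes, here is a hedged sketch that reuses set_pixel from the previous snippet; the GraphicsController class and its submit method are purely hypothetical stand-ins for a real accelerator's command interface. Without a controller, the CPU writes every pixel of the line itself; with one, it issues a single high-level command and the dedicated hardware works out the pixels.

```python
def cpu_draw_hline(y, x1, x2):
    """No controller: the CPU writes every pixel of a horizontal line, bit by bit."""
    for x in range(x1, x2 + 1):
        set_pixel(y, x)                      # set_pixel from the previous sketch

class GraphicsController:
    """Hypothetical stand-in for a graphics accelerator's command queue."""
    def submit(self, command):
        # a real controller would rasterize the command in dedicated hardware
        print("controller received:", command)

def draw_line_with_controller(controller, x1, y1, x2, y2):
    """With a controller: the CPU only describes what it wants drawn."""
    controller.submit({"op": "line", "from": (x1, y1), "to": (x2, y2)})
```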