[MUSIC] In our preceding lesson we commented on the relation between algorithms and combinational circuits. Such a relation also exists between algorithms and sequential circuits. The relation between algorithms and digital circuits is one of the bases of this course. Actually, some systems are sequential by nature; this is the case of systems including an explicit or an implicit reference to successive time intervals, and several examples have been seen before: for example, circuits that generate sequences, circuits that detect some specific sequences, or circuits that control sequences of events. But algorithms without any time reference can also be implemented by sequential circuits.

Let us see a first example. We are going to specify the working of a circuit that computes the integer square root of a natural number X using what we could call the "naive algorithm". It consists of computing all pairs (R, S) such that S is the square of R+1. For that we can use this relation. Thus, given R and S such that S is equal to the square of R+1 (for example, if R = 2, then R+1 is 3, and the square of R+1 is 9), then as a function of R and S we compute the next value of R, that is R+1, and the next value of S, that is to say (R+2) squared, and the corresponding algorithm is this one. This computation is the body of a loop, a loop controlled by this condition, which means that as long as S is smaller than or equal to X, we execute the loop body. As an example, assume that X = 47. Then the execution of this loop successively generates R equal to 0, S equal to 1, R equal to 1, S equal to 4, and so on, up to the time when, for the first time, the condition does not hold true, because 49 is greater than 47. And the conclusion is that the integer square root of 47 is equal to 6. A first comment: this is obviously not a good algorithm, as the number of steps is equal to the square root itself. That means that for large values of X, X of the order of 2**n, the number of steps is of the order of the square root of 2**n. It is used just for didactic purposes.

Then, if we have this algorithm, with a loop, the first idea is an iterative combinational circuit. For that, as we don't know in advance how many times the loop will be executed, we modify the algorithm and say that once the condition stops holding true, the values of S and R won't change any more. So, if the condition holds true, then we execute the two operations that we have seen before, but if the condition doesn't hold true, then the new value of S is the current value of S, and the same for R. Then this component executes the loop body, and the connections between components correspond to this instruction, that is to say, the next value of S, the output of this circuit, is the current value of S, that is to say, the input of the next circuit, and the same for the other input. Then, with this configuration, the first output of component number i is either the number i or the square root. Now, what is the number of components in this iterative circuit? We know that the number of steps is equal to the square root of X. Then, as X is smaller than 2**n, its square root is smaller than the square root of 2**n, so the maximum number of steps is of the order of the square root of 2**n.
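The loop just described can be written directly in Python as a didactic sketch; the function name naive_isqrt is mine, not from the lecture:

```python
def naive_isqrt(x):
    """Integer square root by the 'naive' algorithm: about sqrt(x) loop steps."""
    r, s = 0, 1            # invariant: s == (r + 1) ** 2
    while s <= x:          # loop condition from the lecture
        r = r + 1          # next value of R
        s = (r + 1) ** 2   # next value of S, i.e. (old R + 2) squared
    return r

print(naive_isqrt(47))     # prints 6, since 6**2 = 36 <= 47 < 49 = 7**2
```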
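The iterative combinational circuit can also be modelled in software: one call of the modified loop body per component, with a fixed number of components. Again this is only a sketch; the names loop_body and iterative_isqrt, and the default n = 32, are assumptions for illustration.

```python
def loop_body(r, s, x):
    """Modified loop body: once the condition fails, R and S no longer change."""
    if s <= x:
        return r + 1, (r + 2) ** 2
    return r, s

def iterative_isqrt(x, n=32):
    """Model of the iterative combinational circuit: a chain of 2**(n//2) identical cells."""
    r, s = 0, 1
    for _ in range(2 ** (n // 2)):   # one call per component, e.g. 65,536 cells for n = 32
        r, s = loop_body(r, s, x)
    return r

print(iterative_isqrt(47))           # prints 6
```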
That means that, for example, if we are computing the square root of numbers with 32 bits, the number of steps could be as large as 2 to the power 16, that is, greater than 65,000. So, in this case, the circuit would have more than 65,000 components, obviously too large a number of components.

A better idea is a sequential implementation. We know that combinational circuits implement functions. Sequential circuits include a combinational circuit, so they also implement functions. But what else does a sequential system implement? Basically two things: memory, and time partition thanks to the external synchronization signal. Then we could associate the memory elements with variables R and S, partition the time into intervals, and assign the time intervals to each operation, or to each group of operations. For example, during the first time interval, we could initialize the values of variables R and S. Then during the second time interval, the third time interval, and so on, we could execute those operations. Then we could also add a new variable END, used here and here. In this way, we get the following algorithm, that is to say: we initialize the variables R and S; as long as the condition S lower than or equal to X holds, we execute this set of operations, and the END variable is equal to "false"; once the condition doesn't hold any more, the values of S and R won't change any more, and the variable END will be equal to "true". The synchronization input is used (here or here) to modify the values of R and S, replacing the value of S by the next value of S and the value of R by the next value of R, as they are computed by the algorithm and thus by the combinational circuit. So, we get the following sequential circuit that consists of a combinational circuit and a memory. The memory stores variables R and S, and the combinational circuit computes the next values of R and S (a software sketch of this circuit is given below). So it's a sequential circuit whose internal state is constituted by the values of R and S. Observe that the corresponding state transition graph of this sequential circuit would include as many vertices as pairs (R, S), and for every vertex as many edges as values of X. That means a huge number of vertices and a huge number of edges: obviously not a good way to specify the circuit.

Now, some comments about algorithm implementation, either by means of a combinational circuit or by means of a sequential circuit. A first comment: conceptually, any algorithm without time references can be implemented by a combinational circuit. For that, you could execute the algorithm for all input variable values, and store the corresponding output values in a table. Then, once you have a table that defines the circuit, you can generate the corresponding combinational circuit. But obviously, in many cases, this table is enormous. So this method only makes sense if you have unbounded space to implement the algorithm, for example, unbounded silicon area, and (furthermore) if you have a lot of time to dedicate to the development of the circuit. A better method is to directly translate algorithm instructions into circuits. We have seen examples before: how we can translate an "if then else" instruction into a circuit with a multiplexer, or the same with "case" instructions, with loops, and so on. But even with these methods, the result might be a too large circuit, and we have seen an example with the "naive" square root algorithm. Now, what about sequential circuits?
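Here is the software sketch announced above: a minimal model of the sequential square-root circuit, in which each evaluation of the combinational part stands for one clock cycle. The names next_state and sequential_isqrt, and the cycle counter, are mine and only for illustration.

```python
def next_state(r, s, x):
    """Combinational part: computes the next R, S and the END flag from the current state."""
    if s <= x:
        return r + 1, (r + 2) ** 2, False
    return r, s, True                 # R and S frozen, END raised

def sequential_isqrt(x):
    """Software model: each next_state call stands for one synchronization (clock) edge."""
    r, s, end = 0, 1, False           # first time interval: initialize the registers
    cycles = 0
    while not end:
        r, s, end = next_state(r, s, x)
        cycles += 1
    return r, cycles

print(sequential_isqrt(47))           # (6, 7): result 6, END raised on the 7th cycle
```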
Sequential circuits compute functions, like combinational circuits, because they include a combinational circuit. But, furthermore, they are able to store data and to assign different time intervals to different operations, thanks to the external synchronization signal. The result is that, in the case of loops, memory elements allow us to replace N identical components by one component that iteratively executes the loop body. So we could say that space, which generally means silicon area, has been replaced by time. Furthermore, this method is not restricted to the case of iterations.

Let us see an example. Consider this 4-instruction algorithm. First instruction: compute X1 equal to some function F1 of the input X. After that, compute X2 equal to some function F2 of the result X1 obtained before, and so on: X3 is a function of X2, and the output R is a function of X3. To this algorithm corresponds this simple combinational circuit with four components: one that implements function F1, a second one function F2, then function F3 and function F4. But it could also be implemented by a sequential circuit. First we slightly modify the algorithm with a new variable STEP, initially equal to FIRST. Then we define the loop body: it's a "case" instruction. In the case where STEP is equal to FIRST, then R (variable R) is equal to F1 of the input X, and the next STEP will be SECOND. When STEP is equal to SECOND, R is equal to F2(R) and the new STEP will be THIRD. When STEP is equal to THIRD, we compute F3(R), and now STEP will be equal to FOURTH. When STEP is equal to FOURTH, the new value of R will be F4(R) and STEP will be equal to some value FINAL; and when STEP is equal to FINAL, STEP remains equal to this value FINAL. So the circuit we obtain is this one, this sequential circuit, in which the combinational circuit executes the "case" instruction and the memory stores both R and STEP (a small model of this STEP-controlled version is sketched below).

Conclusion: algorithms can be implemented by sequential circuits. For that, we use memory elements to store the variables; we assign time intervals to the different operations of the algorithm, and the operations are executed by the combinational part of the circuit. In this way we get this architecture, that is to say, the well-known structure of a sequential circuit. We have compared sequential and combinational implementations of algorithms and have given two examples of sequential implementations.
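As announced above, here is a small model of the STEP-controlled version. F1 to F4 are arbitrary placeholder functions chosen only so that the example runs; they are not from the lecture.

```python
def step_algorithm(x, f1, f2, f3, f4):
    """Sequential version of R = F4(F3(F2(F1(X)))): one function per time interval."""
    step = "FIRST"
    r = None
    while step != "FINAL":            # one iteration per clock cycle
        if step == "FIRST":
            r, step = f1(x), "SECOND"
        elif step == "SECOND":
            r, step = f2(r), "THIRD"
        elif step == "THIRD":
            r, step = f3(r), "FOURTH"
        elif step == "FOURTH":
            r, step = f4(r), "FINAL"
    return r

# Example with arbitrary placeholder functions:
print(step_algorithm(3, lambda v: v + 1, lambda v: 2 * v, lambda v: v - 5, lambda v: v ** 2))
# F1: 4, F2: 8, F3: 3, F4: 9 -> prints 9
```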