[MUSIC] So, let's take a look at another example, namely a triple modular redundant system. Imagine you have an embedded system which you want to perform very reliably. So instead of a single processor, you have three. I can only show you two with my two hands. And then on top, you have a voter; this device here is performing as my voter. The voter is used to decide which output is correct. So in case you have three of these processors, such a system is called a triple modular redundant system: three processors, a single voter, and all processors run the same program. Then the voter takes a majority vote. So, for example, two processors compute the same correct result and the third processor is off because something is broken in it. The voter can then decide that two out of three have computed this result, so that is most probably the correct result. Each component, that is, the processors and the voter, can be failure prone. And we have a single repair unit that can fix processors and voters, but it can only fix one of them at a time. So this is a visualization of how this works: we have an input signal, all three processors do the same calculations and forward their results to the voter, and the voter takes the majority vote and gives the output of the system. We need a couple of assumptions in order to model this as a CTMC. First of all, if the voter fails, the entire system is down; we cannot perform any calculations. And once the voter is repaired, the system starts over as new. Now, if I want to model the reliability of this triple modular redundant system, I need to define what a state is going to look like. And this is what I'm going to use: a state is defined as a tuple which consists of the number of processors and the number of voters. Of course, this is the number of up-and-running processors, together with a one in case the voter is up and a zero in case the voter is down.
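To make the state definition concrete, here is a minimal sketch in Python; the names, and the collapsing of all voter-down configurations into a single "down" state, are my own reading of the assumptions above (after a voter repair the system restarts as new, so the voter-down tuples all behave identically):

```python
# States (p, v): p = number of up-and-running processors,
# v = 1 if the voter is up. All voter-down configurations are
# lumped into one "down" state, since the system restarts as new
# once the voter is repaired.
voter_up = [(p, 1) for p in (3, 2, 1, 0)]
down = "down"                  # stands in for any (p, 0) configuration
state_space = voter_up + [down]
print(state_space)             # → [(3, 1), (2, 1), (1, 1), (0, 1), 'down']
```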
Now, if I want to model this as a CTMC, I need several states for that. Let me draw this for you. I start with state (3,1), meaning all three processors are up, and so is the voter. Now, each of these three processors can fail, and then we go to state (2,1), where only two processors are up, plus the voter. The failure rate of a processor is given by lambda. Each of the three can fail, so I put in a transition with rate 3 times lambda, once for each processor. They can all be repaired, but also the next processor can break, and that brings us to state (1,1), now with rate 2 times lambda. Finally, the last one can break with rate lambda, and then I'm in state (0,1). Here in the middle I have the state "down", meaning the voter is down. In each of these states, the voter can break, and the voter can be repaired, bringing us back to state (3,1). Also here, a processor can be repaired. Now, let me clean this up for you, because then you can also see all the other failure and repair rates. So what do we have? For a processor, we have a failure rate of lambda, given in failures per hour, and a repair rate of mu, in repairs per hour. For the voter, we have a failure rate of nu, in failures per hour, and a repair rate of delta, in repairs per hour.

Now, we want to take a look at the evolution of this system over time. First of all, we take a look at the steady-state distribution. You know how to compute this: you have to derive the Q matrix and then solve p · Q = 0 together with the normalization condition. You can do that by hand, but I've done it for you for this parameter set. This table states the steady-state probabilities per state. Now, if we would like to know the steady-state probability to be in a state where two or three processors are up and the voter is up, we have to sum the steady-state probabilities for these two states, (3,1) and (2,1).
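The steady-state computation just described (derive Q, then solve p · Q = 0 with normalization) can be sketched as follows; the rates used here are illustrative assumptions of mine, not the parameter set from the lecture's table:

```python
# Sketch of the TMR steady-state computation with NumPy.
# All rates are assumed for illustration, not the lecture's values.
import numpy as np

lam, mu = 0.01, 1.0      # processor failure / repair rate (per hour, assumed)
nu, delta = 0.001, 1.0   # voter failure / repair rate (per hour, assumed)

# State order: (3,1), (2,1), (1,1), (0,1), down
states = ["(3,1)", "(2,1)", "(1,1)", "(0,1)", "down"]
Q = np.zeros((5, 5))
Q[0, 1] = 3 * lam                   # one of three processors fails
Q[1, 2] = 2 * lam                   # one of the remaining two fails
Q[2, 3] = lam                       # the last processor fails
Q[1, 0] = Q[2, 1] = Q[3, 2] = mu    # repair unit fixes one processor
Q[0, 4] = Q[1, 4] = Q[2, 4] = Q[3, 4] = nu   # voter fails
Q[4, 0] = delta                     # voter repaired: system starts as new
np.fill_diagonal(Q, -Q.sum(axis=1))  # diagonal = -(sum of outgoing rates)

# Solve p Q = 0 with sum(p) = 1: replace the last balance equation
# by the normalization condition and solve the linear system.
A = Q.copy()
A[:, -1] = 1.0
p = np.linalg.solve(A.T, np.array([0, 0, 0, 0, 1.0]))

for s, prob in zip(states, p):
    print(f"{s}: {prob:.6f}")
print("P[two or three processors up, voter up] =", p[0] + p[1])
```

With these assumed rates the probability mass concentrates on (3,1) and (2,1), mirroring the high availability reported in the lecture's table.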
If we do that, we see that the probability to be in such a state is quite high, namely 99.4%. We have quite a reliable system, I would say. Now, next to the steady-state probabilities, we're also interested in the transient probabilities, that is, the short-term evolution of the system. You don't yet know how to compute the transient probabilities, but I'm going to show you. I did the computations for you, and they are visualized here. What you see here is the probability to be in state (3,1), where everything is up and running, for the first ten hours, given that we started in state (3,1). And you can see how this decreases over time. Here, on a log scale, I give you the transient probability distribution for each of the states, again provided that we start in state (3,1), and for the same parameter set as the steady-state computations. Now, of course, you want to know how to compute these transient probabilities. In a CTMC, the transient probabilities are given by a set of linear differential equations, the so-called Chapman–Kolmogorov equations, and the transient probabilities are the formal solution of this set of differential equations. I'm going to spare you the derivation of this result and instead tell you how to solve them. Well, p(t), the vector of transient probabilities, is given by the initial probability distribution times e to the power of Q·t. We can use a Taylor series expansion to solve this, and then we have to multiply the initial distribution with the infinite summation over i of (Q·t)^i divided by i factorial. In theory, this is a very nice result, but in practice we encounter a couple of problems when we try to solve it. First of all, we have an infinite summation in here. This is not nice, but it can be overcome. On the other hand, this computation tends to be numerically unstable, because Q^i becomes non-sparse. So, Q is a matrix with rates inside.
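The formal solution p(t) = p(0) · e^{Qt} can be evaluated with a library matrix exponential, as in this minimal sketch; the rates are again my illustrative assumptions, not the lecture's parameter set, and SciPy's `expm` internally avoids the naive Taylor summation discussed above:

```python
# Sketch: transient distribution p(t) = p(0) · e^{Q t} via SciPy's
# matrix exponential. Same hypothetical TMR model as before;
# all rates are assumed for illustration.
import numpy as np
from scipy.linalg import expm

lam, mu, nu, delta = 0.01, 1.0, 0.001, 1.0   # assumed rates (per hour)

Q = np.zeros((5, 5))   # state order: (3,1), (2,1), (1,1), (0,1), down
Q[0, 1], Q[1, 2], Q[2, 3] = 3 * lam, 2 * lam, lam
Q[1, 0] = Q[2, 1] = Q[3, 2] = mu
Q[0, 4] = Q[1, 4] = Q[2, 4] = Q[3, 4] = nu
Q[4, 0] = delta
np.fill_diagonal(Q, -Q.sum(axis=1))

p0 = np.array([1.0, 0, 0, 0, 0])   # start in state (3,1)
for t in (0.0, 1.0, 5.0, 10.0):    # the first ten hours
    pt = p0 @ expm(Q * t)
    print(f"t={t:5.1f}h  P[(3,1)] = {pt[0]:.6f}")
```

The printed probability of state (3,1) starts at 1 and decays toward its steady-state value, matching the shape of the plot described in the lecture.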
So it's not a probability matrix, and if we take the ith power of it, it will become less and less sparse. So what do we do instead? Well, we resort to a technique that is known as uniformization. Uniformization computes the transient probabilities on the so-called uniformized DTMC and then renormalizes them and brings them back to the level of the CTMC. So first of all, what you have to do is transform the CTMC into the uniformized DTMC, which is called U. U is a probability matrix, and it equals the identity matrix plus the Q matrix divided by small q, the uniformization constant. The small q has to be chosen at least as big as the maximum of the absolute values of the diagonal entries of Q. So what we do with this uniformization approach is basically renormalize with respect to the fastest outgoing rate. In a sense, we add self-loops for slower states that increase the probability to stay in that state, so we increase the state residence times of these states. This method, uniformization, which I'll present in more detail in my next lecture, was introduced by Jensen in 1953 and then popularized about 30 years later by Gross & Miller in 1984. [MUSIC]
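The uniformization scheme just described can be sketched as follows, with the infinite Poisson-weighted sum truncated once the remaining Poisson mass falls below a tolerance; the model and rates are the same illustrative assumptions as before, and the truncation strategy is one simple choice among several:

```python
# Sketch of uniformization: p(t) = sum_i e^{-q t} (q t)^i / i! * p(0) U^i,
# with U = I + Q/q and q >= max |Q_ii|. Rates are assumed for illustration.
import numpy as np

def uniformize(Q, p0, t, eps=1e-12):
    q = np.max(np.abs(np.diag(Q)))     # uniformization constant
    U = np.eye(Q.shape[0]) + Q / q     # uniformized DTMC (adds self-loops)
    weight = np.exp(-q * t)            # Poisson(q t) mass for i = 0 jumps
    term = p0.copy()                   # p0 · U^0
    acc = weight * term
    total, i = weight, 0
    while 1.0 - total > eps:           # stop when remaining Poisson mass < eps
        i += 1
        term = term @ U                # p0 · U^i, updated incrementally
        weight *= q * t / i            # Poisson pmf recurrence
        acc += weight * term
        total += weight
    return acc

lam, mu, nu, delta = 0.01, 1.0, 0.001, 1.0
Q = np.zeros((5, 5))   # state order: (3,1), (2,1), (1,1), (0,1), down
Q[0, 1], Q[1, 2], Q[2, 3] = 3 * lam, 2 * lam, lam
Q[1, 0] = Q[2, 1] = Q[3, 2] = mu
Q[0, 4] = Q[1, 4] = Q[2, 4] = Q[3, 4] = nu
Q[4, 0] = delta
np.fill_diagonal(Q, -Q.sum(axis=1))

p0 = np.array([1.0, 0, 0, 0, 0])
print(uniformize(Q, p0, 10.0))         # transient distribution at t = 10 h
```

Because U is a probability matrix and all Poisson weights are positive, every term in the sum is nonnegative, which is exactly why this avoids the numerical instability of powering Q directly.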