[MUSIC] Welcome back to this course on Quantitative Model Checking. In this fourth module, we'll deal with continuous-time Markov chains, abbreviated CTMCs. This first lecture serves as an introduction to CTMCs. You have seen discrete-time Markov chains in Module 3, where time evolves in abstract steps. We have seen how to compute transient probabilities over time, as well as in the limit when time goes to infinity. The question now is: can we add real time? If so, how? And what can we still compute in such cases? First, I want to go back to DTMCs and have a look at the so-called state residence times, Ti, that are defined for each state i of a DTMC. Now, what is a state residence time? It is the time you spend in a certain state. Recall the probability matrix P. The probability that Ti, the state residence time, equals 1 is 1-P(i,i); that is simply the probability to leave the state in the very first step. If you want the probability that Ti=2, we stay in the state for one time step and then leave it in the next, which gives P(i,i)*(1-P(i,i)). The same you can, of course, do for Ti=n: we have to stay in the state for n-1 time steps and then leave it in the next, so P(Ti=n) = P(i,i)^(n-1)*(1-P(i,i)). These are the state residence times Ti. And if you pay attention, you might have recognized that this is a probability distribution, namely the geometric distribution. Its expectation is 1/(1-P(i,i)) and its variance is P(i,i)/(1-P(i,i))^2; this is just for further reference, we won't use it very much. What is very important is that this distribution exhibits the memoryless property, which you also recall from Module 3. Now, what does the memoryless property mean for this geometric distribution? It means that if we have already spent n steps in a state, the probability to leave that state after n+m steps in total equals the probability to leave it after m steps, for any m greater than or equal to 1.
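As a small sketch (not part of the lecture), the geometric residence-time formula and its memoryless property can be checked numerically. The self-loop probability `p_ii` below is an illustrative value, not taken from the course material.

```python
p_ii = 0.6  # illustrative self-loop probability P(i, i)

def p_residence(n, p_ii):
    """P(T_i = n): stay n-1 time steps, then leave: P(i,i)^(n-1) * (1 - P(i,i))."""
    return p_ii ** (n - 1) * (1 - p_ii)

# The probabilities over n = 1, 2, ... sum to 1 (truncated sum as a sanity check).
total = sum(p_residence(n, p_ii) for n in range(1, 200))

# Expectation of the geometric distribution: 1 / (1 - P(i,i)) = 2.5 here.
mean = sum(n * p_residence(n, p_ii) for n in range(1, 500))

# Memorylessness: P(T_i = n + m | T_i > n) = P(T_i = m), with P(T_i > n) = P(i,i)^n.
n, m = 3, 2
cond = p_residence(n + m, p_ii) / p_ii ** n
```

Here `cond` comes out equal to `p_residence(m, p_ii)`, which is exactly the memoryless property stated above.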
If you translate this to the underlying Markov chain, you see that again all the state information is encoded in the state number, and you don't need to take into account any history. So this is the memoryless property, which returns here for the state residence times. Now, this is discrete time. As I said in the beginning, we want to move to continuous time. So we need a continuous-time distribution which also exhibits this memoryless property, and there is only one, namely the exponential distribution. This is what we need in CTMCs to model the state residence time. So we have a continuous random variable X which follows the exponential distribution with a certain rate, and very often we refer to that rate as lambda. For the exponential distribution, we have the density and the distribution function. Again, you won't use the density very much, but let's take a look either way. What is most important for you, I think, is the distribution function, F(x) = 1 - e^(-lambda x). The probability that X is greater than t is then basically the counter-probability: 1 - (1 - e^(-lambda t)), which gives you e^(-lambda t). The expectation is something we will use. It's very easy: it's just 1/lambda, so I'm sure you can remember this. We also have the variance, which you won't need very much, but I want to show it to you just for completeness: the variance is given by 1/lambda^2. So now you know a little bit about exponential distributions, and we'll use this in CTMCs. But before we move on to the next lecture, I just want to illustrate for you the exponential densities on the left and the distribution functions on the right, and show you the impact that the rate has on the shape of the curve. Here we see the density for lambda from 0.5 to 1.5, and on the other side you see the distribution function for the same parameters.
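The exponential formulas from this part can also be sketched in a few lines of Python; the rate `lam` is just an illustrative choice.

```python
import math

lam = 1.5  # illustrative rate lambda

def density(x, lam):
    """f(x) = lambda * e^(-lambda x), for x >= 0."""
    return lam * math.exp(-lam * x)

def cdf(x, lam):
    """F(x) = P(X <= x) = 1 - e^(-lambda x)."""
    return 1 - math.exp(-lam * x)

def tail(t, lam):
    """P(X > t) = 1 - F(t) = e^(-lambda t)."""
    return math.exp(-lam * t)

# Expectation 1/lambda and variance 1/lambda^2.
mean, var = 1 / lam, 1 / lam ** 2

# Memorylessness in continuous time: P(X > s + t | X > s) = P(X > t).
s, t = 0.7, 1.2
cond = tail(s + t, lam) / tail(s, lam)
```

Here `cond` equals `tail(t, lam)`, mirroring the discrete-time memoryless property of the geometric distribution.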
Now, what we do is use the exponential distribution in the CTMC. We add one random variable Xi that follows the exponential distribution to each state of a DTMC, and then we obtain a CTMC. This random variable represents the state residence time distribution in the CTMC. The operational behavior when we are in state i is now as follows. We stay in state i for a time that is exponentially distributed, with expected value E[Xi], following this random variable Xi. Then, after this time, we jump to the next state j with a certain probability. That probability comes from the probability matrix P: we just take the entry P(i,j) that corresponds to state i, where we started, and state j, where we want to jump to. And that you just repeat as long as you want. Now, the definition of a CTMC. A CTMC is a tuple consisting of a state space S and a rate matrix R. That is in contrast to DTMCs, where we had a probability matrix. Here, we store the rate that brings us from state i to state j in the so-called rate matrix. Now of course, there is a connection between the rate matrix and the probability matrix: the entry R(i,j) in the rate matrix is the same as the entry P(i,j) in the probability matrix times lambda_i, where lambda_i is the parameter of the exponential distribution that we give to state i. Now we can define a bit more, namely the exit rate of a state s, and that accumulates all the rates that bring you out of state s, so to say. So E(s) is the same as R(s, S), and this we use as a shorthand notation for the sum over all states s' in the state space, but without taking into account the state s where we start, of the rates R(s, s'). This is the exit rate, and you will need it later for model checking. And then recall the notion of absorbing: a state is absorbing when you can never leave it.
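To make the definition concrete, here is a small sketch with a hypothetical 3-state CTMC given by its rate matrix; the rates are made up for illustration, not taken from the lecture.

```python
# Hypothetical rate matrix R: R[i][j] is the rate from state i to state j.
R = [
    [0.0, 2.0, 1.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0],   # state 2 has exit rate 0, i.e. it is absorbing
]

def exit_rate(R, s):
    """E(s) = R(s, S): sum of all rates leaving state s."""
    return sum(rate for t, rate in enumerate(R[s]) if t != s)

def jump_probability(R, i, j):
    """P(i, j) = R(i, j) / E(i): since R(i, j) = P(i, j) * lambda_i,
    dividing by the exit rate recovers the jump probability."""
    return R[i][j] / exit_rate(R, i)

def is_absorbing(R, s):
    return exit_rate(R, s) == 0.0

# Expected residence time in state 0: expectation of an exponential
# with rate E(0), i.e. 1 / E(0).
expected_stay_0 = 1 / exit_rate(R, 0)
```

So from state 0 we stay, on average, 1/E(0) time units and then jump to state 1 with probability R(0,1)/E(0).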
Being absorbing directly corresponds to an exit rate of 0. So this is the definition of a CTMC. And keep watching if you want to know how to compute with CTMCs. [MUSIC]