Let's talk a little bit about the complexity of this algorithm. Given an input, a formal context of a certain size, how many steps does it take to produce all formal concepts in lectic order? Well, the answer is easy: unfortunately, it takes exponential time in the worst case. And this is because the output may be exponential: there may be exponentially many formal concepts, and therefore the algorithm is necessarily exponential in the context size. Let me give you an example. Let's say that our formal context looks like this: we take the numbers from 1 to n as objects, the same numbers as attributes, and the inequality relation as the incidence relation. To give a concrete example, for n = 5 we get a formal context with objects 1, 2, 3, 4, 5 and attributes 1, 2, 3, 4, 5, and we have crosses everywhere except on the diagonal. Is 1 not equal to 1? Well, 1 is equal to 1, therefore we don't put a cross there. But 1 is not equal to 2, not equal to 3, not equal to 4, not equal to 5. 2 is not equal to 1, but it is equal to 2, so we don't put a cross there, and so on. So how many concepts do we have in this formal context? It turns out that the formal concepts of this context are exactly the pairs (A, B) that partition the numbers from 1 to 5 into two disjoint sets. Let's say we take {2, 5} as an object set. Then what is {2, 5}'? It is the set of all numbers that are not equal to 2 and not equal to 5, and those are precisely all the numbers except 2 and 5, so it's {1, 3, 4}. So this is a formal concept: {2, 5} as the extent and {1, 3, 4} as the intent, or vice versa, {1, 3, 4} as the extent and {2, 5} as the intent. So every set of objects is a concept extent and every set of attributes is a concept intent. And because in this case we have 2^5 = 32 subsets, we have as many extents, as many intents, and as many concepts. In general, for a context like this, which is by the way called a contranominal scale, we have 2^n concepts.
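As a quick sanity check, here is a small Python sketch (the helper names are my own, not from the lecture) that builds the contranominal scale for n = 5 and verifies by brute force that the derivation of any object set is exactly its complement, so there are 2^5 = 32 formal concepts:

```python
from itertools import combinations

def contranominal_scale(n):
    """Context (G, M, I) with G = M = {1, ..., n} and (g, m) in I iff g != m."""
    objects = list(range(1, n + 1))
    attributes = list(range(1, n + 1))
    incidence = {(g, m) for g in objects for m in attributes if g != m}
    return objects, attributes, incidence

def object_prime(A, attributes, incidence):
    """A' = the set of attributes shared by all objects in A."""
    return {m for m in attributes if all((g, m) in incidence for g in A)}

n = 5
G, M, I = contranominal_scale(n)

concepts = set()
for k in range(n + 1):
    for A in map(set, combinations(G, k)):
        intent = object_prime(A, M, I)
        # In the contranominal scale, A' is exactly the complement of A,
        # so every subset of G is an extent.
        assert intent == set(M) - A
        concepts.add((frozenset(A), frozenset(intent)))

print(len(concepts))  # 2**5 = 32
```

For instance, the derivation of {2, 5} comes out as {1, 3, 4}, matching the worked example above.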
And no matter how fast we are at producing a single concept, we still need exponential time just to write them all down. So a more interesting question is how much time we need per concept. Okay, we need exponential time to output all of them, but if we spent exponential time to produce a single concept, now that would be really bad. So we want to be efficient at least at producing each individual concept. How much time does our algorithm spend to produce a single concept? So, how much time per concept? Or actually, per closed set, because our algorithm is a bit more general: it enumerates the closed sets of an arbitrary closure operator. Let us denote the time needed to compute one closure by gamma. So, let gamma be the time to compute one closure. Our algorithm consists essentially of two parts: it calls First Closure first, and then it calls Next Closure several times, as long as there are closed sets that haven't been generated yet. First Closure, all it does is compute the closure of the empty set, so the time it needs is just gamma. What about one call to Next Closure? We have an outer loop that goes through all attributes, so this loop makes at most |M| iterations. And at each iteration it computes one closure, which is gamma, and it also does something with sets: maybe a set difference, maybe adding an element to a set, and all of this takes time O(|M|). So this is the time we spend per iteration, and we make at most |M| iterations; if we multiply the two, we get the time complexity of Next Closure. But now let's suppose that our closure operator is given by a formal context. What is gamma in this case? How can I compute A'' in a formal context? It turns out this can be done in linear time, linear in the size of the formal context. The size of the formal context is |G| multiplied by |M|, and so this can be done in time O(|G|·|M|).
We start by initializing B with M. B is going to be the closure of A, and we initialize it with the set of all attributes M. Then we go through all objects. If an object g has all attributes from A, we must make sure that B contains only attributes from g'. So, if A is a subset of g', then we keep in B only the attributes that are in g'. This is because A'' is the set of all attributes shared by all objects from A'; g' contains A, so g belongs to A', and we must keep here only attributes that belong to g'. And in the end we return B. So this is an algorithm that computes A''. It has |G| iterations, and at each iteration we do two set-theoretic operations on attribute subsets, which need time O(|M|). And so the complexity of the algorithm is O(|G|·|M|). So if gamma = O(|G|·|M|), then the complexity of First Closure is also O(|G|·|M|), and the complexity of Next Closure is O(|G|·|M|²). Each iteration takes gamma + O(|M|), which is O(|G|·|M|), because O(|M|) is subsumed by O(|G|·|M|), and we make |M| iterations. So the overall complexity is O(|G|·|M|) times |M|, or O(|G|·|M|²). What does this tell us about the algorithm All Closures, which computes all closed sets in lectic order? Well, it tells us that this algorithm spends at most that much time between two subsequent generations of a closed set. This is called polynomial delay. All Closures is an algorithm that enumerates certain data structures, and for such algorithms people measure the time between two subsequent generations of these structures. And because in our case each call to Next Closure produces a closed set, the polynomial delay here is just the complexity of Next Closure, so it's O(|G|·|M|²). This is true if we compute closed attribute sets, that is, concept intents.
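The closure computation just described can be sketched in Python as follows (the row-based representation and the function name are my own choices, not from the lecture):

```python
def double_prime(A, G, M, row):
    """Compute A'' for an attribute set A in time O(|G| * |M|).

    `row[g]` is the set g' of attributes of object g.
    """
    B = set(M)              # initialize B := M
    for g in G:             # |G| iterations
        if A <= row[g]:     # g has all attributes from A, i.e. A is a subset of g'
            B &= row[g]     # keep in B only attributes from g'  -- O(|M|) set work
    return B

# Example on the contranominal scale with n = 4: row[g] = M \ {g},
# so every attribute set is closed and A'' should equal A.
G = M = [1, 2, 3, 4]
row = {g: set(M) - {g} for g in G}
print(double_prime({2, 4}, G, M, row))  # {2, 4}
```

Each of the two set operations inside the loop touches at most |M| attributes, which is where the O(|G|·|M|) bound comes from.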
If we want to compute concept extents instead, then the complexity, or the polynomial delay, is O(|G|²·|M|). So this is the complexity of Next Closure. It is not the most efficient algorithm known for computing concept lattices, neither in theory nor in practice. Well, in terms of total running time, it can't really be much more efficient, because of what we saw above: any algorithm producing all concepts must be exponential in the context size. But this algorithm is pretty good for many tasks.
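To tie things together, here is a minimal Python sketch of the Next Closure scheme over an arbitrary closure operator (the function names and the simple linear-scan canonicity test are my own simplifications, not an optimized implementation):

```python
def next_closure(A, attrs, closure):
    """Return the lectically next closed set after A, or None if A is the last.

    `attrs` fixes a linear order on the attributes; `closure` is an
    arbitrary closure operator on subsets of `attrs`.
    """
    A = set(A)
    for i in range(len(attrs) - 1, -1, -1):     # outer loop: at most |M| iterations
        m = attrs[i]
        if m in A:
            A.discard(m)                        # O(|M|) set work
        else:
            B = closure(A | {m})                # one closure: time gamma
            # Canonicity test: B \ A must contain no attribute preceding m.
            if all(attrs.index(b) >= i for b in B - A):
                return B
    return None

def all_closures(attrs, closure):
    """Enumerate all closed sets in lectic order with polynomial delay."""
    A = closure(set())                          # First Closure: time gamma
    while A is not None:
        yield A
        A = next_closure(A, attrs, closure)

# With the identity closure every subset is closed, so for 3 attributes
# we get all 2**3 = 8 subsets, in lectic order.
sets = list(all_closures([1, 2, 3], lambda X: set(X)))
print(len(sets))  # 8
```

One call to `next_closure` makes at most |M| iterations, each with one closure plus O(|M|) set work, matching the gamma·|M| + |M|² bound discussed above.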