[MUSIC] Let's talk a little bit about the computational properties of implications. On the one hand, implications are really easy to deal with. If we have a set of implications L, the canonical basis or any other set, and we want to check whether an implication x → y follows from L, that's easy to do. We just have to check whether y is a subset of L(x): we compute the closure of x under the implications in L and check whether y is a subset of the result. This can be done in polynomial time; it can even be done in linear time if we use LinClosure. So if we have the canonical basis, then we can very easily, or at least relatively quickly, answer queries like this, queries about concrete, particular implications.

However, how long does it take to compute L if, say, we start with a formal context? We've just seen an algorithm that does precisely this. But this algorithm has to compute not only the pseudo-closed sets, which are the premises of the canonical basis, but all pre-closed sets, and these include all the closed sets. So it may compute an exponential number of closed sets, and we don't need any of them: we just want the basis, so we only need the pseudo-closed sets. And still we have to compute the closed sets. So we spend exponential time on things we don't need. That's not good. Well, okay, that's a property of this particular algorithm. There are other algorithms that use different strategies; they don't have to compute the closed sets. Such algorithms are faster than the algorithm we studied in some cases, and slower in others. But can we hope for an algorithm that works in polynomial time, that is, an algorithm that takes a formal context and spends only time polynomial in the size of this formal context to produce its implication basis, the canonical basis? Well, it turns out that this is impossible, and for a very simple reason: the canonical basis itself, although it is the smallest possible basis, can still be large.
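The membership test described above can be sketched in a few lines. This is a minimal illustration, not the lecture's own code: it uses a naive fixed-point iteration rather than the linear-time LinClosure, and represents implications as hypothetical (premise, conclusion) pairs of sets.

```python
def closure(x, implications):
    """Compute L(x): the closure of attribute set x under a list of
    implications, each given as a (premise, conclusion) pair of sets.
    Naive fixed-point iteration; LinClosure computes the same closure
    in linear time."""
    result = set(x)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            # Fire the implication if its premise holds but its
            # conclusion is not yet fully included.
            if premise <= result and not conclusion <= result:
                result |= conclusion
                changed = True
    return result

def follows(x, y, implications):
    """Check whether x -> y follows from the implications: y must be
    contained in the closure L(x)."""
    return set(y) <= closure(x, implications)
```

For example, with L = [({'a'}, {'b'}), ({'b'}, {'c'})], the closure of {'a'} is {'a', 'b', 'c'}, so the query "does a → c follow from L?" is answered positively.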
It can even be exponential in the size of the formal context. Let me give you an example. Take a formal context that has attributes m0, m1, and so on, up to mn, and then n more attributes, m(n+1) and so on, up to m(2n). So it has 2n + 1 attributes in total. The objects are split into two parts. First, we have n objects, g1, ..., gn, and none of them has the attribute m0. As for the remaining attributes, the situation is like this: imagine that g1, ..., gn against m1, ..., mn is a table of size n by n, filled with crosses except for the diagonal. So we put a cross between gi and mj if and only if i is not equal to j; graphically, the diagonal is empty and the rest is filled with crosses. And we have the same situation for the attributes m(n+1), ..., m(2n): gi has m(n+j) if and only if i is not equal to j. That's the first part of our objects. Then we have 2n more objects, g(n+1) and so on, up to g(3n), and all of these objects have the attribute m0. They don't make a distinction between the two parts of the remaining attributes: here we have a table of size 2n by 2n, and again we leave the diagonal empty and fill the rest with crosses. So that's our formal context.

Now, if you try to compute its pseudo-intents, it's not too difficult to see that they are precisely the sets of size n of the form {m_i1, ..., m_in}, where for position number j we take either attribute number j from the first part or attribute number n + j from the second part; so i_j is either j or j + n. If you look at any such set, it cannot be a subset of any object intent from the first part of our formal context, because object gi lacks both mi and m(n+i), and the set contains one of them. So it is contained only in object intents from the second part, and since all objects from the second part have m0, this set is not closed: it implies m0. How many such sets do we have?
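To make the construction concrete, here is a sketch that builds this context for a given n and checks that every candidate set of the form just described indeed implies m0. The encoding is my own choice, not from the lecture: attribute indices 0..2n stand for m0..m(2n), and the closure helper is the usual double-prime operator A''.

```python
from itertools import product

def build_context(n):
    """The example context: 3n objects g1..g(3n), attribute indices 0..2n,
    where index 0 plays the role of m0.
    Returns a dict mapping object name -> set of attribute indices."""
    ctx = {}
    # First part: g1..gn have no m0; both n-by-n blocks
    # (over m1..mn and m(n+1)..m(2n)) have an empty diagonal.
    for i in range(1, n + 1):
        attrs = {j for j in range(1, n + 1) if j != i}
        attrs |= {n + j for j in range(1, n + 1) if j != i}
        ctx[f"g{i}"] = attrs
    # Second part: g(n+1)..g(3n) all have m0; the 2n-by-2n block
    # over m1..m(2n) has an empty diagonal.
    for k in range(1, 2 * n + 1):
        ctx[f"g{n + k}"] = {0} | {j for j in range(1, 2 * n + 1) if j != k}
    return ctx

def closure_in_context(attr_set, ctx):
    """A'': attributes common to all objects whose intent contains attr_set
    (if no object qualifies, the closure is the full attribute set)."""
    all_attrs = set().union(*ctx.values())
    intents = [attrs for attrs in ctx.values() if attr_set <= attrs]
    return set.intersection(*intents) if intents else all_attrs

def candidate_premises(n):
    """All sets {m_i1, ..., m_in} with i_j in {j, n + j}: 2^n of them."""
    return [set(choice)
            for choice in product(*[(j, n + j) for j in range(1, n + 1)])]
```

For instance, with n = 3 there are 2^3 = 8 such candidate sets, and the closure of each of them contains attribute 0, that is, each of them implies m0, so none of them is closed.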
Well, there are n positions, and we have two options for each position, so 2 to the n possible pseudo-intents. That's the size of our implication basis. But the size of the context is just 3n by 2n + 1. So you can see that the size of the basis is exponential in the size of the context. This means that, in general, it is impossible to have a polynomial-time algorithm that enumerates the canonical basis of a formal context. However, we may ask for something weaker: an algorithm that enumerates pseudo-intents in time polynomial not only in the size of the context, but in the size of the context plus the size of the basis, that is, polynomial also in the number of pseudo-intents. Such an algorithm is called output-polynomial. Well, it's a major open question whether such an output-polynomial-time algorithm exists. The algorithm from the previous video doesn't have this property, because it has to go through all intents in addition to all pseudo-intents, and the number of intents can be much, much larger than the number of pseudo-intents in some cases. So it's a big question whether or not we can compute the canonical basis in time polynomial in the size of the basis plus the size of the input context. [MUSIC]
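The gap between the two sizes is easy to see numerically. A quick sketch comparing the 2^n pseudo-intents against the 3n × (2n + 1) cells of the context table from the example above:

```python
def basis_vs_context(n):
    """For the example context: number of pseudo-intents (2^n) versus
    the number of cells in the context table (3n objects times
    2n + 1 attributes)."""
    return 2 ** n, 3 * n * (2 * n + 1)

for n in (10, 20, 30):
    pseudo, cells = basis_vs_context(n)
    print(f"n={n}: {pseudo} pseudo-intents, {cells} table cells")
```

Already at n = 10 the basis (1024 implications) outgrows the table (630 cells), and the gap then widens exponentially.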