That is, we can have different factorizations, quite different in their expressive power, all of which induce the exact same graph.

And we've already seen an example of that.

When we had the fully connected pairwise Markov network, we had one parameterization that had O(n²d²) parameters.

And we had another parameterization that had a full factor over all n variables, so it had dⁿ parameters.

And these are two very different representations, with very different expressive powers, that nevertheless induce the exact same graph.
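To make the gap concrete, here is a small sketch comparing the two parameter counts. The counting functions and the values of n and d are illustrative assumptions, not part of the lecture.

```python
def pairwise_params(n, d):
    # One factor per pair of variables, each a d-by-d table: O(n^2 d^2) total
    return (n * (n - 1) // 2) * d * d

def full_factor_params(n, d):
    # A single factor over all n variables: d^n entries
    return d ** n

n, d = 10, 2
print(pairwise_params(n, d))     # 45 pairs * 4 entries = 180
print(full_factor_params(n, d))  # 2**10 = 1024
```

Even for ten binary variables, the full factor already needs far more parameters than the pairwise parameterization, and the gap grows exponentially with n.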

But then we must ask the question: why is the graph the same? What does the graph really tell us, given that it's not telling us the structure of the factorization? So, going back to the example on the previous slide.

We have these two factorizations, one of which uses triplet factors and the second of which uses pairwise factors. And let's think about what the flow of influence is in these factors: when can one variable influence another? If we think about this intuitively, when can B influence D? Is this different in the two distributions? And the answer is, well, not really.

I mean, once we have a factor, in one case phi 1, in the other case phi 5, that ties B and D directly, the fact is that B can influence D. What about A, can A influence C? A can influence C via D, by going through, in the first case, the ABD factor and then subsequently utilizing the dependencies within the BCD factor; and in the second case, it can use the AD factor and then the CD factor. And so the point

is that although the parameterizations of the two distributions are different, the trails in the graph through which influence can flow are the same, regardless of this finer-grained structure of the factorization, which is why the graphs in the two cases are the same.
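One way to see why the induced graph is the same is that the graph simply connects every pair of variables that co-occur in some factor scope. A minimal sketch, assuming the factor scopes from the example on the slide (the triplet scopes {A,B,D}, {B,C,D} and the corresponding pairwise scopes):

```python
from itertools import combinations

def induced_edges(scopes):
    # The induced Markov network connects every pair of variables
    # that appear together in the scope of some factor.
    edges = set()
    for scope in scopes:
        for u, v in combinations(sorted(scope), 2):
            edges.add((u, v))
    return edges

triplet  = [{"A", "B", "D"}, {"B", "C", "D"}]
pairwise = [{"A", "B"}, {"A", "D"}, {"B", "D"}, {"B", "C"}, {"C", "D"}]

print(induced_edges(triplet) == induced_edges(pairwise))  # True
```

Both factorizations yield the same edge set, so the graph records only which trails exist, not how the factors are carved up along them.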

So let's formalize this as a definition. We're going to define the notion of an active trail in a Markov network. And this is actually a very simple definition, much simpler than the analogous definition in the context of Bayesian networks.

We have that a trail going from X1 up to Xn is active, given a set of observed variables Z, if no Xi along the trail is in Z, because an active trail only flows through variables that are unobserved. Once we observe a variable along the trail, influence stops, because that variable is now set and so you can't really influence it. And if you can't influence it, you can't influence anything subsequently along that path.

So for example, the trail from B to D is active, but not if A is observed. Once I observe A, B can no longer influence D via A.
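The active-trail definition above can be sketched directly in code. This is a minimal illustration using the B-A-D trail from the example; the function name and variable labels are my own.

```python
def is_active_trail(trail, observed):
    # A trail X1-...-Xn in a Markov network is active given Z
    # iff no variable along the trail is observed (i.e., in Z).
    return not any(x in observed for x in trail)

# Trail B - A - D from the example:
print(is_active_trail(["B", "A", "D"], set()))  # True: nothing observed
print(is_active_trail(["B", "A", "D"], {"A"}))  # False: observing A blocks it
```

Note how much simpler this is than the Bayesian network case: there are no v-structures to check, so observation only ever blocks influence, never enables it.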