the largest small s of G. And we call this s_max.

Why do we need to do this? Because then we can normalize our likelihood metric.

We would define big S of G as basically a normalized version of small s of G.

It's normalized by s_max. Now we use big S of G to represent the

likelihood of drawing a graph with this node degree distribution.

From the set of all graphs satisfying the same distribution.
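As a sketch of that normalization (assuming s of G is the degree-weighted edge sum defined earlier in the lecture; that exact definition is an assumption here, as is the toy graph):

```python
# Sketch of the normalized likelihood metric S(G) = s(G) / s_max.
# Assumption: s(G) is the degree-weighted edge sum from earlier in the
# lecture, s(G) = sum over edges (i, j) of d_i * d_j, and s_max is the
# largest s-value over all graphs with the same degree sequence.
# Here s_max is simply passed in rather than searched for.

def s_metric(edges, degree):
    """s(G): sum over edges (i, j) of degree[i] * degree[j]."""
    return sum(degree[i] * degree[j] for i, j in edges)

def S(edges, degree, s_max):
    """Normalized likelihood metric S(G) = s(G) / s_max, in (0, 1]."""
    return s_metric(edges, degree) / s_max

# Toy example: a star on 4 nodes (one hub of degree 3, three leaves).
edges = [(0, 1), (0, 2), (0, 3)]
degree = {0: 3, 1: 1, 2: 1, 3: 1}
s_max = s_metric(edges, degree)  # the star is the only graph with this degree sequence
print(S(edges, degree, s_max))   # 1.0
```

Because the star is the unique simple graph with that degree sequence, its s value is itself the maximum, so S comes out to 1.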

So now we've got two things: one is P of G, the other is S of G. A given graph

clearly constrains the possible routing available, and therefore

constrains the possible sum of per-session throughput.

And given a graph, we can also define the likelihood of

drawing it at random from the set of all graphs satisfying the same distribution.

So we've got P of G and S of G, denoting performance and likelihood respectively.

And our job is to look at their trade off. Here is a cartoon to illustrate a typical

trade off. Both points shown here satisfy the same

Pareto node degree distribution. And yet preferential attachment, which

leads to highly connected nodes in the center of the network sits here in the

trade off space between P and S. It is much more likely, say an 80% chance,

to draw at random something that looks like a preferential-attachment-generated

topology: a scale-free network with highly connected nodes in the center.

But its performance, measured in bits per second as the sum of the optimized

per-session throughput, is much smaller, ten to the, say, power ten, compared to

a different kind of topology, an Internet-like topology, where the highly connected

nodes sit at the edge of the network and the core is much sparser, with medium

to low degree nodes. So the highly connected nodes are at

the edge of the network. This is the kind of internet topology that

we see. They also satisfy the same node degree distribution,

and their chance of being drawn at random from that set is much smaller,

say twenty percent, compared to the preferential attachment kind of network.

However, the saving grace is that their performance is much higher, say two orders

of magnitude bigger and that matters a lot.
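Written out with the cartoon numbers from this lecture (illustrative values, not measurements), the comparison looks like this:

```python
# Cartoon numbers from the lecture (illustrative, not measurements):
# the preferential-attachment-like topology is likely to be drawn at
# random but has low throughput; the Internet-like topology is unlikely
# to be drawn but supports about two orders of magnitude more throughput.
topologies = {
    "preferential_attachment": {"likelihood": 0.80, "throughput_bps": 1e10},
    "internet_like":           {"likelihood": 0.20, "throughput_bps": 1e12},
}

ratio = (topologies["internet_like"]["throughput_bps"] /
         topologies["preferential_attachment"]["throughput_bps"])
print(f"Internet-like topology carries {ratio:.0f}x the bandwidth")  # 100x
```

The point of the trade-off is exactly this asymmetry: the likely-to-draw topology sits at low performance, and the unlikely one sits at performance a hundred times higher.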

It's a hundred times more capable of supporting bandwidth and therefore revenue

going through this network. If you were designing the internet, you would do it

this way. Rather than saying, well let's just pick

at random. And enjoy a high likelihood of drawing it

at random. Even though it produces a performance

bottleneck, with these high degree nodes sitting in the center of the network,

reducing the capability of supporting a lot of bandwidth passing through it.

Because per-interface bandwidth is necessarily low.

So no one will say, let's do it that way. People will say, well, let's design it

this way and make sure the performance is high.

And this illustrates the point that the internet router graph is performance

driven. And technology constraints can make routers

look like this and yet still support high performance. This is the way to go.

A small probability of being drawn, but we're not drawing it at random anyway, and it has a

high performance metric. Now those were numbers associated with the

Internet. Let's look at a very small example to work out some details now.