Okay, so now we start talking about the performance of our networks, and there are two main thoughts I want to get across here: there are two really good ways to measure our networks. Before, when we talked about performance, we were talking about parameters of the topology; now we're going to look at the overall network performance.
First thing is bandwidth. So bandwidth is the rate at which data can be transmitted over a given network link: an amount of data divided by an amount of time. Okay, that sounds pretty reasonable. Latency is how long it takes to communicate, to send a complete message between a sender and a receiver. So the unit on latency is seconds, and the unit on bandwidth is something like bits per second or bytes per second, an amount of data per second. These two things are linked.
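As a rough sketch of that link (this simple model, its function name, and its numbers are my own illustration, not from the lecture): the time to deliver a complete message can be modeled as a fixed overhead plus the message size divided by the bandwidth, so more bandwidth means lower latency for long messages.

```python
def message_latency(size_bits: float, bandwidth_bps: float,
                    overhead_s: float = 0.0) -> float:
    """Seconds to deliver a complete message: a fixed per-message
    overhead plus the transmission time (size / bandwidth)."""
    return overhead_s + size_bits / bandwidth_bps

# Doubling the bandwidth roughly halves the latency of a long message.
print(message_latency(1e6, 1e9))  # 1 Mbit over 1 Gb/s -> 0.001 s
print(message_latency(1e6, 2e9))  # 1 Mbit over 2 Gb/s -> 0.0005 s
```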
So, if we take a look at something like bandwidth, it can actually affect our latency. And the reason for this is, if you increase the bandwidth, you are going to have to send fewer pieces of data for a long message, because you can send it in wider chunks, or faster chunks, or something like that. So it can actually help with latency. It can also help with latency because it can reduce the congestion on your network.
Now, we haven't talked about congestion yet; we'll talk about it in a few more slides. But by having more bandwidth, you can effectively reduce the load on your network, and decreasing the load in the network decreases the probability that you're actually going to have two different messages contending for the same link in the network.
Latency can actually affect our bandwidth as well, which is interesting, or rather, it can affect our delivered bandwidth. If we change the latency, it's not going to make our links wider or the clock speed of our links faster, but it can change the delivered bandwidth.
Now, how this can happen: let's say you have something like a round trip. You're trying to communicate from point A to point B, and back to point A. And this is pretty common: you want to send a message from one node to another node, it's going to do some work on it, and it's going to send back the reply. If you can't cover that latency, then as the latency gets longer, the sender will sit there and just stall more, and that will effectively decrease the bandwidth, the amount of data that can be sent. Now, if you are good at hiding this latency by doing other work, that may not happen; you may not be limited by latency.
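As an illustrative sketch of that stall (the function name and the numbers are assumptions of mine, not from the lecture): if the sender pushes one message per round trip and waits for each reply, the delivered bandwidth is just the message size over the round-trip time, so longer latency directly means less bandwidth.

```python
def stalled_bandwidth(msg_bits: float, rtt_s: float) -> float:
    """Delivered bits/second when the sender sends one message per
    round trip and stalls waiting for each reply."""
    return msg_bits / rtt_s

print(stalled_bandwidth(8_000, 1e-6))  # 1 us round trip -> 8e9 bits/s
print(stalled_bandwidth(8_000, 1e-5))  # 10x the latency -> 1/10 the bandwidth
```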
But then, another good example is if you are worried about end-to-end flow control. A good example of this is TCP/IP networks, like our Ethernet networks. There's actually round-trip flow control between the two endpoints, which rate-limits the bandwidth, and it's actually tied to the latency, because you need to have more traffic in flight to cover the round-trip latency. This is what's called the bandwidth-delay product, where you multiply the bandwidth by the delay, or the latency, of your network. If you increase the latency, the bandwidth will effectively go down if you do not allow for more traffic in flight before you have to wait for a flow control response.
So you'll see this if you have, let's say, two points on the internet, and you put them farther apart, and you keep the same amount of in-flight data, or what's called the window, the same: the bandwidth is going to go down as you increase the latency. But if you were to increase the window, the bandwidth would actually stay high, because of the bandwidth-delay product. And the reason for that is you'd be waiting for ACKs to come back from the receive side.
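A quick sketch of that window limit (the function and the values here are illustrative assumptions): with at most one window of data in flight per round trip, delivered bandwidth is capped at window / RTT, and the bandwidth-delay product tells you how big the window must be to keep the link full.

```python
def delivered_bandwidth(window_bits: float, rtt_s: float,
                        link_bps: float) -> float:
    """Throughput with at most `window_bits` in flight: capped by both
    the raw link rate and window / round-trip time."""
    return min(link_bps, window_bits / rtt_s)

link = 1e9  # 1 Gb/s link
print(delivered_bandwidth(5e5, 1e-3, link))  # window covers the RTT: 5e8 b/s
print(delivered_bandwidth(5e5, 1e-2, link))  # 10x RTT, same window: 5e7 b/s
print(link * 1e-2)  # bandwidth-delay product: window needed for full rate
```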
Okay, so let's take a look at an example here to understand these different parameters.
We have a four-node omega network here, with two-input, two-output routers. Each of these circles here represents an input node, and these are the output nodes; they basically wrap around, they're the same sort of thing. We have little slashes here, which we'll represent as serializers and deserializers. So what this means is, you're transmitting some long piece of data, and it gets sent as smaller flits, if you will. So we're sending, let's say, a 32-bit word, and it gets serialized into four 8-bit chunks across our network, across the links, because the links in the network are only eight bits wide, we'll say. And in this network, we're going to have our latencies be non-unit.
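To make the serialization concrete (a small sketch of my own; only the 32-bit word and 8-bit links come from the example above): each word is chopped into ceil(32/8) = 4 chunks, so narrower links mean more cycles spent just pushing a word onto the wire.

```python
import math

def serialization_cycles(word_bits: int, link_width_bits: int) -> int:
    """Cycles to push one word across a link narrower than the word."""
    return math.ceil(word_bits / link_width_bits)

print(serialization_cycles(32, 8))   # the lecture's case: 4 chunks on the wire
print(serialization_cycles(32, 16))  # a wider link would need only 2
```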