Today, we discuss fair queuing and random early detection. Fair queuing attempts to provide isolated and equitable access to transmission bandwidth. Each user flow has its own logical buffer, and in an ideal system the transmission bandwidth is divided equally among the buffers that have packets to transmit. This technique is also called processor sharing in the computing literature. Weighted fair queuing (WFQ) further addresses users with different priorities; we focus on plain fair queuing here.

Fair queuing is fair in the following sense: in the ideal fluid-flow situation, the transmission bandwidth is divided equally among all non-empty buffers. As shown in the figure, one approach is to serve each non-empty buffer one bit at a time in round-robin fashion. If the total number of flows in the system is n and the overall transmission capacity is C, then each flow is guaranteed at least C/n bits per second.

In practice, dividing the transmission capacity exactly equally is technically impossible. Per-bit round-robin service would require decomposing the resulting bit stream back into its component packets, which would be prohibitively costly. In ATM, fair queuing can be approximated more easily, because in ATM all packets are the same length. In general packet networks, a fair queuing implementation requires approximation.

Let's look at an approximation example. Here we see five head-of-line packets: A, B, C, D, and E. Just for illustration, assume packet A has five bits, B has four bits, C has two bits, D has four bits, and E has four bits. Under per-bit round-robin service, we can mark each bit of each packet with its service time, which gives us the finish time of each packet under the ideal fluid-flow situation. For example, packet C finishes transmitting at time 8, while packet A finishes at time 20.
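As a rough sketch of the per-bit round-robin idea, the following hypothetical simulation serves one bit per time slot from each backlogged flow and records the slot in which each packet's last bit goes out; the function name and packet sizes are taken from the example above, everything else is illustrative:

```python
def bit_round_robin_finish_times(packet_bits):
    """Simulate per-bit round-robin service over head-of-line packets.

    packet_bits: dict mapping flow name -> packet length in bits.
    Returns a dict mapping flow name -> time slot in which the packet's
    last bit is transmitted (one bit per time slot).
    """
    remaining = dict(packet_bits)
    finish = {}
    t = 0
    while remaining:
        # One round: serve one bit from each flow that still has bits left.
        for flow in list(remaining):
            t += 1
            remaining[flow] -= 1
            if remaining[flow] == 0:
                finish[flow] = t
                del remaining[flow]
    return finish

# Packet sizes from the example: A=5, B=4, C=2, D=4, E=4 bits.
finish = bit_round_robin_finish_times({"A": 5, "B": 4, "C": 2, "D": 4, "E": 4})
print(finish)  # C's last bit goes out in slot 8, matching the lecture.
```

The finish times produced this way are exactly the values a packet-by-packet scheduler would sort on.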
So in a packet-by-packet fair queuing scheduler, packets are sorted and transmitted according to their finish times. Although fair queuing provides a fair allocation of bandwidth among multiple flows, it is rarely implemented in a core network, where tens of thousands of flows may be present at any given time. Maintaining state for so large a number of flows would increase the implementation overhead significantly and make the system unscalable.

Buffer management involves packet-drop strategies: which packet should be dropped when the router's buffer is full? One important requirement is fairness: a router should protect well-behaved sources from misbehaving sources. Different levels of aggregation offer different degrees of protection. A per-flow buffer protects flows from misbehaving flows; at the other extreme, a fully aggregated buffer offers no protection; aggregation into classes provides intermediate protection. A buffer can also implement different drop priorities, so that packets belonging to different classes are dropped according to their priorities. Priority dropping can maximize network utilization while still meeting the quality-of-service requirements of certain applications.

Buffer management in the network core can be more efficient if the sources at the network edge cooperate: when a buffer begins to fill to a certain level, the router notifies the sources to reduce the rate at which they send packets. Random early detection (RED) is a buffer management technique that drops packets once the queue length exceeds a given threshold, with the drop probability increasing linearly with queue length. A dropped packet provides implicit feedback to the source in the form of a missing acknowledgment, informing it to reduce its transmission rate. When a given source transmits at a higher rate than the others, that source suffers a higher packet-drop rate. In TCP, senders reduce their input rate in response to such signs of network congestion.
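One way to sketch class-based priority dropping is a bounded buffer that, when full, evicts its lowest-priority packet in favor of a higher-priority arrival. The class and method names below are illustrative, not from the lecture:

```python
import heapq

class PriorityDropBuffer:
    """Bounded buffer that drops lowest-priority packets first when full.

    Hypothetical sketch: priority is an integer, higher = more important.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []  # min-heap keyed on priority: root = easiest to drop
        self.seq = 0    # tie-breaker: among equal priorities, oldest drops first

    def enqueue(self, priority, packet):
        """Admit a packet; return the dropped packet, or None if none dropped."""
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, (priority, self.seq, packet))
            self.seq += 1
            return None
        # Buffer full: evict the lowest-priority buffered packet,
        # unless the arriving packet itself is the lowest priority.
        if self.heap[0][0] < priority:
            dropped = heapq.heappushpop(self.heap, (priority, self.seq, packet))
            self.seq += 1
            return dropped[2]
        return packet  # the arriving packet is the one dropped
```

A fully aggregated buffer corresponds to giving every packet the same priority here, which is exactly why it offers no protection.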
Random dropping causes some sources to reduce their rates before others, producing a gradual reduction in the aggregate input rate. It improves the performance of cooperating TCP sources and increases the loss probability of misbehaving TCP sources. Technically, the RED algorithm works as follows. It maintains a running average of the queue length, and two thresholds are defined: a minimum threshold and a maximum threshold. When the average queue length is below the minimum threshold, RED does not drop any arriving packets. When the average queue length is between the minimum and maximum thresholds, RED drops an arriving packet with a probability that increases as the average queue length increases. This early dropping is used to notify some sources to reduce their transmission rates before the buffer becomes full. When the average queue length exceeds the maximum threshold, RED drops every arriving packet. This concludes today's lesson.
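As a minimal sketch of the two-threshold RED decision just described, assuming the common linear drop profile between the thresholds and an exponentially weighted running average of the queue length (parameter names like max_p and weight are illustrative):

```python
import random

def red_drop(avg_qlen, min_th, max_th, max_p=0.1, rng=random.random):
    """RED drop decision for one arriving packet (simplified sketch).

    Below min_th: never drop. At or above max_th: always drop.
    In between: drop with probability rising linearly from 0 to max_p.
    """
    if avg_qlen < min_th:
        return False
    if avg_qlen >= max_th:
        return True
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return rng() < p

def update_avg(avg, qlen, weight=0.002):
    """Running (exponentially weighted) average of the instantaneous queue length."""
    return (1 - weight) * avg + weight * qlen
```

Using the average rather than the instantaneous queue length lets short bursts pass through while sustained congestion still triggers early drops.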