The earlier video on packet networking suggested that the technology was perilous and unlikely to succeed. Let's look at why. When you send packets between two host computers, the packets might all arrive exactly as they were sent, but not always. Here's the litany of classic packet problems. Fortunately, we can usually recover from them.

To detect lost packets, we first need to detect successful deliveries. The receiving host must acknowledge the packets it receives. We use a timer to judge success or failure. We start the timer when the packet leaves. If the recipient acknowledges the packet within a reasonable time, we declare success. If the packet is lost somehow, the timer eventually reaches its limit and we declare the packet lost. We recover by resending the packet.

This process can also produce duplicate packets. Network slowdowns might delay a packet; once the timer expires, the sending host will transmit a duplicate. We detect duplicates by numbering the transmissions. Here we put a packet number in each new packet we send. If the destination host receives two packets with the same number, it knows one is a duplicate.

The third problem is out-of-order packets. Each packet travels through the network independently. Each might take a different path, and not all arrive in the order we sent them. The order is essential when we reconstruct a large file from a series of packets. We solve this problem, too, by numbering the packets. If the packets arrive out of order, we just wait for the missing ones before we reconstruct the message.

Flow control poses a challenge for any network. You need a strategy when faced with more data than your system can handle. The basic case in packet switching is when the sending host is faster than the receiving host. The simplest strategy is to send a stop signal when the recipient is in danger of falling behind, and a go signal when it's ready for more input. A more sophisticated and reliable strategy is to report how much data the recipient can handle. The sender then keeps track of how much has been sent and how much has been acknowledged, and it sends more only if the recipient can handle more than is currently in transit.

Internet technology is highly reliable; an enormous amount of data travels across the internet every day and arrives intact. How can that be, when we rely so much on Ethernet-style technology? Ethernet is based on the unreliable ALOHAnet model, not on the notion of a reliable network like the ARPANET. The internet protocols assume packets travel across dumb networks, networks that can't reliably guarantee transmission. The sending and receiving hosts are then responsible for detecting and fixing transmission errors, and, believe it or not, this turns out to be the best way to design a modern network.
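Here's a toy sketch, in Python, of the end-to-end recovery idea just described: number each packet, start a timer when it leaves, resend on timeout, and let the receiving host discard duplicates and wait for missing numbers before reassembling the message. The class names, loss rates, and timer value are all invented for illustration; this is a sketch of the idea, not any real protocol implementation.

```python
import random
import time

class UnreliableLink:
    """A made-up 'dumb network' that can lose, duplicate, and reorder packets."""
    def __init__(self, loss=0.2, dup=0.1):
        self.loss, self.dup = loss, dup
        self.in_flight = []

    def send(self, packet):
        if random.random() < self.loss:
            return                                   # the packet is lost
        copies = 2 if random.random() < self.dup else 1
        self.in_flight.extend([packet] * copies)     # sometimes a duplicate appears
        random.shuffle(self.in_flight)               # and order is not preserved

    def deliver(self):
        return self.in_flight.pop() if self.in_flight else None


class Receiver:
    """Packet numbers let the receiver spot duplicates and restore the order."""
    def __init__(self):
        self.received = {}        # packet number -> data
        self.next_expected = 0

    def accept(self, number, data):
        if number not in self.received:              # a repeated number means a duplicate
            self.received[number] = data
        while self.next_expected in self.received:   # wait for missing numbers to fill in
            self.next_expected += 1
        return self.next_expected                    # acknowledgment: the next number wanted


def send_reliably(messages, link, receiver, timeout=0.05):
    """Start a timer for each packet and resend until it is acknowledged."""
    for number, data in enumerate(messages):
        while True:
            link.send((number, data))
            deadline = time.time() + timeout         # the retransmission timer
            while time.time() < deadline:
                packet = link.deliver()
                if packet and receiver.accept(*packet) > number:
                    break                            # acknowledged within the time limit
            else:
                continue                             # timer expired: declare it lost, resend
            break                                    # move on to the next packet


if __name__ == "__main__":
    random.seed(1)
    link, receiver = UnreliableLink(), Receiver()
    send_reliably(["packet-0", "packet-1", "packet-2", "packet-3"], link, receiver)
    print([receiver.received[n] for n in sorted(receiver.received)])
```

Even though this made-up link drops, duplicates, and shuffles packets, the numbered transmissions and the retransmission timer are enough to get every packet across, in order, in the end.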
Internet hosts traditionally rely on the Transmission Control Protocol, TCP, to deliver a series of packets reliably and in the correct order. The TCP header lets us detect lost, duplicated, and out-of-order packets, and it provides flow control. Instead of numbering packets, TCP numbers every byte sent. It reports the number of the first byte in a packet using the sequence number, and it increments the sequence number by one for each byte in the data field. The receiving host acknowledges the data it has received by computing the highest consecutive byte number received and reporting the next byte number it expects. If packets arrive out of order, the expected byte number stays constant until the missing bytes are filled in. If the sender hasn't received acknowledgments for higher-numbered bytes, it knows that somehow they went missing. For flow control, the host sending a packet provides a window size, which indicates how much more data that host is ready to receive.

Here's a serious question: do we always need to detect and retransmit lost packets? No, we don't. Some applications don't need it, and we save resources by accepting the unreliability. Also, to be serious, there are a lot of cases where, no matter how hard you try for reliability, things fail, or they succeed but you're not quite sure where you stand. That's one of the reasons why security and successful implementation can be challenging in a networked environment, and in particular in cloud-oriented environments.

Let's say, for instance, that you have a device that's continuously transmitting its temperature. You only need the most recent temperature reading. Here's another possibility: you may have time-sensitive transmissions. Music usually sounds most natural if we preserve the rhythm and, say, miss an occasional note. The retransmission delay would be jarring if we just stopped and then restarted the music once we caught up. The internet protocols include an unreliable transmission technique for such cases. The User Datagram Protocol, UDP, streamlines data transmission by discarding TCP's reliability features. Some application protocols provide their own reliability mechanisms and then use UDP.
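For that temperature device, here's a hedged sketch of the application-level choice using Python's standard socket module. The addresses and port numbers are invented for illustration, and the file-transfer call assumes something is actually listening on the other end.

```python
import socket

def send_temperature_udp(reading_celsius, host="127.0.0.1", port=9999):
    """Fire-and-forget: no connection, no acknowledgment, no retransmission."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(f"{reading_celsius:.1f}".encode(), (host, port))

def send_file_tcp(data, host="127.0.0.1", port=9998):
    """Reliable, ordered byte stream: losing any byte would corrupt the file."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(data)   # blocks until the data has been handed to TCP

if __name__ == "__main__":
    send_temperature_udp(21.7)   # succeeds even if nothing is listening; a lost reading is acceptable
    # send_file_tcp(b"every byte must arrive")   # needs a listener on port 9998
```

The design choice comes down to which kind of socket the application opens: SOCK_DGRAM gives fire-and-forget UDP datagrams, while SOCK_STREAM gives TCP's reliable, ordered byte stream, with the sequence numbers, acknowledgments, retransmissions, and windows handled by the operating system.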
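Finally, to make TCP's byte-level bookkeeping described above concrete, here's a toy sketch of the accounting only: every byte is numbered, the receiver acknowledges by reporting the next byte number it expects, and the advertised window tells the sender how much more it may put in flight. The class names and numbers are invented; real TCP headers, checksums, timers, and retransmission are left out.

```python
class ByteStreamReceiver:
    def __init__(self, capacity=16):
        self.capacity = capacity        # size of the receive buffer
        self.delivered = bytearray()    # in-order bytes handed to the application
        self.out_of_order = {}          # byte number -> byte value, waiting for a gap to fill
        self.next_expected = 0          # the cumulative acknowledgment point

    def receive(self, seq, data):
        """Accept a segment whose first byte carries sequence number `seq`."""
        for i, b in enumerate(data):
            if seq + i >= self.next_expected:           # old duplicates are ignored
                self.out_of_order[seq + i] = b
        while self.next_expected in self.out_of_order:  # a gap was filled in
            self.delivered.append(self.out_of_order.pop(self.next_expected))
            self.next_expected += 1
        window = self.capacity - len(self.out_of_order)  # room still available
        return self.next_expected, window                # (acknowledgment, window size)


class ByteStreamSender:
    def __init__(self, payload):
        self.payload = payload
        self.next_to_send = 0       # number of the first byte not yet sent
        self.acked = 0              # highest cumulative acknowledgment seen so far

    def next_segment(self, ack, window, size=4):
        """Take at most `size` new bytes, and never more than the window allows."""
        self.acked = max(self.acked, ack)
        in_flight = self.next_to_send - self.acked       # sent but not yet acknowledged
        allowed = max(min(size, window - in_flight), 0)
        seq = self.next_to_send
        chunk = self.payload[seq:seq + allowed]
        self.next_to_send += len(chunk)
        return seq, chunk                                # sequence number of the first byte


if __name__ == "__main__":
    rx = ByteStreamReceiver()
    ack, win = rx.receive(4, b"WORL")          # bytes 4-7 arrive before bytes 0-3
    print(ack, win)                            # prints "0 12": the ack stalls at the gap
    ack, win = rx.receive(0, b"HELO")          # the missing bytes finally arrive
    print(ack, bytes(rx.delivered))            # prints "8 b'HELOWORL'"

    tx = ByteStreamSender(b"a longer message to send")
    print(tx.next_segment(ack=0, window=16))   # prints "(0, b'a lo')"
    print(tx.next_segment(ack=0, window=6))    # 4 bytes still in flight: "(4, b'ng')"
```

Notice how the acknowledgment stays at zero while bytes four through seven sit in the buffer, and only jumps to eight once the missing bytes arrive, which is exactly the behavior described earlier.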