Reliability (computer networking)

In computer networking, a reliable protocol is one that notifies the sender whether or not the delivery of data to the intended recipients was successful. Reliability is a synonym for assurance, the term used by the ITU and ATM Forum.

Reliable protocols typically incur more overhead than unreliable protocols and, as a result, run more slowly and scale less well. This is often not an issue for unicast protocols, but it can become a problem for reliable multicast protocols. TCP, the main protocol used on the Internet, is a reliable unicast protocol. UDP is an unreliable protocol and is often used in computer games, streaming media, and other situations where speed is essential and some data loss can be tolerated because of the transitory nature of the data.

Often, a reliable unicast protocol is also connection-oriented. For example, TCP is connection-oriented, with the virtual-circuit ID consisting of the source and destination IP addresses and port numbers. However, some unreliable protocols are connection-oriented, such as Asynchronous Transfer Mode and Frame Relay, and some connectionless protocols, such as IEEE 802.11, are reliable.

After the NPL network pioneered packet switching, the ARPANET provided a reliable packet delivery procedure to its connected hosts via its 1822 interface. A host computer simply arranged the data in the correct packet format, inserted the address of the destination host, and sent the message across the interface to its connected Interface Message Processor (IMP). Once the message was delivered to the destination host, an acknowledgement was delivered to the sending host. If the network could not deliver the message, the IMP would send an error message back to the sending host.

Meanwhile, the developers of CYCLADES and of ALOHAnet demonstrated that it was possible to build an effective computer network without providing reliable packet transmission, a lesson later embraced by the designers of Ethernet. If a network does not guarantee packet delivery, it becomes the host's responsibility to provide reliability by detecting and retransmitting lost packets. Subsequent experience on the ARPANET showed that the network itself could not reliably detect all packet delivery failures, which pushed responsibility for error detection onto the sending host in any case. This led to the development of the end-to-end principle, one of the Internet's fundamental design principles.

A reliable service is one that notifies the user if delivery fails, while an unreliable one does not. For example, Internet Protocol (IP) provides an unreliable service. Together, Transmission Control Protocol (TCP) and IP provide a reliable service, whereas User Datagram Protocol (UDP) and IP provide an unreliable one.
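To make the distinction concrete, the sketch below contrasts the two service models using Python's standard socket module. With TCP, connection and transmission failures are raised to the application; with UDP, sendto returns as soon as the datagram is handed to the network, and a loss in transit goes unreported. The addresses and ports are placeholder values for illustration only.

```python
import socket

# Reliable, connection-oriented service: TCP. The operating system reports
# connection and transmission failures to the application as exceptions.
try:
    with socket.create_connection(("example.com", 80), timeout=3) as tcp:
        tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tcp.recv(128))  # data arrives intact and in order, or an error is raised
except OSError as exc:
    print("TCP reported a failure:", exc)

# Unreliable, connectionless service: UDP. sendto() succeeds once the
# datagram is handed to the network; if it is dropped en route, the
# application receives no notification.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"ping", ("203.0.113.1", 9))  # may be silently lost
```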
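An application that needs reliability on top of an unreliable service must supply it itself, as the end-to-end principle suggests. The following minimal stop-and-wait sketch layers acknowledgement and retransmission over UDP; it illustrates the general technique rather than any particular protocol, and the receiver address, timeout, and retry limit are assumed values.

```python
import socket

RECEIVER = ("127.0.0.1", 9000)  # hypothetical receiver address
TIMEOUT_S = 0.5                 # retransmit if no ACK arrives in this window
MAX_RETRIES = 5

def send_reliably(payload: bytes, seq: int) -> bool:
    """Send one datagram and wait for a matching ACK, retransmitting on timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(TIMEOUT_S)
        frame = seq.to_bytes(4, "big") + payload  # prepend a sequence number
        for _ in range(MAX_RETRIES):
            sock.sendto(frame, RECEIVER)
            try:
                ack, _ = sock.recvfrom(4)
                if int.from_bytes(ack, "big") == seq:
                    return True       # delivery confirmed by the receiver
            except socket.timeout:
                continue              # packet or ACK lost: retransmit
    return False                      # delivery failure reported to the sender

if __name__ == "__main__":
    ok = send_reliably(b"hello", seq=1)
    print("delivered" if ok else "delivery failed")
```

Note that the sender cannot distinguish a lost packet from a lost acknowledgement, so retransmission alone can cause duplicate delivery; this is why protocols such as TCP also track sequence numbers at the receiver.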

[ "Computer network", "Computer security", "Real-time computing", "Distributed computing", "telecommunication network reliability", "universal generating function", "fault tolerant parallel processor", "internet reliability" ]
Parent Topic
Child Topic
    No Parent Topic