Signed number representations

In computing, signed number representations are required to encode negative numbers in binary number systems. In mathematics, negative numbers in any base are represented by prefixing them with a minus ('−') sign. In computer hardware, however, numbers are represented only as sequences of bits, without extra symbols. The four best-known methods of extending the binary numeral system to represent signed numbers are sign-and-magnitude, ones' complement, two's complement, and offset binary. Some alternative methods use implicit rather than explicit signs, such as negative binary, which uses the base −2. Corresponding methods can be devised for other bases, whether positive, negative, fractional, or other elaborations on such themes. There is no definitive criterion by which any of the representations is universally superior. The representation used in most current computing devices is two's complement, although the Unisys ClearPath Dorado series mainframes use ones' complement.

The early days of digital computing were marked by many competing ideas about both hardware technology and the mathematics of number representation. One of the great debates concerned the format of negative numbers, with some of the era's leading experts holding strong and opposing opinions. One camp supported two's complement, the system that is dominant today. Another camp supported ones' complement, in which a positive value is negated by inverting all of the bits in a word. A third group supported sign-and-magnitude (sign–magnitude), in which a value is changed from positive to negative simply by toggling the word's sign (high-order) bit.

There were arguments for and against each of the systems. Sign-and-magnitude allowed easier tracing of memory dumps (a common practice at the time) because small numeric values use fewer 1 bits. Internally, these machines performed ones' complement arithmetic, so numbers had to be converted to ones' complement form when passed from a register to the arithmetic unit and then converted back to sign–magnitude when the result was returned to the register. The electronics required more gates than the other systems, a key concern when the cost and packaging of discrete transistors was critical. IBM was one of the early supporters of sign–magnitude, with its 704, 709, and 709x series computers being perhaps the best-known systems to use it.

Ones' complement allowed somewhat simpler hardware designs, as there was no need to convert values when passing them to and from the arithmetic unit. But it shared an undesirable characteristic with sign–magnitude: the ability to represent negative zero (−0). Negative zero behaves exactly like positive zero; when used as an operand in any calculation, the result is the same whether the operand is positive or negative zero. The disadvantage, however, is that the existence of two forms of the same value necessitates two comparisons rather than one when checking for equality with zero. Ones' complement subtraction can also produce an end-around borrow, the counterpart of the end-around carry in addition. It can be argued that this makes the addition/subtraction logic more complicated, or that it makes it simpler, since a subtraction requires only inverting the bits of the second operand as it is passed to the adder.
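As an illustration of these ones' complement properties, the following minimal C sketch (not taken from any machine described here; the oc_* names and the use of uint8_t as a raw bit pattern are assumptions for the example) shows negation by bit inversion, the two encodings of zero, and addition with an end-around carry.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of 8-bit ones' complement arithmetic; uint8_t holds the raw bit pattern. */

    /* Negation is pure bit inversion, so 0x00 (+0) and 0xFF (-0) both denote zero. */
    static uint8_t oc_negate(uint8_t x) { return (uint8_t)~x; }

    /* An equality-with-zero test must accept both encodings of zero. */
    static int oc_is_zero(uint8_t x) { return x == 0x00 || x == 0xFF; }

    /* Addition with end-around carry: a carry out of bit 7 is wrapped
     * around and added back into the low-order bit. */
    static uint8_t oc_add(uint8_t a, uint8_t b) {
        unsigned sum = (unsigned)a + (unsigned)b;
        if (sum > 0xFFu)                 /* carry out of the top bit */
            sum = (sum + 1u) & 0xFFu;
        return (uint8_t)sum;
    }

    int main(void) {
        uint8_t pos = 0x2B;              /* 00101011 = +43 */
        uint8_t neg = oc_negate(pos);    /* 11010100 = -43 */
        uint8_t sum = oc_add(pos, neg);  /* 11111111 = -0  */
        printf("43 + (-43) -> 0x%02X, zero? %d\n", sum, oc_is_zero(sum));
        return 0;
    }

Note that the sum of a value and its negation comes out as −0 (all ones), which is why the zero test must check for both bit patterns.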
The PDP-1, CDC 160 series, CDC 3000 series, CDC 6000 series, UNIVAC 1100 series, and the LINC computer use ones' complement representation.

Two's complement is the easiest to implement in hardware, which may be the ultimate reason for its widespread popularity. Processors in the early mainframes often consisted of thousands of transistors, so eliminating a significant number of transistors was a significant cost saving. Mainframes such as the IBM System/360, the GE-600 series, and the PDP-6 and PDP-10 use two's complement, as did minicomputers such as the PDP-5 and PDP-8 and the PDP-11 and VAX. The architects of the early integrated circuit-based CPUs (Intel 8080, etc.) chose two's complement arithmetic, and as IC technology advanced virtually all processors adopted it. x86, m68k, Power ISA, MIPS, SPARC, ARM, Itanium, PA-RISC, and DEC Alpha processors are all two's complement.

The sign-and-magnitude approach, also called 'sign–magnitude' or 'sign and magnitude' representation, encodes a number's sign with a sign bit: setting that bit (often the most significant bit) to 0 for a positive number or positive zero, and setting it to 1 for a negative number or negative zero. The remaining bits in the number indicate the magnitude (or absolute value). For example, in an eight-bit byte, only seven bits represent the magnitude, which can range from 0000000 (0) to 1111111 (127). Thus numbers ranging from −127₁₀ to +127₁₀ can be represented once the sign bit (the eighth bit) is added. For example, −43₁₀ encoded in an eight-bit byte is 10101011, while +43₁₀ is 00101011. A consequence of using sign–magnitude representation is that there are two ways to represent zero: 00000000 (+0) and 10000000 (−0). This approach is directly comparable to the common way of showing a sign (placing a '+' or '−' next to the number's magnitude). Some early binary computers (e.g., the IBM 7090) use this representation, perhaps because of its natural relation to common usage. Sign–magnitude is also the most common way of representing the significand in floating-point values.
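To make the sign–magnitude encoding concrete, here is a minimal C sketch (an illustrative assumption, not code from any system named above; the sm_* names are invented for the example) that packs a value in the range −127..+127 into an eight-bit pattern with bit 7 as the sign and bits 6..0 as the magnitude, reproducing the −43₁₀ = 10101011 example and the two encodings of zero.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of 8-bit sign-magnitude coding: bit 7 is the sign,
     * bits 6..0 the magnitude (0..127). */

    static uint8_t sm_encode(int value) {            /* assumes -127 <= value <= +127 */
        uint8_t sign = (value < 0) ? 0x80 : 0x00;
        uint8_t mag  = (uint8_t)(value < 0 ? -value : value);
        return (uint8_t)(sign | mag);
    }

    static int sm_decode(uint8_t bits) {
        int mag = bits & 0x7F;                       /* strip the sign bit */
        return (bits & 0x80) ? -mag : mag;
    }

    int main(void) {
        printf("+43 -> 0x%02X\n", sm_encode(43));    /* 0x2B = 00101011 */
        printf("-43 -> 0x%02X\n", sm_encode(-43));   /* 0xAB = 10101011 */
        /* Two encodings of zero: 00000000 (+0) and 10000000 (-0). */
        printf("decode(0x00) = %d, decode(0x80) = %d\n",
               sm_decode(0x00), sm_decode(0x80));
        return 0;
    }

Negating a value here only toggles the top bit, which is exactly the property the early sign–magnitude machines relied on, at the cost of two bit patterns that both decode to zero.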

[ "Multiplier (economics)", "Multiplication", "Adder", "Arithmetic", "Operating system", "Signedness" ]