Arithmetic operations in binary
Addition / subtraction: 01011 + 111 = ?
The method is exactly the same as in decimal.
Arithmetic operations in binary
Addition
    X = xn … xi … x0
  + Y = yn … yi … y0
  ------------------
    D = dn … di … d0,  where di = (xi + yi + carry-in) mod r
Addition
For ( i = 0…n ) do
    di        = (xi + yi + carry-in) mod r
    carry-out = (xi + yi + carry-in) div r
End for
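As a sketch, the addition loop above translates directly into Python. Digits are stored least-significant-digit first, and the function name and digit-list convention are illustrative, not from the slides:

```python
# Digit-wise addition in an arbitrary radix r, mirroring the loop above.
# Digit lists are least-significant digit first (index 0 = x0).

def add_digits(x, y, r=2):
    """Add two equal-length digit lists (LSD first) in radix r."""
    carry = 0
    d = []
    for xi, yi in zip(x, y):
        s = xi + yi + carry
        d.append(s % r)      # di = (xi + yi + carry-in) mod r
        carry = s // r       # carry-out = (xi + yi + carry-in) div r
    d.append(carry)          # final carry-out becomes the top digit
    return d

# 01011 + 00111 in binary (11 + 7 = 18 = 010010), LSD first:
print(add_digits([1, 1, 0, 1, 0], [1, 1, 1, 0, 0]))  # [0, 1, 0, 0, 1, 0]
```

Because the loop is written in terms of the radix r, the same function adds decimal digit strings with `r=10`.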
Arithmetic operations in binary
Subtraction
Can you write an equivalent method (algorithm) for subtraction?
Algorithm: a systematic sequence of steps/operations that describes how to solve a problem.
Arithmetic operations in binary
Subtraction
    X = xn … xi … x0
  - Y = yn … yi … y0
  ------------------
    D = dn … di … d0
For ( i = 0…n ) do
    di = (xi - yi - b_{i-1}) mod r
    borrow-out b_i = 1 if (xi - yi - b_{i-1}) < 0, else 0
End for
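A minimal Python sketch of the subtraction loop, under the assumption X >= Y (the naming and digit-list convention are illustrative):

```python
# Digit-wise subtraction in radix r, mirroring the borrow loop above.
# Digit lists are least-significant digit first; assumes X >= Y.

def sub_digits(x, y, r=2):
    """Compute X - Y for equal-length digit lists (LSD first)."""
    borrow = 0
    d = []
    for xi, yi in zip(x, y):
        t = xi - yi - borrow
        d.append(t % r)              # Python's % keeps the digit in 0..r-1
        borrow = 1 if t < 0 else 0   # borrow-out for the next position
    return d

# 1011 - 0111 in binary (11 - 7 = 4 = 0100), LSD first:
print(sub_digits([1, 1, 0, 1], [1, 1, 1, 0]))  # [0, 0, 1, 0]
```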
Subtraction
Arithmetic operations in binary
Multiplication
For ( i = 0…n ) do
    di = (xi · yi + c_{i-1}) mod r
    ci = (xi · yi + c_{i-1}) div r
End for
Is this correct?
Negative numbers (4 traditions):
  - Signed magnitude
  - Radix complement
  - Diminished radix complement
  - Excess-b (biased)
n = number of symbols (digits, bits, etc.), r = radix
Radix complement of N is r^n - N
  e.g. n = 4, r = 10: 7216 --> 9999 - 7216 + 1 = 2784 (10's complement)
       n = 4, r = 2:  0101 --> 1111 - 0101 + 1 = 1011 (2's complement)
Diminished radix complement of N is r^n - 1 - N
  e.g. n = 4, r = 10: 7216 --> 9999 - 7216 = 2783 (9's complement)
       n = 4, r = 2:  0101 --> 1111 - 0101 = 1010 (1's complement)
Note: the operation is reversible (ignoring overflow), i.e. the complement of the complement returns the original pattern (for both radix and diminished-radix forms).
For zero and its complement:
  10's complement: 0000 --> 9999 + 1 = 10000 --> 0000
  9's complement:  0000 --> 9999 --> 0000
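Both complements are one-line formulas; a sketch in Python (function names are illustrative), working on integer values rather than digit strings:

```python
# Radix and diminished-radix complements of an n-digit number N in radix r,
# following the definitions above: r^n - N and r^n - 1 - N.

def radix_complement(n_digits, r, N):
    # The mod handles N = 0: r^n wraps back to 0, as on the slide.
    return (r ** n_digits - N) % (r ** n_digits)

def diminished_radix_complement(n_digits, r, N):
    return r ** n_digits - 1 - N

print(radix_complement(4, 10, 7216))               # 2784 (10's complement)
print(diminished_radix_complement(4, 10, 7216))    # 2783 (9's complement)
```

Applying either function twice returns the original value, matching the reversibility note above.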
Arithmetic with complements (n = 4, r = 2) — table comparing the representations, including Excess-8.
Signed number representations
[[N]] = r^n - (r^n - N) = N          (complement of the complement)
(N + [N]) mod r^n = 0
Example (n = 4, r = 2): +5 = 0101, -5 = 1011; 0101 + 1011 = 10000 = 2^4, so (N + [N]) mod 2^4 = 0.
Example (n = 5, r = 10): N = 32546, [N] = 10^5 - 32546 = 67454.
Two's complement arithmetic
The value of (a_{n-1}, a_{n-2}, …, a_0) in 2's complement is
  -a_{n-1}·2^{n-1} + a_{n-2}·2^{n-2} + … + a_0·2^0
Example (n = 4):
  0101 = 0·2^3 + 1·2^2 + 0·2^1 + 1·2^0 = 4 + 1 = +5
  1011 = -(1·2^3) + 0·2^2 + 1·2^1 + 1·2^0 = -8 + 2 + 1 = -5
Addition/subtraction (n = 5):
  +10   01010
  + 3   00011
  ---   -----
  +13   01101
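The weighted-sum formula above can be checked with a short Python sketch (the function name is illustrative):

```python
# Value of an n-bit two's-complement pattern, MSB first:
# -a_{n-1}*2^(n-1) + sum of a_i*2^i over the remaining bits.

def twos_value(bits):
    n = len(bits)
    a = [int(b) for b in bits]
    return -a[0] * 2 ** (n - 1) + sum(
        a[i] * 2 ** (n - 1 - i) for i in range(1, n)
    )

print(twos_value("0101"))   # 5
print(twos_value("1011"))   # -5
```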
Two's complement arithmetic — addition/subtraction (n = 5)
  +10   01010
  + 7   00111
  ---   -----
        10001   overflow: the pattern reads -15, not +17
  +15   01111
  -13   10011
  ---   -----
       100010   discard the carry out: 00010 = +2 (no overflow)
When can overflow happen? Only when both operands have the same sign and the sign bit of the result is different.
Cyclic representation (n = 4, r = 2): avoids the discontinuity between 0111 and 1000.
Add x: move x positions clockwise.
Subtract x: move x positions counterclockwise, or move (16 - x) positions clockwise (i.e. add the radix complement).
How to detect the discontinuity?
5 + 6 = 11:   0101 + 0110 = 1011
  no carry out of the 4-bit register, but the carry into the most significant bit is 1 while the carry out is 0 ==> overflow
-5 + -6 = -11:   1011 + 1010 = 10101
  carry out of the register, but the carry into the most significant bit is 0 while the carry out is 1 ==> overflow
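The carry-in versus carry-out test can be sketched in Python on raw bit patterns (the function name and return convention are illustrative):

```python
# Signed overflow in n-bit two's-complement addition: the carry INTO the
# sign bit differs from the carry OUT of it, as described above.

def add_with_overflow(a, b, n):
    """Add two n-bit patterns (given as unsigned ints); return (sum, overflow)."""
    mask = (1 << n) - 1
    total = (a & mask) + (b & mask)
    carry_out = (total >> n) & 1
    # carry into the MSB = carry out of adding the low n-1 bits
    low_mask = (1 << (n - 1)) - 1
    carry_in = (((a & low_mask) + (b & low_mask)) >> (n - 1)) & 1
    return total & mask, carry_in != carry_out

print(add_with_overflow(0b0101, 0b0110, 4))  # (0b1011, True): 5 + 6 overflows
```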
Same circuitry serves both interpretations:
  signed numbers: add, or subtract by adding the 2's complement of the subtrahend; the Intel architecture uses OF (overflow flag) to detect an out-of-range result
  unsigned numbers: same protocol, but the discontinuity is now between 0000 and 1111; detect with carry out for addition, absence of carry out for subtraction; Intel uses CF (carry flag) to detect an out-of-range result
Codes
4-bit codes for the digits 0 through 9 (BCD: 8421 weighted).
2421 and Excess-3 are self-complementing (complement the bits to get the 9's-complement representation).
Biquinary has two ranges, 01… and 10…; a one-bit error produces an invalid codeword.
Number representation inside the computer
Integers are represented in 2's complement.
Single precision (32 bits):
  largest positive integer:  2^31 - 1 =  2,147,483,647
  smallest negative integer: -2^31   = -2,147,483,648
Number representation inside the computer
Floating point — scientific notation
0.0043271 = 0.43271 × 10^-2   (normalized number)
In a normalized number the digit after the decimal point is not 0; normalized notation maximizes the use of significant digits.
Floating point numbers
Fields: S (sign), E (exponent), m (mantissa)
N = (-1)^S × m × r^E
S = 0: positive; S = 1: negative
m is a normalized fraction. For radix r = 2 the most significant bit of m is always 1, so there is no need to store it explicitly; this "hidden bit" gives one extra bit of precision.
Floating point formats
• IEEE format: (-1)^S × (1.m) × 2^(E-127)
• DEC format:  (-1)^S × (0.m) × 2^(E-128)
Floating point formats • Different manufacturers used different representations
Mechanical encoder with sequential sector numbering: at sector boundaries the reading can be a code different from either adjacent sector.
Gray code:
Gray code algorithm
Input: binary; output: Gray code. Each Gray bit is the XOR of the corresponding binary bit with the binary bit to its left.
  e.g. n = 3: 6 -> 110 -> 101;  n = 4: 10 -> 1010 -> 1111
Alternative algorithm (n bits):
  If n = 1, use 0 -> 0 and 1 -> 1.
  If n > 1, the first half is the Gray code on (n - 1) bits preceded by 0; the second half is the reversed Gray code on (n - 1) bits preceded by 1.
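Both directions of the conversion are short enough to sketch in Python (function names are illustrative; the examples 6 -> 101 and 10 -> 1111 match the slide):

```python
# Binary <-> Gray conversion for integers.

def binary_to_gray(b):
    # XOR each bit with the bit to its left, i.e. G = B xor (B >> 1).
    return b ^ (b >> 1)

def gray_to_binary(g):
    # Undo the XOR chain by folding the shifted value back in.
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

print(bin(binary_to_gray(6)))    # 0b101
print(bin(binary_to_gray(10)))   # 0b1111
```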
Representing non-numeric data
• Code: a systematic and preferably standardized way of using a set of symbols to represent information
  – Example: BCD (binary-coded decimal) for decimal numbers
  – It is a "weighted" code: the value of a symbol is a weighted sum
• Extending the set of symbols to include alphabetic characters:
  – EBCDIC (by IBM): Extended Binary Coded Decimal Interchange Code
  – ASCII: American Standard Code for Information Interchange
Codes
Cyclic codes
A circular shift of any code word produces a valid code word (possibly the same one).
Gray code is an example of a cyclic code: code words for consecutive numbers differ in only one bit.
ASCII, EBCDIC codes. State codes, e.g.:
Consequences of binary versus one-hot coding:
n-cubes of codewords: the Hamming distance between x and y is the count of positions where an x-bit differs from the corresponding y-bit. It also equals the number of links on a shortest path from x to y in the n-cube.
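On integer bit patterns the Hamming distance is a one-liner; a small Python sketch (the function name is illustrative):

```python
# Hamming distance: count the positions where the two patterns differ.
# XOR leaves a 1 exactly at the differing positions, then count the ones.

def hamming_distance(x, y):
    return bin(x ^ y).count("1")

print(hamming_distance(0b0101, 0b0110))  # 2
```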
A Gray code is a path that visits each vertex of the n-cube exactly once.
Error-detecting codes. Concept: choose code words such that corruption generates a non-code word. To detect a single-bit error, code words must occupy non-adjacent cube vertices. Solution: parity.
Error-correcting codes: minimum distance between code words > 2.
Example (minimum distance 3): can correct one-bit errors, or detect two-bit errors (e.g. a two-bit error from 0001011).
Write the minimum distance as 2c + d + 1 bits ==> the code corrects c-bit errors and detects (c + d)-bit errors.
Example: min distance = 4 = 2(1) + 1 + 1 (correct 1-bit errors, detect 2-bit errors).
But also 4 = 2(0) + 3 + 1 (correct nothing, detect up to 3-bit errors).
Why? Suppose the minimum distance is h. Correcting c-bit errors reserves a radius-c zone around each of a pair of closest code words, using 2c of the distance; an error can still be detected until it enters the wrong correction zone, leaving d = h - 2c - 1. Then 2c + d + 1 = 2c + (h - 2c - 1) + 1 = h.
Hamming codes: number the bit positions 1, 2, 3, …, n from right to left.
If the position is a power of 2 ==> check bit; otherwise ==> information bit.
e.g. (n = 7): check bits in positions 1, 2, 4; information bits in positions 3, 5, 6, 7.
Create a group for each check bit: express the check bit's position in binary, and group all information bits whose binary position has a one in the same place.
e.g. (n = 7):
  check 1 (001): information 3 (011), 5 (101), 7 (111)
  check 2 (010): information 3 (011), 6 (110), 7 (111)
  check 4 (100): information 5 (101), 6 (110), 7 (111)
Code information packets to maintain even parity in the groups.
e.g. (n = 7): packet is 1011 ==> positions 7, 6, 5, 3:
  7654321
  101x1xx
Consult the group memberships to compute the check bits:
  check 1: information 3, 5, 7 ==> bit 1 is 1
  check 2: information 3, 6, 7 ==> bit 2 is 0
  check 4: information 5, 6, 7 ==> bit 4 is 0
Code word is 1010101.
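The (7,4) encoding step above is a handful of XORs; a Python sketch using the slide's position numbering (function and parameter names are illustrative):

```python
# Hamming (7,4) encoder: information bits in positions 3, 5, 6, 7,
# even-parity check bits in positions 1, 2, 4 covering the groups
# {3,5,7}, {3,6,7}, {5,6,7} described above.

def hamming74_encode(d3, d5, d6, d7):
    c1 = d3 ^ d5 ^ d7   # group of check bit 1
    c2 = d3 ^ d6 ^ d7   # group of check bit 2
    c4 = d5 ^ d6 ^ d7   # group of check bit 4
    # Return as a string, position 7 leftmost down to position 1 rightmost.
    return f"{d7}{d6}{d5}{c4}{d3}{c2}{c1}"

# Packet 1011 -> positions 7, 6, 5, 3 are 1, 0, 1, 1:
print(hamming74_encode(d3=1, d5=1, d6=0, d7=1))  # 1010101, as on the slide
```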
Note:
A single-bit error corrupts one or more parity groups ==> minimum distance > 1.
A two-bit error in locations x, y corrupts at least one parity group ==> minimum distance > 2.
A three-bit error (e.g. in positions 1, 4, 5) goes undetected ==> minimum distance = 3.
3 = 2(1) + 0 + 1 = 2(0) + 2 + 1 ==> can correct 1-bit errors, or detect errors of size 1 or 2.
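The slides do not spell out the decoder, but the standard consequence of the group construction is that the positions of the failing check groups sum to the position of the flipped bit. A sketch of that single-error corrector (names illustrative, same string convention as the encoder above, position 7 leftmost):

```python
# Hamming (7,4) single-error correction: recompute each parity group;
# the sum of the failing groups' check positions (the syndrome) is the
# position of the corrupted bit, or 0 if every group checks out.

def hamming74_correct(word):
    bit = lambda p: int(word[7 - p])           # bit at position p (1..7)
    s1 = bit(1) ^ bit(3) ^ bit(5) ^ bit(7)     # group of check bit 1
    s2 = bit(2) ^ bit(3) ^ bit(6) ^ bit(7)     # group of check bit 2
    s4 = bit(4) ^ bit(5) ^ bit(6) ^ bit(7)     # group of check bit 4
    syndrome = s1 + 2 * s2 + 4 * s4
    if syndrome:                               # flip the indicated bit
        i = 7 - syndrome
        word = word[:i] + str(1 - int(word[i])) + word[i + 1:]
    return word, syndrome

# Corrupt position 6 of the slide's code word 1010101, then repair it:
print(hamming74_correct("1110101"))  # ('1010101', 6)
```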
Positions 15 … 1 belong to group 8, group 4, group 2, group 1 according to the ones in the binary position number. The pattern generalizes to longer bit strings:
• A single-bit error corrupts one or more parity groups ==> minimum distance > 1.
• Consider a two-bit error in positions x and y. To avoid corrupting all groups:
  -- avoid group 1: bit 1 (lsb) must be the same for both x and y
  -- avoid group 2: bit 2 the same for both
  -- avoid group 4: bit 3 the same for both, etc.
  -- this forces x = y
• So a true two-bit error corrupts at least one parity group ==> min distance > 2.
• A three-bit error (the two msb check positions and the lsb) goes undetected ==> minimum distance = 3.
• Conclude: the process always produces a Hamming code with min distance = 3.
It is traditional to permute the check bits to the far right; this arrangement is used in memory-protection schemes.
Add a traditional parity bit (covering the entire word) to obtain a code with minimum distance = 4.
For minimum distance = 4: the number of check bits grows slowly with the number of information bits.
Two-dimensional codes (product codes): the minimum distance is the product of the row and column distances. With simple parity on each row and each column (below), the minimum distance is 2 × 2 = 4.
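A minimal sketch of the parity-on-rows-and-columns product code (the function name is illustrative):

```python
# Product code with even parity on every row and column: append a parity
# bit to each row, then append a parity row covering the columns.
# Any single data bit sits in one row group and one column group, so the
# minimum distance is 2 x 2 = 4.

def add_2d_parity(rows):
    """rows: list of equal-length lists of bits; returns the extended array."""
    out = [r + [sum(r) % 2] for r in rows]    # row parity bits
    col = [sum(c) % 2 for c in zip(*out)]     # column parity row
    return out + [col]

print(add_2d_parity([[1, 0], [1, 1]]))  # [[1, 0, 1], [1, 1, 0], [0, 1, 1]]
```

After encoding, every row and every column of the result sums to an even number, which is what a checker would verify.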
Scheme for RAID storage (CRC is an extension of Hamming codes): each disk is a separate row; a column is a disk block (say 512 characters); rows carry a CRC on the row's disk; columns have even parity across disks.
Checksum codes: the mod-256 sum of the information bytes becomes the checksum byte; a mod-255 sum is used in IP packets.
m-hot codes (m-out-of-n codes): each valid code word has exactly m ones in a frame of n bits; minimum distance is 2; they detect all unidirectional errors (errors whose bit flips are all 0-to-1, or all 1-to-0).
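The mod-256 checksum is a one-liner; a Python sketch (function name illustrative):

```python
# Mod-256 checksum: sum the information bytes and keep the low byte,
# which is appended to the message as the checksum byte.

def checksum_mod256(data):
    return sum(data) % 256

print(checksum_mod256([0x12, 0xF0, 0x34]))  # 54 (0x36)
```

A receiver recomputes the sum over the information bytes and compares it with the transmitted checksum byte; a mismatch signals corruption.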
Serial data transmission:
(simple) transmit clock and sync along with the data (3 lines)
(complex) recover the clock and/or sync from the data line itself
Serial line codes:
NRZ: clock recovery works except during long sequences of zeros or ones
NRZI (nonreturn to zero, invert on one): zero ==> no change, one ==> change polarity
RZ: clock recovery works except during long sequences of zeros
Bipolar return to zero (aka alternate mark inversion): send a one alternately as +1 or -1, for DC balance
Manchester: zero ==> 0-to-1 transition, one ==> 1-to-0 transition at the center of the bit interval
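Two of these codes are easy to sketch in Python using the conventions listed above (function names, the starting level, and the half-cell sample representation are illustrative assumptions):

```python
# NRZI per the description above: a one changes the line polarity,
# a zero leaves it unchanged.

def nrzi(bits, start=0):
    level, out = start, []
    for b in bits:
        if b:
            level ^= 1       # one ==> change polarity
        out.append(level)    # zero ==> no change
    return out

# Manchester per the description above: each bit cell is two half-cells
# with the transition at the center (zero -> low-high, one -> high-low).

def manchester(bits):
    out = []
    for b in bits:
        out += [1, 0] if b else [0, 1]
    return out

print(nrzi([1, 0, 1, 1]))   # [1, 1, 0, 1]
print(manchester([0, 1]))   # [0, 1, 1, 0]
```

Note that the Manchester output has a transition in every bit cell, which is what makes clock recovery possible even during long runs.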