Part I: Machine Architecture


Part I: Machine Architecture
• A major process in the development of a science is the construction of theories that are confirmed or rejected by experimentation.
• In some cases these theories lie dormant for extended periods, waiting for technology to develop to the point that they can be tested.
• In other cases the capabilities of current technology influence the concerns of the science.


Ch. 1 Data Storage
• Storage of bits
• Main memory
• Mass storage
• Coding information for storage
• The binary system
• Storing integers
• Storing fractions
• Communication errors


Storage of bits
• Today's computers represent information as patterns of bits.
• Gates are devices that produce the output of a Boolean operation when given the operation's input values.
• A flip-flop is a circuit that has one of two output values (i.e., 0 or 1); the output will flip or flop between the two values under the control of external stimuli.
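
The gate and flip-flop ideas above can be made concrete in a few lines of code. Below is a minimal sketch, not taken from the slides, that models a NOR gate as a function and builds an S-R latch (one common flip-flop circuit) from two cross-coupled NOR gates; the class name and the fixed number of settling iterations are illustrative choices.

```python
# Minimal sketch (not from the slides): Boolean gates as functions and a
# NOR-based S-R latch, one common way to build a flip-flop that stores a bit.

def nor(a: int, b: int) -> int:
    """NOR gate: output is 1 only when both inputs are 0."""
    return 0 if (a or b) else 1

class SRLatch:
    """S-R latch from two cross-coupled NOR gates; Q stores one bit."""
    def __init__(self):
        self.q, self.q_bar = 0, 1          # start with the stored bit = 0

    def pulse(self, s: int, r: int) -> int:
        """Apply set/reset inputs, let the feedback settle, return Q."""
        for _ in range(4):                  # iterate until the loop stabilizes
            self.q = nor(r, self.q_bar)
            self.q_bar = nor(s, self.q)
        return self.q

latch = SRLatch()
print(latch.pulse(s=1, r=0))  # set   -> stores 1
print(latch.pulse(s=0, r=0))  # hold  -> still 1
print(latch.pulse(s=0, r=1))  # reset -> stores 0
```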


Storage of Bits
• A flip-flop is ideal for storing a bit within a computer (Figs. 1.3 and 1.4). A flip-flop loses its data when the power is turned off.
• Cores, donut-shaped rings of magnetic material, are obsolete today due to their size and power requirements.
• A magnetic or laser storage device is commonly used when longevity is important.
• Hexadecimal notation (Fig. 1.6).
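
As a quick illustration of the hexadecimal shorthand mentioned above, the short sketch below (not from the slides; the sample bit pattern is made up) groups a bit pattern into 4-bit chunks, each of which corresponds to one hex digit.

```python
# Minimal sketch (not from the slides): hexadecimal as shorthand for bit
# patterns; each group of 4 bits maps to one hex digit.

bits = "1010 0011"                      # an 8-bit pattern, spaces for readability
value = int(bits.replace(" ", ""), 2)   # interpret the pattern as binary
print(hex(value))                       # 0xa3 -> hex digits A and 3
print(format(value, "08b"))             # back to the original bit pattern
```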


Main Memory
• Cells: a typical cell size is 8 bits, called a byte.
• MB = 1,048,576 (2^20) bytes; similarly KB (2^10 bytes) and GB (2^30 bytes).
• An address is used to identify an individual cell in main memory.
• Random access memory (RAM).
• Read-only memory (ROM).
• Most significant bit (MSB) and least significant bit (LSB).
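
A brief sketch of the unit arithmetic and the MSB/LSB terminology above; the cell contents used here are an arbitrary example, not from the slides.

```python
# Minimal sketch (not from the slides): the power-of-two memory units named
# above, and extracting the MSB/LSB of one 8-bit cell.

KB, MB, GB = 2**10, 2**20, 2**30
print(MB)                    # 1048576 bytes

cell = 0b10110010            # contents of one 8-bit cell (arbitrary example)
msb = (cell >> 7) & 1        # most significant bit
lsb = cell & 1               # least significant bit
print(msb, lsb)              # 1 0
```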


Mass Storage
• Secondary memory.
• Stores large units of data (called files).
• Mass storage systems are slow because they require mechanical motion.
• On-line vs. off-line operations.


Mass Storage
• Disk storage.
  – Floppy disks and hard disks
  – Track, sector, seek time, latency time (rotation delay), access time, transfer time
  – Milliseconds vs. nanoseconds
• Compact disks and CD-ROM.
  – A single spiral track
• Tape storage.
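
To see why disk access is measured in milliseconds, here is a rough back-of-the-envelope sketch; the seek time, rotation speed, and transfer time are assumed illustrative figures, not values from the slides.

```python
# Minimal sketch (not from the slides): rough disk access-time arithmetic with
# assumed numbers; mass storage works in milliseconds, main memory in nanoseconds.

seek_ms = 8.0                      # assumed average seek time
rpm = 7200                         # assumed rotation speed
latency_ms = (60_000 / rpm) / 2    # average rotational delay = half a revolution
transfer_ms = 0.5                  # assumed time to transfer one sector's data

access_ms = seek_ms + latency_ms + transfer_ms
print(round(access_ms, 2), "ms")   # ~12.67 ms, vs. nanoseconds for main memory
```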


Mass Storage
• Physical vs. logical records.
• Buffer.
• Main memory and mass storage: main memory, magnetic disk, compact disk, and magnetic tape exhibit decreasing degrees of random access to data.


Representing Text
• American Standard Code for Information Interchange (ASCII): 8-bit codes.
  – Appendix A
  – Figure 1.12
• Unicode: 16-bit codes; can represent the most common Chinese and Japanese symbols.
• International Organization for Standardization (ISO): 32-bit codes.
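
A small sketch (not from the slides) showing text turned into bit patterns: ASCII uses one byte per character, while a 16-bit Unicode encoding covers symbols such as common Chinese characters.

```python
# Minimal sketch (not from the slides): encoding text as bit patterns using
# ASCII and a 16-bit Unicode encoding.

text = "Hi"
ascii_bytes = text.encode("ascii")               # one byte per character
print([format(b, "08b") for b in ascii_bytes])   # ['01001000', '01101001']

utf16 = "你好".encode("utf-16-be")                # 16 bits per character here
print(utf16.hex())                                # '4f60597d'
```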


Representing Numeric Values
• Using 16 bits (two ASCII characters), the largest number we can store in ASCII is 99.
• Binary notation (Figures 1.14 and 1.16).
  – Given 16 bits, the largest number we can store in binary is 65,535 (2^16 - 1).
• A particular value may be represented by several different bit patterns; a particular bit pattern may be given several interpretations.
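
The contrast above can be checked directly; the sketch below (not from the slides) compares the two 16-bit limits and shows one bit pattern read under two interpretations.

```python
# Minimal sketch (not from the slides): 16 bits as two ASCII digits vs. 16 bits
# as an unsigned binary integer, plus one pattern with two interpretations.

ascii_max = int("99")                 # "99" fills two 8-bit ASCII codes
binary_max = 2**16 - 1                # all sixteen bits set to 1
print(ascii_max, binary_max)          # 99 65535

pattern = 0b0100000101000010          # the ASCII codes for 'A' (65) and 'B' (66)
print(pattern)                        # read as a binary integer: 16706
print(pattern.to_bytes(2, "big").decode("ascii"))  # read as text: 'AB'
```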


Representing Images
• Bit map representation
  – An image is treated as a collection of pixels
    • a pixel can be black or white, represented by a single bit
    • a pixel can be a color, represented by three bytes
  – A typical photograph consists of 1280 rows of 1024 pixels
    • requires several megabytes of storage (see the sketch below)
    • image compression
• Vector representation provides a means of scaling.
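
The storage estimate for the bit map above works out as follows (a sketch, not from the slides, assuming three bytes per color pixel).

```python
# Minimal sketch (not from the slides): storage needed for the bit-map photo
# described above, at three bytes (red, green, blue) per pixel.

rows, cols = 1280, 1024
bytes_per_pixel = 3
size_bytes = rows * cols * bytes_per_pixel
print(size_bytes / 2**20, "MB")       # 3.75 MB before compression
```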


The Binary System
• Binary addition.
• Fractions in binary.
  – Radix point (the counterpart of the decimal point in decimal notation)
  – Figure 1.18
  – Example of addition
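
A small sketch (not from the slides) of binary addition and of a binary fraction read digit by digit around the radix point; the sample values are arbitrary.

```python
# Minimal sketch (not from the slides): binary addition and a binary fraction,
# worked with ordinary integer/float arithmetic.

a, b = 0b101101, 0b10111          # 45 and 23
print(format(a + b, "b"))         # '1000100' -> 68 in binary

# 101.101 in binary: digits right of the radix point weigh 1/2, 1/4, 1/8
value = 1*4 + 0*2 + 1*1 + 1*(1/2) + 0*(1/4) + 1*(1/8)
print(value)                      # 5.625
```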


Storing Integers in Computers
• Two's complement notation.
  – Figure 1.19
  – Sign bit
  – How to decode a bit pattern?
• Addition in two's complement notation.
  – Addition of any combination of signed numbers can be accomplished using the same algorithm
    • simplifies circuit design
  – Figure 1.21
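
A minimal sketch (not from the slides) of encoding and decoding 4-bit two's complement patterns, and of the single addition algorithm working for any mix of signs.

```python
# Minimal sketch (not from the slides): 4-bit two's complement encode/decode.

def to_twos_complement(value: int, bits: int = 4) -> str:
    """Return the two's complement bit pattern of value."""
    return format(value & ((1 << bits) - 1), f"0{bits}b")

def from_twos_complement(pattern: str) -> int:
    """Decode: if the sign bit is 1, subtract 2**bits from the unsigned value."""
    bits = len(pattern)
    unsigned = int(pattern, 2)
    return unsigned - (1 << bits) if pattern[0] == "1" else unsigned

print(to_twos_complement(-3))        # '1101'
print(from_twos_complement("1101"))  # -3

# One addition algorithm works for any mix of signs (discard the carry out):
print(from_twos_complement(to_twos_complement(5 + (-3))))   # 2
```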


Storing Integers in Computers
• Overflow problem.
  – There is a limit to the size of the values that can be represented
  – In 4-bit two's complement, 5 + 4 = -7
  – Addition of two positive (negative) values appears to be negative (positive)
• Excess notation.
  – Figures 1.22 and 1.23
    • excess 8 (excess 4) notation for bit patterns of length 4 (3)
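
The overflow example and excess notation above can be reproduced directly; the sketch below (not from the slides) keeps only four bits of a sum and encodes a few values in excess-8.

```python
# Minimal sketch (not from the slides): overflow in 4-bit two's complement and
# the excess-8 encoding of 4-bit patterns.

def decode4(pattern: int) -> int:
    """Interpret a 4-bit pattern as two's complement."""
    return pattern - 16 if pattern & 0b1000 else pattern

total = (5 + 4) & 0b1111          # keep only 4 bits of the sum
print(decode4(total))             # -7: two positives appear negative (overflow)

def excess8(value: int) -> str:
    """Excess-8: store value + 8 as an unsigned 4-bit pattern."""
    return format(value + 8, "04b")

print(excess8(0), excess8(-3), excess8(5))   # '1000' '0101' '1101'
```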


Storing Fractions in Computers
• Floating-point notation.
  – Sign bit, exponent field, mantissa field
  – Exponent expressed in excess notation
  – 01101011 = ?
  – 1.125 = ?
  – 0.375 = ?   (one decoding is worked in the sketch below)
  – All nonzero values have a mantissa starting with 1
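
One way to work the first exercise: the sketch below (not from the slides) assumes an 8-bit format with one sign bit, a 3-bit exponent in excess-4 notation, and a 4-bit mantissa; the exact field widths in the textbook's figure may differ.

```python
# A sketch, not from the slides: decoding an 8-bit floating-point pattern,
# assuming one sign bit, a 3-bit exponent in excess-4 notation, and a 4-bit
# mantissa (the textbook's exact field layout may differ).

def decode_fp8(pattern: str) -> float:
    sign = -1 if pattern[0] == "1" else 1
    exponent = int(pattern[1:4], 2) - 4            # excess-4 exponent
    mantissa = int(pattern[4:], 2) / 16            # .xxxx read as a fraction
    return sign * mantissa * 2**exponent

print(decode_fp8("01101011"))   # 2.75 under the assumed layout
```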


Storing Fractions in Computers
• Round-off errors.
  – Mantissa field is not large enough
    • 2.625 = ?   (see the sketch below)
  – Order of computation
    • 2.5 + 0.125 = ?
  – Nonterminating representation
    • 0.1 = ?
    • change the unit of measure from dollars to cents so that a dime is stored exactly
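
Two of the round-off effects above, reproduced under the same assumed 8-bit format (3-bit excess-4 exponent, 4-bit mantissa) and with Python's own binary floats for the nonterminating 0.1; this is a sketch, not the slides' worked answers.

```python
# A sketch, not from the slides: a mantissa field too small for 2.625, and the
# nonterminating binary representation of 0.1, using the assumed 8-bit format.

def encode_fp8(x: float) -> str:
    """Truncate x into the assumed sign/exponent/mantissa fields."""
    sign = "1" if x < 0 else "0"
    x = abs(x)
    exponent = 0
    while x >= 1:                 # normalize so the value is .xxxx * 2**exponent
        x /= 2
        exponent += 1
    while 0 < x < 0.5:
        x *= 2
        exponent -= 1
    mantissa = int(x * 16)        # keep only 4 mantissa bits (truncation)
    return sign + format(exponent + 4, "03b") + format(mantissa, "04b")

print(encode_fp8(2.625))        # '01101010' -> decodes back to 2.5, not 2.625
print(0.1 + 0.1 + 0.1 == 0.3)   # False: 1/10 is nonterminating in binary
```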


Data Compression
• Run-length encoding.
  – Example: a bit pattern consisting of 253 1's followed by 118 0's can be recorded as just the two counts.
• Relative encoding.
  – Each data block is coded in terms of its relationship to the previous block
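
A minimal sketch (not from the slides) of run-length encoding: the long bit pattern in the example collapses to two (bit, count) pairs.

```python
# Minimal sketch (not from the slides): run-length encoding of a bit string as
# (bit, count) pairs.

from itertools import groupby

def run_length_encode(bits: str):
    return [(bit, len(list(run))) for bit, run in groupby(bits)]

pattern = "1" * 253 + "0" * 118
print(run_length_encode(pattern))   # [('1', 253), ('0', 118)]
```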


Data Compression
• Frequency-dependent encoding.
  – More frequently used characters are represented by shorter bit patterns
  – Huffman codes
• Adaptive dictionary encoding.
  – Lempel-Ziv encoding
  – Example: ABAABQB (5, 4, A) (0, 0, D) (8, 6, B)   (expanded in the sketch below)
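
A sketch (not from the slides) that expands the Lempel-Ziv example, assuming each triple means (how far back to look, how many symbols to copy, next symbol to append); if the textbook uses a different triple convention, the expansion would differ.

```python
# A sketch, not from the slides: Lempel-Ziv style expansion, assuming each
# triple is (distance back, copy length, next symbol) and each copied run lies
# entirely within text already produced.

def lz_decode(seed: str, triples):
    text = seed
    for back, length, symbol in triples:
        start = len(text) - back
        text += text[start:start + length] + symbol
    return text

print(lz_decode("ABAABQB", [(5, 4, "A"), (0, 0, "D"), (8, 6, "B")]))
# -> 'ABAABQBAABQADQBAABQB' under the assumed convention
```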


Data Compression
• GIF.
  – Each pixel is represented by a single byte
• JPEG.
  – Human eyes are more sensitive to changes in brightness than to changes in color
  – Each four-pixel block is represented by six values (four brightness values plus two shared color values) rather than 12
• MPEG.


Communication Errors
• How can you be sure that the information you received is correct?
• Coding techniques for error detection and correction (both illustrated in the sketch below).
  – Parity bits.
  – Error-correcting codes.
    • Figures 1.28 and 1.29
    • A code with a Hamming distance of at least five can detect up to four errors and correct up to two errors.
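
A minimal sketch (not from the slides) of the two ideas: appending an odd-parity bit and measuring the Hamming distance between two patterns; the sample bit strings are arbitrary.

```python
# Minimal sketch (not from the slides): odd parity bits and Hamming distance
# between two equal-length code words.

def add_odd_parity(bits: str) -> str:
    """Append a parity bit so the total number of 1s is odd."""
    parity = "0" if bits.count("1") % 2 == 1 else "1"
    return bits + parity

def hamming_distance(a: str, b: str) -> int:
    """Number of positions in which two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

print(add_odd_parity("1000001"))                 # '10000011' -> odd number of 1s
print(hamming_distance("00000000", "11100000"))  # 3: the patterns differ in three positions
```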