Computer Architecture: A Quantitative Approach, Fifth Edition
Chapter 1: Fundamentals of Quantitative Design and Analysis
Copyright © 2012, Elsevier Inc. All rights reserved.

Introduction: Computer Technology

• Performance improvements:
  – Improvements in semiconductor technology
    » Feature size, clock speed
  – Improvements in computer architectures
    » Enabled by HLL compilers, UNIX
    » Led to RISC architectures
  – Together they have enabled:
    » Lightweight computers
    » Productivity-based managed/interpreted programming languages
    » SaaS, virtualization, cloud
• Applications evolution:
  – Speech, sound, images, video, “augmented/extended reality”, “big data”

Introduction: Single Processor Performance

[Figure: growth in single-processor performance, from the RISC era to the move to multiprocessors]

Introduction: Current Trends in Architecture

• Cannot continue to leverage instruction-level parallelism (ILP)
  – Single-processor performance improvement ended in 2003
• New models for performance:
  – Data-level parallelism (DLP)
  – Thread-level parallelism (TLP)
  – Request-level parallelism (RLP)
• These require explicit restructuring of the application

Classes of Computers

• Personal Mobile Device (PMD)
  – e.g. smart phones, tablet computers (1.8 billion sold in 2010)
  – Emphasis on energy efficiency and real-time performance
• Desktop Computing (0.35 billion)
  – Emphasis on price-performance
• Servers (20 million)
  – Emphasis on availability (very costly downtime!), scalability, throughput
• Clusters / Warehouse-Scale Computers
  – Used for “Software as a Service (SaaS)”, PaaS, IaaS, etc.
  – Emphasis on availability ($6M/hour downtime at Amazon.com!) and price-performance (power = 80% of TCO!)
  – Sub-class: supercomputers; emphasis: floating-point performance, fast internal networks, and big data analytics
• Embedded Computers (19 billion in 2010)
  – Emphasis: price

Classes of Computers: Parallelism

• Classes of parallelism in applications:
  – Data-Level Parallelism (DLP)
  – Task-Level Parallelism (TLP)
• Classes of architectural parallelism:
  – Instruction-Level Parallelism (ILP)
  – Vector architectures / Graphics Processor Units (GPUs)
  – Thread-Level Parallelism
  – Request-Level Parallelism

Classes of Computers: Flynn’s Taxonomy

• Single instruction stream, single data stream (SISD)
• Single instruction stream, multiple data streams (SIMD)
  – Vector architectures
  – Multimedia extensions
  – Graphics processor units
• Multiple instruction streams, single data stream (MISD)
  – No commercial implementation
• Multiple instruction streams, multiple data streams (MIMD)
  – Tightly-coupled MIMD
  – Loosely-coupled MIMD

Defining Computer Architecture

• “Old” view of computer architecture:
  – Instruction Set Architecture (ISA) design
  – i.e. decisions regarding: registers, memory addressing, addressing modes, instruction operands, available operations, control flow instructions, instruction encoding
• “Real” computer architecture:
  – Specific requirements of the target machine
  – Design to maximize performance within constraints: cost, power, and availability
  – Includes ISA, microarchitecture, hardware

Trends in Technology

• Integrated circuit technology
  – Transistor density: 35%/year
  – Die size: 10-20%/year
  – Integration overall: 40-55%/year
• DRAM capacity: 25-40%/year (slowing)
• Flash capacity: 50-60%/year
  – 15-20X cheaper/bit than DRAM
• Magnetic disk technology: 40%/year
  – 15-25X cheaper/bit than Flash
  – 300-500X cheaper/bit than DRAM

Trends in Technology: Bandwidth and Latency

• Bandwidth or throughput
  – Total work done in a given time
  – 10,000-25,000X improvement for processors over the first milestone
  – 300-1200X improvement for memory and disks over the first milestone
• Latency or response time
  – Time between start and completion of an event
  – 30-80X improvement for processors over the first milestone
  – 6-8X improvement for memory and disks over the first milestone

Trends in Technology: Bandwidth and Latency

[Figure: log-log plot of bandwidth and latency milestones]

Trends in Technology: Transistors and Wires

• Feature size
  – Minimum size of a transistor or wire in the x or y dimension
  – 10 microns in 1971 to 0.032 microns in 2011
• Transistor performance scales linearly
  – Wire delay does not improve with feature size!
• Integration density scales quadratically
• Linear performance and quadratic density growth present a challenge and an opportunity, creating the need for the computer architect!

Trends in Power and Energy

• Problem: get power in, get power out
• Thermal Design Power (TDP)
  – Characterizes sustained power consumption
  – Used as target for power supply and cooling system
  – Lower than peak power, higher than average power consumption
• Clock rate can be reduced dynamically to limit power consumption
• Energy per task is often a better measurement

Trends in Power and Energy: Dynamic Energy and Power

• Dynamic energy
  – Transistor switch from 0 -> 1 or 1 -> 0
  – ½ × Capacitive load × Voltage²
• Dynamic power
  – ½ × Capacitive load × Voltage² × Frequency switched
• Reducing clock rate reduces power, not energy
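
As a quick illustration of these two formulas, here is a minimal C sketch; the capacitive load, voltage, and frequency below are assumed example values, not figures from the slides.

    /* Back-of-the-envelope dynamic energy/power sketch.
     * The capacitive load, voltage, and frequency are assumed example values. */
    #include <stdio.h>

    int main(void) {
        double cap_load = 1.0e-9;   /* assumed switched capacitance, 1 nF */
        double voltage  = 1.0;      /* supply voltage, volts */
        double freq     = 3.3e9;    /* clock frequency, Hz */

        double energy = 0.5 * cap_load * voltage * voltage;         /* joules per switch */
        double power  = 0.5 * cap_load * voltage * voltage * freq;  /* watts */

        printf("Dynamic energy per switch: %.3e J\n", energy);
        printf("Dynamic power:             %.3f W\n", power);
        /* Halving the clock rate halves dynamic power, but the energy needed
         * to finish a fixed task is unchanged. */
        printf("Power at f/2:              %.3f W\n", power / 2.0);
        return 0;
    }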

Trends in Power and Energy: Power

• Intel 80386 consumed ~2 W
• A 3.3 GHz Intel Core i7 consumes 130 W
• Heat must be dissipated from a 1.5 x 1.5 cm chip
• This is the limit of what can be cooled by air

Trends in Power and Energy: Reducing Power

• Techniques for reducing power:
  – Do nothing well
  – Dynamic Voltage-Frequency Scaling
  – Low power state for DRAM, disks
  – Overclocking, turning off cores

Trends in Power and Energy: Static Power

• Static power consumption
  – Current_static × Voltage
  – Scales with number of transistors
  – To reduce: power gating
• Race-to-halt
• The new primary evaluation for design innovation:
  – Tasks per joule
  – Performance per watt

Trends in Cost

• Cost driven down by the learning curve
  – Yield
• DRAM: price closely tracks cost
• Microprocessors: price depends on volume
  – 10% less for each doubling of volume

Trends in Cost: Integrated Circuit Cost

• Integrated circuit
  – Cost of die = Cost of wafer / (Dies per wafer × Die yield)
  – Die yield = Wafer yield × 1 / (1 + Defects per unit area × Die area)^N  (Bose-Einstein formula)
  – Defects per unit area = 0.016-0.057 defects per square cm (2010)
  – N = process-complexity factor = 11.5-15.5 (40 nm, 2010)
• The manufacturing process dictates the wafer cost, wafer yield, and defects per unit area
• The architect’s design affects the die area, which in turn affects the defects and cost per die
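
A minimal C sketch of the die-cost calculation above; the wafer cost, wafer diameter, and die area are assumed example values, and edge losses in the dies-per-wafer count are ignored for simplicity.

    /* Die-cost sketch using the yield model on this slide.
     * Wafer cost, wafer diameter, and die area are assumed example values. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double PI = 3.14159265358979;
        double wafer_cost = 5000.0;  /* assumed cost of a 300 mm wafer, $ */
        double wafer_diam = 30.0;    /* wafer diameter, cm */
        double die_area   = 1.0;     /* die area, cm^2 */
        double defects    = 0.03;    /* defects per cm^2 (slide range: 0.016-0.057) */
        double N          = 13.5;    /* process-complexity factor (slide range: 11.5-15.5) */

        /* First-order dies-per-wafer estimate: wafer area / die area (edge losses ignored). */
        double dies_per_wafer = PI * (wafer_diam / 2) * (wafer_diam / 2) / die_area;
        double die_yield      = 1.0 / pow(1.0 + defects * die_area, N);  /* wafer yield taken as 100% */
        double cost_per_die   = wafer_cost / (dies_per_wafer * die_yield);

        printf("Dies per wafer: %.0f\n", dies_per_wafer);
        printf("Die yield:      %.3f\n", die_yield);
        printf("Cost per die:   $%.2f\n", cost_per_die);
        return 0;
    }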

Dependability

• Systems alternate between two states of service with respect to an SLA/SLO:
  1. Service accomplishment, where service is delivered as specified by the SLA
  2. Service interruption, where the delivered service is different from the SLA
• Module reliability: failure (F) = transition from state 1 to 2; repair (R) = transition from state 2 to 1
  – Mean time to failure (MTTF)
  – Mean time to repair (MTTR)
  – Mean time between failures (MTBF) = MTTF + MTTR
  – Availability = MTTF / MTBF
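
A minimal C sketch of the availability definition; the MTTF and MTTR figures are assumed example values.

    /* Availability sketch: MTTF and MTTR are assumed example values. */
    #include <stdio.h>

    int main(void) {
        double mttf = 1000000.0;  /* mean time to failure, hours */
        double mttr = 24.0;       /* mean time to repair, hours */
        double mtbf = mttf + mttr;

        printf("MTBF:         %.0f hours\n", mtbf);
        printf("Availability: %.6f (%.4f%%)\n", mttf / mtbf, 100.0 * mttf / mtbf);
        return 0;
    }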

Measuring Performance

• Typical performance metrics:
  – Response time
  – Throughput
• Speedup of X relative to Y
  – Execution time_Y / Execution time_X
• Execution time
  – Wall clock time: includes all system overheads
  – CPU time: only computation time
• Benchmarks
  – Kernels (e.g. matrix multiply)
  – Toy programs (e.g. sorting)
  – Synthetic benchmarks (e.g. Dhrystone)
  – Benchmark suites (e.g. SPEC06fp, TPC-C)

Principles of Computer Design

• Take Advantage of Parallelism
  – e.g. multiple processors, disks, memory banks, pipelining, multiple functional units
• Principle of Locality
  – Reuse of data and instructions
• Focus on the Common Case
  – Amdahl’s Law

Principles of Computer Design: The Processor Performance Equation

  CPU time = CPU clock cycles for a program × Clock cycle time
  CPI = CPU clock cycles for a program / Instruction count
  CPU time = Instruction count × CPI × Clock cycle time
           = (Instructions/Program) × (Clock cycles/Instruction) × (Seconds/Clock cycle)

Principles of Computer Design: Different Instruction Types Having Different CPIs

  CPU clock cycles = Σ (IC_i × CPI_i), summed over the instruction classes i
  CPU time = ( Σ (IC_i × CPI_i) ) × Clock cycle time
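
A minimal C sketch of the weighted-CPI calculation; the instruction mix, per-class CPIs, and clock rate are assumed example values.

    /* Weighted-CPI sketch: the instruction mix, CPIs, and clock rate are assumed. */
    #include <stdio.h>

    int main(void) {
        double ic[]  = { 50e9, 30e9, 20e9 };  /* instruction counts: ALU, load/store, branch */
        double cpi[] = { 1.0,  2.0,  1.5  };  /* CPI for each class */
        double clock_rate = 3.0e9;            /* 3 GHz */

        double cycles = 0.0, insts = 0.0;
        for (int i = 0; i < 3; i++) {
            cycles += ic[i] * cpi[i];         /* CPU clock cycles = sum of IC_i * CPI_i */
            insts  += ic[i];
        }

        printf("Average CPI: %.2f\n", cycles / insts);
        printf("CPU time:    %.2f s\n", cycles / clock_rate);
        return 0;
    }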

Chapter 1 Review & Examples

Instruction Set Architecture (ISA)

• Serves as an interface between software and hardware.
• Provides a mechanism by which the software tells the hardware what should be done.

  High-level language code (C, C++, Java, Fortran)
      -> compiler ->
  Assembly language code (architecture-specific statements)
      -> assembler ->
  Machine language code (architecture-specific bit patterns)

• The instruction set sits at the boundary: everything above it is software, everything below it is hardware.

Instruction Set Design Issues

• Instruction set design issues include:
  – Where are operands stored?
    » registers, memory, stack, accumulator
  – How many explicit operands are there?
    » 0, 1, 2, or 3
  – How is the operand location specified?
    » register, immediate, indirect, ...
  – What type & size of operands are supported?
    » byte, int, float, double, string, vector, ...
  – What operations are supported?
    » add, sub, mul, move, compare, ...

Classifying ISAs

  Accumulator (before 1960, e.g. 68HC11): 1-address
      add A            acc <- acc + mem[A]

  Stack (1960s to 1970s): 0-address
      add              tos <- tos + next

  Memory-Memory (1970s to 1980s): 2-address / 3-address
      add A, B         mem[A] <- mem[A] + mem[B]
      add A, B, C      mem[A] <- mem[B] + mem[C]

  Register-Memory (1970s to present, e.g. 80x86): 2-address
      add R1, A        R1 <- R1 + mem[A]
      load R1, A       R1 <- mem[A]

  Register-Register (Load/Store, RISC) (1960s to present, e.g. MIPS): 3-address
      add R1, R2, R3   R1 <- R2 + R3
      load R1, R2      R1 <- mem[R2]
      store R1, R2     mem[R1] <- R2

Operand Locations in Four ISA Classes

[Figure: operand locations for the stack, accumulator, register-memory, and register-register (GPR) ISA classes]

Code Sequence C = A + B for Four Instruction Sets

  Stack       Accumulator   Register (register-memory)   Register (load-store)
  Push A      Load A        Load R1, A                    Load R1, A
  Push B      Add B         Add R1, B                     Load R2, B
  Add         Store C       Store C, R1                   Add R3, R1, R2
  Pop C                                                   Store C, R3

  (Add B: acc <- acc + mem[B];  Add R1, B: R1 <- R1 + mem[B];  Add R3, R1, R2: R3 <- R1 + R2)

Types of Addressing Modes (VAX)

  Addressing Mode        Example                Action
  1. Register direct     Add R4, R3             R4 <- R4 + R3
  2. Immediate           Add R4, #3             R4 <- R4 + 3
  3. Displacement        Add R4, 100(R1)        R4 <- R4 + M[100 + R1]
  4. Register indirect   Add R4, (R1)           R4 <- R4 + M[R1]
  5. Indexed             Add R4, (R1 + R2)      R4 <- R4 + M[R1 + R2]
  6. Direct              Add R4, (1000)         R4 <- R4 + M[1000]
  7. Memory indirect     Add R4, @(R3)          R4 <- R4 + M[M[R3]]
  8. Autoincrement       Add R4, (R2)+          R4 <- R4 + M[R2]; R2 <- R2 + d
  9. Autodecrement       Add R4, (R2)-          R4 <- R4 + M[R2]; R2 <- R2 - d
  10. Scaled             Add R4, 100(R2)[R3]    R4 <- R4 + M[100 + R2 + R3*d]

• Studies by [Clark and Emer] indicate that modes 1-4 account for 93% of all operands on the VAX.
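
To make a few of these modes concrete, here is a small C sketch (not from the slides) that models a toy register file and a word-addressed memory and evaluates some of the modes from the table; all register and memory contents are made-up example values.

    /* Toy addressing-mode sketch over a small register file and word-addressed memory. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t reg[16];
    static uint32_t mem[256];

    static uint32_t register_direct(int r)          { return reg[r]; }
    static uint32_t immediate(uint32_t imm)         { return imm; }
    static uint32_t displacement(int r, uint32_t d) { return mem[d + reg[r]]; }
    static uint32_t register_indirect(int r)        { return mem[reg[r]]; }
    static uint32_t memory_indirect(int r)          { return mem[mem[reg[r]]]; }

    int main(void) {
        reg[1] = 10; reg[3] = 20;
        mem[10] = 9; mem[20] = 7; mem[7] = 42; mem[110] = 5;

        reg[4] += register_direct(3);      /* Add R4, R3       -> R4 + R3          */
        reg[4] += immediate(3);            /* Add R4, #3       -> R4 + 3           */
        reg[4] += displacement(1, 100);    /* Add R4, 100(R1)  -> R4 + M[100 + R1] */
        reg[4] += register_indirect(1);    /* Add R4, (R1)     -> R4 + M[R1]       */
        reg[4] += memory_indirect(3);      /* Add R4, @(R3)    -> R4 + M[M[R3]]    */

        printf("R4 = %u\n", reg[4]);       /* 20 + 3 + 5 + 9 + 42 = 79 */
        return 0;
    }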

Types of Operations

• Arithmetic and Logic:   AND, ADD
• Data Transfer:          MOVE, LOAD, STORE
• Control:                BRANCH, JUMP, CALL
• System:                 OS CALL, VM
• Floating Point:         ADDF, MULF, DIVF
• Decimal:                ADDD, CONVERT
• String:                 MOVE, COMPARE
• Graphics:               (DE)COMPRESS

MIPS Instructions

• All instructions are exactly 32 bits wide
• Different formats for different purposes
• Similarities in formats ease implementation

  R-Format (bits 31..0):  op (6) | rs (5) | rt (5) | rd (5) | shamt (5) | funct (6)
  I-Format (bits 31..0):  op (6) | rs (5) | rt (5) | offset (16)
  J-Format (bits 31..0):  op (6) | address (26)

MIPS Instruction Types

• Arithmetic & Logical - manipulate data in registers
    add $s1, $s2, $s3     $s1 = $s2 + $s3
    or  $s3, $s4, $s5     $s3 = $s4 OR $s5
• Data Transfer - move register data to/from memory (load & store)
    lw $s1, 100($s2)      $s1 = Memory[$s2 + 100]
    sw $s1, 100($s2)      Memory[$s2 + 100] = $s1
• Branch - alter program flow
    beq $s1, $s2, 25      if ($s1 == $s2) PC = PC + 4 + 4*25, else PC = PC + 4

MIPS Arithmetic & Logical Instructions

• Instruction usage (assembly)
    add dest, src1, src2     dest = src1 + src2
    sub dest, src1, src2     dest = src1 - src2
    and dest, src1, src2     dest = src1 AND src2
• Instruction characteristics
  – Always 3 operands: destination + 2 sources
  – Operand order is fixed
  – Operands are always general-purpose registers
• Design Principles:
  – Design Principle 1: Simplicity favors regularity
  – Design Principle 2: Smaller is faster

Arithmetic & Logical Instructions: Binary Representation

  bits 31..0:  op (6) | rs (5) | rt (5) | rd (5) | shamt (5) | funct (6)

• Used for arithmetic, logical, shift instructions
  – op: basic operation of the instruction (opcode)
  – rs: first register source operand
  – rt: second register source operand
  – rd: register destination operand
  – shamt: shift amount (more about this later)
  – funct: function - specific type of operation
• Also called “R-Format” or “R-Type” instructions

Arithmetic & Logical Instructions: Binary Representation Example

• Machine language for add $8, $17, $18
• See reference card for op, funct values

  Field    op      rs     rt     rd     shamt  funct
  Decimal  0       17     18     8      0      32
  Binary   000000  10001  10010  01000  00000  100000
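
A small C sketch that packs the field values above into the 32-bit instruction word, as a check on the encoding.

    /* Pack the R-format fields for add $8, $17, $18 into a 32-bit word. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t r_format(uint32_t op, uint32_t rs, uint32_t rt,
                             uint32_t rd, uint32_t shamt, uint32_t funct) {
        return (op << 26) | (rs << 21) | (rt << 16) |
               (rd << 11) | (shamt << 6) | funct;
    }

    int main(void) {
        /* add $8, $17, $18 : op=0, rs=17, rt=18, rd=8, shamt=0, funct=32 */
        uint32_t word = r_format(0, 17, 18, 8, 0, 32);
        printf("add $8, $17, $18 -> 0x%08X\n", word);  /* prints 0x02324020 */
        return 0;
    }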

MIPS Data Transfer Instructions

• Transfer data between registers and memory
• Instruction format (assembly)
    lw $dest, offset($addr)     load word
    sw $src, offset($addr)      store word
• Uses:
  – Accessing a variable in main memory
  – Accessing an array element
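
For the array-element case, here is a small C fragment; the comments sketch the kind of lw/sw sequence a MIPS compiler might emit, with illustrative (not authoritative) register choices.

    /* Array-element access; comments sketch a possible MIPS translation. */
    #include <stdio.h>

    int main(void) {
        int a[8] = {0};

        a[3] = a[3] + 5;
        /* Roughly, with the base address of 'a' in $s2:
         *   lw   $t0, 12($s2)    # offset = 3 * sizeof(int) = 12 bytes
         *   addi $t0, $t0, 5
         *   sw   $t0, 12($s2)
         */

        printf("a[3] = %d\n", a[3]);
        return 0;
    }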

Review: Chapter 1

• Classes of Computers and Classes of Parallelism
• Technology Trend
• Dependability
• Performance Measurements and Benchmarks
• Principles

5 Classes of Computers

• Personal Mobile Devices
  – Cost is its primary concern
  – Energy, media performance, and responsiveness
• Desktop Computing
  – Price-performance is its primary concern
• Servers
  – Availability, scalability, and throughput
• Clusters / warehouse-scale computers
  – Price-performance, energy
• Embedded Computers
  – Price

Classes of Parallelism & Architectures

• Data-Level Parallelism
  – Data items can be operated on at the same time
• Task-Level Parallelism
  – Tasks can operate independently and largely in parallel
• Instruction-Level Parallelism: data-level parallelism
  – Pipelining, speculative execution
• Vector Architectures & GPUs: data-level parallelism
  – A single instruction operates on a collection of data in parallel
• Thread-Level Parallelism: either data-level or task-level parallelism
  – Exploits parallelism via parallel threads
• Request-Level Parallelism: task-level parallelism
  – Exploits parallelism via decoupled tasks

4 Ways for Hardware to Support Parallelism

• Single Instruction stream, Single Data stream (SISD)
• Single Instruction stream, Multiple Data streams (SIMD)
  – e.g., GPUs; targets data-level parallelism
• Multiple Instruction streams, Single Data stream (MISD)
  – No commercial multiprocessor of this type
• Multiple Instruction streams, Multiple Data streams (MIMD)
  – e.g., multi-core processors; targets task-level parallelism

Trend in Technology

• Integrated Circuit (IC) logic technology
  – Moore’s Law: a growth rate in transistor count on a chip of about 40%-55% per year, or doubling every 18 or 24 months
• Semiconductor DRAM
  – In 2011, a growth rate in capacity of 25%-40% per year
• Flash
  – A growth rate in capacity of 50%-60% per year
• Magnetic Disk
  – Since 2004, it has dropped back to 40% per year

Trend in Performance

• Bandwidth vs. Latency
  – The improvement in bandwidth has been much more significant than the improvement in latency.

Growth in Processor Performance

[Figure: single-processor performance growth. Annotations: RISC; instruction-level parallelism via pipelining; locality via caches; hurdles: the power wall and the lack of further instruction-level parallelism; then the move to multiprocessors]

An Example of an Intel 486 CPU

• Released in 1992, 66 MHz, with L2 cache, 4.9-6.3 W
• Source: http://www.cpu-world.com/CPUs/80486/Intel.A80486DX2-66.html

[Figure: photo of the Intel 486 DX2-66 CPU]

A CPU Fan for the Intel 486 CPU

• Source: http://www.cnaweb.com/486-ball-bearing-cpufan.aspx

[Figure: photo of a 486 ball-bearing CPU fan]

An Example of an Intel Pentium 4 CPU

• Released in 2002, 2.8 GHz, with 512 KB cache, 68.4 W
• Source: http://www.pcplanetsystems.com/abc/product_details.php?item_id=146&category_id=61

[Figure: photo of the Intel Pentium 4 CPU]

A Typical CPU Fan for the Intel Pentium 4

• Source: http://www.dansdata.com/p4coc.htm

[Figure: photo of a Pentium 4 CPU fan]

A Special CPU Fan for Gaming/Multimedia Users

• Source: http://www.pcper.com/reviews/Cases-and.Cooling/Asus-Star-Ice-CPU-Cooler-Review

[Figure: photo of the Asus Star Ice CPU cooler]

Trend in Power and Energy in IC

• Energy_dynamic
  – ½ × Capacitive Load × Voltage²
• Power_dynamic
  – ½ × Capacitive Load × Voltage² × Frequency switched
• Example
  – Intel 486: 66 MHz, voltage 5 V
  – Intel Pentium 4: 2.8 GHz, voltage 1.5 V
  – Intel Core 990X: 3.4 GHz, voltage 0.8-1.375 V
• Improving Energy Efficiency
  – Do nothing well; Dynamic Voltage-Frequency Scaling (DVFS); design for the typical case; overclocking
• Power_static
  – Current_static × Voltage
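
As a DVFS illustration of the Power_dynamic formula, here is a small C sketch comparing two operating points; the voltages follow the Core 990X range on this slide, while the two frequencies are assumed example values.

    /* DVFS sketch: relative dynamic power when voltage and frequency are scaled together.
     * Voltages follow the 0.8-1.375 V range above; the frequencies are assumed. */
    #include <stdio.h>

    int main(void) {
        double v_nom = 1.375, f_nom = 3.4e9;   /* nominal operating point */
        double v_low = 0.8,   f_low = 2.0e9;   /* assumed scaled-down point */

        /* The capacitive load cancels in the ratio, so only V^2 * f matters. */
        double ratio = (v_low * v_low * f_low) / (v_nom * v_nom * f_nom);

        printf("Dynamic power at the low point: %.1f%% of nominal\n", 100.0 * ratio);
        return 0;
    }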

Dependability

• Service accomplishment & service interruption
• Transitions between the 2 states: failure & restoration
• Measurements
  – Reliability: a measure of the continuous service accomplishment from a reference initial instant
    » MTTF: mean time to failure
    » FIT: failures per billion hours = 1/MTTF × 10^9
    » MTTR: mean time to repair
    » MTBF: mean time between failures = MTTF + MTTR
  – Availability: a measure of the service accomplishment with respect to the alternation between the two states
    » MTTF / (MTTF + MTTR)
    » Upper bound: 100%

Performance Measurements and Benchmarks

• Metrics
  – Throughput: the total amount of work done in a given time
  – Response time (execution time): the time between the start and the completion of an event
• Speedup of X relative to Y
  – Execution time_Y / Execution time_X
• Execution time
  – Wall clock time: the latency to complete a task
  – CPU time: only computation time
• Benchmarks
  – Kernels, toy programs, synthetic benchmarks
  – Benchmark suites: SPEC [CPU] & TPC [Transaction Processing]
  – SPECRatio = Execution Time_reference / Execution Time_target

Design Principles

• Take Advantage of Parallelism
• Principle of Locality
• Focus on the Common Case
  – Amdahl’s Law
  – Upper bound of the speedup: ?

Example: Laundry Room

• Dirty laundry -> washing machine (30 minutes washing) -> drying machine (90 minutes drying) -> clean laundry
• Total execution time: 30 + 90 = 120 minutes
• Washing portion: 30/120 = ¼
• Drying portion: 90/120 = ¾

If We Can Have Two Drying Machines

• Dirty laundry -> washing machine (30 minutes washing) -> 2 drying machines (90/2 = 45 minutes drying) -> clean laundry

Speedup: (30+90)/(30+45) = 1.6

• Washing: 30 minutes; drying with 2 machines: 90/2 = 45 minutes

If We Can Have Unlimited Drying Machines

• Dirty laundry -> washing machine (30 minutes washing) -> ∞ drying machines (? minutes drying) -> clean laundry

Speedup: (30+90)/(30+0) = 4

• Washing: 30 minutes; drying with ∞ machines: 90/∞ ≈ 0 minutes

Design Principles

• Take Advantage of Parallelism
• Principle of Locality
• Focus on the Common Case
  – Amdahl’s Law
  – Upper bound of the speedup:
    » 1 / (1 - Fraction_enhanced)

Exercise 1

• If the new processor is 10 times faster than the original processor, and we assume that the original processor is busy with computation 40% of the time and is waiting for I/O 60% of the time, what is the overall speedup gained by incorporating the enhancement?
  – Fraction_enhanced = 0.4, Speedup_enhanced = 10
  – Speedup_overall = 1 / (0.6 + 0.4/10) = 1.56
• What is the upper bound of the overall speedup?
  – Upper bound = 1/0.6 = 1.67
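
A minimal C sketch of the Amdahl's Law arithmetic used in this exercise.

    /* Amdahl's Law for Exercise 1: 40% of the time is sped up by a factor of 10. */
    #include <stdio.h>

    static double amdahl(double fraction, double speedup) {
        return 1.0 / ((1.0 - fraction) + fraction / speedup);
    }

    int main(void) {
        double fraction = 0.4, speedup = 10.0;

        printf("Overall speedup: %.2f\n", amdahl(fraction, speedup));  /* ~1.56 */
        /* Upper bound: let the speedup on the enhanced fraction go to infinity. */
        printf("Upper bound:     %.2f\n", 1.0 / (1.0 - fraction));     /* ~1.67 */
        return 0;
    }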

Exercise 2

• In a disk subsystem:
  – 10 disks, each rated at 1,000,000-hour MTTF
  – 1 ATA controller, 500,000-hour MTTF
  – 1 power supply, 200,000-hour MTTF
  – 1 fan, 200,000-hour MTTF
  – 1 ATA cable, 1,000,000-hour MTTF
• Assuming the lifetimes are exponentially distributed and that failures are independent, compute the MTTF of the system as a whole

Exercise 2

• Because the overall failure rate of the collection is the sum of the failure rates of the modules, the failure rate of the system
  – = 10 × (1/1,000,000) + 1/500,000 + 1/200,000 + 1/200,000 + 1/1,000,000
  – = 23/1,000,000, or 23,000 FIT
• Because MTTF is the inverse of the failure rate
  – MTTF_system = 1/(23/1,000,000) ≈ 43,500 hours
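
A small C sketch that checks this result by summing the module failure rates and inverting.

    /* Check for Exercise 2: sum the module failure rates, then invert. */
    #include <stdio.h>

    int main(void) {
        /* Failure rates in failures per hour (1 / MTTF). */
        double rate = 10.0 / 1000000.0   /* 10 disks        */
                    + 1.0 / 500000.0     /* ATA controller  */
                    + 1.0 / 200000.0     /* power supply    */
                    + 1.0 / 200000.0     /* fan             */
                    + 1.0 / 1000000.0;   /* ATA cable       */

        printf("System failure rate: %.0f FIT\n", rate * 1e9);    /* 23000 FIT */
        printf("System MTTF:         %.0f hours\n", 1.0 / rate);  /* ~43478 hours */
        return 0;
    }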