William Stallings Computer Organization and Architecture Chapter 12


CPU Structure and Function
Rev. (2008-09) by Luciano Gualà

CPU Functions
• The CPU must:
§ Fetch instructions
§ Decode instructions
§ Fetch operands
§ Execute instructions / process data
§ Store data
§ Check (and possibly serve) interrupts

CPU Components
• Internal view (diagram): the CPU internal bus connects the PC, IR, MAR, MBR, AC, the general registers, and the ALU; the control unit issues the control signals
• External connections: address lines, data lines, and control lines to the system bus

Kind of Registers
• User visible and modifiable
§ General purpose
§ Data (e.g. accumulator)
§ Address (e.g. base addressing, index addressing)
• Control registers (not visible to the user)
§ Program Counter (PC)
§ Instruction Register (IR)
§ Memory Address Register (MAR)
§ Memory Buffer Register (MBR)
• State register (visible to the user but not directly modifiable)
§ Program Status Word (PSW)

Kind of General Purpose Registers
• May be used in a general way, or be restricted to contain only data or only addresses
• Examples of address registers: segment pointers, index registers, stack pointer
• Making registers general purpose:
§ Increases flexibility and programmer options
§ Increases instruction size & complexity
• Making registers specialized (data/address):
§ Smaller (faster) instructions
§ Less flexibility

How Many General Purpose Registers?
• Typically between 8 and 32
• Fewer registers mean more memory references
• Many more do not noticeably reduce memory references

How many bits per register?
• Large enough to hold a full address value
• Large enough to hold a full data value
• Often possible to combine two data registers into a single double-length register

State Registers
• Sets of individual bits
§ e.g. store whether the result of the last operation was zero or not
• Can be read (implicitly) by programs
§ e.g. jump if zero
• Cannot (usually) be set by programs
• There is always a Program Status Word (see later)
• Possibly (for operating system purposes):
§ Memory page table (virtual memory)
§ Process control blocks (multitasking)
• How to store control information?
§ registers vs. main memory

Program Status Word
• A set of bits, including condition code bits, giving the status of the program:
§ Sign of last result
§ Zero
§ Carry
§ Equal
§ Overflow
§ Interrupt enable/disable
§ Supervisor mode (allows executing privileged instructions)
• Used by the operating system (not available to user programs)

Example Register Organizations

Instruction Cycle (with Interrupt)
• Diagram: fetch phase, execute phase, interrupt phase

Instruction Cycle (with Indirect Addressing)

A closer look at the execution phase
• Execute
• Decode – Fetch Operand – Execute
• Decode – Calculate Address – Fetch Operand – Execute
• Decode – Calculate … – … Operand – Execute – Write Result

Instruction Cycle State Diagram (with Indirection)

Data Flow for Instruction Fetch
• The PC contains the address of the next instruction
• Sequence of actions needed to execute an instruction fetch:
1. PC is moved to MAR
2. MAR content is placed on the address bus
3. Control unit requests a memory read
4. Memory reads the address bus and places the result on the data bus
5. Data bus is copied to MBR
6. MBR is copied to IR
7. Meanwhile, PC is incremented by 1
• Action 7 can be executed in parallel with any other action after the first
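The fetch micro-steps above can be sketched in code. This is a minimal illustrative model (the register names follow the slides; the dictionary-based CPU and memory are assumptions made for the example, not a real implementation):

```python
# Sketch of the instruction-fetch data flow from the slides.
# cpu is a hypothetical register file; memory maps addresses to instructions.

def fetch(cpu, memory):
    """Run fetch micro-steps: PC -> MAR -> memory -> MBR -> IR, then PC += 1."""
    cpu["MAR"] = cpu["PC"]        # 1. PC is moved to MAR
    address = cpu["MAR"]          # 2. MAR content goes on the address bus
    data = memory[address]        # 3-4. control unit requests read; memory responds on the data bus
    cpu["MBR"] = data             # 5. data bus is copied to MBR
    cpu["IR"] = cpu["MBR"]        # 6. MBR is copied to IR
    cpu["PC"] += 1                # 7. meanwhile, PC is incremented by 1

cpu = {"PC": 0x10, "MAR": 0, "MBR": 0, "IR": 0}
memory = {0x10: "LOAD R1, 0x42"}  # hypothetical instruction at address 0x10
fetch(cpu, memory)
```

After the call, `cpu["IR"]` holds the fetched instruction and `cpu["PC"]` points at the next one; step 7 is shown sequentially here, though the slides note it can overlap the other actions.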

Diagrams representing Data Flows
• The previous example shows 7 distinct actions, each corresponding to a DF (= data flow)
• Distinct DFs are not necessarily executed at distinct time steps (i.e. DFn and DFn+1 might be executed during the same time step)
• Large white arrows represent DFs with a true flow of data
• Large hatched arrows represent DFs where the flow of data acts as a control: only the most relevant controls are shown

Data Flow Diagram for Instruction Fetch
• Diagram: numbered arrows among PC, MAR, MBR, IR, the control unit, memory, and the address/data/control buses, matching actions 1-7 above
• Legend: MAR = Memory Address Register, MBR = Memory Buffer Register, IR = Instruction Register, PC = Program Counter

Data Flow for Data Fetch: Immediate and Register Addressing
• ALWAYS: IR is examined to determine the addressing mode
• Immediate addressing:
§ The operand is already in IR
• Register addressing:
§ Control unit requests a read from the register selected according to the value in IR

Data Flow Diagram for Data Fetch with Register Addressing
• Diagram: the control unit selects a register according to IR; no memory access is needed

Data Flow for Data Fetch: Direct Addressing
• Direct addressing:
1. Address field is moved to MAR
2. MAR content is placed on the address bus
3. Control unit requests a memory read
4. Memory reads the address bus and places the result on the data bus
5. Data bus (= operand) is copied to MBR

Data Flow Diagram for Data Fetch with Direct Addressing
• Diagram: numbered arrows among IR, MAR, MBR, the control unit, memory, and the buses, matching steps 1-5 above

Data Flow for Data Fetch: Register Indirect Addressing
• Register indirect addressing:
1. Control unit requests a read from the register selected according to the value in IR
2. Selected register value is moved to MAR
3. MAR content is placed on the address bus
4. Control unit requests a memory read
5. Memory reads the address bus and places the result on the data bus
6. Data bus (= operand) is moved to MBR

Data Flow Diagram for Data Fetch with Register Indirect Addressing
• Diagram: numbered arrows among the registers, MAR, MBR, IR, the control unit, memory, and the buses, matching steps 1-6 above

Data Flow for Data Fetch: Indirect Addressing
• Indirect addressing:
1. Address field is moved to MAR
2. MAR content is placed on the address bus
3. Control unit requests a memory read
4. Memory reads the address bus and places the result on the data bus
5. Data bus (= address of operand) is moved to MBR
6. MBR is transferred to MAR
7. MAR content is placed on the address bus
8. Control unit requests a memory read
9. Memory reads the address bus and places the result on the data bus
10. Data bus (= operand) is copied to MBR
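The contrast between direct and indirect operand fetch can be sketched as follows. This is an illustrative flat-memory model (the function names and the example addresses are assumptions, not part of the slides): direct addressing costs one read cycle, indirect addressing costs two.

```python
# Sketch: direct vs. indirect operand fetch over a hypothetical flat memory.

def fetch_operand_direct(memory, address_field):
    # Steps 1-5: a single read cycle returns the operand itself.
    return memory[address_field]

def fetch_operand_indirect(memory, address_field):
    # Steps 1-5: the first read returns the ADDRESS of the operand.
    pointer = memory[address_field]
    # Steps 6-10: a second read cycle returns the operand.
    return memory[pointer]

memory = {100: 200, 200: 7}   # location 100 holds a pointer to location 200
direct = fetch_operand_direct(memory, 200)     # one memory access
indirect = fetch_operand_indirect(memory, 100) # two memory accesses
```

Both calls yield the operand 7, but the indirect version performs twice the memory traffic, which is exactly why the data flow diagram pairs its arrow labels (1-5 and 6-10) on the same paths.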

Data Flow Diagram for Data Fetch with Indirect Addressing
• Diagram: the two read cycles share the same paths, so each arrow carries a pair of step labels (1-5 for the first read, 6-10 for the second)

Data Flow for Data Fetch: Relative Addressing
• Relative addressing (a form of displacement):
1. Address field is moved to the ALU
2. PC is moved to the ALU
3. Control unit requests a sum from the ALU
4. Result from the ALU is moved to MAR
5. MAR content is placed on the address bus
6. Control unit requests a memory read
7. Memory reads the address bus and places the result on the data bus
8. Data bus (= operand) is copied to MBR

Data Flow Diagram for Data Fetch with Relative Addressing
• Diagram: the PC and the address field feed the ALU, whose result goes to MAR; the usual memory read follows (ALU = Arithmetic Logic Unit)

Data Flow for Data Fetch: Base Addressing
• Base addressing (a form of displacement):
1. Control unit requests a read from the register selected according to the value in IR (explicit selection)
2. Selected register value is moved to the ALU
3. Address field is moved to the ALU
4. Control unit requests a sum from the ALU
5. Result from the ALU is moved to MAR
6. MAR content is placed on the address bus
7. Control unit requests a memory read
8. Memory reads the address bus and places the result on the data bus
9. Result (= operand) is moved to MBR

Data Flow Diagram for Data Fetch with Base Addressing
• Diagram: a selected register and the address field feed the ALU, whose result goes to MAR; the usual memory read follows

Data Flow for Data Fetch: Indexed Addressing
• Indexed addressing (a form of displacement) has the same data flow as base addressing:
1. Control unit requests a read from the register selected according to the value in IR (explicit selection)
2. Selected register value is moved to the ALU
3. Address field is moved to the ALU
4. Control unit requests a sum from the ALU
5. Result from the ALU is moved to MAR
6. MAR content is placed on the address bus
7. Control unit requests a memory read
8. Memory reads the address bus and places the result on the data bus
9. Result (= operand) is moved to MBR

Data Flow Diagram for Data Fetch with Indexed Addressing
• The diagram is the same as for base addressing

Data Flow for Data Fetch with Indirection and Displacement
• There are two different combinations of displacement and indirection (pre-index and post-index)
• See Chapter 11 for the logical diagrams
• The data flow is a combination of what happens with the two techniques
• Try drawing the data flow diagrams yourself!
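All the displacement modes above (relative, base, indexed) share one core step: the ALU sums a base value and a displacement to produce the effective address before the memory read. A minimal sketch, where the specific register values are made-up examples:

```python
# Sketch: the ALU step common to relative, base, and indexed addressing.
# EA = base value + displacement; only the source of the base differs.

def effective_address(base_value, displacement):
    return base_value + displacement

pc = 0x1000        # relative addressing: the base is the PC
base_reg = 0x2000  # base addressing: the base is a selected register
index_reg = 0x8    # indexed addressing: a register supplies the offset

ea_relative = effective_address(pc, 0x20)            # PC + address field
ea_base = effective_address(base_reg, 0x20)          # base register + address field
ea_indexed = effective_address(base_reg, index_reg)  # address value + index register
```

The memory-read tail (EA to MAR, then bus, then MBR) is identical in all three modes, which is why the slides note that the indexed diagram is the same as the base one.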

Data Flow for Execute
• May take many forms
• Depends on the actual instruction being executed
• May include:
§ Memory read/write
§ Input/Output
§ Register transfers
§ ALU operations

Data Flow for Interrupt
• The current PC has to be saved (usually to the stack) to allow resumption after the interrupt, and execution has to continue at the interrupt handler routine
1. Save the content of PC:
a. Contents of PC is copied to MBR
b. A special memory location (e.g. the stack pointer) is loaded into MAR
c. Contents of MAR and MBR are placed, respectively, on the address and data bus
d. Control unit requests a memory write
e. Memory reads the address and data buses and stores to the memory location
2. PC is loaded with the address of the handling routine for the specific interrupt:
a. The address in the interrupt vector entry for the specific interrupt is moved to MAR
b. MAR content is placed on the address bus
c. Control unit requests a memory read
d. Memory reads the address bus and places the result on the data bus
e. Data bus is copied to MBR
f. MBR is moved to PC
• The next instruction (the first of the specific interrupt handler) can now be fetched
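The two phases above can be condensed into a short sketch: push the PC onto the stack, then load the PC from the interrupt vector. The register model, vector address, and handler address are hypothetical values chosen for illustration:

```python
# Sketch of interrupt entry: save PC (phase 1), load handler address (phase 2).

def enter_interrupt(cpu, memory, vector_address):
    # Phase 1: save the content of PC onto the stack
    # (PC -> MBR, stack pointer -> MAR, memory write; condensed here).
    cpu["SP"] -= 1
    memory[cpu["SP"]] = cpu["PC"]
    # Phase 2: read the handler address from the interrupt vector entry
    # (vector entry -> MAR, memory read -> MBR, MBR -> PC; condensed here).
    cpu["PC"] = memory[vector_address]

cpu = {"PC": 0x30, "SP": 0x100}
memory = {0x08: 0x500}   # hypothetical vector entry 0x08 -> handler at 0x500
enter_interrupt(cpu, memory, 0x08)
```

After the call, the saved PC sits on the stack and the next fetch naturally picks up the first instruction of the handler, exactly as the slide's final step says.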

Data Flow Diagram for Interrupt
• Diagram: arrows labelled 1a-1e (saving PC via MBR to the stack location) and 2a-2f (loading PC from the interrupt vector via MAR and MBR)

Instruction Pipelining
• Similar to an assembly line in a manufacturing plant:
§ the product goes through various stages of production
§ products at various stages can be worked on simultaneously
• In a pipeline, new inputs are accepted at one end before previously accepted inputs appear as outputs at the other end

Prefetch
• Fetch accesses main memory
• Execution usually does not access main memory
• The CPU could fetch the next instruction during the execution of the current instruction
• Requires two sub-parts of the CPU able to operate independently
• Called instruction prefetch
• How much does the performance improve?

Improved Performance
• But performance is not doubled:
§ Fetch is usually shorter than execution
• Prefetch more than one instruction?
§ Any conditional jump or branch means that prefetched instructions may be useless
• Performance can be improved by adding more stages in instruction processing…
§ …and more independent sub-parts in the CPU

Pipelining
• The instruction cycle can be decomposed into elementary phases, for example:
§ FI: Fetch Instruction
§ DI: Decode Instruction
§ CO: Calculate Operands (i.e. calculate effective addresses)
§ FO: Fetch Operands
§ EI: Execute Instruction
§ WO: Write Output
• Pipelining improves performance by overlapping these phases (ideally they can all be overlapped)
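The payoff of overlapping can be made concrete with the standard timing formula: with k one-cycle stages and no stalls, n instructions ideally finish in k + (n - 1) cycles instead of n * k. A small sketch (the ideal, stall-free case only):

```python
# Sketch: ideal pipeline timing vs. purely sequential execution,
# assuming k equal one-cycle stages and no conflicts or branches.

def pipelined_cycles(n_instructions, n_stages):
    # First instruction takes n_stages cycles; each later one finishes
    # one cycle after its predecessor.
    return n_stages + (n_instructions - 1)

def sequential_cycles(n_instructions, n_stages):
    return n_instructions * n_stages

# 9 instructions through the six phases FI DI CO FO EI WO:
ideal = pipelined_cycles(9, 6)        # 14 cycles
unpipelined = sequential_cycles(9, 6) # 54 cycles
```

The gap (14 vs. 54 cycles here) is the "ideal" speed-up; the following slides show why branches and unequal phase lengths keep real pipelines well below it.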

Timing of Pipeline
• Diagram: instructions progress through the six phases over time, after an initial set-up time

Some remarks
• An instruction may include only some of the six phases
• We are assuming that all the phases can be performed in parallel
§ e.g. no bus conflicts, no memory conflicts…
• The maximum improvement is obtained when the phases take roughly the same time

A general principle
• The more overlapping phases there are in a pipeline, the more additional processing is needed to manage each phase and the synchronization among phases
§ Logical dependencies between phases
• There is a trade-off between the number of phases and the speed-up of instruction execution

Control flow (1)

Branch in a Pipeline (1)
• Instruction 3 is a conditional branch to instruction 15
• Diagram: the branch outcome is known only at its WO phase, so the instructions fetched after it must be discarded (branch penalty)

Control flow (2)
• But an unconditional branch might be managed earlier than the EI phase
• Diagram: the pipe behind the branch is emptied

Branch in a Pipeline (2)
• The unconditional branch is managed after the CO phase
• Diagram: the branch needs no FO, EI, or WO phase of its own, so the branch penalty is smaller

Control flow (3)
• But conditional branches still have a large penalty
• Diagram: the pipe behind the branch is emptied

Branch in a Pipeline (3)
• Here instruction 3 is a conditional branch to instruction 15
• Diagram: the branch penalty is the number of cycles lost before the target instruction can enter the pipeline
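The cost of these flushed instructions can be put into a simple average. This sketch assumes a simplified model (not a formula from the slides): a branch resolved after stage r wastes r - 1 cycles when taken, and the workload is characterized by the fraction of branch instructions and the fraction of those that are taken.

```python
# Sketch: average cycles per instruction with a taken-branch flush penalty,
# under a simplified model (one cycle per stage, penalty only on taken branches).

def avg_cycles_per_instruction(resolve_stage, branch_fraction, taken_fraction):
    penalty = resolve_stage - 1          # slots flushed behind a taken branch
    return 1 + branch_fraction * taken_fraction * penalty

# Example: branch resolved after stage 5 (EI), 20% branches, 60% of them taken.
cpi = avg_cycles_per_instruction(5, 0.2, 0.6)
```

With these example numbers the pipeline averages 1.48 cycles per instruction instead of the ideal 1, which motivates the branch-handling techniques on the next slides.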

Dealing with Branches
• Multiple streams
• Prefetch branch target
• Loop buffer
• Branch prediction
• Delayed branching

Multiple Streams
• Have two pipelines
• Prefetch each branch into a separate pipeline
• Use the appropriate pipeline
• Leads to bus & register contention (only the sub-parts making up the pipeline are doubled)
• Additional branches entering the pipeline lead to further pipelines being needed

Prefetch Branch Target
• The target of the branch is prefetched, in addition to the instructions following the branch, and stored in an additional dedicated register
• Keep the target until the branch is executed
• Used by the IBM 360/91

Loop Buffer
• Very fast memory internal to the CPU
• Records the last n fetched instructions
• Maintained by the fetch stage of the pipeline
• Check the loop buffer before fetching from memory
• Very good for small loops or close jumps
• The same concept as cache memory
• Used by the CRAY-1

Branch Prediction (1)
• Predict never taken
§ Assume that the jump will not happen
§ Always fetch the next instruction
§ Used by the 68020 & VAX 11/780
§ The VAX will not prefetch after a branch if a page fault would result (O/S vs. CPU design)
• Predict always taken
§ Assume that the jump will happen
§ Always fetch the target instruction

Branch Prediction (2)
• Predict by opcode
§ Some instructions are more likely to result in a jump than others
§ Can achieve up to 75% success
• Taken/not-taken switch
§ Based on the previous history of the instruction
§ Good for loops
§ Where do we store the history of the instruction? In a cache

Branch Prediction State Diagram
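A taken/not-taken history scheme of this kind is commonly implemented as a two-bit saturating counter per branch: two consecutive mispredictions are needed before the prediction flips, which is what makes it good for loops. A minimal sketch of a generic two-bit scheme (the state encoding is an assumption for illustration, not necessarily the exact diagram on the slide):

```python
# Sketch: generic two-bit saturating-counter branch predictor.
# States 0,1 predict "not taken"; states 2,3 predict "taken".

class TwoBitPredictor:
    def __init__(self):
        self.state = 0

    def predict(self):
        return self.state >= 2   # True means "predict taken"

    def update(self, taken):
        # Saturate at the ends: a single wrong outcome in a strong
        # state only weakens it, it does not flip the prediction.
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

# Loop-like behaviour: taken on every iteration except the single exit.
history = [True] * 4 + [False] + [True] * 4
p = TwoBitPredictor()
hits = sum(p.predict() == taken or p.update(taken) for taken in history) \
    if False else 0
p = TwoBitPredictor()
for taken in history:
    hits += (p.predict() == taken)
    p.update(taken)
```

After warm-up the predictor stays in a "taken" state, so the single not-taken exit costs one misprediction without flipping the prediction for the next loop entry: 6 of the 9 outcomes here are predicted correctly, and a one-bit scheme would do worse on re-entry.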

Delayed Branch
• Do not take the jump until you have to
• Rearrange instructions
• Used in RISC architectures
§ RISC: reduced instruction set computer
§ CISC: complex instruction set computer