CS 252 Graduate Computer Architecture Lecture 17 Caches


CS 252 Graduate Computer Architecture
Lecture 17: Caches (continued) and Memory Systems
October 29th, 2003
Prof. John Kubiatowicz
http://www.cs.berkeley.edu/~kubitron/courses/cs252-F03


Review: Who Cares About the Memory Hierarchy?
• Processor-only focus thus far in course: CPU cost/performance, ISA, pipelined execution
• CPU-DRAM gap: processor performance ("Moore's Law") grows ~60%/yr while DRAM ("Less' Law?") grows ~7%/yr, so the processor-memory performance gap grows ~50%/year
[Figure: log-scale performance of µProc vs. DRAM, 1980-2000]
• 1980: no cache in µproc; 1995: 2-level cache on chip (1989: first Intel µproc with a cache on chip)


Review: Cache Performance
• Miss-oriented approach to memory access: charge memory stalls to CPU time through miss rate × miss penalty
• Separating out the memory component entirely:
– AMAT = Average Memory Access Time
(The equations this slide displayed are reconstructed below.)
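The equations the two bullets refer to did not survive transcription; the following is a reconstruction of the standard H&P forms they name, not verbatim slide content:

\text{CPU time} = IC \times \left( CPI_{\text{Execution}} + \frac{\text{MemAccess}}{\text{Inst}} \times \text{MissRate} \times \text{MissPenalty} \right) \times \text{CycleTime}

\text{AMAT} = \text{HitTime} + \text{MissRate} \times \text{MissPenalty}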


Review: Harvard Architecture
• Unified vs. separate I&D (Harvard)
[Diagram: Proc with Unified Cache-1 and Unified Cache-2, vs. Proc with I-Cache-1 and D-Cache-1 backed by Unified Cache-2]
• Statistics (given in H&P):
– 16 KB I&D: inst miss rate = 0.64%, data miss rate = 6.47%
– 32 KB unified: aggregate miss rate = 1.99%
• Which is better (ignore the L2 cache)?
– Assume 33% data ops ⇒ 75% of accesses are instruction fetches (1.0/1.33)
– hit time = 1, miss time = 50
– Note that a data hit incurs 1 extra stall in the unified cache (only one port)
AMAT_Harvard = 75% × (1 + 0.64% × 50) + 25% × (1 + 6.47% × 50) = 2.05
AMAT_Unified = 75% × (1 + 1.99% × 50) + 25% × (1 + 1 + 1.99% × 50) = 2.24
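A minimal sketch that reproduces the slide's arithmetic; all parameters (miss rates, the 50-cycle miss time, the 75%/25% split, the extra stall on the single-ported unified cache) come from the slide itself:

#include <stdio.h>

int main(void) {
    double miss_time = 50.0;
    double f_inst = 0.75, f_data = 0.25;   /* 1.0/1.33 and 0.33/1.33 */

    /* Split (Harvard) caches: each side has hit time 1 */
    double amat_harvard = f_inst * (1 + 0.0064 * miss_time)
                        + f_data * (1 + 0.0647 * miss_time);

    /* Unified cache: a data access stalls 1 extra cycle (one port) */
    double amat_unified = f_inst * (1 + 0.0199 * miss_time)
                        + f_data * (1 + 1 + 0.0199 * miss_time);

    printf("AMAT Harvard = %.2f\n", amat_harvard);  /* ~2.05 */
    printf("AMAT Unified = %.2f\n", amat_unified);  /* ~2.24 */
    return 0;
}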


Review: Reducing Misses via a "Victim Cache"
• How to combine the fast hit time of direct mapped yet still avoid conflict misses?
• Add a small buffer to hold data discarded from the cache
• Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache
• Used in Alpha, HP machines
[Diagram: victim cache with a tag and comparator per entry, one cache line of data each, between the cache and the next lower level in the hierarchy]
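As a concrete illustration, here is a minimal software model of the lookup path: a direct-mapped cache that falls back to a small fully associative victim buffer and swaps lines on a victim hit. The sizes, field names, and FIFO replacement are illustrative assumptions, not details from Jouppi's design:

#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define SETS 128            /* direct-mapped cache: one line per set */
#define VICTIM_ENTRIES 4

typedef struct { bool valid; uint32_t tag; } Line;

static Line cache[SETS];
static Line victim[VICTIM_ENTRIES];  /* .tag holds the full block address */

bool access_block(uint32_t block_addr) {
    uint32_t index = block_addr % SETS;
    uint32_t tag   = block_addr / SETS;

    if (cache[index].valid && cache[index].tag == tag)
        return true;                        /* fast direct-mapped hit */

    for (int i = 0; i < VICTIM_ENTRIES; i++) {
        if (victim[i].valid && victim[i].tag == block_addr) {
            /* victim hit: swap the conflicting lines */
            Line displaced = cache[index];
            cache[index] = (Line){ true, tag };
            victim[i] = (Line){ displaced.valid,
                                displaced.tag * SETS + index };
            return true;
        }
    }

    /* true miss: the evicted line goes to the victim buffer (FIFO) */
    if (cache[index].valid) {
        memmove(&victim[1], &victim[0], (VICTIM_ENTRIES - 1) * sizeof(Line));
        victim[0] = (Line){ true, cache[index].tag * SETS + index };
    }
    cache[index] = (Line){ true, tag };
    return false;
}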


Review: Improving Cache Performance
1. Reduce the miss rate
2. Reduce the time to hit in the cache
3. Reduce the miss penalty


0. Faster Memory: 6-Transistor SRAM Cell
[Diagram: cross-coupled inverter pair, word line (row select), bit and bit-bar lines; in some designs the load devices are replaced with pullups to save area]
• Write:
1. Drive the bit lines (bit = 1, bit-bar = 0)
2. Select the row
• Read:
1. Precharge bit and bit-bar to Vdd or Vdd/2 ⇒ make sure they are equal!
2. Select the row
3. The cell pulls one line low
4. The sense amp on the column detects the difference between bit and bit-bar


1. Fast Hit Times via Small and Simple Caches
• Why does the Alpha 21164 have 8 KB instruction and 8 KB data caches plus a 96 KB second-level cache on chip?
– A small data cache supports a fast clock rate
• Direct mapped, on chip


2. Fast Hits by Avoiding Address Translation
• Send the virtual address to the cache? Called a Virtually Addressed Cache, or just Virtual Cache, vs. a Physical Cache
– Every time the process is switched, the cache logically must be flushed; otherwise it can return false hits
» Cost is the time to flush + "compulsory" misses from an empty cache
– Must deal with aliases (sometimes called synonyms): two different virtual addresses map to the same physical address
– I/O must interact with the cache, so it needs virtual addresses
• Solution to aliases:
– Guarantee that aliases agree in their low-order address bits: as long as those bits cover the index field and the cache is direct mapped, aliased blocks must land in a unique frame; called page coloring
• Solution to cache flush:
– Add a process-identifier tag that identifies the process as well as the address within the process: can't get a hit if the wrong process


Virtually Addressed Caches
[Diagram: three organizations]
• Conventional organization: CPU issues VA; the TB (translation buffer) produces the PA; the PA-tagged cache is then accessed, then MEM
• Virtually addressed cache: CPU accesses a VA-tagged cache directly and translates only on a miss (before the PA-tagged L2); suffers the synonym problem
• Overlapped organization: cache access proceeds in parallel with VA translation; requires the cache index to remain invariant across translation


2. Fast Cache Hits by Avoiding Translation: Process ID Impact
[Figure: miss rate (Y axis, up to 20%) vs. cache size (X axis, 2 KB to 1024 KB)]
• Black: uniprocess
• Light gray: multiprocess, flushing the cache on a process switch
• Dark gray: multiprocess, using a process ID tag


2. Fast Cache Hits by Avoiding Translation: Index with Physical Portion of Address
• If the index comes from the physical part of the address (the page offset), tag access can start in parallel with translation, and the physical tag is compared once translation completes
[Address layout: bits 31-12 = page address (tag); bits 11-0 = page offset (index + block offset)]
• This limits the cache to the page size: what if you want a bigger cache while using the same trick?
– Higher associativity moves the barrier to the right
– Page coloring
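A small sketch of why this works: the index bits fall inside the page offset, which is identical in the virtual and physical address. The constants below (4 KB pages, 32-byte blocks, direct mapped) are illustrative assumptions:

#include <stdint.h>

#define PAGE_OFFSET_BITS  12                          /* 4 KB pages        */
#define BLOCK_OFFSET_BITS 5                           /* 32-byte blocks    */
#define INDEX_BITS (PAGE_OFFSET_BITS - BLOCK_OFFSET_BITS) /* 7 -> 128 sets */

static inline uint32_t cache_index(uint32_t vaddr) {
    /* Bits [11:5] are the same before and after translation, so the
     * set can be selected while the TLB lookup is still in flight. */
    return (vaddr >> BLOCK_OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
}

With a direct-mapped cache this caps capacity at one page (4 KB here); each extra way of associativity doubles the maximum cache size without widening the index.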


3. Fast Hit Times via Pipelined Writes
• Pipeline the tag check and the cache update as separate stages; the current write's tag check overlaps the previous write's cache update
• Only STOREs occupy this pipeline; it drains during a miss

Store r2, (r1)     Check r1
Add
Sub
Store r4, (r3)     M[r1] <- r2 & check r3

• The shaded structure is the "Delayed Write Buffer"; it must be checked on reads: either complete the write or read from the buffer


4. Fast Writes on Misses via Small Subblocks
• If most writes are 1 word, the subblock size is 1 word, and the cache is write-through, then always write the subblock & tag immediately (the case analysis is sketched in code after this list):
– Tag match and valid bit already set: writing the block was proper, and nothing is lost by setting the valid bit again
– Tag match and valid bit not set: the tag match means this is the proper block; writing the data into the subblock makes it appropriate to turn the valid bit on
– Tag mismatch: this is a miss and will modify the data portion of the block. Since this is a write-through cache, no harm is done; memory still has an up-to-date copy of the old value. Only the tag (to the address of the write) and the valid bits of the other subblocks need be changed, because the valid bit for this subblock is set by the write itself
• Doesn't work with write back, because of the last case
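A minimal sketch of that case analysis, assuming one-word subblocks and a write-through policy; the structure and names are illustrative:

#include <stdint.h>
#include <stdbool.h>

#define SUBBLOCKS 4

typedef struct {
    uint32_t tag;
    bool     valid[SUBBLOCKS];
    uint32_t data[SUBBLOCKS];
} Block;

/* The write always completes immediately: write-through means memory
 * gets the data regardless, so the cache can be updated optimistically. */
void write_word(Block *b, uint32_t tag, unsigned sub, uint32_t word) {
    if (b->tag != tag) {
        /* tag mismatch: claim the block for the new address; safe only
         * because memory already holds an up-to-date copy of the old block */
        b->tag = tag;
        for (unsigned i = 0; i < SUBBLOCKS; i++)
            b->valid[i] = false;        /* other subblocks now invalid */
    }
    /* tag-match cases collapse: just write the data and set the bit */
    b->data[sub]  = word;
    b->valid[sub] = true;
    /* ...plus the write-through update of memory, omitted here */
}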


Review: Improving Cache Performance
1. Reduce the miss rate
2. Reduce the time to hit in the cache
3. Reduce the miss penalty


0. Faster Memory
• This requires a bit of discussion.
• Hold that thought until we discuss memory.


1. Reducing Miss Penalty: Read Priority over Write on Miss
• Write-through with write buffers creates RAW conflicts between main-memory reads (on cache misses) and buffered writes
– If we simply wait for the write buffer to empty, the read miss penalty can grow (by 50% on the old MIPS 1000)
– Instead, check the write buffer contents before the read; if there are no conflicts, let the memory access continue
• Alternative: write back
– Suppose a read miss replaces a dirty block
– Normal: write the dirty block to memory, then do the read
– Instead: copy the dirty block to a write buffer, then do the read, and then do the write
– The CPU stalls less, since it restarts as soon as the read completes


1. Reducing Penalty: Read Priority over Write on Miss
[Diagram: Processor ↔ Cache, with a write buffer between the cache and DRAM]
• A write buffer is needed between the cache and memory
– Processor: writes data into the cache and the write buffer
– Memory controller: writes the contents of the buffer to memory
• The write buffer is just a FIFO:
– Typical number of entries: 4
– Works fine if: store frequency (w.r.t. time) << 1 / DRAM write cycle
– Must handle burst behavior as well!


RAW Hazards from the Write Buffer!
• Write-buffer issue: it can introduce a RAW hazard with memory!
– The write buffer may contain the only copy of valid data ⇒ reads from memory may get the wrong result if we ignore the write buffer
• Solutions (the check is sketched in code after this list):
– Simply wait for the write buffer to empty before servicing reads:
» Might increase the read miss penalty (by 50% on the old MIPS 1000)
– Check the write buffer contents before the read ("fully associative"):
» If no conflicts, let the memory access continue
» Else grab the data from the buffer
• Can the write buffer help with write back?
– On a read miss replacing a dirty block:
» Copy the dirty block to the write buffer while starting the read to memory
[Timing diagram: without the buffer, the dirty block's RAS/CAS write completes before the read's RAS/CAS; with the buffer, the read issues first and the write's RAS/CAS follows]
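A minimal sketch of the associative check, assuming a 4-entry FIFO searched on every read miss; the structure, names, and the dram_read backend are illustrative:

#include <stdint.h>
#include <stdbool.h>

#define WB_ENTRIES 4

typedef struct { bool valid; uint32_t addr; uint32_t data; } WBEntry;
static WBEntry write_buffer[WB_ENTRIES];   /* index 0 oldest, higher newer */

extern uint32_t dram_read(uint32_t addr);  /* assumed memory backend */

uint32_t read_with_wb_check(uint32_t addr) {
    /* scan newest-first so a repeated store to the same address wins */
    for (int i = WB_ENTRIES - 1; i >= 0; i--)
        if (write_buffer[i].valid && write_buffer[i].addr == addr)
            return write_buffer[i].data;   /* forward from the buffer */
    return dram_read(addr);                /* no conflict: go to DRAM */
}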


2. Reduce Miss Penalty: Subblock Placement
• Don't have to load the full block on a miss
• Keep valid bits per subblock to indicate what is valid
• (Originally invented to reduce tag storage)
[Diagram: per-block valid bits, one per subblock]


3. Reduce Miss Penalty: Early Restart and Critical Word First
• Don't wait for the full block to be loaded before restarting the CPU
– Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
– Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch or requested-word-first (the fill order is sketched below)
• Generally useful only for large blocks
• Spatial locality is a problem: programs tend to want the next sequential word, so it is not clear that early restart actually helps
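A tiny sketch of the wrapped-fetch fill order: the refill starts at the requested word and wraps around the block. The 8-word block size is an illustrative assumption:

#define WORDS_PER_BLOCK 8

void fill_order(unsigned requested_word, unsigned order[WORDS_PER_BLOCK]) {
    for (unsigned i = 0; i < WORDS_PER_BLOCK; i++)
        order[i] = (requested_word + i) % WORDS_PER_BLOCK;
    /* order[0] goes to the CPU immediately (early restart); the rest
     * stream into the cache while execution continues */
}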


4. Reduce Miss Penalty: Non-blocking Caches to Reduce Stalls on Misses
• A non-blocking (lockup-free) cache allows the data cache to continue supplying hits during a miss
– requires full/empty (F/E) bits on registers, or out-of-order execution
– requires multi-bank memories
• "Hit under miss" reduces the effective miss penalty by working during a miss instead of ignoring CPU requests
• "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses (see the sketch after this list)
– Significantly increases the complexity of the cache controller, since there can be multiple outstanding memory accesses
– Requires multiple memory banks (otherwise overlap is impossible)
– The Pentium Pro allows 4 outstanding memory misses
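A simplified model of the bookkeeping structure a lockup-free cache uses: Miss Status Holding Registers (MSHRs) track outstanding misses. The field names and the 4-entry limit (matching the Pentium Pro figure above) are illustrative; real MSHRs also keep per-word target lists:

#include <stdint.h>
#include <stdbool.h>

#define MSHRS 4

typedef struct {
    bool     valid;        /* miss outstanding?                   */
    uint32_t block_addr;   /* which block is in flight            */
    uint8_t  dest_reg;     /* where to deliver the data (F/E bit) */
} MSHR;

static MSHR mshr[MSHRS];

/* Returns true if the miss can be accepted (execution continues);
 * false means all MSHRs are busy and the cache must finally stall. */
bool allocate_miss(uint32_t block_addr, uint8_t dest_reg) {
    for (int i = 0; i < MSHRS; i++)
        if (mshr[i].valid && mshr[i].block_addr == block_addr)
            return true;   /* merge with an already-outstanding miss */
    for (int i = 0; i < MSHRS; i++) {
        if (!mshr[i].valid) {
            mshr[i] = (MSHR){ true, block_addr, dest_reg };
            return true;   /* new miss issued to the next level */
        }
    }
    return false;          /* structural stall: too many misses */
}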


Value of Hit Under Miss for SPEC
[Figure: memory stall time under "hit under n misses" for n = 0→1, 1→2, 2→64, and base, for integer and floating-point programs]
• FP programs on average: AMAT = 0.68 → 0.52 → 0.34 → 0.26
• Int programs on average: AMAT = 0.24 → 0.20 → 0.19
• 8 KB data cache, direct mapped, 32 B blocks, 16-cycle miss penalty


5. Second-Level Cache
• L2 equations:
AMAT = Hit Time_L1 + Miss Rate_L1 × Miss Penalty_L1
Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2
AMAT = Hit Time_L1 + Miss Rate_L1 × (Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2)
• Definitions:
– Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (Miss Rate_L2 above)
– Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 × Miss Rate_L2)
– The global miss rate is what matters
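A worked example of the distinction; the miss rates and latencies below are assumed for illustration, not from the slide:

#include <stdio.h>

int main(void) {
    double mr_l1 = 0.04;       /* L1 misses per CPU access              */
    double mr_l2_local = 0.25; /* L2 misses per L2 access (local rate)  */
    double hit_l1 = 1, hit_l2 = 10, mp_l2 = 100;

    double mr_l2_global = mr_l1 * mr_l2_local;            /* = 0.010 */
    double amat = hit_l1 + mr_l1 * (hit_l2 + mr_l2_local * mp_l2);

    printf("global L2 miss rate = %.3f\n", mr_l2_global); /* 0.010 */
    printf("AMAT = %.2f cycles\n", amat);                 /* 2.40  */
    return 0;
}

The local rate (25%) looks alarming, but only 1% of all CPU accesses actually miss in L2, which is the number that determines performance.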


Comparing Local and Global Miss Rates
[Figure: 32 KB first-level cache with increasing second-level cache size; local and global miss rates plotted against linear and log cache size]
• The global miss rate is close to the single-level cache rate, provided L2 >> L1
• Don't use the local miss rate
• L2 is not tied to the CPU clock cycle!
• Consider cost & A.M.A.T.
• L1 generally targets fast hit times; since L2 hits are few, target miss reduction there


Reducing Misses: Which Apply to the L2 Cache?
• Reducing the miss rate:
1. Reduce misses via larger block size
2. Reduce conflict misses via higher associativity
3. Reduce conflict misses via a victim cache
4. Reduce conflict misses via pseudo-associativity
5. Reduce misses by HW prefetching of instructions and data
6. Reduce misses by SW prefetching of data
7. Reduce capacity/conflict misses by compiler optimizations


L2 Cache Block Size & A.M.A.T.
[Figure: relative execution time vs. L2 block size]
• 32 KB L1, 8-byte path to memory


Reducing Miss Penalty Summary
• Five techniques:
– Read priority over write on miss
– Subblock placement
– Early restart and critical word first on miss
– Non-blocking caches (hit under miss, miss under miss)
– Second-level cache
• Can be applied recursively to multilevel caches
– Danger is that time to DRAM will grow with multiple levels in between
– First attempts at L2 caches can make things worse, since the increased worst case is worse


Cache Optimization Summary
(MR = miss rate, MP = miss penalty, HT = hit time)

Technique                            MR   MP   HT   Complexity
Larger Block Size                    +    –         0
Higher Associativity                 +         –    1
Victim Caches                        +              2
Pseudo-Associative Caches            +              2
HW Prefetching of Instr/Data         +              2
Compiler Controlled Prefetching      +              3
Compiler Reduce Misses               +              0
Priority to Read Misses                   +         1
Subblock Placement                        +    +    1
Early Restart & Critical Word 1st         +         2
Non-Blocking Caches                       +         3
Second Level Caches                       +         2
Small & Simple Caches                –         +    0
Avoiding Address Translation                   +    2
Pipelining Writes                              +    1


What is the Impact of What You've Learned About Caches?
• 1960-1985: Speed = ƒ(no. of operations)
• 1990:
– Pipelined execution & fast clock rate
– Out-of-order execution
– Superscalar instruction issue
• 1998: Speed = ƒ(non-cached memory accesses)
• What does this mean for compilers? Operating systems? Algorithms? Data structures?


Cache Cross-Cutting Issues
• Superscalar CPU & the number of cache ports must match: how many memory accesses per cycle?
• Speculative execution wants a non-faulting option on memory/TLB accesses
• Parallel execution vs. cache locality
– Want wide separation to find independent operations, vs. wanting reuse of data accesses to avoid misses
• I/O and consistency: caches ⇒ multiple copies of data
– Consistency must be maintained


Alpha 21064
• Separate instruction & data TLBs & caches
• TLBs fully associative
• TLB updates in SW ("Priv Arch Libr")
• Caches 8 KB direct mapped, write through
• Critical 8 bytes first
• Prefetch instruction-stream buffer
• 2 MB L2 cache, direct mapped, WB (off-chip)
• 256-bit path to main memory, 4 × 64-bit modules
• Victim buffer: to give read priority over write
• 4-entry write buffer between D$ & L2$
[Diagram: instruction and data paths with write buffer, stream buffer, and victim buffer]


Alpha Memory Performance: Miss Rates of SPEC92
• 8 KB I$, 8 KB D$, 2 MB L2
[Figure: miss rates across SPEC92 programs; the extremes shown are I$ miss = 6%, D$ miss = 32%, L2 miss = 10% at the worst, down to I$ miss = 1-2%, D$ miss = 13-21%, L2 miss = 0.3-0.6% for the others]


Alpha CPI Components
[Figure: stacked CPI components per benchmark]
• Instruction stall: branch mispredict (green)
• Data cache (blue); instruction cache (yellow); L2$ (pink)
• Other: compute + register conflicts, structural conflicts


Pitfall: Predicting Cache Performance from Different Programs (ISA, compiler, ...)
[Figure: miss rate vs. cache size for the D$ and I$ of gcc, espresso, and Tomcatv]
• 4 KB data cache: miss rate 8%, 12%, or 28%?
• 1 KB instruction cache: miss rate 0%, 3%, or 10%?
• Alpha vs. MIPS for an 8 KB data cache: 17% vs. 10%
• Why 2× Alpha vs. MIPS?


Pitfall: Simulating Too Small an Address Trace
[Figure: apparent performance vs. number of instructions simulated]
• I$ = 4 KB, B = 16 B
• D$ = 4 KB, B = 16 B
• L2 = 512 KB, B = 128 B
• MP = 12, 200


Main Memory Background
• Performance of main memory:
– Latency: cache miss penalty
» Access time: time between the request and the word arriving
» Cycle time: time between requests
– Bandwidth: I/O & large-block miss penalty (L2)
• Main memory is DRAM: Dynamic Random Access Memory
– Dynamic since it needs to be refreshed periodically (8 ms, 1% of time)
– Addresses divided into 2 halves (memory as a 2D matrix):
» RAS, or Row Access Strobe
» CAS, or Column Access Strobe
• Cache uses SRAM: Static Random Access Memory
– No refresh (6 transistors/bit vs. 1 transistor)
– Size: DRAM/SRAM ≈ 4-8×; cost & cycle time: SRAM/DRAM ≈ 8-16×


Main Memory Deep Background
• "Out-of-core," "in-core," "core dump"?
• "Core memory"?
• Non-volatile, magnetic
• Lost to 4 Kbit DRAM (today using 64 Kbit DRAM)
• Access time 750 ns, cycle time 1500-3000 ns


1-Transistor Memory Cell (DRAM)
[Diagram: row-select transistor and storage capacitor on a bit line]
• Write:
1. Drive the bit line
2. Select the row
• Read:
1. Precharge the bit line to Vdd/2
2. Select the row
3. The cell and bit line share charge
» Very small voltage changes on the bit line
4. Sense (fancy sense amp)
» Can detect changes of ~1 million electrons
5. Write: restore the value
• Refresh:
1. Just do a dummy read to every cell


DRAM Capacitors: More Capacitance in a Small Area
• Trench capacitors:
– Logic ABOVE capacitor
– Gain in surface area of capacitor
– Better scaling properties
– Better planarization
• Stacked capacitors:
– Logic BELOW capacitor
– Gain in surface area of capacitor
– 2-dim cross-section quite small


Classical DRAM Organization (Square)
[Diagram: square array of 1-T DRAM cells; the row decoder, driven by the row address, selects a word (row) line; the column selector & I/O circuits, driven by the column address, pick among the bit (data) lines]
• Row and column address together select 1 bit at a time


DRAM Read Timing
• Every DRAM access begins with the assertion of RAS_L
• 2 ways to read: early or late v. CAS
[Diagram: 256K × 8 DRAM with 9 multiplexed address pins, RAS_L, CAS_L, WE_L, OE_L, and an 8-bit data bus]
[Timing diagram: within one DRAM read cycle, the row address is latched on RAS_L and the column address on CAS_L; data goes from high-Z to valid after the read access time, subject to the output-enable delay]
• Early read cycle: OE_L asserted before CAS_L
• Late read cycle: OE_L asserted after CAS_L


4 Key DRAM Timing Parameters
• tRAC: minimum time from RAS falling to valid data output
– Quoted as the speed of a DRAM when you buy it; this is the number on the purchase sheet
– A typical 4 Mbit DRAM has tRAC = 60 ns
• tRC: minimum time from the start of one row access to the start of the next
– tRC = 110 ns for a 4 Mbit DRAM with a tRAC of 60 ns
• tCAC: minimum time from CAS falling to valid data output
– 15 ns for a 4 Mbit DRAM with a tRAC of 60 ns
• tPC: minimum time from the start of one column access to the start of the next
– 35 ns for a 4 Mbit DRAM with a tRAC of 60 ns
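A back-of-envelope calculation of the bandwidth these timings imply, assuming (for simplicity, as an illustration) a 1-byte-wide data path:

#include <stdio.h>

int main(void) {
    double tRC_ns = 110.0;  /* row cycle: random accesses       */
    double tPC_ns = 35.0;   /* column cycle: page-mode accesses */

    /* 1 byte per cycle: MB/s = 1000 / (cycle time in ns) */
    printf("random-access bandwidth: %.1f MB/s\n", 1000.0 / tRC_ns); /* ~9.1  */
    printf("page-mode bandwidth:     %.1f MB/s\n", 1000.0 / tPC_ns); /* ~28.6 */
    return 0;
}

The 3× gap between the two numbers is why page-mode (and later SDRAM burst) access matters so much.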


Main Memory Performance
[Diagram: cycle time vs. access time]
• DRAM (read/write) cycle time >> DRAM (read/write) access time
– Roughly 2:1; why?
• DRAM (read/write) cycle time:
– How frequently can you initiate an access?
– Analogy: a little kid can only ask his father for money on Saturday
• DRAM (read/write) access time:
– How quickly will you get what you want once you initiate an access?
– Analogy: as soon as he asks, his father will give him the money
• DRAM bandwidth limitation analogy:
– What happens if he runs out of money on Wednesday?


Increasing Bandwidth: Interleaving
• Access pattern without interleaving: the CPU must wait until D1 is available before starting the access for D2 (a single memory bank)
• Access pattern with 4-way interleaving: accesses to banks 0-3 are overlapped; by the time bank 3's access starts, bank 0 can be accessed again
[Diagram: CPU with memory banks 0-3 and staggered access timing]


Main Memory Performance
• Simple:
– CPU, cache, bus, memory same width (32 bits)
• Wide:
– CPU/Mux 1 word; Mux/cache, bus, memory N words (Alpha: 64 bits & 256 bits)
• Interleaved:
– CPU, cache, bus 1 word; memory N modules (4 modules); example is word interleaved


Main Memory Performance
• Timing model:
– 1 cycle to send the address
– 4 cycles access time, 10 cycles cycle time, 1 cycle to send data
– Cache block is 4 words
• Simple M.P. = 4 × (1 + 10 + 1) = 48
• Wide M.P. = 1 + 10 + 1 = 12
• Interleaved M.P. = 1 + 10 + 1 + 3 = 15
(the arithmetic is reproduced in the sketch below)
[Diagram: word addresses 0-15 striped across banks; bank 0 holds 0, 4, 8, 12; bank 1 holds 1, 5, 9, 13; bank 2 holds 2, 6, 10, 14; bank 3 holds 3, 7, 11, 15]
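A minimal sketch reproducing the slide's miss-penalty arithmetic for the three organizations; all numbers come from the timing model above:

#include <stdio.h>

int main(void) {
    int addr = 1, cycle = 10, xfer = 1, words = 4, banks = 4;

    int simple      = words * (addr + cycle + xfer);     /* 48: one word at a time   */
    int wide        = addr + cycle + xfer;               /* 12: whole block at once  */
    int interleaved = addr + cycle + xfer + (banks - 1); /* 15: banks overlap, words
                                                            trickle out 1/cycle      */

    printf("simple=%d wide=%d interleaved=%d\n", simple, wide, interleaved);
    return 0;
}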


Avoiding Bank Conflicts
• Lots of banks:

int x[256][512];
for (j = 0; j < 512; j = j + 1)
    for (i = 0; i < 256; i = i + 1)
        x[i][j] = 2 * x[i][j];

• Even with 128 banks, since 512 is a multiple of 128, the column walk conflicts on word accesses
• SW: loop interchange, or declaring the array dimension not a power of 2 ("array padding", shown below)
• HW: prime number of banks
– bank number = address mod number of banks
– address within bank = address / number of words in bank
– but modulo & divide on every memory access with a prime number of banks?
– instead: address within bank = address mod number of words in bank
– easy if there are 2^N words per bank
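The padding fix the SW bullet mentions, applied to the loop above: 513 is not a multiple of the bank count, so consecutive elements of a column no longer map to the same bank. The extra column is allocated but never referenced:

int x[256][513];                      /* was [256][512] */

void scale(void) {
    for (int j = 0; j < 512; j = j + 1)
        for (int i = 0; i < 256; i = i + 1)
            x[i][j] = 2 * x[i][j];    /* successive x[i][j] now differ
                                         by 513 words: banks rotate   */
}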


Fast Bank Number
• Chinese Remainder Theorem: as long as two sets of integers ai and bi follow the rules

b_i = x mod a_i,  0 ≤ b_i < a_i,  0 ≤ x < a_0 × a_1 × ...

and ai and aj are co-prime if i ≠ j, then the integer x has only one solution (an unambiguous mapping):
– bank number = b0, number of banks = a0 (= 3 in the example)
– address within bank = b1, number of words in bank = a1 (= 8 in the example)
– N-word address 0 to N-1; prime number of banks; words per bank a power of 2
(the mapping is checked in code below)

Bank assignment for the 24 word addresses (rows are address-within-bank 0-7):

                 Seq. Interleaved      Modulo Interleaved
Bank:             0    1    2           0    1    2
Offset 0:         0    1    2           0   16    8
Offset 1:         3    4    5           9    1   17
Offset 2:         6    7    8          18   10    2
Offset 3:         9   10   11           3   19   11
Offset 4:        12   13   14          12    4   20
Offset 5:        15   16   17          21   13    5
Offset 6:        18   19   20           6   22   14
Offset 7:        21   22   23          15    7   23
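A small check of the claim: with 3 banks (prime) and 8 words per bank (co-prime with 3), the mapping address → (addr mod 3, addr mod 8) is one-to-one over all 24 addresses, so no divide is needed to find the word within the bank:

#include <stdio.h>

int main(void) {
    int seen[3][8] = {0};
    for (int addr = 0; addr < 24; addr++) {
        int bank   = addr % 3;   /* bank number      */
        int offset = addr % 8;   /* word within bank */
        if (seen[bank][offset]++)
            printf("collision at %d\n", addr);   /* never fires */
    }
    printf("mapping is one-to-one\n");
    return 0;
}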


Independent Memory Banks
• Memory banks for independent accesses vs. faster sequential accesses:
– Multiprocessor
– I/O
– CPU with hit under n misses, non-blocking cache
• Superbank: all memory active on one block transfer (or bank)
• Bank: portion within a superbank that is word interleaved (or subbank)
[Address layout: superbank number | superbank offset, where the superbank offset splits into bank number | bank offset]


Independent Memory Banks
• How many banks?
– number of banks ≥ number of clocks to access a word in a bank
– For sequential accesses; otherwise you return to the original bank before it has the next word ready
– (as in the vector case)
• Increasing DRAM capacity ⇒ fewer chips ⇒ harder to have many banks


Fast Memory Systems: DRAM-Specific
• Multiple CAS accesses: several names (page mode)
– Extended Data Out (EDO): 30% faster in page mode
• New DRAMs to address the gap; what will they cost, will they survive?
– RAMBUS: startup company; reinvented the DRAM interface
» Each chip a module vs. a slice of memory
» Short bus between CPU and chips
» Does its own refresh
» Variable amount of data returned
» 1 byte / 2 ns (500 MB/s per chip)
– Synchronous DRAM: 2 banks on chip, a clock signal to the DRAM, transfers synchronous to the system clock (66-150 MHz)
– Intel claims RAMBUS Direct (16 b wide) is the future of PC memory
• Niche memory or main memory?
– e.g., video RAM for frame buffers: DRAM + fast serial output


Fast Page Mode Operation
• Regular DRAM organization:
– N rows × N columns × M bits
– Read & write M bits at a time
– Each M-bit access requires a RAS/CAS cycle
• Fast page mode DRAM:
– Adds an N × M "SRAM" row register to save a row
• After a row is read into the register:
– Only CAS is needed to access other M-bit blocks on that row
– RAS_L remains asserted while CAS_L is toggled
[Timing diagram: one row address under RAS_L, then successive column addresses under toggling CAS_L deliver the 1st, 2nd, 3rd, and 4th M-bit accesses]


SDRAM Timing
[Timing diagram: RAS to a new bank, CAS, CAS latency, burst READ, end of RAS]
• Micron 128 Mbit DRAM (using the 2 Meg × 16-bit × 4-bank version)
– Row (12 bits), bank (2 bits), column (9 bits)


DRAM History
• DRAMs: capacity +60%/yr, cost -30%/yr
– 2.5× cells/area, 1.5× die size in 3 years
• A '98 DRAM fab line costs $2B
– DRAM only: density, leakage vs. speed
• Rely on an increasing number of computers & memory per computer (60% of the market)
– SIMM or DIMM is the replaceable unit ⇒ computers can use any generation of DRAM
• Commodity, second-source industry ⇒ high volume, low profit, conservative
– Little organization innovation in 20 years
• Order of importance: 1) cost/bit 2) capacity
– First RAMBUS: 10× BW, +30% cost ⇒ little impact


DRAM Future: 1 Gbit+ DRAM

                 Mitsubishi      Samsung
Blocks           512 × 2 Mbit    1024 × 1 Mbit
Clock            200 MHz         250 MHz
Data Pins        64              16
Die Size         24 × 24 mm      31 × 21 mm
Metal Layers     3               4
Technology       0.15 micron     0.16 micron

– Die sizes will be much smaller in production


Minimum Memory Size: DRAMs per PC over Time

                          DRAM Generation
            '86    '89    '92    '96    '99    '02
            1 Mb   4 Mb   16 Mb  64 Mb  256 Mb 1 Gb
4 MB        32     8
8 MB               16     4
16 MB                     8      2
32 MB                            4      1
64 MB                            8      2
128 MB                                  4      1
256 MB                                  8      2


Potential DRAM Crossroads?
• After 20 years of 4× every 3 years, running into a wall? (64 Mb - 1 Gb)
• How can we keep $1B fab lines full if we buy fewer DRAMs per computer?
• Will cost/bit still fall -30%/yr if the 4×/3 yr scaling stops?
• What will happen to the $40B/yr DRAM industry?


Something New: Structure of a Tunneling Magnetic Junction
• Tunneling Magnetic Junction RAM (TMJ-RAM)
– Speed of SRAM, density of DRAM, non-volatile (no refresh)
– "Spintronics": a combination of quantum spin and electronics
– Same technology used in high-density disk drives


Main Memory Summary
• Wider memory
• Interleaved memory: for sequential or independent accesses
• Avoiding bank conflicts: SW & HW
• DRAM-specific optimizations: page mode & specialty DRAM
• DRAM future less rosy?


Big Storage (such as DRAM/Disk): Potential for Errors!
• Next major topic: errors!
