Improving memory with caches
CSE 141, Winter 2002

(Title-slide figure: the memory hierarchy, from CPU through on-chip cache, off-chip cache, and DRAM memory to disk memory.)

The five components

(Figure: the five components of a computer: Control, Datapath, Memory, Input, Output.)

Memory technologies
• SRAM – access time: 3-10 ns (on-processor SRAM can be 1-2 ns); cost: $100 per MByte (?).
• DRAM – access time: 30-60 ns; cost: $0.50 per MByte.
• Disk – access time: 5 to 20 million ns; cost: $0.01 per MByte.
Disclaimer: access times and prices are approximate and constantly changing (2/2002).
We want SRAM's access time and disk's capacity.

The problem with memory
• It's expensive (and perhaps impossible) to build a large, fast memory.
  – "fast" meaning "low latency" (why is low latency important?)
• To access data quickly:
  – it must be physically close
  – there can't be too many layers of logic
• Solution: move data you are about to access into a nearby, smaller memory: a cache.
  – Assuming you can make good guesses about what you will access soon.

A typical memory hierarchy
• CPU
• on-chip "level 1" cache (SRAM): small, fast
• off-chip "level 2" cache (SRAM)
• main memory (DRAM): big, slower, cheaper per bit
• disk: huge, very slow, very cheap

Cache basics
• In a running program, main memory is the data's "home location".
  – Addresses refer to locations in main memory.
  – "Virtual memory" allows disk to extend DRAM (we'll study virtual memory later).
• When data is accessed, it is automatically moved into the cache.
  – The processor (or a smaller cache) uses the cache's copy.
  – Data in main memory may (temporarily) get out of date, but hardware must keep everything consistent.
• Unlike registers, the cache is not part of the ISA.
  – Different models can have totally different cache designs.

The principle of locality
Memory hierarchies take advantage of memory locality: the principle that future memory accesses are near past accesses.
Two types of locality (the following are "fuzzy" terms):
• Temporal locality (near in time): we will often access the same data again very soon.
• Spatial locality (near in space/distance): our next access is often very close to recent accesses.
This sequence of addresses has both types of locality: 1, 2, 3, 8, 8, 47, 9, 10, 8, 8, ... (the runs 1, 2, 3 and 9, 10 show spatial locality; the repeated 8s show temporal locality; 47 is non-local).
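Since these terms are "fuzzy", one concrete (if arbitrary) way to label the example sequence is to pick a recency window and a distance cutoff. The sketch below does that in Python; the window of 4 accesses and distance of 2 are assumptions made for illustration, not standard definitions.

```python
# Label each reference as temporal (seen recently), spatial (near a
# recent reference), or non-local. The window and dist cutoffs are
# illustrative assumptions; the slide calls these terms "fuzzy".
def classify(refs, window=4, dist=2):
    labels = []
    for i, addr in enumerate(refs):
        recent = refs[max(0, i - window):i]
        if addr in recent:
            labels.append("temporal")
        elif any(abs(addr - r) <= dist for r in recent):
            labels.append("spatial")
        else:
            labels.append("non-local")
    return labels

refs = [1, 2, 3, 8, 8, 47, 9, 10, 8, 8]
print(list(zip(refs, classify(refs))))
```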

How does HW decide what to cache?
Taking advantage of temporal locality:
• bring data into the cache whenever it's referenced
• kick out something that hasn't been used recently
Taking advantage of spatial locality:
• bring in a block of contiguous data (a cacheline), not just the requested data
Some processors have instructions that let software influence the cache (the slides mark this part as "won't be on the test"):
• a prefetch instruction ("bring location x into cache")
• "never cache x" or "keep x in cache" instructions

Cache Vocabulary
• cache hit: an access where the data is already in the cache
• cache miss: an access where the data isn't in the cache
• cache block size or cache line size: the amount of data that gets transferred on a cache miss
• instruction cache (I-cache): a cache that can only hold instructions
• data cache (D-cache): a cache that can only hold data
  (separate I- and D-caches are like the single-cycle and pipelined designs)
• unified cache: a cache that holds both data and instructions
  (like the multicycle design)
A typical processor today has separate "Level 1" I- and D-caches on the same chip as the processor (and possibly a larger, unified "L2" on-chip cache), and a larger L2 (or L3) unified cache on a separate chip.

Cache Issues
On a memory access:
• How does hardware know if it is a hit or a miss?
On a cache miss:
• Where to put the new data?
• What data to throw out?
• How to remember what data is where?

A simple cache
• Fully associative: any line of data can go anywhere in the cache.
• LRU replacement strategy: make room by throwing out the least recently used data.
A very small cache: 4 entries, each holds a four-byte word, and any entry can hold any word. Each entry stores a tag (which identifies the addresses of the cached data), the data itself, and a "time since last reference" field; we'll use that field to help decide what entry to replace.
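A minimal sketch of this cache in Python may help. The entry layout follows the slide, except that the "time since last reference" field is kept implicitly as list order, an implementation shortcut rather than the hardware scheme.

```python
LINE = 4      # bytes per cache line (one four-byte word)
ENTRIES = 4   # fully associative: any line can sit in any entry

def access(cache, addr):
    """Simulate one reference; cache is a list of line tags, MRU last."""
    tag = addr // LINE * LINE        # starting address of the 4-byte line
    if tag in cache:
        cache.remove(tag)            # hit: refresh its LRU position
        cache.append(tag)
        return "hit"
    if len(cache) == ENTRIES:
        cache.pop(0)                 # miss with a full cache: evict the LRU line
    cache.append(tag)
    return "miss"
```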

Simple cache in action
Sequence of memory references: 24, 20, 04, 12, 20, 44, 04, 24, 44

The first four references are all misses; they fill up the cache (entries shown with their time since last reference):
  24-27: data, time 3
  20-23: more data, time 2
  04-07: etc., time 1
  12-15: etc., time 0

The next reference ("20") is a hit. Times are updated:
  24-27: data, time 3
  20-23: more data, time 0
  04-07: etc., time 2
  12-15: etc., time 1

"44" is a miss; the oldest data (24-27) is replaced:
  44-47: new data, time 0
  20-23: more data, time 1
  04-07: etc., time 3
  12-15: etc., time 2

Now what happens??
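Running the sketch above on this reference stream reproduces the trace and answers the question: 04 hits, 24 misses (replacing 12-15, now the oldest), and the final 44 hits.

```python
cache = []
for addr in [24, 20, 4, 12, 20, 44, 4, 24, 44]:
    print(addr, access(cache, addr), cache)
```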

An even simpler cache
• Keeping track of when cache entries were last used (for LRU replacement) in a big cache needs lots of hardware and can be slow.
• In a direct mapped cache, each memory location is assigned a single location in the cache.
  – Usually* done by using a few bits of the address.
  – We'll let bits 2 and 3 (counting from LSB = "0") of the address be the index.
* Some machines use a pseudo-random hash of the address.
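As a small sketch, extracting bits 2 and 3 is just a shift and a mask; the values printed below match the worked examples on the next slide.

```python
# Index = bits 2-3 of the byte address (bits 0-1 are the offset
# within a 4-byte line), giving 4 direct-mapped cache entries.
def dm_index(addr):
    return (addr >> 2) & 0b11

for addr in (24, 20, 4, 12, 44):
    print(f"{addr:2d} = {addr:06b}_2 -> index {dm_index(addr):02b}")
```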

Direct mapped cache in action
Sequence of memory references: 24, 20, 04, 12, 20, 44, 04, 24, 44
(Remember: the index is bits 2-3 of the address.)

24 = 011000₂, so its index is 10. 20 = 010100₂, so its index is 01. After these two references:
  index 00: empty
  index 01: 20-23, data
  index 10: 24-27, data
  index 11: empty

04 = 000100₂, so its index is 01 (kicks 20-23 out of the cache). 12 = 001100₂, so its index is 11:
  index 00: empty
  index 01: 04-07, data
  index 10: 24-27, data
  index 11: 12-15, data

Your turn: 20 = 010100₂, 44 = 101100₂, 04 = 000100₂.
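For the "your turn" part, a direct-mapped version of the earlier sketch can check the answers: 20, 44, and 04 all miss (each evicting the previous tenant of its slot), and then 24 and 44 hit.

```python
# Direct mapped: each line has exactly one home slot, so there is
# no replacement policy to track -- a new line overwrites the slot.
cache = {}   # index -> starting address of the resident line
for addr in [24, 20, 4, 12, 20, 44, 4, 24, 44]:
    index, tag = (addr >> 2) & 0b11, addr // 4 * 4
    hit = cache.get(index) == tag
    cache[index] = tag
    print(addr, "hit" if hit else "miss")
```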

A Better Cache Design
• Direct mapped caches are simpler: less hardware, and possibly faster.
• Fully associative caches usually have fewer misses.
• Set associative caches try to get the best of both.
  – An index is computed from the address.
  – In a "k-way set associative cache", the index specifies a set of k cache locations where the data can be kept.
    • k = 1 is direct mapped.
    • k = cache size (in lines) is fully associative.
  – Use LRU replacement (or something else) within the set.
(Figure: a 2-way set associative cache; for each index there are two places to look for the data.)
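A generic sketch covering this whole design space might look like the following; set count, associativity, and line size are parameters, and LRU order within each set is again kept as list order for brevity.

```python
class SetAssocCache:
    """k-way set associative: k=1 is direct mapped; k = total lines
    is fully associative. LRU replacement within each set."""
    def __init__(self, num_sets, k, line=4):
        self.sets = [[] for _ in range(num_sets)]  # each set: tags, MRU last
        self.num_sets, self.k, self.line = num_sets, k, line

    def access(self, addr):
        tag = addr // self.line                 # line number
        ways = self.sets[tag % self.num_sets]   # index = low line-number bits
        if tag in ways:
            ways.remove(tag); ways.append(tag)  # hit: refresh LRU order
            return "hit"
        if len(ways) == self.k:
            ways.pop(0)                         # evict LRU within the set
        ways.append(tag)
        return "miss"
```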

2-way set associative cache in action
Sequence of memory references: 24, 20, 04, 12, 20, 44, 04, 24, 44
(The index is bit 2 of the address; each set has two slots.)

24 = 011000₂, index 0. 20 = 010100₂, index 1:
  set 0: 24-27 data | empty
  set 1: 20-23 data | empty

04 = 000100₂, index 1 (goes in the 2nd slot of set 1):
  set 0: 24-27 data | empty
  set 1: 20-23 data | 04-07 data

12 = 001100₂, index 1 (kicks out the older item in set 1):
  set 0: 24-27 data | empty
  set 1: 12-15 data | 04-07 data

Your turn: 20 = 010100₂, 44 = 101100₂, 04 = 000100₂.
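Instantiating the sketch above as this slide's 4-entry, 2-way cache (2 sets of 2) reproduces the trace; the remaining references 20, 44, and 04 all miss in set 1, after which 24 and 44 hit.

```python
cache = SetAssocCache(num_sets=2, k=2)
for addr in [24, 20, 4, 12, 20, 44, 4, 24, 44]:
    print(addr, cache.access(addr))
```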

Cache Associativity
An observation (?): a 4-way set associative cache has about the same hit rate as a direct-mapped cache of twice the size.

Longer Cache Blocks
(Figure: a cache entry with a tag and room for a big block of data.)
• Large cache blocks take advantage of spatial locality.
• Less tag space is needed (for a given cache capacity).
• Too large a block size can waste cache space.
• Large blocks require longer transfer times.
Good design requires compromise!

Larger block size in action
Sequence of memory references: 24, 20, 28, 12, 20, 08, 44, 04, ...
(With 8-byte blocks, the index is bit 3 of the address; notice each line starts at a multiple of its length.)

24 = 011000₂, index 1:
  index 0: empty
  index 1: 24-31, data

20 = 010100₂, index 0 (the line is bytes 16-23):
  index 0: 16-23, data
  index 1: 24-31, data

28 = 011100₂, index 1: a hit, even though we haven't referenced 28 before!
  index 0: 16-23, data
  index 1: 24-31, data

Your turn: 12 = 001100₂, 08 = 001000₂, 44 = 101100₂.
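A variant of the direct-mapped sketch with 8-byte lines shows the spatial-locality payoff: besides 28, the later 08 also hits without ever causing its own fill (12 brought in bytes 8-15), as does the second 20.

```python
# 8-byte lines: bits 0-2 are the offset, bit 3 is the index
# (two entries). A line starts at a multiple of its length.
LINE = 8
cache = {}   # index -> starting address of the resident line
for addr in [24, 20, 28, 12, 20, 8, 44, 4]:
    index, tag = (addr >> 3) & 0b1, addr // LINE * LINE
    hit = cache.get(index) == tag
    cache[index] = tag
    print(addr, "hit" if hit else "miss")
```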

Block Size and Miss Rate
(Figure: miss rate versus block size.)
Rule of thumb: block size should be less than the square root of the cache size.

Cache Parameters
Cache size = number of sets * block size * associativity
• 128 blocks, 32-byte blocks, direct mapped: size = ?
• 128 KB cache, 64-byte blocks, 512 sets: associativity = ?
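Both exercises follow directly from the formula; a quick check (the answers work out to 4 KB and 4-way):

```python
# Exercise 1: direct mapped means one block per set, so 128 sets.
print(128 * 32 * 1)              # 4096 bytes = 4 KB

# Exercise 2: associativity = cache size / (block size * number of sets).
print(128 * 1024 // (64 * 512))  # 4 -> a 4-way set associative cache
```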

Details
• What bits should we use for the index?
• How do we know if a cache entry is empty?
• Are stores and loads treated the same?
• What if a word overlaps two cache lines??
• How does this all work, anyway???

Choosing bits for the index
If the line length is n bytes, the low-order log₂ n bits of a byte address give the offset of the address within a line. The next group of bits is the index; this ensures that if the cache holds X bytes, then any block of X contiguous byte addresses can co-reside in the cache (provided the block starts on a cache line boundary). The remaining bits are the tag.

Anatomy of an address: | tag | index | offset |
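As a sketch, the split is two masks and a shift; line length and set count are parameters (powers of two assumed), and the example reproduces the bits-2-and-3 index used earlier.

```python
# Split a byte address into (tag, index, offset) for a cache with
# num_sets sets and line_bytes bytes per line (both powers of two).
def split(addr, line_bytes, num_sets):
    off_bits = line_bytes.bit_length() - 1   # log2(line_bytes)
    idx_bits = num_sets.bit_length() - 1     # log2(num_sets)
    offset = addr & (line_bytes - 1)
    index = (addr >> off_bits) & (num_sets - 1)
    tag = addr >> (off_bits + idx_bits)
    return tag, index, offset

print(split(24, line_bytes=4, num_sets=4))   # (1, 2, 0): index 10, as before
```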

Is a cache entry empty?
• Problem: when a program starts up, the cache is empty.
  – It might contain stuff left from a previous user.
  – How do you make sure you don't match an invalid tag?
• Solution: an extra "valid" bit per cacheline.
  – The entire cache can be marked "invalid" on a context switch.

Putting it all together
A 64 KB cache, direct-mapped, with 32-byte cache blocks. 64 KB / 32 bytes = 2K cache blocks (sets), so the 32-bit address splits into a 16-bit tag (bits 31-16), an 11-bit index (bits 15-5), and a 5-bit word offset (bits 4-0). On an access, the index selects one of the 2048 (valid, tag, data) entries; the stored 16-bit tag is compared with the address tag to produce hit/miss, and the 256-bit data block is read out (the offset then selects a 32-bit word).

A set associative cache
A 32 KB cache, 2-way set-associative, with 16-byte blocks. 32 KB / 16 bytes / 2 = 1K cache sets, so the 32-bit address splits into an 18-bit tag, a 10-bit index (bits 13-4), and a 4-bit word offset. Each set holds two (valid, tag, data) entries, and both stored tags are compared with the address tag in parallel to produce hit/miss. (This picture doesn't show the "most recent" bit needed for replacement: one bit per set.)
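The field widths on both of these slides can be checked from the parameters alone; a small sketch for a 32-bit byte address:

```python
import math

# sets = size / (block * associativity); offset and index widths are
# log2 of block size and set count; the tag gets the remaining bits.
def geometry(size, block, assoc, addr_bits=32):
    sets = size // block // assoc
    offset_bits = int(math.log2(block))
    index_bits = int(math.log2(sets))
    tag_bits = addr_bits - index_bits - offset_bits
    return sets, tag_bits, index_bits, offset_bits

print(geometry(64 * 1024, 32, 1))   # (2048, 16, 11, 5): the 64 KB direct-mapped cache
print(geometry(32 * 1024, 16, 2))   # (1024, 18, 10, 4): the 32 KB 2-way cache
```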

Key Points
• Caches give the illusion of a large, cheap memory with the access time of a fast, expensive memory.
• Caches take advantage of memory locality, specifically temporal locality and spatial locality.
• Cache design presents many options (block size, cache size, associativity) that an architect must combine to minimize miss rate and access time and thus maximize performance.

Computer of the Day
• Integrated Circuits (ICs)
  – A single chip has transistors, resistors, and "wires".
  – Invented in 1958 at Texas Instruments.
  – Used in "third generation" computers of the late 60's (1st generation = tubes, 2nd = transistors).
Some computers using IC technology...
• Apollo guidance system (the first computer on the moon)
  – ~5000 ICs, each with 3 transistors and 4 resistors.
• Illiac IV, "the most infamous computer" (at that time)
  – Designed in the late 60's, built in the early 70's, actually used 1976-82.
  – Plan: 1000 MFLOP/s. Reality: 15 MFLOP/s (200 MIPS). Cost: $31M.
  – The first "massively parallel" computer:
    • four groups of 64 processors,
    • each group is "SIMD" (Single Instruction, Multiple Data).