Cache Performance Metrics

Cache Performance Metrics
• Miss Rate
  § Fraction of memory references not found in cache (misses / accesses) = 1 - hit rate
  § Typical numbers (in percentages): 3-10% for L1
  § Can be quite small (e.g., < 1%) for L2, depending on size, etc.
• Hit Time
  § Time to deliver a line in the cache to the processor
  § Includes time to determine whether the line is in the cache
  § Typical numbers: 1-2 clock cycles for L1, 5-20 clock cycles for L2
• Miss Penalty
  § Additional time required because of a miss
  § Typically 50-200 cycles for main memory (trend: increasing!)

Average memory access time
• The average memory access time, or AMAT, can then be computed:

    AMAT = Hit time + (Miss rate × Miss penalty)

• This is just averaging the amount of time for cache hits and the amount of time for cache misses.
• How can we improve the average memory access time of a system?
  § Obviously, a lower AMAT is better.
  § Miss penalties are usually much greater than hit times, so the best way to lower AMAT is to reduce the miss penalty or the miss rate.
• However, AMAT should only be used as a general guideline. Remember that execution time is still the best performance metric.

Let's think about those numbers
• Huge difference between a hit and a miss
  § Could be 100x, if just L1 and main memory
• Would you believe 99% hits is twice as good as 97%?
  § Consider: cache hit time of 1 cycle, miss penalty of 100 cycles
  § Average memory access time (AMAT):
    97% hits: 1 cycle + 0.03 × 100 cycles = 4 cycles
    99% hits: 1 cycle + 0.01 × 100 cycles = 2 cycles
• This is why "miss rate" is used instead of "hit rate"
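To make the arithmetic concrete, here is a minimal C sketch (not from the slides; the function name amat is illustrative) that evaluates the AMAT formula for both hit rates:

#include <stdio.h>

/* AMAT = hit time + miss rate * miss penalty (all times in cycles). */
static double amat(double hit_time, double miss_rate, double miss_penalty)
{
    return hit_time + miss_rate * miss_penalty;
}

int main(void)
{
    /* The slide's assumed numbers: 1-cycle hit time, 100-cycle miss penalty. */
    printf("97%% hits: %.0f cycles\n", amat(1.0, 0.03, 100.0)); /* 4 */
    printf("99%% hits: %.0f cycles\n", amat(1.0, 0.01, 100.0)); /* 2 */
    return 0;
}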

Types of Cache Misses
• Cold (compulsory) miss
  § Occurs on first access to a block
• Conflict miss
  § Most hardware caches limit block placement to a small subset (sometimes a singleton) of the available cache slots
  § e.g., block i must be placed in slot (i mod 4)
  § Conflict misses occur when the cache is large enough, but multiple data objects all map to the same slot
  § e.g., referencing blocks 0, 8, 0, 8, ... would miss every time
• Capacity miss
  § Occurs when the set of active cache blocks (the working set) is larger than the cache

Four important questions
1. When we copy a block of data from main memory to the cache, where exactly should we put it?
2. How can we tell if a word is already in the cache (hit), or if it has to be fetched from main memory first (miss)?
3. Eventually, the small cache memory might fill up. To load a new block from main RAM, we'd have to replace one of the existing blocks in the cache... which one?
4. How can write operations be handled by the memory system?

General Cache Organization (S, E, B)
• S = 2^s sets
• E = 2^e lines/blocks per set
• B = 2^b bytes per cache block (the data)
• Each line holds a valid bit v, a tag, and B data bytes (bytes 0 through B-1)
• Nominal cache size: S × E × B data bytes
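A minimal C sketch of this organization (illustrative only; the struct names are assumptions, and B is fixed at compile time here for simplicity):

#include <stdint.h>

#define B 8   /* bytes per block (2^b); assumed here for illustration */

/* One cache line: a valid bit, a tag, and B data bytes. */
struct cache_line {
    int      valid;
    uint64_t tag;
    uint8_t  data[B];
};

/* A cache is S sets of E lines each: S * E * B data bytes in total. */
struct cache {
    int s, e, b;              /* log2 of S, E, and B */
    struct cache_line *lines; /* S * E lines, row-major: lines[set * E + way] */
};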

Cache Read (S = 2^s sets, E = 2^e lines/blocks per set, B = 2^b bytes per block)
• Locate the set
• Check if any line in the set has a matching tag
• Yes + line valid: hit
• Locate the data, which begins at the block offset within the line's B bytes

Address of word: | t bits (tag) | s bits (set index) | b bits (block offset) |
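The field extraction itself is just shifts and masks. A hedged C sketch, with illustrative names:

#include <stdint.h>
#include <stdio.h>

/* Split an address into (tag, set index, block offset), given s set-index
   bits and b block-offset bits. */
static void split_address(uint64_t addr, int s, int b,
                          uint64_t *tag, uint64_t *set, uint64_t *off)
{
    *off = addr & ((1ULL << b) - 1);
    *set = (addr >> b) & ((1ULL << s) - 1);
    *tag = addr >> (s + b);
}

int main(void)
{
    uint64_t tag, set, off;
    split_address(13, 2, 1, &tag, &set, &off);  /* the 4-bit example used later */
    printf("tag=%llu set=%llu offset=%llu\n",   /* tag=1 set=2 offset=1 */
           (unsigned long long)tag, (unsigned long long)set,
           (unsigned long long)off);
    return 0;
}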

Where should we put data in the cache?
• A direct-mapped cache is the simplest approach: each main memory address maps to exactly one cache line/block.
• For example, consider a 16-byte main memory and a 4-byte cache (four 1-byte blocks).
  § Memory locations 0, 4, 8 and 12 all map to cache block 0.
  § Addresses 1, 5, 9 and 13 map to cache block 1, etc.
• How can we compute this mapping?
[Figure: 16-byte main memory (addresses 0-15) mapped onto a 4-block cache (indices 0-3)]

It's all divisions...
• One way to figure out which cache block a particular memory address should go to is to use the mod (remainder) operator.
• If the cache contains 2^k blocks, then the data at memory address i would go to cache block index i mod 2^k.
• For instance, with the four-block cache here, address 14 would map to cache block 2, since 14 mod 4 = 2.
[Figure: memory addresses 0-15 mapped to cache indices 0-3 by mod 4]

...or least-significant bits
• An equivalent way to find the placement of a memory address in the cache is to look at the least significant k bits of the address.
• With our four-byte cache we would inspect the two least significant bits of our memory addresses.
• Again, you can see that address 14 (1110 in binary) maps to cache block 2 (10 in binary).
• Taking the least significant k bits of a binary value is the same as computing that value mod 2^k.
[Figure: 4-bit memory addresses 0000-1111 mapped to 2-bit cache indices 00-11]
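A quick C check of that equivalence (a sketch under the four-block assumption; it works because 2^k is a power of two):

#include <assert.h>
#include <stdio.h>

int main(void)
{
    /* For a cache with 2^k blocks, i mod 2^k equals the low k bits of i. */
    const unsigned k = 2;  /* our four-block cache */
    for (unsigned i = 0; i < 16; i++)
        assert(i % (1u << k) == (i & ((1u << k) - 1)));
    printf("address 14 -> block %u\n", 14u & 0x3u);  /* prints 2 */
    return 0;
}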

How can we find data in the cache?
• The second question was how to determine whether or not the data we're interested in is already stored in the cache (hit or miss).
• If we want to read memory address i, we can use the mod trick to determine which cache block would contain i.
• But other addresses might also map to the same cache block. How can we distinguish between them?
• For instance, cache block 2 could contain data from addresses 2, 6, 10 or 14.
[Figure: memory addresses 0-15 alongside cache indices 0-3]

Adding tags
• We need to add tags to the cache, which supply the rest of the address bits to let us distinguish between different memory locations that map to the same cache block.

  Index | Tag | Data
   00   | 00  | ...
   01   | 11  | ...
   10   | 01  | ...
   11   | 01  | ...

[Figure: 4-bit memory addresses 0000-1111 alongside the tagged 4-block cache]

Figuring out what's in the cache
• Now we can tell exactly which addresses of main memory are stored in the cache, by concatenating the cache block tags with the block indices.

  Index | Tag | Main memory address in cache block (tag + index)
   00   | 00  | 00 + 00 = 0000
   01   | 11  | 11 + 01 = 1101
   10   | 01  | 01 + 10 = 0110
   11   | 01  | 01 + 11 = 0111

One more detail: the valid bit
• When started, the cache is empty and does not contain valid data.
• We should account for this by adding a valid bit for each cache block.
  § When the system is initialized, all the valid bits are set to 0.
  § When data is loaded into a particular cache block, the corresponding valid bit is set to 1.

  Index | Valid | Tag | Main memory address in cache block
   00   |  1    | 00  | 00 + 00 = 0000
   01   |  0    | 11  | Invalid
   10   |  0    | 01  | Invalid
   11   |  1    | 01  | 01 + 11 = 0111

• So the cache contains more than just copies of the data in memory; it also has bits to help us find data within the cache and verify its validity.

What happens on a cache hit
• When the CPU tries to read from memory, the address will be sent to a cache controller.
  § The lowest k bits of the address will index a block in the cache.
  § If the block is valid and the tag matches the upper (m - k) bits of the m-bit address, then that data will be sent to the CPU.
• Here is a diagram of a 32-bit memory address and a 2^10-byte cache.
[Figure: 32-bit address split into a 22-bit tag and a 10-bit index; the index selects one of 1024 entries, whose stored tag is compared against the address tag to signal a hit]

What happens on a cache miss
• The delays that we've been assuming for memories (e.g., 2 ns) are really assuming cache hits.
  § If our CPU implementations accessed main memory directly, their cycle times would have to be much larger.
  § Instead we assume that most memory accesses will be cache hits, which allows us to use a shorter cycle time.
• However, a much slower main memory access is needed on a cache miss. The simplest thing to do is to stall the pipeline until the data from main memory can be fetched (and also copied into the cache).

Loading a block into the cache
• After data is read from main memory, putting a copy of that data into the cache is straightforward.
  § The lowest k bits of the address specify a cache block.
  § The upper (m - k) address bits are stored in the block's tag field.
  § The data from main memory is stored in the block's data field.
  § The valid bit is set to 1.
[Figure: 32-bit address split into a 22-bit tag and a 10-bit index; the tag and data are written into the indexed entry and its valid bit is set]

What if the cache fills up?
• Our third question was what to do if we run out of space in our cache, or if we need to reuse a block for a different memory address.
• We answered this question implicitly on the last page!
  § A miss causes a new block to be loaded into the cache, automatically overwriting any previously stored data.
  § This is a least recently used (LRU) replacement policy, which assumes that older data is less likely to be requested than newer data.
• We'll see a few other policies next.

Spatial locality
• One-byte cache blocks don't take advantage of spatial locality, which predicts that an access to one address will be followed by an access to a nearby address.
• What can we do?

Spatial locality
• What we can do is make the cache block size larger than one byte.
• Here we use two-byte blocks, so we can load the cache with two bytes at a time.
• If we read from address 12, the data in addresses 12 and 13 would both be copied to cache block 2.
[Figure: 16-byte memory paired into two-byte blocks, mapped to a 4-block cache]

Block addresses
• Now how can we figure out where data should be placed in the cache? It's time for block addresses!
• If the cache block size is 2^n bytes, we can conceptually split the main memory into 2^n-byte chunks too.
• To determine the block address of a byte address i, you can do the integer division i / 2^n.
• Our example has two-byte cache blocks, so we can think of a 16-byte main memory as an "8-block" main memory instead.
• For instance, memory addresses 12 and 13 both correspond to block address 6, since 12 / 2 = 6 and 13 / 2 = 6.
[Figure: byte addresses 0-15 grouped into block addresses 0-7]

Cache mapping
• Once you know the block address, you can map it to the cache as before: find the remainder when the block address is divided by the number of cache blocks.
• In our example, memory block 6 belongs in cache block 2, since 6 mod 4 = 2.
• This corresponds to placing data from memory byte addresses 12 and 13 into cache block 2.
[Figure: byte addresses 0-15, block addresses 0-7, and cache indices 0-3]

Data placement within a block
• When we access one byte of data in memory, we'll copy its entire block into the cache, to hopefully take advantage of spatial locality.
• In our example, if a program reads from byte address 12 we'll load all of memory block 6 (both addresses 12 and 13) into cache block 2.
• Note that byte address 13 corresponds to the same memory block address! So a read from address 13 will also cause memory block 6 (addresses 12 and 13) to be loaded into cache block 2.
• To make things simpler, byte i of a memory block is always stored in byte i of the corresponding cache block.
[Figure: byte addresses 12 and 13 stored as byte 0 and byte 1 of cache block 2]

Locating data in the cache
• Let's say we have a cache with 2^k blocks, each containing 2^n bytes.
• We can determine where a byte of data belongs in this cache by looking at its address in main memory.
  § k bits of the address will select one of the 2^k cache blocks.
  § The lowest n bits are now a block offset that decides which of the 2^n bytes in the cache block will store the data.

  m-bit address: | (m - k - n)-bit tag | k-bit index | n-bit block offset |

• Our example used a 2^2-block cache with 2^1 bytes per block. Thus, memory address 13 (1101) would be stored in byte 1 of cache block 2.

  4-bit address: | 1 (tag) | 10 (index) | 1 (block offset) |

A picture
[Figure: address 13 (4 bits) split into tag 1, index 10, and block offset 1; the index selects a cache row, the stored tag is compared against the address tag to produce the hit signal, and a mux uses the block offset to pick byte 0 or byte 1 of the data]

An exercise
• Consider a cache with four two-byte blocks and 4-bit addresses (1 tag bit, 2 index bits, 1 block-offset bit), holding:

  Index | Valid | Tag | Byte 0 | Byte 1
   0    |  1    |  0  | 0xCA   | 0xFE
   1    |  1    |  1  | 0xDE   | 0xAD
   2    |  1    |  0  | 0xBE   | 0xEF
   3    |  0    |  -  | 0xFE   | 0xED

• For the addresses below, what byte (value) is read from the cache (or is there a miss)?
  § 1010
  § 1110
  § 0001
  § 1101

An exercise: answers
• Using the same cache contents (1 tag bit, 2 index bits, 1 offset bit):
  § 1010: tag 1, index 01, offset 0 → hit, reads 0xDE
  § 1110: tag 1, index 11 → miss (line invalid)
  § 0001: tag 0, index 00, offset 1 → hit, reads 0xFE
  § 1101: tag 1, index 10 → miss (bad tag: the stored tag is 0)

Using arithmetic
• An equivalent way to find the right location within the cache is to use arithmetic again.

  m-bit address: | (m - k - n)-bit tag | k-bit index | n-bit block offset |

• We can find the index in two steps, as outlined earlier.
  § Do integer division of the address by 2^n to find the block address.
  § Then mod the block address with 2^k to find the index.
• The block offset is just the memory address mod 2^n.
• For example, we can find address 13 in a 4-block, 2-byte-per-block cache.
  § The block address is 13 / 2 = 6, so the index is then 6 mod 4 = 2.
  § The block offset would be 13 mod 2 = 1.
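The same arithmetic, as a small C sketch (variable names are illustrative):

#include <stdio.h>

int main(void)
{
    const unsigned addr = 13;          /* 1101 in binary */
    const unsigned block_bytes = 2;    /* 2^n */
    const unsigned blocks = 4;         /* 2^k */

    unsigned block_addr = addr / block_bytes;  /* 13 / 2 = 6 */
    unsigned index  = block_addr % blocks;     /* 6 mod 4 = 2 */
    unsigned offset = addr % block_bytes;      /* 13 mod 2 = 1 */

    printf("block address %u, index %u, offset %u\n",
           block_addr, index, offset);
    return 0;
}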

A diagram of a larger example cache
• Here is a cache with 1,024 blocks of 4 bytes each, and 32-bit memory addresses.
• How many bits wide are the tag, index, and block offset fields?
[Figure: 32-bit address split into tag, index, and offset fields of unknown widths, indexing 1,024 four-byte cache entries]

A diagram of a larger example cache
• Here is a cache with 1,024 blocks of 4 bytes each, and 32-bit memory addresses.
• The block offset is 2 bits (2^2 = 4 bytes per block), the index is 10 bits (2^10 = 1,024 blocks), and the tag is the remaining 20 bits.
[Figure: 32-bit address split into a 20-bit tag, a 10-bit index, and a 2-bit offset]

Example: Direct Mapped Cache (E = 1)
• Direct mapped: one line/block per set, S = 2^s sets
• Assume: cache block size 8 bytes
• Address of int: | t bits (tag) | 0...01 (set index) | 100 (block offset) |
• Step 1: use the set-index bits to find the set

Example: Direct Mapped Cache (E = 1), continued
• Step 2: valid? + tag match: assume yes = hit
• Step 3: the int (4 bytes) is here, starting at block offset 100 (byte 4 of the 8-byte block)
• No match: the old line is evicted and replaced

A larger example cache mapping
• Where would the byte from 32-bit memory address 6146 be stored in this direct-mapped (one block per set) 1024 (2^10)-set cache with 4 (2^2)-byte blocks?
• What are the
  § Block offset? (which byte within the block?)
  § Set index? (which set?)
  § Tag?

A larger example cache mapping
• Where would the byte from 32-bit memory address 6146 be stored in this direct-mapped (one block per set) 1024 (2^10)-set cache with 4 (2^2)-byte blocks? What are the block offset, set index, and tag?
• We can determine this with the binary force.
  § 6146 in binary is ...0001 1000 0000 0010.
  § The lowest 2 bits, 10, mean this is byte 2 within its block.
  § The next 10 bits, 10 0000 0000, are the set index (512).
  § The remaining upper bits form the tag, which is 1.
• Equivalently, you could use arithmetic instead.
  § The block offset is 6146 mod 4, which equals 2.
  § The block address is 6146 / 4 = 1536, so the index is 1536 mod 1024, or 512.
  § The tag is 1536 / 1024 = 1.
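A small C check of these numbers, using the bit-field view (a sketch; the masks assume the 2-bit offset and 10-bit index split above):

#include <stdio.h>

int main(void)
{
    const unsigned addr = 6146;
    const unsigned offset = addr & 0x3;           /* low 2 bits: 2 */
    const unsigned index  = (addr >> 2) & 0x3FF;  /* next 10 bits: 512 */
    const unsigned tag    = addr >> 12;           /* remaining bits: 1 */
    printf("offset=%u index=%u tag=%u\n", offset, index, tag);
    return 0;
}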

Example

Ignore the variables sum, i, j. Assume a cold (empty) cache; a[0][0] goes in the first block, and each 32 B block holds 4 doubles.

double sum_array_rows(double a[16][16])
{
    int i, j;
    double sum = 0;

    for (i = 0; i < 16; i++)
        for (j = 0; j < 16; j++)
            sum += a[i][j];   /* stride-1: walks each row in order */
    return sum;
}

double sum_array_cols(double a[16][16])
{
    int i, j;
    double sum = 0;

    for (j = 0; j < 16; j++)
        for (i = 0; i < 16; i++)
            sum += a[i][j];   /* stride-16: jumps a whole row per access */
    return sum;
}

Disadvantage of direct mapping
• The direct-mapped cache is easy: indices and offsets can be computed with bit operators or simple arithmetic, because each memory address belongs in exactly one block.
• But what happens if a program uses addresses 2, 6, 2, 6, ...?
[Figure: 4-bit memory addresses 0000-1111 alongside a four-block cache]

Disadvantage of direct mapping
• However, this isn't really flexible. If a program uses addresses 2, 6, 2, 6, ..., then each access will result in a cache miss and a load into cache block 2.
• This cache has four blocks, but direct mapping might not let us use all of them.
• This can result in more misses than we might like.

A fully associative cache
• A fully associative cache permits data to be stored in any cache block, instead of forcing each memory address into one particular block.
  § When data is fetched from memory, it can be placed in any unused block of the cache.
  § This way we'll never have a conflict between two or more memory addresses which map to a single cache block.
• In the previous example, we might put memory address 2 in cache block 2, and address 6 in block 3. Then subsequent repeated accesses to 2 and 6 would all be hits instead of misses.
• If all the blocks are already in use, it's usually best to replace the least recently used one, assuming that if it hasn't been used in a while, it won't be needed again anytime soon. Locality?
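As a sketch of how such a policy might look in code, here is a minimal fully associative lookup with LRU replacement, assuming one-byte blocks as in the running example; all names and sizes are illustrative, not from the slides:

#include <stdbool.h>
#include <stdio.h>

#define NBLOCKS 4

struct line { bool valid; unsigned tag; unsigned last_used; };

static struct line cache[NBLOCKS];
static unsigned now;

/* Returns true on a hit; on a miss, fills an invalid line if one exists,
   otherwise evicts the least recently used line. */
static bool access_fully_assoc(unsigned addr)
{
    int victim = 0;
    for (int i = 0; i < NBLOCKS; i++) {
        if (cache[i].valid && cache[i].tag == addr) {  /* whole address is the tag */
            cache[i].last_used = ++now;
            return true;
        }
        if (!cache[i].valid ||
            (cache[victim].valid && cache[i].last_used < cache[victim].last_used))
            victim = i;                                /* invalid or older line */
    }
    cache[victim] = (struct line){ true, addr, ++now };
    return false;
}

int main(void)
{
    unsigned trace[] = { 2, 6, 2, 6, 2, 6 };  /* the problem trace from above */
    for (int i = 0; i < 6; i++)
        printf("%u: %s\n", trace[i],
               access_fully_assoc(trace[i]) ? "hit" : "miss");
    return 0;  /* two cold misses, then hits every time */
}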

The price of full associativity
• However, a fully associative cache is expensive to implement.
  § Because there is no index field in the address anymore, the entire address must be used as the tag, increasing the total cache size.
  § Data could be anywhere in the cache, so we must check the tag of every cache block. That's a lot of comparators!
[Figure: a 32-bit address compared in parallel against the 32-bit tag of every cache line]

Set Associativity
• S = 2^s sets, E = 2^e lines/blocks per set, B = 2^b bytes per cache block (the data)
• Each line holds a valid bit v, a tag, and B data bytes
• Nominal cache size: S × E × B data bytes

E-way Set Associative Cache (Here: E = 2)
• E = 2: two lines per set
• Assume: cache block size 8 bytes
• Address of short int: | t bits (tag) | 0...01 (set index) | 100 (block offset) |
• Step 1: use the set-index bits to find the set

E-way Set Associative Cache (Here: E = 2), continued
• Step 2: compare both lines in the set: valid? + tag match: yes = hit
• Step 3: the short int (2 bytes) is here, starting at block offset 100
• No match:
  § One line in the set is selected for eviction and replacement
  § Replacement policies: random, least recently used (LRU), ...
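A hedged sketch of this lookup path in C (struct and parameter names are assumptions; replacement is omitted here since it was sketched earlier):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define E 2   /* lines per set, as in this example */

struct line { bool valid; uint64_t tag; uint8_t data[8]; };

/* Look up one byte in an E-way set associative cache.
   sets[set][way]; s = set-index bits, b = block-offset bits (8-byte blocks: b = 3). */
static bool lookup(struct line sets[][E], int s, int b,
                   uint64_t addr, uint8_t **bytep)
{
    uint64_t set = (addr >> b) & ((1ULL << s) - 1);
    uint64_t tag = addr >> (s + b);

    for (int way = 0; way < E; way++) {        /* compare both lines in the set */
        struct line *ln = &sets[set][way];
        if (ln->valid && ln->tag == tag) {     /* valid? + tag match: hit */
            *bytep = &ln->data[addr & ((1ULL << b) - 1)];
            return true;
        }
    }
    return false;  /* miss: one line in this set would be evicted and refilled */
}

int main(void)
{
    static struct line cache[4][E];  /* S = 4 sets, E = 2 ways */
    uint8_t *p;
    printf("%s\n", lookup(cache, 2, 3, 0x0c, &p) ? "hit" : "miss"); /* cold cache: miss */
    return 0;
}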

Example

Ignore the variables sum, i, j. Assume a cold (empty) cache; a[0][0] goes in the first block, and each 32 B block holds 4 doubles.

double sum_array_rows(double a[16][16])
{
    int i, j;
    double sum = 0;

    for (i = 0; i < 16; i++)
        for (j = 0; j < 16; j++)
            sum += a[i][j];
    return sum;
}

double sum_array_cols(double a[16][16])
{
    int i, j;
    double sum = 0;

    for (j = 0; j < 16; j++)
        for (i = 0; i < 16; i++)
            sum += a[i][j];
    return sum;
}

What about writes?
• Multiple copies of data exist:
  § L1, L2, main memory, disk
• What to do on a write-hit?
  § Write-through (write immediately to memory)
  § Write-back (defer the write to memory until replacement of the line)
    - Needs a dirty bit (is the line different from memory or not?)
• What to do on a write-miss?
  § Write-allocate (load into cache, update the line in cache)
    - Good if more writes to the location follow
  § No-write-allocate (write immediately to memory)
• Typical combinations:
  § Write-through + No-write-allocate
  § Write-back + Write-allocate
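Here is a hedged C sketch of the write-back + write-allocate combination (a toy model, not a real controller; the memory[] backing store and helper names are assumptions, and the cache is direct mapped for brevity):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define BLOCK 8

struct line { bool valid, dirty; uint64_t tag; uint8_t data[BLOCK]; };

/* Toy backing store standing in for main memory. */
static uint8_t memory[1 << 12];
static void mem_read_block(uint64_t blk, uint8_t *buf)
{
    memcpy(buf, &memory[blk * BLOCK], BLOCK);
}
static void mem_write_block(uint64_t blk, const uint8_t *buf)
{
    memcpy(&memory[blk * BLOCK], buf, BLOCK);
}

/* Write-back + write-allocate store of one byte; s = set-index bits. */
static void write_byte(struct line *sets, int s, uint64_t addr, uint8_t value)
{
    uint64_t off   = addr % BLOCK;
    uint64_t set_i = (addr / BLOCK) & ((1ULL << s) - 1);
    uint64_t tag   = (addr / BLOCK) >> s;
    struct line *ln = &sets[set_i];

    if (!ln->valid || ln->tag != tag) {              /* write miss */
        if (ln->valid && ln->dirty)                  /* write back dirty victim */
            mem_write_block((ln->tag << s) | set_i, ln->data);
        mem_read_block(addr / BLOCK, ln->data);      /* write-allocate: fetch block */
        ln->valid = true;
        ln->dirty = false;
        ln->tag   = tag;
    }
    ln->data[off] = value;   /* update the line in the cache only... */
    ln->dirty = true;        /* ...and defer the memory write (dirty bit) */
}

int main(void)
{
    static struct line sets[4];     /* 2^2 = 4 sets */
    write_byte(sets, 2, 13, 0xAB);  /* miss: allocate the block, then write */
    write_byte(sets, 2, 12, 0xCD);  /* hit: same block, line stays dirty */
    return 0;
}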

Important Cache Topics
• Replacement algorithms
  § Which block is picked for replacement?
    - For a direct-mapped cache: only one choice
    - For a set associative cache: multiple choices
    - Candidate algorithms: LRU, MRU, random
    - What information must be stored to implement these?
• Cache consistency: copies of data are not identical
  § A write-back cache holds the only valid copy of a block in the system
    - Problem: cache and memory have different versions
    - What if an I/O device does a DMA transfer to/from memory?
  § Problem: caches have different versions
    - What if a write modifies an instruction?
    - What if the system has multiple CPUs, each with its own caches?
• Cache size (nominal)
  § Includes only data: not tag, dirty, valid, or replacement bits
  § Powers of 2: number of sets? Block/line size? Associativity?

Software Caches are More Flexible
• Examples
  § File system buffer caches, web browser caches, etc.
• Some design differences
  § Almost always fully associative
    - So, no placement restrictions
    - Index structures like hash tables are common
  § Often use complex replacement policies
    - Misses are very expensive when disk or network is involved
    - Worth thousands of cycles to avoid them
  § Not necessarily constrained to single "block" transfers
    - May fetch or write back in larger units, opportunistically

Direct-Mapped Cache Simulation
• Parameters: M = 4-bit addresses (16 bytes total), B = 2 bytes/block, S = 4 sets, E = 1 entry/set
• Address fields: t = 1 tag bit, s = 2 set-index bits, b = 1 block-offset bit
• Address trace (single byte reads): 0 [0000], 1 [0001], 13 [1101], 8 [1000], 0 [0000]

  Access     Result  Set state afterwards
  0 [0000]   miss    set 0: v=1, tag=0, M[0-1]
  1 [0001]   hit     set 0 unchanged
  13 [1101]  miss    set 2: v=1, tag=1, M[12-13]
  8 [1000]   miss    set 0: v=1, tag=1, M[8-9] (evicts M[0-1])
  0 [0000]   miss    set 0: v=1, tag=0, M[0-1] (evicts M[8-9])
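The trace is small enough to replay in code. A minimal C sketch of this exact simulation (structure and variable names are illustrative):

#include <stdbool.h>
#include <stdio.h>

#define S 4   /* sets */
#define B 2   /* bytes per block */

struct line { bool valid; unsigned tag; };

int main(void)
{
    struct line set[S] = {{ false, 0 }};   /* E = 1: direct mapped, cold */
    unsigned trace[] = { 0, 1, 13, 8, 0 }; /* 4-bit byte addresses */

    for (int i = 0; i < 5; i++) {
        unsigned addr = trace[i];
        unsigned idx = (addr / B) % S;     /* s = 2 set-index bits */
        unsigned tag = (addr / B) / S;     /* t = 1 tag bit */
        bool hit = set[idx].valid && set[idx].tag == tag;
        if (!hit)
            set[idx] = (struct line){ true, tag };  /* load block, evicting */
        printf("%2u: %s\n", addr, hit ? "hit" : "miss");
    }
    return 0;  /* prints miss, hit, miss, miss, miss */
}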

Why Use Middle Bits as Index?
• High-order bit indexing
  § Adjacent memory lines would map to the same cache entry
  § Poor use of spatial locality
• Middle-order bit indexing
  § Consecutive memory lines map to consecutive cache lines
  § Can hold a C-byte region of the address space in the cache at one time
[Figure: a 4-line cache against 16 memory lines; high-order bit indexing maps each quarter of memory to one cache line, while middle-order bit indexing interleaves consecutive lines across all four cache lines]

Cache Organization: Another View
• View the cache as a 2D array: #sets rows × associativity columns
• Cache size = associativity × #sets × block size
• Address fields: | tag | index | offset |
• The index bits pick a row; all tags in that row are compared in parallel, and a mux (e.g., 4:1 for a 4-way cache) steers the matching way's data to the CPU

Cache Organizations
• For caches of the same size (capacity):
  § Direct mapped: set associativity = 1
  § Set-associative: associativity > 1, #sets > 1
  § Fully associative: #sets = 1

Understanding caches: a quiz
• Essential relationships:
  § Cache size = block size × #sets × set associativity
  § Block size = 2^(# offset bits)
  § Number of sets = 2^(# index bits)
  § # tag bits + # index bits + # offset bits = address size (assuming the machine is byte-addressable)
• See if you can answer these questions:
  1. Suppose we have a 64 KB, 4-way set-associative cache with 32-byte blocks. How many index bits are used from the address?
  2. A 16 KB direct-mapped cache is accessed with a 32-bit address. If the block size is 8 bytes, how many bits wide is the tag?
  3. Suppose we have an 8 KB fully-associative cache with 16-byte blocks. How big is the tag for each entry, assuming a 32-bit address?

Question 1
• Suppose we have a 64 KB, 4-way set-associative cache with 32-byte blocks. How many index bits are used from the address?
  § Since blocks are 32 bytes each, 5 offset bits are needed.
  § Determine the total blocks in the cache: 64 KB / 32 bytes = 2^16 / 2^5 = 2^11 blocks.
  § The cache is 4-way set-associative, so there are four columns in the cache, and each column has 2^11 / 2^2 = 2^9 blocks.
  § There are therefore 2^9 sets, so we need 9 index bits.
• Address fields: tag = 18 bits, index = 9 bits, offset = 5 bits.

Question 2
• A 16 KB direct-mapped cache is accessed with a 32-bit address. If the block size is 8 bytes, how many bits wide is the tag?
  § Since blocks are 8 bytes each, 3 offset bits are required (2^3 = 8).
  § Total blocks in the cache = 16 KB / 8 bytes = 2^14 / 2^3 = 2^11.
  § There is only one column in this cache (set associativity = 1), so the number of sets = 2^11, and 11 index bits are required.
  § The size of the tag is therefore 32 - 11 - 3 = 18 bits.
• Address fields: tag = 18 bits, index = 11 bits, offset = 3 bits.

Question 3
• Suppose we have an 8 KB fully-associative cache with 16-byte blocks. How big is the tag for each entry, assuming a 32-bit address?
  § Note first that there is only one set in a fully-associative cache, so no index bits are used to select the set.
  § The 16-byte blocks require 4 offset bits (2^4 = 16).
  § The tag is therefore 32 - 4 = 28 bits in length.
• Address fields: tag = 28 bits, offset = 4 bits.
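These relationships are mechanical enough to script. A hedged C sketch (the fields() helper and ilog2() are illustrative names) that recomputes all three answers:

#include <stdio.h>

/* Integer log2 for exact powers of two. */
static int ilog2(unsigned long long x)
{
    int n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

/* Derive the (tag, index, offset) field widths from the cache size,
   associativity, block size, and address width, per the quiz relationships. */
static void fields(unsigned long long cache_bytes, unsigned long long assoc,
                   unsigned long long block_bytes, int addr_bits)
{
    int offset = ilog2(block_bytes);
    int index  = ilog2(cache_bytes / block_bytes / assoc); /* = log2(#sets) */
    int tag    = addr_bits - index - offset;
    printf("tag=%d index=%d offset=%d\n", tag, index, offset);
}

int main(void)
{
    fields(64 * 1024, 4, 32, 32);              /* Q1: tag=18 index=9  offset=5 */
    fields(16 * 1024, 1, 8, 32);               /* Q2: tag=18 index=11 offset=3 */
    fields(8 * 1024, (8 * 1024) / 16, 16, 32); /* Q3: fully associative, tag=28 offset=4 */
    return 0;
}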

Intel Core Duo (released January 2006)
• Each core includes L1 i- and d-caches
• A shared L2 cache facilitates data sharing; Intel calls this L2 a "smart cache"
• Power can be turned off to unused L2 portions
• In the Pentium D 900 (released 2005), each core instead has its own 2 MB L2 cache

Intel Core i7
• First Intel CPU to have an integrated memory controller: 3-channel DDR3, over 25 GB/s memory throughput
• High end of the Intel "Core" brand: 731M transistors, 1366 pins
• Each core has a 32 KB i-cache, a 32 KB d-cache, and a 256 KB L2; an 8 MB L3 cache is shared by all cores
• The quad-core Core i7 was announced in late 2008, with a six-core version to launch in March 2010

Intel Core i7 Cache Hierarchy

Processor package:
  Core 0 ... Core 3, each with:
    Regs
    L1 d-cache and L1 i-cache
    L2 unified cache
  L3 unified cache (shared by all cores)
Main memory

Writing Cache Friendly Code
• Repeated references to variables are good (temporal locality)
• Stride-1 reference patterns are good (spatial locality)
• Examples: cold cache, 4-byte words, 4-word blocks, large arrays

int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}
Miss rate = 1/4 = 25%

int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}
Miss rate = 100%