Cache Memories

Cache Memories: Topics
- Cache memory organization
- Direct-mapped caches
- Set-associative caches
- Impact of caches on performance

Cache Memories
Cache memories are small, fast SRAM-based memories managed automatically in hardware.
- They hold frequently accessed blocks of main memory.
- The CPU looks for data first in L1, then in L2, then in main memory.
Typical bus structure (figure): the CPU chip contains the register file, ALU, L1 cache, and bus interface; the L2 cache attaches over the cache bus; the system bus connects the bus interface to the I/O bridge, and the memory bus connects the I/O bridge to main memory.

Inserting an L1 Cache Between the CPU and Main Memory
- The tiny, very fast CPU register file has room for four 4-byte words. The transfer unit between the CPU register file and the cache is a 1-word block.
- The small, fast L1 cache has room for two 4-word blocks (line 0 and line 1).
- The transfer unit between the cache and main memory is a 4-word block.
- The big, slow main memory has room for many 4-word blocks (the figure shows block 10 "abcd", block 21 "pqrs", and block 30 "wxyz").

Cache Memory Organization (overview)
A cache is an array of S = 2^s sets. Each set contains E lines, each line holds a block of B = 2^b bytes of data, and each line also carries a valid bit and t tag bits.
Cache size: C = B x E x S data bytes.

Addressing Caches
An m-bit address A is split into three fields, from most to least significant: <tag> (t bits), <set index> (s bits), and <block offset> (b bits).
The word at address A is in the cache if the tag bits in one of the valid lines in set <set index> match <tag>. The word's contents begin at offset <block offset> bytes from the beginning of the block.
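
As a concrete illustration, here is a minimal sketch (not from the slides) of extracting the three fields with shifts and masks. The values of S_BITS and B_BITS are assumptions chosen to match the scaled-down example used later (s = 2, b = 1):

#include <stdio.h>
#include <stdint.h>

#define S_BITS 2   /* s: set index bits    -> 4 sets         */
#define B_BITS 1   /* b: block offset bits -> 2-byte blocks  */

/* Split an address into its tag, set index, and block offset fields. */
static void decompose(uint32_t addr,
                      uint32_t *tag, uint32_t *set, uint32_t *offset)
{
    *offset = addr & ((1u << B_BITS) - 1);              /* low b bits       */
    *set    = (addr >> B_BITS) & ((1u << S_BITS) - 1);  /* next s bits      */
    *tag    = addr >> (B_BITS + S_BITS);                /* remaining t bits */
}

int main(void)
{
    uint32_t tag, set, offset;
    decompose(13 /* 1101 in binary */, &tag, &set, &offset);
    printf("tag=%u set=%u offset=%u\n", tag, set, offset);  /* tag=1 set=2 offset=1 */
    return 0;
}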

Direct-Mapped Cache
The simplest kind of cache: exactly one line per set (E = 1). Each set consists of a valid bit, a tag, and a cache block.

Direct-Mapped Cache: Set Selection
First we select the set: the set index bits of the address determine the set of interest.

Direct-Mapped Cache (animation): Line Matching and Word Selection
Then we match the line and select the word.
- Line matching: find a valid line in the selected set with a matching tag (trivial for direct-mapped caches, since each set holds only one line).
- Word selection: then extract the word.
(1) The valid bit must be set.
(2) The tag bits in the cache line must match the tag bits in the address.
(3) If (1) and (2) hold, it is a cache hit, and the block offset selects the starting byte.

Direct-Mapped Cache: A Scaled-Down Example
- 4-bit addresses -> 16 memory locations
- s = 2 set index bits -> 4 sets (00, 01, 10, 11)
- b = 1 block offset bit -> 2-byte blocks
- t = 1 tag bit
Each block holds two bytes, i.e., data for two memory locations. For example, addresses 0000 and 0001 both map to set 00, but at different offsets.

Addr: 0000 0001 0010 0011 0100 0101 0110 0111
Set:    00   00   01   01   10   10   11   11

Addr: 1000 1001 1010 1011 1100 1101 1110 1111
Set:    00   00   01   01   10   10   11   11

Note that addresses 0000 and 1000 both map to set 00 at offset 0. How do we know which one we have? The tag bit (the high-order address bit) tells them apart.

Direct-Mapped Cache (animation)
Address trace (reads): 0 [0000], 1 [0001], 13 [1101], 8 [1000], 0 [0000]
Hint: for each access, evaluate the set index, then the valid bit, then the tag.
(1) Read 0 [0000]: miss; set 00 is loaded with tag 0 and block M[0-1] (m[0], m[1]).
(2) Read 1 [0001]: hit; set 00 holds tag 0, and offset 1 selects m[1].
(3) Read 13 [1101]: miss; set 10 is loaded with tag 1 and block M[12-13].
(4) Read 8 [1000]: miss; set 00's tag (0) does not match, so the line is replaced with tag 1 and block M[8-9].
(5) Read 0 [0000]: miss again; set 00 is replaced once more with tag 0 and block M[0-1].
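
The same behavior can be checked with a minimal direct-mapped cache simulator. This is an illustrative sketch using the scaled-down parameters above (4-bit addresses, 4 sets, one line per set), not code from the textbook:

#include <stdio.h>
#include <stdbool.h>

#define NSETS 4                         /* s = 2 set index bits */

struct line { bool valid; unsigned tag; };
static struct line cache[NSETS];        /* one line per set (E = 1) */

/* Returns true on a hit; on a miss, installs the block's tag in the set. */
static bool cache_access(unsigned addr)
{
    unsigned set = (addr >> 1) & 0x3;   /* middle 2 bits  */
    unsigned tag = addr >> 3;           /* high-order bit */

    if (cache[set].valid && cache[set].tag == tag)
        return true;
    cache[set].valid = true;            /* miss: fetch block, record tag */
    cache[set].tag = tag;
    return false;
}

int main(void)
{
    unsigned trace[] = { 0, 1, 13, 8, 0 };
    for (int i = 0; i < 5; i++)
        printf("addr %2u: %s\n", trace[i], cache_access(trace[i]) ? "hit" : "miss");
    return 0;   /* prints: miss, hit, miss, miss, miss */
}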

Why Use Middle Bits as the Set Index?
Does it matter where we place the set index bits within the address?
- Middle-order indexing: tag (t=1) | set index (s=2) | block offset (b=1)
- High-order indexing: set index (s=2) | tag (t=1) | block offset (b=1)

Why Use Middle Bits as the Set Index?
High-order bit indexing (4-line cache, sets 00-11):
- Adjacent memory lines would map to the same cache set.
- Poor use of spatial locality.
Middle-order bit indexing:
- Consecutive memory lines map to different cache sets.
- The cache can hold C consecutive bytes at one time.
(The slide's figure maps memory lines 0000-1111 to cache sets under each scheme; the sketch below computes the same mapping.)
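
A small, hypothetical check of the two schemes for the 4-bit example (t=1, s=2, b=1); this is an illustrative sketch, not code from the slides:

#include <stdio.h>

int main(void)
{
    printf("memory line   middle-order set   high-order set\n");
    for (unsigned line = 0; line < 8; line++) {
        unsigned addr   = line << 1;            /* first byte of each 2-byte line */
        unsigned middle = (addr >> 1) & 0x3;    /* set index from bits 2..1 */
        unsigned high   = (addr >> 2) & 0x3;    /* set index from bits 3..2 */
        printf("%11u   %16u   %14u\n", line, middle, high);
    }
    /* Middle-order indexing spreads consecutive lines over sets 0,1,2,3,0,1,2,3,
       while high-order indexing clusters them as 0,0,1,1,2,2,3,3. */
    return 0;
}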

Practice problem #6.6, p. 490

Set Associative Caches
Characterized by more than one line per set (E > 1); the figure shows E = 2 lines per set.
- E > 1 reduces line replacement; a replacement policy (LRU, LFU) chooses the victim line.
- Less expensive than fully associative caches.
- Special hardware compares the tags of all lines in the set in parallel.

Set Associative Caches: Set Selection
Set selection is identical to a direct-mapped cache: the set index bits of the address determine the selected set.

Set Associative Caches (animation): Line Matching and Word Selection
The cache must compare the tag in each valid line of the selected set against the tag bits of the address.
(1) The valid bit must be set.
(2) The tag bits in one of the cache lines must match the tag bits in the address.
(3) If (1) and (2) hold, it is a cache hit, and the block offset selects the starting byte.
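
For illustration, here is a minimal sketch of a 2-way set-associative lookup with LRU replacement. The parameters and the helper cache_access are assumptions for the sketch, not textbook code:

#include <stdio.h>
#include <stdbool.h>

#define NSETS 4
#define NWAYS 2                         /* E = 2 lines per set */

struct line { bool valid; unsigned tag; unsigned last_used; };
static struct line cache[NSETS][NWAYS];
static unsigned now;                    /* logical clock for LRU */

static bool cache_access(unsigned set, unsigned tag)
{
    struct line *s = cache[set];

    /* Line matching: compare the tag of every valid line in the set. */
    for (int w = 0; w < NWAYS; w++) {
        if (s[w].valid && s[w].tag == tag) {
            s[w].last_used = ++now;
            return true;                /* hit */
        }
    }

    /* Miss: evict the least recently used (or a still-invalid) line. */
    int victim = 0;
    for (int w = 1; w < NWAYS; w++)
        if (s[w].last_used < s[victim].last_used)
            victim = w;
    s[victim].valid = true;
    s[victim].tag = tag;
    s[victim].last_used = ++now;
    return false;
}

int main(void)
{
    /* Two different tags that map to the same set can now coexist,
       unlike in a direct-mapped cache (E = 1). */
    printf("%s\n", cache_access(0, 0) ? "hit" : "miss");   /* miss */
    printf("%s\n", cache_access(0, 1) ? "hit" : "miss");   /* miss */
    printf("%s\n", cache_access(0, 0) ? "hit" : "miss");   /* hit  */
    return 0;
}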

Practice problem #6.9, p. 501 (demo)

Practice problem #6.10, p. 501 (demo)

Practice problem #6.11, p. 502

Practice problem #6.13, p. 503

Multi-Level Caches
Two options: separate data and instruction caches, or a unified cache.
The hierarchy runs from the processor registers through the L1 d-cache and i-cache and a unified L2 cache to main memory and disk; each level is larger, slower, and cheaper than the one above it.

Level              Size           Speed    $/Mbyte     Line size
Regs               200 B          3 ns                 8 B
L1 d-/i-cache      8-64 KB        3 ns                 32 B
Unified L2 cache   1-4 MB SRAM    6 ns     $100/MB     32 B
Memory             128 MB DRAM    60 ns    $1.50/MB    8 KB
Disk               30 GB          8 ms     $0.05/MB

Intel Pentium Cache Hierarchy
- Regs
- L1 data cache: 16 KB, 4-way set associative, 32 B lines, 1-cycle latency
- L1 instruction cache: 16 KB, 4-way set associative, 32 B lines
- Unified L2 cache: 128 KB-2 MB, 4-way set associative, 32 B lines
- Main memory: up to 4 GB
(The registers and L1 caches sit on the processor chip.)

Cache Performance Metrics
Miss rate
- Fraction of memory references not found in the cache (misses / references).
- Typical numbers: L1: 3-10%; L2 can be quite small (e.g., < 1%) depending on size, etc.
Hit time
- Time to deliver a line in the cache to the processor (set selection, line identification, word selection).
- Typical numbers: L1: 1 clock cycle; L2: 3-8 clock cycles.
Miss penalty
- Additional time required because of a miss; typically 25-100 cycles for main memory.
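
These metrics combine into the average memory access time. As a rough, illustrative calculation (not from the slides) using mid-range values from above, a 1-cycle hit time, a 5% miss rate, and a 50-cycle miss penalty:

    average access time = hit time + miss rate x miss penalty
                        = 1 + 0.05 x 50 = 3.5 cycles per access

Even a small miss rate is amplified by the large miss penalty, which is why reducing misses matters so much.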

Writing Cache Friendly Code
Repeated references to variables are good (temporal locality). Stride-1 reference patterns are good (spatial locality).
Example: cold cache, 4-byte words, 4-word cache lines, so one cache line holds 4 array elements in sequence.

int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}
/* row-major, stride-1 access: miss rate = 1/4 = 25% */

int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}
/* column-major access (stride N): miss rate = 100% */

Measuring Performance

// mountain.c - Generate the memory mountain.
#define MINBYTES  (1 << 10)          // Working set size ranges from 1 KB ...
#define MAXBYTES  (1 << 23)          // ... up to 8 MB
#define MAXSTRIDE 16                 // Strides range from 1 to 16
#define MAXELEMS  MAXBYTES/sizeof(int)

int data[MAXELEMS];                  /* The array we'll be traversing */

int main()
{
    int size;                        // Working set size (in bytes)
    int stride;                      // Stride (in array elements)
    double Mhz;                      // Clock frequency

    init_data(data, MAXELEMS);       // Initialize each element in data to 1
    Mhz = mhz(0);                    // Estimate the clock frequency
    for (size = MAXBYTES; size >= MINBYTES; size >>= 1) {
        for (stride = 1; stride <= MAXSTRIDE; stride++)
            printf("%.1f\t", run(size, stride, Mhz));
        printf("\n");
    }
    exit(0);
}

Measuring Performance

// Run test(elems, stride) and return read throughput (MB/s)
double run(int size, int stride, double Mhz)
{
    double cycles;
    int elems = size / sizeof(int);

    test(elems, stride);                     // warm up the cache
    cycles = fcyc2(test, elems, stride, 0);  // call test(elems, stride)
    return (size / stride) / (cycles / Mhz); // convert cycles to MB/s
}

// The test function
void test(int elems, int stride)
{
    int i, result = 0;
    volatile int sink;                       // why is this volatile?

    for (i = 0; i < elems; i += stride)
        result += data[i];
    sink = result;
}

The Memory Mountain (p. 514)

Ridges of Temporal Locality
A slice through the memory mountain with stride = 1:
- shows read throughput for the different caches (L1 = 16 KB, L2 = 512 KB)
- L2 is a unified cache; the drop-off at 512 KB reflects instructions and data sharing it (i-cache/d-cache traffic)

A Slope of Spatial Locality
A slice through the memory mountain with size = 256 KB:
- the top of the L2 ridge reveals the L2 cache line size (8 words per line)

Matrix Multiplication Example (p. 518)

// ijk
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

The variable sum is held in a register.
Description:
- Multiply N x N matrices; O(N^3) total operations.
- Array elements are doubles (8 bytes).
- Each cache line holds four 8-byte array elements.
- The cache is not large enough to hold multiple rows.

Matrix Multiplication, ijk

// ijk
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A is scanned row-wise (i, *); B is scanned column-wise (*, j); C is fixed at (i, j).
Misses per inner-loop iteration:  A = 0.25, B = 1.0, C = 0.0, total = 1.25

Matrix Multiplication, jik (same behavior as ijk)

// jik
for (j = 0; j < n; j++) {
    for (i = 0; i < n; i++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A row-wise, B column-wise, C fixed.
Misses per inner-loop iteration:  A = 0.25, B = 1.0, C = 0.0, total = 1.25

Matrix Multiplication, kij

// kij
for (k = 0; k < n; k++) {
    for (i = 0; i < n; i++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A fixed at (i, k); B scanned row-wise (k, *); C scanned row-wise (i, *).
Misses per inner-loop iteration:  A = 0.0, B = 0.25, C = 0.25, total = 0.50

Matrix Multiplication, ikj (same behavior as kij)

// ikj
for (i = 0; i < n; i++) {
    for (k = 0; k < n; k++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A fixed, B row-wise, C row-wise.
Misses per inner-loop iteration:  A = 0.0, B = 0.25, C = 0.25, total = 0.50

Matrix Multiplication, jki

// jki
for (j = 0; j < n; j++) {
    for (k = 0; k < n; k++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop: A scanned column-wise (*, k); B fixed at (k, j); C scanned column-wise (*, j).
Misses per inner-loop iteration:  A = 1.0, B = 0.0, C = 1.0, total = 2.0

Matrix Multiplication, kji (same behavior as jki)

// kji
for (k = 0; k < n; k++) {
    for (j = 0; j < n; j++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop: A column-wise, B fixed, C column-wise.
Misses per inner-loop iteration:  A = 1.0, B = 0.0, C = 1.0, total = 2.0

Summary of Matrix Multiplication

ijk (and jik): 2 loads, 0 stores, 1.25 misses/iter

for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

kij (and ikj): 2 loads, 1 store, 0.5 misses/iter

for (k = 0; k < n; k++) {
    for (i = 0; i < n; i++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

jki (and kji): 2 loads, 1 store, 2.0 misses/iter

for (j = 0; j < n; j++) {
    for (k = 0; k < n; k++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Pentium Matrix Multiply Performance
Miss rates are helpful but not perfect predictors:
- Code scheduling also matters.
- For example, ijk (1.25 misses/iter) is faster than ikj (0.5 misses/iter).

Version   Misses/iteration   Loads/stores
kij       0.50               2 loads, 1 store
ikj       0.50               2 loads, 1 store  **
jik       1.25               2 loads, 0 stores
ijk       1.25               2 loads, 0 stores **
kji       2.00               2 loads, 1 store
jki       2.00               2 loads, 1 store

(** marks the two versions compared above.)

Improving Temporal Locality by Blocking
Example: blocked matrix multiplication.
- "Block" in this context does not mean "cache block"; it means a sub-block within the matrix.
Key idea: sub-blocks are smaller than whole matrices and stay cache resident while they are reused.

Partitioning A, B, and C into 2 x 2 grids of sub-blocks, A x B = C becomes:
C11 = A11 B11 + A12 B21    C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21    C22 = A21 B12 + A22 B22

A sketch of a blocked implementation follows below.
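
The following is a minimal sketch of a blocked multiply, not the textbook's code; N and BSIZE are illustrative values, with BSIZE chosen small enough that the three sub-blocks in use fit in the cache (and assumed to divide N evenly):

#include <stdio.h>

#define N     8      /* matrix dimension (illustrative)              */
#define BSIZE 2      /* sub-block size; must divide N in this sketch */

/* Blocked matrix multiply: C += A * B, working on one BSIZE x BSIZE
 * sub-block of C at a time so the sub-blocks of A, B, and C currently
 * in use stay cache resident while they are reused. */
void bmm(double A[N][N], double B[N][N], double C[N][N])
{
    for (int ii = 0; ii < N; ii += BSIZE)
        for (int jj = 0; jj < N; jj += BSIZE)
            for (int kk = 0; kk < N; kk += BSIZE)
                /* multiply the (ii,kk) sub-block of A by the (kk,jj)
                 * sub-block of B, accumulating into the (ii,jj) sub-block of C */
                for (int i = ii; i < ii + BSIZE; i++)
                    for (int j = jj; j < jj + BSIZE; j++) {
                        double sum = C[i][j];
                        for (int k = kk; k < kk + BSIZE; k++)
                            sum += A[i][k] * B[k][j];
                        C[i][j] = sum;
                    }
}

int main(void)
{
    static double A[N][N], B[N][N], C[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) { A[i][j] = 1.0; B[i][j] = 1.0; }
    bmm(A, B, C);
    printf("C[0][0] = %.1f\n", C[0][0]);   /* expected: 8.0 (= N) */
    return 0;
}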

Blocked Matrix Performance

Cache Write Issues
Cache hit policies:
- Write through: write to memory immediately (as well as updating the cache).
- Write back: write to the cache only; write the line to memory when it is evicted.
Cache miss policies:
- Write allocate: load the line into the cache, then write to the cache.
- No write allocate: bypass the cache and write directly to memory.
Common pairings:
- Write through + no write allocate: cheapest; memory is updated immediately.
- Write back + write allocate: fastest; memory updates are delayed.

Concluding Observations
The programmer can optimize for cache performance:
- how data structures are organized
- how data is accessed: nested loop structure, blocking to improve temporal locality
All systems favor "cache friendly code":
- Absolute optimum performance is very platform specific (cache sizes, line sizes, associativities, etc.).
- Most of the advantage is gained with generic code: keep the working set reasonably small (temporal locality) and use small strides (spatial locality).
Lab Assignment #2: be sure to read it carefully; hints are included.