The Memory Hierarchy (CENG 331: Introduction to Computer Systems)


The Memory Hierarchy
CENG 331: Introduction to Computer Systems, 10th Lecture
Instructor: Erol Sahin
Acknowledgement: Most of the slides are adapted from the ones prepared by R. E. Bryant and D. R. O'Hallaron of Carnegie Mellon University.

Overview
Topics:
- Storage technologies and trends
- Locality of reference
- Caching in the memory hierarchy

Random-Access Memory (RAM)
Key features:
- RAM is packaged as a chip.
- The basic storage unit is a cell (one bit per cell).
- Multiple RAM chips form a memory.
Static RAM (SRAM):
- Each cell stores a bit with a six-transistor circuit.
- Retains its value indefinitely, as long as it is kept powered.
- Relatively insensitive to disturbances such as electrical noise.
- Faster and more expensive than DRAM.
Dynamic RAM (DRAM):
- Each cell stores a bit with a capacitor and one transistor.
- The value must be refreshed every 10-100 ms.
- Sensitive to disturbances.
- Slower and cheaper than SRAM.

SRAM
Each bit in an SRAM is stored on four transistors that form two cross-coupled inverters. This storage cell has two stable states, which are used to denote 0 and 1. Two additional access transistors control access to the storage cell during read and write operations, so a typical SRAM uses six MOSFETs to store each memory bit. SRAM is more expensive, but faster and significantly less power hungry (especially when idle) than DRAM. It is therefore used where bandwidth, low power, or both are principal considerations. SRAM is also easier to control (interface to) and generally more truly random access than modern types of DRAM. Due to its more complex internal structure, SRAM is less dense than DRAM and is therefore not used for high-capacity, low-cost applications such as the main memory in personal computers.

DRAM
Dynamic random access memory (DRAM) is a type of random access memory that stores each bit of data in a separate capacitor within an integrated circuit. Since real capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory, as opposed to SRAM and other static memory. DRAM is usually arranged in a square array of one capacitor and transistor per data-bit storage cell (the original slide illustrates a simple example with only 4 by 4 cells). The main memory (the "RAM") in personal computers is DRAM, as is the "RAM" of home game consoles (PlayStation, Xbox 360, and Wii) and of laptop, notebook, and workstation computers. The advantage of DRAM is its structural simplicity: only one transistor and a capacitor are required per bit, compared to six transistors in SRAM. This allows DRAM to reach very high density. Unlike flash memory, it is volatile memory (cf. nonvolatile memory), since it loses its data when power is removed. The transistors and capacitors used are extremely small; millions can fit on a single memory chip.

SRAM vs DRAM Summary

        Tran. per bit   Access time   Persist?   Sensitive?   Cost    Applications
SRAM    6               1X            Yes        No           100X    Cache memories
DRAM    1               10X           No         Yes          1X      Main memories, frame buffers

Conventional DRAM Organization
A d x w DRAM stores dw total bits, organized as d supercells of w bits each.
[Figure: a 16 x 8 DRAM chip organized as a 4 x 4 array of supercells (rows 0-3, cols 0-3). The memory controller sends a 2-bit address and transfers 8 bits of data; supercell (2, 1) is highlighted, and the chip contains an internal row buffer.]

Reading DRAM Supercell (2, 1)
Step 1(a): The row access strobe (RAS) selects row 2.
Step 1(b): Row 2 is copied from the DRAM array to the internal row buffer.
[Figure: the 16 x 8 DRAM chip with RAS = 2 on the address lines; row 2 is transferred into the internal row buffer.]

Reading DRAM Supercell (2, 1)
Step 2(a): The column access strobe (CAS) selects column 1.
Step 2(b): Supercell (2, 1) is copied from the row buffer to the data lines, and eventually back to the CPU.
[Figure: the 16 x 8 DRAM chip with CAS = 1 on the address lines; supercell (2, 1) moves from the internal row buffer onto the 8-bit data lines toward the CPU.]

Memory Modules
[Figure: a 64 MB memory module consisting of eight 8M x 8 DRAMs (DRAM 0 through DRAM 7). For an address A, the memory controller sends the same supercell address (row i, col j) to all eight chips; DRAM 0 supplies bits 0-7, DRAM 1 bits 8-15, and so on up to DRAM 7 supplying bits 56-63. Together they form the 64-bit doubleword at main memory address A.]
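As an illustration of how the module assembles a doubleword, here is a minimal C sketch. The chip array and the toy sizes are invented stand-ins for the eight real DRAM chips, not anything from the slide.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-in for eight x8 DRAM chips: chip k holds one byte of
       every doubleword. (Toy cell count; a real 8M x 8 chip has 8M supercells.) */
    #define NCELLS 16
    static uint8_t chip[8][NCELLS];

    /* Sketch: assemble the 64-bit doubleword at address A. The controller sends
       the same supercell address to all chips; chip k supplies bits 8k..8k+7. */
    static uint64_t read_doubleword(uint32_t A) {
        uint32_t supercell = A / 8;
        uint64_t dw = 0;
        for (int k = 0; k < 8; k++)
            dw |= (uint64_t)chip[k][supercell] << (8 * k);
        return dw;
    }

    int main(void) {
        for (int k = 0; k < 8; k++)      /* fill cell 0 of chip k with byte k */
            chip[k][0] = (uint8_t)k;
        printf("doubleword at A = 0: %016llx\n",
               (unsigned long long)read_doubleword(0));
        return 0;
    }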

Enhanced DRAMs
All enhanced DRAMs are built around the conventional DRAM core.
- Fast page mode DRAM (FPM DRAM): access the contents of a row with [RAS, CAS, CAS] instead of [(RAS, CAS), (RAS, CAS)].
- Extended data out DRAM (EDO DRAM): enhanced FPM DRAM with more closely spaced CAS signals.
- Synchronous DRAM (SDRAM): driven by the rising clock edge instead of asynchronous control signals.
- Double data-rate synchronous DRAM (DDR SDRAM): enhancement of SDRAM that uses both clock edges as control signals.
- Video RAM (VRAM): like FPM DRAM, but output is produced by shifting the row buffer; dual ported (allows concurrent reads and writes).

Nonvolatile Memories
DRAM and SRAM are volatile memories: they lose information if powered off.
Nonvolatile memories retain their values even if powered off.
- The generic name is read-only memory (ROM), which is misleading because some ROMs can be both read and modified.
Types of ROMs:
- Programmable ROM (PROM)
- Erasable programmable ROM (EPROM)
- Electrically erasable PROM (EEPROM)
- Flash memory
Firmware:
- A program stored in a ROM, e.g., boot-time code, the BIOS (basic input/output system), and the firmware of graphics cards and disk controllers.

Typical Bus Structure Connecting CPU and Memory
A bus is a collection of parallel wires that carry address, data, and control signals. Buses are typically shared by multiple devices.
[Figure: a CPU chip containing the register file, ALU, and bus interface, connected over the system bus to an I/O bridge, which connects over the memory bus to main memory.]

Memory Read Transaction (1)
The CPU places address A on the memory bus.
[Figure: load operation movl A, %eax; the bus interface drives A onto the bus toward main memory, where word x is stored at address A.]

Memory Read Transaction (2)
Main memory reads A from the memory bus, retrieves word x, and places it on the bus.
[Figure: load operation movl A, %eax; main memory puts x on the bus.]

Memory Read Transaction (3)
The CPU reads word x from the bus and copies it into register %eax.
[Figure: load operation movl A, %eax; x now sits in %eax.]

Memory Write Transaction (1)
The CPU places address A on the bus. Main memory reads it and waits for the corresponding data word to arrive.
[Figure: store operation movl %eax, A; register %eax holds y, and the bus carries A toward main memory.]

Memory Write Transaction (2)
The CPU places data word y on the bus.
[Figure: store operation movl %eax, A; the bus now carries y.]

Memory Write Transaction (3)
Main memory reads data word y from the bus and stores it at address A.
[Figure: store operation movl %eax, A; y is now stored at address A in main memory.]

Disk Geometry
Disks consist of platters, each with two surfaces. Each surface consists of concentric rings called tracks. Each track consists of sectors separated by gaps.
[Figure: a single surface with the spindle at the center, concentric tracks (track k highlighted), and sectors separated by gaps.]

Disk Geometry (Multiple-Platter View)
Aligned tracks form a cylinder (cylinder k).
[Figure: three platters on a common spindle, giving surfaces 0-5; track k on each surface lines up to form cylinder k.]

Disk Capacity
Capacity: the maximum number of bits that can be stored.
- Vendors express capacity in units of gigabytes (GB), where 1 GB = 10^9 bytes.
Capacity is determined by these technology factors:
- Recording density (bits/in): the number of bits that can be squeezed into a 1-inch segment of a track.
- Track density (tracks/in): the number of tracks that can be squeezed into a 1-inch radial segment.
- Areal density (bits/in^2): the product of recording density and track density.
Modern disks partition tracks into disjoint subsets called recording zones:
- Each track in a zone has the same number of sectors, determined by the circumference of the innermost track.
- Each zone has a different number of sectors/track.

Computing Disk Capacity
Capacity = (# bytes/sector) x (avg. # sectors/track) x (# tracks/surface) x (# surfaces/platter) x (# platters/disk)
Example:
- 512 bytes/sector
- 300 sectors/track (on average)
- 20,000 tracks/surface
- 2 surfaces/platter
- 5 platters/disk
Capacity = 512 x 300 x 20,000 x 2 x 5 = 30,720,000,000 bytes = 30.72 GB
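The same calculation in a minimal C sketch, using the parameter values from the example above:

    #include <stdio.h>

    int main(void) {
        long long bytes_per_sector     = 512;
        long long sectors_per_track    = 300;     /* average */
        long long tracks_per_surface   = 20000;
        long long surfaces_per_platter = 2;
        long long platters_per_disk    = 5;

        long long capacity = bytes_per_sector * sectors_per_track *
                             tracks_per_surface * surfaces_per_platter *
                             platters_per_disk;

        /* Disk vendors use 1 GB = 10^9 bytes */
        printf("capacity = %lld bytes = %.2f GB\n", capacity, capacity / 1e9);
        return 0;
    }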

Disk Operation (Single-Platter View)
The disk surface spins at a fixed rotational rate. The read/write head is attached to the end of the arm and flies over the disk surface on a thin cushion of air. By moving radially, the arm can position the read/write head over any track.
[Figure: a single platter spinning around the spindle, with the arm sweeping the head across the tracks.]

Disk Operation (Multi-Platter View)
The read/write heads move in unison from cylinder to cylinder.
[Figure: several platters on one spindle, each surface with its own arm and head, all moving together.]

Disk Access Time
The average time to access a target sector is approximated by:
    Taccess = Tavg seek + Tavg rotation + Tavg transfer
Seek time (Tavg seek):
- Time to position the heads over the cylinder containing the target sector.
- Typical Tavg seek = 9 ms.
Rotational latency (Tavg rotation):
- Time waiting for the first bit of the target sector to pass under the read/write head.
- Tavg rotation = 1/2 x (1/RPM) x (60 secs/1 min)
Transfer time (Tavg transfer):
- Time to read the bits in the target sector.
- Tavg transfer = (1/RPM) x (1/(avg # sectors/track)) x (60 secs/1 min)

Disk Access Time Example
Given:
- Rotational rate = 7,200 RPM
- Average seek time = 9 ms
- Avg # sectors/track = 400
Derived:
- Tavg rotation = 1/2 x (60 secs / 7,200 RPM) x 1000 ms/sec = 4 ms
- Tavg transfer = (60 / 7,200 RPM) x (1/400) secs/track x 1000 ms/sec = 0.02 ms
- Taccess = 9 ms + 4 ms + 0.02 ms
Important points:
- Access time is dominated by seek time and rotational latency.
- The first bit in a sector is the most expensive; the rest are essentially free.
- SRAM access time is about 4 ns/doubleword, DRAM about 60 ns.
  - Disk is about 40,000 times slower than SRAM and 2,500 times slower than DRAM.
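A small C sketch of the same arithmetic; it simply transcribes the slide's formulas, with the example's parameter values, so the printed numbers come out slightly less rounded (about 4.17 ms rotation rather than 4 ms):

    #include <stdio.h>

    int main(void) {
        double rpm               = 7200.0;
        double seek_ms           = 9.0;
        double sectors_per_track = 400.0;

        /* Tavg rotation = 1/2 * (60 / RPM) seconds, converted to ms */
        double rotation_ms = 0.5 * (60.0 / rpm) * 1000.0;

        /* Tavg transfer = (60 / RPM) * (1 / sectors per track) seconds, in ms */
        double transfer_ms = (60.0 / rpm) * (1.0 / sectors_per_track) * 1000.0;

        double access_ms = seek_ms + rotation_ms + transfer_ms;
        printf("rotation = %.2f ms, transfer = %.2f ms, access = %.2f ms\n",
               rotation_ms, transfer_ms, access_ms);
        return 0;
    }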

Logical Disk Blocks
Modern disks present a simpler abstract view of the complex sector geometry:
- The set of available sectors is modeled as a sequence of b-sized logical blocks (0, 1, 2, ...).
Mapping between logical blocks and actual (physical) sectors:
- Maintained by a hardware/firmware device called the disk controller.
- Converts requests for logical blocks into (surface, track, sector) triples.
- Allows the controller to set aside spare cylinders for each zone.
  - This accounts for the difference between "formatted capacity" and "maximum capacity".
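To make the mapping concrete, here is an idealized sketch of the conversion, assuming a fixed number of sectors per track and no spare cylinders; real controllers use per-zone tables and remapping, so this is illustrative only, not how actual firmware works:

    #include <stdio.h>

    struct chs { long surface, track, sector; };

    /* Idealized logical-block-to-geometry mapping (illustrative only). */
    struct chs map_logical_block(long block, long sectors_per_track, long surfaces) {
        struct chs loc;
        loc.sector  = block % sectors_per_track;
        block      /= sectors_per_track;
        loc.surface = block % surfaces;
        loc.track   = block / surfaces;   /* equivalently, the cylinder number */
        return loc;
    }

    int main(void) {
        struct chs loc = map_logical_block(123456, 400, 10);
        printf("surface %ld, track %ld, sector %ld\n",
               loc.surface, loc.track, loc.sector);
        return 0;
    }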

I/O Bus
[Figure: the CPU chip (register file, ALU, bus interface) connects via the system bus to the I/O bridge, which connects via the memory bus to main memory and via the I/O bus to a USB controller (mouse, keyboard), a graphics adapter (monitor), a disk controller (disk), and expansion slots for other devices such as network adapters.]

Reading a Disk Sector (1)
The CPU initiates a disk read by writing a command, a logical block number, and a destination memory address to a port (address) associated with the disk controller.
[Figure: the write travels from the CPU over the I/O bus to the disk controller.]

Reading a Disk Sector (2)
The disk controller reads the sector and performs a direct memory access (DMA) transfer into main memory.
[Figure: data flows from the disk through the disk controller and the I/O bridge directly into main memory.]

Reading a Disk Sector (3)
When the DMA transfer completes, the disk controller notifies the CPU with an interrupt (i.e., it asserts a special "interrupt" pin on the CPU).
[Figure: the interrupt signal travels from the disk controller to the CPU chip.]

Storage Trends

SRAM
metric             1980     1985    1990   1995    2000    2000:1980
$/MB               19,200   2,900   320    256     100     190
access (ns)        300      150     35     15      2       100

DRAM
metric             1980     1985    1990   1995    2000    2000:1980
$/MB               8,000    880     100    30      1       8,000
access (ns)        375      200     100    70      60      6
typical size (MB)  0.064    0.256   4      16      64      1,000

Disk
metric             1980     1985    1990   1995    2000    2000:1980
$/MB               500      100     8      0.30    0.05    10,000
access (ms)        87       75      28     10      8       11
typical size (MB)  1        10      160    1,000   9,000   9,000

(Culled from back issues of Byte and PC Magazine)

CPU Clock Rates

                  1980    1985   1990   1995   2000    2000:1980
processor         8080    286    386    Pent   P-III
clock rate (MHz)  1       6      20     150    750     750
cycle time (ns)   1,000   166    50     6      1.6

The CPU-Memory Gap
The increasing gap between DRAM, disk, and CPU speeds.

Locality
Principle of Locality:
- Programs tend to reuse data and instructions near those they have used recently, or that were recently referenced themselves.
- Temporal locality: recently referenced items are likely to be referenced in the near future.
- Spatial locality: items with nearby addresses tend to be referenced close together in time.

Locality Example:

    sum = 0;
    for (i = 0; i < n; i++)
        sum += a[i];
    return sum;

- Data
  - Reference array elements in succession (stride-1 reference pattern): spatial locality
  - Reference sum each iteration: temporal locality
- Instructions
  - Reference instructions in sequence: spatial locality
  - Cycle through the loop repeatedly: temporal locality

Locality Example
Claim: being able to look at code and get a qualitative sense of its locality is a key skill for a professional programmer.
Question: does this function have good locality?

    int sumarrayrows(int a[M][N])
    {
        int i, j, sum = 0;

        for (i = 0; i < M; i++)
            for (j = 0; j < N; j++)
                sum += a[i][j];
        return sum;
    }

Locality Example
Question: does this function have good locality?

    int sumarraycols(int a[M][N])
    {
        int i, j, sum = 0;

        for (j = 0; j < N; j++)
            for (i = 0; i < M; i++)
                sum += a[i][j];
        return sum;
    }

Locality Example
Question: can you permute the loops so that the function scans the 3-d array a[] with a stride-1 reference pattern (and thus has good spatial locality)?

    int sumarray3d(int a[M][N][N])
    {
        int i, j, k, sum = 0;

        for (i = 0; i < M; i++)
            for (j = 0; j < N; j++)
                for (k = 0; k < N; k++)
                    sum += a[k][i][j];
        return sum;
    }
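One possible answer (a sketch, keeping the slide's a[k][i][j] access and loop bounds): make the loop over the last subscript (j) innermost and the loop over the first subscript (k) outermost, so successive iterations touch adjacent memory locations.

    /* Sketch of a stride-1 permutation: k indexes the first dimension, so it
       goes outermost; j indexes the last dimension, so it goes innermost. */
    int sumarray3d(int a[M][N][N])
    {
        int i, j, k, sum = 0;

        for (k = 0; k < N; k++)
            for (i = 0; i < M; i++)
                for (j = 0; j < N; j++)
                    sum += a[k][i][j];
        return sum;
    }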

Memory Hierarchies
Some fundamental and enduring properties of hardware and software:
- Fast storage technologies cost more per byte and have less capacity.
- The gap between CPU and main memory speed is widening.
- Well-written programs tend to exhibit good locality.
These fundamental properties complement each other beautifully. They suggest an approach for organizing memory and storage systems known as a memory hierarchy.

An Example Memory Hierarchy
[Figure: a pyramid of storage devices, smaller/faster/costlier per byte at the top, larger/slower/cheaper per byte at the bottom.]
- L0: registers. CPU registers hold words retrieved from the L1 cache.
- L1: on-chip L1 cache (SRAM). The L1 cache holds cache lines retrieved from the L2 cache.
- L2: off-chip L2 cache (SRAM). The L2 cache holds cache lines retrieved from main memory.
- L3: main memory (DRAM). Main memory holds disk blocks retrieved from local disks.
- L4: local secondary storage (local disks). Local disks hold files retrieved from disks on remote network servers.
- L5: remote secondary storage (distributed file systems, Web servers).

Caches
Cache: a smaller, faster storage device that acts as a staging area for a subset of the data in a larger, slower device.
Fundamental idea of a memory hierarchy:
- For each k, the faster, smaller device at level k serves as a cache for the larger, slower device at level k+1.
Why do memory hierarchies work?
- Programs tend to access the data at level k more often than they access the data at level k+1.
- Thus, the storage at level k+1 can be slower, and therefore larger and cheaper per bit.
- Net effect: a large pool of memory that costs as much as the cheap storage near the bottom, but that serves data to programs at the rate of the fast storage near the top.

Caching in a Memory Hierarchy
- The smaller, faster, more expensive device at level k caches a subset of the blocks from level k+1.
- Data is copied between levels in block-sized transfer units.
- The larger, slower, cheaper storage device at level k+1 is partitioned into blocks.
[Figure: level k holds a few blocks (e.g., 4, 8, 9, 10, 14) cached from level k+1, which is partitioned into blocks 0-15.]

General Caching Concepts
The program needs object d, which is stored in some block b.
Cache hit:
- The program finds b in the cache at level k (e.g., block 14).
Cache miss:
- b is not at level k, so the level k cache must fetch it from level k+1 (e.g., block 12).
- If the level k cache is full, then some current block must be replaced (evicted). Which one is the "victim"?
  - Placement policy: where can the new block go? E.g., b mod 4.
  - Replacement policy: which block should be evicted? E.g., LRU.
[Figure: the program requests block 12; level k currently holds blocks 0-3, 9, and 14 and must fetch block 12 from level k+1.]

General Caching Concepts
Types of cache misses:
- Cold (compulsory) miss
  - Cold misses occur because the cache is empty.
- Conflict miss
  - Most caches limit blocks at level k+1 to a small subset (sometimes a singleton) of the block positions at level k.
  - E.g., block i at level k+1 must be placed in block (i mod 4) at level k.
  - Conflict misses occur when the level k cache is large enough, but multiple data objects all map to the same level k block.
  - E.g., referencing blocks 0, 8, 0, 8, ... would miss every time.
- Capacity miss
  - Occurs when the set of active cache blocks (the working set) is larger than the cache.

Examples of Caching in the Hierarchy

Cache Type            What Cached            Where Cached          Latency (cycles)   Managed By
Registers             4-byte word            CPU registers         0                  Compiler
TLB                   Address translations   On-Chip TLB           0                  Hardware
L1 cache              32-byte block          On-Chip L1            1                  Hardware
L2 cache              32-byte block          Off-Chip L2                              Hardware
Virtual memory        4-KB page              Main memory           100                Hardware + OS
Buffer cache          Parts of files         Main memory           100                OS
Network buffer cache  Parts of files         Local disk            10,000             AFS/NFS client
Browser cache         Web pages              Local disk            10,000             Web browser
Web cache             Web pages              Remote server disks   1,000,000          Web proxy server

Cache Memories
CENG 331: Introduction to Computer Systems, 10th Lecture
Instructor: Erol Sahin
Acknowledgement: Most of the slides are adapted from the ones prepared by R. E. Bryant and D. R. O'Hallaron of Carnegie Mellon University.

Overview
Topics:
- Generic cache memory organization
- Direct mapped caches
- Set associative caches
- Impact of caches on performance

Cache Memories
Cache memories are small, fast SRAM-based memories managed automatically in hardware.
- They hold frequently accessed blocks of main memory.
The CPU looks first for data in L1, then in L2, then in main memory.
[Figure: typical bus structure. The CPU chip contains the register file, ALU, L1 cache, and bus interface; a cache bus connects to the off-chip L2 cache; the system bus and I/O bridge connect over the memory bus to main memory.]

Inserting an L1 Cache Between the CPU and Main Memory
- The tiny, very fast CPU register file has room for four 4-byte words. The transfer unit between the register file and the cache is a 4-byte block.
- The small, fast L1 cache has room for two 4-word blocks (lines 0 and 1). The transfer unit between the cache and main memory is a 4-word block (16 bytes).
- The big, slow main memory has room for many 4-word blocks (e.g., block 10 holds a b c d, block 21 holds p q r s, block 30 holds w x y z).

General Org of a Cache Memory
A cache is an array of S = 2^s sets. Each set contains E lines, and each line holds a block of B = 2^b bytes of data, along with t tag bits and 1 valid bit per line.
Cache size: C = B x E x S data bytes.
[Figure: sets 0 through S-1, each with E lines; every line has a valid bit, a tag, and bytes 0 through B-1 of its block.]

Addressing Caches
An m-bit address A is divided into three fields: t tag bits, s set-index bits, and b block-offset bits, i.e., <tag> <set index> <block offset>.
The word at address A is in the cache if the tag bits in one of the <valid> lines in set <set index> match <tag>. The word's contents begin at offset <block offset> bytes from the beginning of the block.
[Figure: the address fields select a set and a byte offset within that set's matching line.]
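A small C sketch of this decomposition (illustrative only; the field widths s and b and the example values in main are assumptions, not taken from any particular cache):

    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint64_t tag; uint64_t set; uint64_t offset; } addr_fields;

    /* Split an address into (tag, set index, block offset), given s set-index
       bits and b block-offset bits. */
    addr_fields split_address(uint64_t addr, int s, int b) {
        addr_fields f;
        f.offset = addr & ((1ULL << b) - 1);         /* low b bits          */
        f.set    = (addr >> b) & ((1ULL << s) - 1);  /* next s bits         */
        f.tag    = addr >> (s + b);                  /* remaining high bits */
        return f;
    }

    int main(void) {
        /* e.g., 64 sets (s = 6) and 32-byte blocks (b = 5) */
        addr_fields f = split_address(0xDEADBEEF, 6, 5);
        printf("tag=%llx set=%llx offset=%llx\n",
               (unsigned long long)f.tag, (unsigned long long)f.set,
               (unsigned long long)f.offset);
        return 0;
    }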

Direct-Mapped Cache
The simplest kind of cache, characterized by exactly one line per set (E = 1).
[Figure: sets 0 through S-1, each holding a single line with a valid bit, a tag, and a cache block.]

Accessing Direct-Mapped Caches
Set selection:
- Use the set index bits to determine the set of interest.
[Figure: the s set-index bits of the address select one of the S sets; the t tag bits and b block-offset bits are used in the next step.]

Accessing Direct-Mapped Caches
Line matching and word selection:
- Line matching: find a valid line in the selected set with a matching tag.
- Word selection: then extract the word.
The access is a hit if (1) the valid bit is set and (2) the tag bits in the cache line match the tag bits in the address; the block offset then selects the starting byte.
[Figure: selected set i holds a valid line with tag 0110 and word bytes w0-w3 at offsets 4-7; the address has tag 0110, set index i, and block offset 100, so the access hits and the word starts at byte 4.]

Direct-Mapped Cache Simulation
Parameters: M = 16 byte addresses (4-bit addresses split as t = 1 tag bit, s = 2 set-index bits, b = 1 block-offset bit), B = 2 bytes/block, S = 4 sets, E = 1 entry/set.
Address trace (reads, addresses shown in binary): 0 [0000], 1 [0001], 13 [1101], 8 [1000], 0 [0000]
- 0 [0000]: miss; set 0 now holds M[0-1] with tag 0.
- 1 [0001]: hit in set 0.
- 13 [1101]: miss; set 2 now holds M[12-13] with tag 1.
- 8 [1000]: miss; set 0 is replaced by M[8-9] with tag 1.
- 0 [0000]: miss; set 0 is replaced again by M[0-1] with tag 0.
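A minimal C sketch that replays this trace, with the parameters hard-coded to the slide's toy cache (an illustration, not a general cache simulator):

    #include <stdio.h>

    /* Toy direct-mapped cache: S = 4 sets, E = 1 line/set, B = 2 bytes/block,
       4-bit addresses (t = 1, s = 2, b = 1). Replays the trace from the slide. */
    int main(void) {
        int valid[4] = {0, 0, 0, 0};
        int tag[4]   = {0, 0, 0, 0};
        int trace[]  = {0, 1, 13, 8, 0};

        for (int i = 0; i < 5; i++) {
            int addr = trace[i];
            int set  = (addr >> 1) & 0x3;   /* s = 2 bits after the offset bit */
            int t    = addr >> 3;           /* t = 1 remaining high bit        */
            if (valid[set] && tag[set] == t) {
                printf("addr %2d: hit  (set %d)\n", addr, set);
            } else {
                printf("addr %2d: miss (set %d, loading M[%d-%d])\n",
                       addr, set, addr & ~1, (addr & ~1) + 1);
                valid[set] = 1;
                tag[set]   = t;
            }
        }
        return 0;
    }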

Why Use Middle Bits as Index?
High-order bit indexing:
- Adjacent memory lines would map to the same cache entry.
- Poor use of spatial locality.
Middle-order bit indexing:
- Consecutive memory lines map to different cache lines.
- The cache can hold a C-byte region of the address space at one time.
[Figure: a 4-line cache (sets 00-11) and the 16 memory lines 0000-1111, showing which cache line each memory line maps to under high-order versus middle-order bit indexing.]

Set Associative Caches
Characterized by more than one line per set (here, E = 2 lines per set).
[Figure: sets 0 through S-1, each holding two lines, each line with a valid bit, a tag, and a cache block.]

Accessing Set Associative Caches
Set selection:
- Identical to a direct-mapped cache: the set index bits select the set.
[Figure: the s set-index bits of the address select one of the S sets, each of which holds two lines.]

Accessing Set Associative Caches
Line matching and word selection:
- The tag must be compared against the tag in each valid line of the selected set.
The access is a hit if (1) the valid bit of some line is set and (2) that line's tag bits match the tag bits in the address; the block offset then selects the starting byte.
[Figure: selected set i holds two lines with tags 1001 and 0110; the address tag 0110 matches the second line, and block offset 100 selects byte 4, where word w0-w3 begins.]
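A sketch of the lookup loop over the E lines of a set (the line_t structure is an illustrative software model, not a description of any particular hardware):

    #include <stdint.h>
    #include <stddef.h>

    /* One cache line: valid bit, tag, and a pointer to its block of data. */
    typedef struct { int valid; uint64_t tag; unsigned char *block; } line_t;

    /* Search the E lines of the selected set for a valid, matching tag.
       Returns a pointer to the requested byte on a hit, or NULL on a miss. */
    unsigned char *lookup(line_t *set, int E, uint64_t tag, uint64_t offset) {
        for (int i = 0; i < E; i++) {
            if (set[i].valid && set[i].tag == tag)
                return set[i].block + offset;   /* hit: word starts at offset */
        }
        return NULL;                            /* miss: fetch from next level */
    }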

Multi-Level Caches
Options: separate data and instruction caches, or a unified cache.

                         size          speed   $/Mbyte    line size
Regs                     200 B         3 ns               8 B
L1 d-cache / i-cache     8-64 KB       3 ns               32 B
Unified L2 cache         1-4 MB SRAM   6 ns    $100/MB    32 B
Memory                   128 MB DRAM   60 ns   $1.50/MB   8 KB
Disk                     30 GB         8 ms    $0.05/MB

(Devices become larger, slower, and cheaper moving away from the processor.)

Intel Pentium Cache Hierarchy
- Regs
- L1 Data: 1-cycle latency, 16 KB, 4-way associative, write-through, 32 B lines
- L1 Instruction: 16 KB, 4-way, 32 B lines
- L2 Unified: 128 KB-2 MB, 4-way associative, write-back, write allocate, 32 B lines
- Main memory: up to 4 GB
(The registers and L1 caches sit on the processor chip.)

Intel i7 Processor
Earlier Intel Core 2 Duo and Quad processors had just an L1 and an L2 cache. The i7 features L1, L2, and shared L3 caches:
- 64 KB of L1 cache (32 KB instruction, 32 KB data) per core,
- 1 MB of total L2 cache, and
- 8 MB of L3 cache shared across all the cores.
That means that all Intel Core i7 processors have over 9 MB of memory right there on the 45 nm processor.

Intel i7 Architecture
[Figure: four cores, each with a 32 KB L1-I cache, a 32 KB L1-D cache, and a 256 KB L2 cache, all sharing an 8 MB L3 cache, an integrated memory controller, and the QuickPath interconnect.]
- The 32 KB L1 instruction cache is 4-way set associative.
- The 32 KB L1 data cache is 8-way set associative.
- The 256 KB unified (instruction + data) L2 cache is 8-way set associative.
- The 8 MB shared L3 cache is 16-way set associative.
- All cache lines are 64 bytes in size.
- MESI+F (forward) coherency.

Hits

L1 cache hit                                 4 cycles
L2 cache hit                                 10 cycles
L3 cache hit, line unshared                  ~40 cycles
L3 cache hit, shared line in another core    ~65 cycles
L3 cache hit, modified in another core       ~75 cycles
Local DRAM                                   ~60-180 cycles
Remote L3 cache or remote DRAM               ~100-300 cycles

TLB Cache
- 7-entry instruction TLB0, fully associative, maps 2 MB or 4 MB super pages
- 32-entry data TLB0, 4-way set associative, maps 2 MB or 4 MB super pages
- 64-entry instruction TLB, 4-way set associative, maps 4 KB pages
- 64-entry data TLB, 4-way set associative, maps 4 KB pages
- 512-entry shared second-level TLB, 4-way set associative, maps 4 KB pages

Cache Performance Metrics
Miss rate:
- Fraction of memory references not found in the cache (misses/references).
- Typical numbers: 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.
Hit time:
- Time to deliver a line in the cache to the processor (includes the time to determine whether the line is in the cache).
- Typical numbers: 1 clock cycle for L1; 3-8 clock cycles for L2.
Miss penalty:
- Additional time required because of a miss; typically 25-100 cycles for main memory.
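These three metrics are commonly combined into an average memory access time, AMAT = hit time + miss rate x miss penalty; the formula is standard but is not stated on the slide, and the values below are just illustrative picks from the "typical numbers" above.

    #include <stdio.h>

    int main(void) {
        double hit_time     = 1.0;    /* cycles, L1 hit                  */
        double miss_rate    = 0.05;   /* 5%, within the 3-10% L1 range   */
        double miss_penalty = 100.0;  /* cycles to main memory           */

        /* Standard average-memory-access-time formula (not from the slide). */
        double amat = hit_time + miss_rate * miss_penalty;
        printf("average access time = %.1f cycles\n", amat);  /* 6.0 cycles */
        return 0;
    }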

Writing Cache Friendly Code
- Repeated references to variables are good (temporal locality).
- Stride-1 reference patterns are good (spatial locality).
Examples: assume a cold cache, 4-byte words, and 4-word cache blocks.

    int sumarrayrows(int a[M][N])
    {
        int i, j, sum = 0;

        for (i = 0; i < M; i++)
            for (j = 0; j < N; j++)
                sum += a[i][j];
        return sum;
    }

Miss rate = 1/4 = 25%

    int sumarraycols(int a[M][N])
    {
        int i, j, sum = 0;

        for (j = 0; j < N; j++)
            for (i = 0; i < M; i++)
                sum += a[i][j];
        return sum;
    }

Miss rate = 100%

The Memory Mountain
Read throughput (read bandwidth):
- The number of bytes read from memory per second (MB/s).
Memory mountain:
- Measured read throughput as a function of spatial and temporal locality.
- A compact way to characterize memory system performance.

Memory Mountain Test Function

    /* The test function */
    void test(int elems, int stride)
    {
        int i, result = 0;
        volatile int sink;

        for (i = 0; i < elems; i += stride)
            result += data[i];
        sink = result; /* So compiler doesn't optimize away the loop */
    }

    /* Run test(elems, stride) and return read throughput (MB/s) */
    double run(int size, int stride, double Mhz)
    {
        double cycles;
        int elems = size / sizeof(int);

        test(elems, stride);                      /* warm up the cache */
        cycles = fcyc2(test, elems, stride, 0);   /* call test(elems, stride) */
        return (size / stride) / (cycles / Mhz);  /* convert cycles to MB/s */
    }

Memory Mountain Main Routine

    /* mountain.c - Generate the memory mountain. */
    #define MINBYTES  (1 << 10)        /* Working set size ranges from 1 KB */
    #define MAXBYTES  (1 << 23)        /* ... up to 8 MB */
    #define MAXSTRIDE 16               /* Strides range from 1 to 16 */
    #define MAXELEMS  MAXBYTES/sizeof(int)

    int data[MAXELEMS];                /* The array we'll be traversing */

    int main()
    {
        int size;                      /* Working set size (in bytes) */
        int stride;                    /* Stride (in array elements) */
        double Mhz;                    /* Clock frequency */

        init_data(data, MAXELEMS);     /* Initialize each element in data to 1 */
        Mhz = mhz(0);                  /* Estimate the clock frequency */
        for (size = MAXBYTES; size >= MINBYTES; size >>= 1) {
            for (stride = 1; stride <= MAXSTRIDE; stride++)
                printf("%.1f\t", run(size, stride, Mhz));
            printf("\n");
        }
        exit(0);
    }

The Memory Mountain
[Figure: the memory mountain, a surface plot of read throughput as a function of working-set size and stride.]

Ridges of Temporal Locality
A slice through the memory mountain with stride = 1 illuminates the read throughputs of the different caches and of main memory.
[Figure: read throughput versus working-set size at stride 1.]

A Slope of Spatial Locality
A slice through the memory mountain with size = 256 KB shows the effect of the cache block size.
[Figure: read throughput versus stride for a 256 KB working set.]

Matrix Multiplication Example
Major cache effects to consider:
- Total cache size: exploit temporal locality and keep the working set small (e.g., by using blocking).
- Block size: exploit spatial locality.
Description:
- Multiply N x N matrices.
- O(N^3) total operations.
- Accesses: N reads per source element; N values summed per destination (but these may be held in a register).

    /* ijk */
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++) {
            sum = 0.0;                  /* Variable sum held in register */
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

Miss Rate Analysis for Matrix Multiply
Assume:
- Line size = 32 B (big enough for four 64-bit words).
- Matrix dimension (N) is very large, so approximate 1/N as 0.0.
- The cache is not even big enough to hold multiple rows.
Analysis method:
- Look at the access pattern of the inner loop.
[Figure: matrices A (indexed by i, k), B (indexed by k, j), and C (indexed by i, j).]

Layout of C Arrays in Memory (review)
C arrays are allocated in row-major order:
- Each row occupies contiguous memory locations.
Stepping through columns in one row:
- for (i = 0; i < N; i++) sum += a[0][i];
- Accesses successive elements.
- If the block size B > 4 bytes, this exploits spatial locality: compulsory miss rate = 4 bytes / B.
Stepping through rows in one column:
- for (i = 0; i < n; i++) sum += a[i][0];
- Accesses distant elements: no spatial locality!
- Compulsory miss rate = 1 (i.e., 100%).
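A small sketch of the row-major address arithmetic (standard C layout, not code from the slide): for an int a[M][N], element a[i][j] sits i*N + j elements from the start of the array.

    #include <stdio.h>

    #define M 3
    #define N 4

    int main(void) {
        int a[M][N];
        /* Row-major layout: &a[i][j] == &a[0][0] + (i*N + j) */
        for (int i = 0; i < M; i++)
            for (int j = 0; j < N; j++)
                printf("a[%d][%d] is element %2d (byte offset %2zu)\n",
                       i, j, i * N + j,
                       (size_t)((char *)&a[i][j] - (char *)&a[0][0]));
        return 0;
    }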

Matrix Multiplication (ijk)

    /* ijk */
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++) {
            sum = 0.0;
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

Inner loop: A (i, *) is accessed row-wise, B (*, j) column-wise, and C (i, j) is fixed.
Misses per inner-loop iteration: A 0.25, B 1.0, C 0.0

Matrix Multiplication (jik)

    /* jik */
    for (j = 0; j < n; j++) {
        for (i = 0; i < n; i++) {
            sum = 0.0;
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

Inner loop: A (i, *) is accessed row-wise, B (*, j) column-wise, and C (i, j) is fixed.
Misses per inner-loop iteration: A 0.25, B 1.0, C 0.0

Matrix Multiplication (kij)

    /* kij */
    for (k = 0; k < n; k++) {
        for (i = 0; i < n; i++) {
            r = a[i][k];
            for (j = 0; j < n; j++)
                c[i][j] += r * b[k][j];
        }
    }

Inner loop: A (i, k) is fixed, B (k, *) is accessed row-wise, and C (i, *) row-wise.
Misses per inner-loop iteration: A 0.0, B 0.25, C 0.25

Matrix Multiplication (ikj)

    /* ikj */
    for (i = 0; i < n; i++) {
        for (k = 0; k < n; k++) {
            r = a[i][k];
            for (j = 0; j < n; j++)
                c[i][j] += r * b[k][j];
        }
    }

Inner loop: A (i, k) is fixed, B (k, *) is accessed row-wise, and C (i, *) row-wise.
Misses per inner-loop iteration: A 0.0, B 0.25, C 0.25

Matrix Multiplication (jki)

    /* jki */
    for (j = 0; j < n; j++) {
        for (k = 0; k < n; k++) {
            r = b[k][j];
            for (i = 0; i < n; i++)
                c[i][j] += a[i][k] * r;
        }
    }

Inner loop: A (*, k) is accessed column-wise, B (k, j) is fixed, and C (*, j) column-wise.
Misses per inner-loop iteration: A 1.0, B 0.0, C 1.0

Matrix Multiplication (kji)

    /* kji */
    for (k = 0; k < n; k++) {
        for (j = 0; j < n; j++) {
            r = b[k][j];
            for (i = 0; i < n; i++)
                c[i][j] += a[i][k] * r;
        }
    }

Inner loop: A (*, k) is accessed column-wise, B (k, j) is fixed, and C (*, j) column-wise.
Misses per inner-loop iteration: A 1.0, B 0.0, C 1.0

Summary of Matrix Multiplication
ijk (and jik): 2 loads, 0 stores; misses/iter = 1.25

    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++) {
            sum = 0.0;
            for (k = 0; k < n; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

kij (and ikj): 2 loads, 1 store; misses/iter = 0.5

    for (k = 0; k < n; k++) {
        for (i = 0; i < n; i++) {
            r = a[i][k];
            for (j = 0; j < n; j++)
                c[i][j] += r * b[k][j];
        }
    }

jki (and kji): 2 loads, 1 store; misses/iter = 2.0

    for (j = 0; j < n; j++) {
        for (k = 0; k < n; k++) {
            r = b[k][j];
            for (i = 0; i < n; i++)
                c[i][j] += a[i][k] * r;
        }
    }

Pentium Matrix Multiply Performance
Miss rates are helpful but not perfect predictors: code scheduling matters, too.
[Figure: measured performance of the six loop orderings as a function of array size.]

Improving Temporal Locality by Blocking
Example: blocked matrix multiplication.
- "Block" (in this context) does not mean "cache block"; instead, it means a sub-block within the matrix.
- Example: N = 8; sub-block size = 4. Each matrix is partitioned into four 4 x 4 sub-blocks:

    [A11 A12]   [B11 B12]   [C11 C12]
    [A21 A22] x [B21 B22] = [C21 C22]

Key idea: sub-blocks (i.e., Axy) can be treated just like scalars.

    C11 = A11*B11 + A12*B21      C12 = A11*B12 + A12*B22
    C21 = A21*B11 + A22*B21      C22 = A21*B12 + A22*B22

Blocked Matrix Multiply (bijk)

    for (jj = 0; jj < n; jj += bsize) {
        for (i = 0; i < n; i++)
            for (j = jj; j < min(jj+bsize, n); j++)
                c[i][j] = 0.0;
        for (kk = 0; kk < n; kk += bsize) {
            for (i = 0; i < n; i++) {
                for (j = jj; j < min(jj+bsize, n); j++) {
                    sum = 0.0;
                    for (k = kk; k < min(kk+bsize, n); k++) {
                        sum += a[i][k] * b[k][j];
                    }
                    c[i][j] += sum;
                }
            }
        }
    }

Blocked Matrix Multiply Analysis
- The innermost loop pair multiplies a 1 x bsize sliver of A by a bsize x bsize block of B and accumulates into a 1 x bsize sliver of C.
- The loop over i steps through n row slivers of A and C, using the same block of B.

    Innermost loop pair:
    for (i = 0; i < n; i++) {
        for (j = jj; j < min(jj+bsize, n); j++) {
            sum = 0.0;
            for (k = kk; k < min(kk+bsize, n); k++) {
                sum += a[i][k] * b[k][j];
            }
            c[i][j] += sum;
        }
    }

[Figure: the row sliver of A (accessed bsize times), the block of B (reused n times in succession), and the sliver of C whose successive elements are updated.]

Pentium Blocked Matrix Multiply Performance
Blocking (bijk and bikj) improves performance by a factor of two over the unblocked versions (ijk and jik), and the blocked versions are relatively insensitive to array size.
[Figure: measured performance of the blocked and unblocked versions as a function of array size.]

Concluding Observations
The programmer can optimize for cache performance:
- How data structures are organized.
- How data are accessed:
  - Nested loop structure.
  - Blocking is a general technique.
All systems favor "cache friendly code":
- Getting absolute optimum performance is very platform specific (cache sizes, line sizes, associativities, etc.).
- You can get most of the advantage with generic code:
  - Keep the working set reasonably small (temporal locality).
  - Use small strides (spatial locality).