Carnegie Mellon Virtual Memory: Systems, 15-213 / 18-213

Virtual Memory: Systems
15-213 / 18-213: Introduction to Computer Systems
17th Lecture, Oct. 25, 2012
Instructors: Dave O'Hallaron, Greg Ganger, and Greg Kesden

Today
• Virtual memory questions and answers
• Simple memory system example
• Bonus: Case study: Core i7/Linux memory system
• Bonus: Memory mapping

Virtual memory reminder/review
• Programmer's view of virtual memory
  - Each process has its own private linear address space
  - Cannot be corrupted by other processes
• System view of virtual memory
  - Uses memory efficiently by caching virtual memory pages
    - Efficient only because of locality
  - Simplifies memory management and programming
  - Simplifies protection by providing a convenient interpositioning point to check permissions

Recall: Address Translation With a Page Table
[Diagram: the page table base register (PTBR) points to the current process's page table. The virtual address is split into a virtual page number (VPN, bits n-1..p) and a virtual page offset (VPO, bits p-1..0). The VPN indexes the page table to fetch a PTE; if the valid bit is 0, the page is not in memory (page fault). Otherwise the PTE's physical page number (PPN) is concatenated with the VPO (which becomes the physical page offset, PPO) to form the m-bit physical address.]
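The translation step itself is just bit slicing plus one table lookup. Below is a minimal C sketch of what the MMU does for a single-level page table; the constants, the pte_t layout, and the function name are illustrative assumptions, not definitions from the lecture.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12                          /* assume 4 KB pages (p = 12) */
    #define VPO_MASK   ((1ull << PAGE_SHIFT) - 1)

    typedef struct {                               /* hypothetical PTE layout */
        uint64_t ppn;                              /* physical page number    */
        bool     valid;                            /* 0 => page not in memory */
    } pte_t;

    /* Translate va using a flat page table; returns false on a page fault. */
    bool translate(const pte_t *page_table, uint64_t va, uint64_t *pa)
    {
        uint64_t vpn = va >> PAGE_SHIFT;           /* virtual page number (VPN) */
        uint64_t vpo = va & VPO_MASK;              /* virtual page offset (VPO) */

        pte_t pte = page_table[vpn];               /* PTBR + VPN selects the PTE */
        if (!pte.valid)
            return false;                          /* page fault: handler runs   */

        *pa = (pte.ppn << PAGE_SHIFT) | vpo;       /* PPN concatenated with PPO  */
        return true;
    }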

Recall: Address Translation: Page Hit
[Diagram: CPU, MMU, and cache/memory exchanging VA, PTEA, PTE, PA, and data.]
1) Processor sends the virtual address (VA) to the MMU
2-3) MMU fetches the PTE from the page table in memory (via the PTE address, PTEA)
4) MMU sends the physical address (PA) to cache/memory
5) Cache/memory sends the data word to the processor

Question #1
• Are the PTEs cached like other memory accesses?
• Yes (and no: see the next question)

Page tables in memory, like other data
[Diagram: the MMU's PTE address (PTEA) is looked up in the L1 cache first; on a PTEA miss it goes to memory, exactly like an ordinary data access. The translated physical address (PA) likewise either hits in L1 or goes to memory before the data comes back.]
VA: virtual address, PA: physical address, PTE: page table entry, PTEA: PTE address

Question #2
• Isn't it slow to have to go to memory twice every time?
• Yes, it would be... so real MMUs don't

Speeding up Translation with a TLB
• Page table entries (PTEs) are cached in L1 like any other memory word
  - PTEs may be evicted by other data references
  - A PTE hit still requires a small L1 delay
• Solution: Translation Lookaside Buffer (TLB)
  - Small, dedicated, super-fast hardware cache of PTEs in the MMU
  - Contains complete page table entries for a small number of pages
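To make the TLB index/tag split concrete, here is a small C sketch of a set-associative TLB lookup. The geometry (4 sets x 4 ways) and the tlb_entry_t layout are assumptions for illustration, not a description of real hardware.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12
    #define TLB_SETS   4                       /* assumed: 16 entries, 4-way */
    #define TLB_WAYS   4

    typedef struct {
        uint64_t tag;                          /* upper VPN bits (TLBT) */
        uint64_t ppn;                          /* cached translation    */
        bool     valid;
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_SETS][TLB_WAYS];

    /* Index with the low VPN bits (TLBI), compare the rest (TLBT) per way. */
    bool tlb_lookup(uint64_t va, uint64_t *ppn)
    {
        uint64_t vpn  = va >> PAGE_SHIFT;
        uint64_t tlbi = vpn % TLB_SETS;        /* set index */
        uint64_t tlbt = vpn / TLB_SETS;        /* tag       */

        for (int way = 0; way < TLB_WAYS; way++) {
            if (tlb[tlbi][way].valid && tlb[tlbi][way].tag == tlbt) {
                *ppn = tlb[tlbi][way].ppn;     /* hit: no page table access needed */
                return true;
            }
        }
        return false;                          /* miss: walk the page table, then refill */
    }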

TLB Hit
[Diagram: CPU sends the VA to the MMU; the MMU sends the VPN to the TLB, which returns the PTE; the MMU then sends the PA to cache/memory, which returns the data.]
A TLB hit eliminates a memory access.

TLB Miss
[Diagram: on a TLB miss, the MMU first fetches the PTE from the page table in cache/memory (via the PTEA) and installs it in the TLB, then issues the PA and returns the data.]
A TLB miss incurs an additional memory access (the PTE).
Fortunately, TLB misses are rare. Why?

Question #3
• Isn't the page table huge? How can it be stored in RAM?
• Yes, it would be... so real page tables aren't simple arrays

Multi-Level Page Tables
• Suppose: 4 KB (2^12) page size, 64-bit address space, 8-byte PTEs
• Problem: a single-level page table would need 2^64 * 2^-12 * 2^3 = 2^55 bytes, about 32,000 TB!
• Common solution: multi-level page tables
  - Example: 2-level page table
    - Level 1 table: each PTE points to a Level 2 page table (always memory resident)
    - Level 2 table: each PTE points to a page (paged in and out like any other data)

A Two-Level Page Table Hierarchy
[Diagram: 32-bit addresses, 4 KB pages, 4-byte PTEs. Level 1 PTEs 0 and 1 point to Level 2 tables whose entries map VP 0 through VP 2047: 2K allocated VM pages for code and data. Level 1 PTEs 2-7 are null, corresponding to a gap of 6K unallocated VM pages. Level 1 PTE 8 points to a Level 2 table containing 1023 null PTEs (1023 unallocated pages) and one final PTE mapping VP 9215, the single allocated VM page for the stack. The remaining (1K - 9) Level 1 PTEs are null.]

Translating with a k-level Page Table
[Diagram: the n-bit virtual address is divided into k VPN fields (VPN 1 ... VPN k) plus the VPO. VPN 1 indexes the Level 1 page table, whose entry points to a Level 2 table, and so on down the levels; the Level k PTE supplies the PPN, which is combined with the VPO (= PPO) to form the m-bit physical address.]
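The k-level walk can be written as a short loop: peel off one VPN field per level and follow the pointer stored in that level's PTE. This is a simplified sketch under assumed widths (9-bit VPN fields, 4 KB pages, k = 4); the table_at callback that turns a PPN into a pointer to its table is hypothetical.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12                 /* 4 KB pages (assumed)        */
    #define VPN_BITS    9                 /* VPN bits consumed per level */
    #define LEVELS      4                 /* k = 4, as on x86-64         */

    typedef struct { uint64_t ppn; bool valid; } pte_t;   /* hypothetical PTE */

    /* Walk a k-level page table; root_ppn plays the role of CR3. */
    bool walk(uint64_t root_ppn, uint64_t va, uint64_t *pa,
              pte_t *(*table_at)(uint64_t ppn))
    {
        uint64_t ppn = root_ppn;

        for (int level = 0; level < LEVELS; level++) {
            /* Level 0 uses the most significant VPN field (VPN 1). */
            int shift = PAGE_SHIFT + VPN_BITS * (LEVELS - 1 - level);
            uint64_t index = (va >> shift) & ((1u << VPN_BITS) - 1);

            pte_t pte = table_at(ppn)[index];
            if (!pte.valid)
                return false;             /* fault: table or page not resident */
            ppn = pte.ppn;                /* descend to the next level         */
        }

        *pa = (ppn << PAGE_SHIFT) | (va & ((1ull << PAGE_SHIFT) - 1));
        return true;
    }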

Question #4
• Shouldn't fork() be really slow, since the child needs a copy of the parent's address space?
• Yes, it would be... so fork() doesn't really work that way

Physical memory can be shared
[Diagram: Process 1 virtual memory, physical memory, Process 2 virtual memory, and a shared object.]
• Process 1 maps the shared pages

Physical memory can be shared
[Diagram: both processes now map the same physical pages of the shared object.]
• Process 2 maps the shared pages
• Notice how the virtual addresses can be different

Private Copy-on-write (COW) sharing
[Diagram: Process 1 and Process 2 each map a private copy-on-write object through the same physical pages.]
• Two processes mapping private copy-on-write (COW) pages
• Area flagged as private copy-on-write
• PTEs in private areas are flagged as read-only

Private Copy-on-write (COW) sharing
[Diagram: Process 2 writes to a page in its private copy-on-write area.]
• An instruction writing to a private page triggers a protection fault
• The handler creates a new R/W copy of the page
• The instruction restarts upon handler return
• Copying is deferred as long as possible!

The fork Function Revisited
• fork provides a private address space for each process
• To create the virtual address space for the new process:
  - Create exact copies of the parent's page tables
  - Flag each page in both processes (parent and child) as read-only
  - Flag writeable areas in both processes as private COW
• On return, each process has an exact copy of virtual memory
• Subsequent writes create new physical pages using the COW mechanism
• Perfect approach for the common case of fork() followed by exec()
  - Why?
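The "fork followed by exec" pattern the last bullet refers to looks like the snippet below: the child's COW address space is discarded almost immediately by the exec call, so eagerly copying it would have been wasted work. This is ordinary POSIX usage, not code from the lecture.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();               /* child gets a COW view of the parent */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {
            /* Child: replace the (barely touched) address space with a new
             * program, so almost none of the COW pages ever get copied.    */
            char *argv[] = { "/bin/ls", "-l", NULL };
            execv(argv[0], argv);
            perror("execv");              /* reached only if execv fails */
            exit(1);
        }
        waitpid(pid, NULL, 0);            /* parent waits for the child */
        return 0;
    }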

Today
• Virtual memory questions and answers
• Simple memory system example
• Bonus: Case study: Core i7/Linux memory system
• Bonus: Memory mapping

Review of Symbols
• Basic parameters
  - N = 2^n: Number of addresses in the virtual address space
  - M = 2^m: Number of addresses in the physical address space
  - P = 2^p: Page size (bytes)
• Components of the virtual address (VA)
  - VPO: Virtual page offset
  - VPN: Virtual page number
  - TLBI: TLB index
  - TLBT: TLB tag
• Components of the physical address (PA)
  - PPO: Physical page offset (same as VPO)
  - PPN: Physical page number
  - CO: Byte offset within cache line
  - CI: Cache index
  - CT: Cache tag

Simple Memory System Example
• Addressing
  - 14-bit virtual addresses
  - 12-bit physical addresses
  - Page size = 64 bytes
[Diagram: the virtual address splits into VPN (bits 13-6) and VPO (bits 5-0); the physical address splits into PPN (bits 11-6) and PPO (bits 5-0).]
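With 64-byte pages the offset is the low 6 bits, so extracting VPN/VPO is a shift and a mask. A small C sketch for this toy system; the helper names are made up for illustration.

    #include <stdio.h>

    #define P_BITS 6                                   /* 64-byte pages => 6 offset bits */

    static unsigned vpn_of(unsigned va) { return va >> P_BITS; }              /* VA bits 13..6 */
    static unsigned vpo_of(unsigned va) { return va & ((1u << P_BITS) - 1); } /* VA bits 5..0  */

    int main(void)
    {
        unsigned va = 0x03D4;                          /* the VA used in Example #1 below */
        printf("VA 0x%04X -> VPN 0x%02X, VPO 0x%02X\n", va, vpn_of(va), vpo_of(va));
        return 0;
    }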

Simple Memory System Page Table
Only the first 16 entries (out of 256) are shown:

    VPN  PPN  Valid        VPN  PPN  Valid
    00   28   1            08   13   1
    01   –    0            09   17   1
    02   33   1            0A   09   1
    03   02   1            0B   –    0
    04   –    0            0C   –    0
    05   16   1            0D   2D   1
    06   –    0            0E   11   1
    07   –    0            0F   0D   1

Simple Memory System TLB
• 16 entries, 4-way set associative
• The VPN splits into TLBT (VA bits 13-8) and TLBI (bits 7-6); bits 5-0 are the VPO.

    Set   Tag PPN Valid   Tag PPN Valid   Tag PPN Valid   Tag PPN Valid
    0     03  –   0       09  0D  1       00  –   0       07  02  1
    1     03  2D  1       02  –   0       04  –   0       0A  –   0
    2     02  –   0       08  –   0       06  –   0       03  –   0
    3     07  –   0       03  0D  1       0A  34  1       02  –   0

Simple Memory System Cache
• 16 lines, 4-byte block size
• Physically addressed
• Direct mapped
• The physical address splits into CT (bits 11-6), CI (bits 5-2), and CO (bits 1-0); CT covers the PPN, while CI and CO fall within the PPO.

    Idx  Tag  Valid  B0 B1 B2 B3      Idx  Tag  Valid  B0 B1 B2 B3
    0    19   1      99 11 23 11      8    24   1      3A 00 51 89
    1    15   0      –  –  –  –       9    2D   0      –  –  –  –
    2    1B   1      00 02 04 08      A    2D   1      93 15 DA 3B
    3    36   0      –  –  –  –       B    0B   0      –  –  –  –
    4    32   1      43 6D 8F 09      C    12   0      –  –  –  –
    5    0D   1      36 72 F0 1D      D    16   1      04 96 34 15
    6    31   0      –  –  –  –       E    13   1      83 77 1B D3
    7    16   1      11 C2 DF 03      F    14   0      –  –  –  –

Address Translation Example #1
Virtual Address: 0x03D4 (binary 00 0011 1101 0100)
  VPN: 0x0F    TLBI: 0x3    TLBT: 0x03    TLB Hit? Yes    Page Fault? No    PPN: 0x0D
Physical Address: 0x354 (binary 0011 0101 0100)
  CO: 0x0    CI: 0x5    CT: 0x0D    Cache Hit? Yes    Byte: 0x36
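The same example written out as explicit bit arithmetic (in C-comment form); every value below is taken from the page table, TLB, and cache on the preceding slides.

    /* VA 0x03D4 = 0b 00001111 010100 (VPN | VPO)
     *   VPO  = 0x3D4 & 0x3F   = 0x14
     *   VPN  = 0x3D4 >> 6     = 0x0F
     *   TLBI = VPN & 0x3      = 0x3     (low 2 VPN bits; 4 TLB sets)
     *   TLBT = VPN >> 2       = 0x03    (high 6 VPN bits)
     *   TLB set 3 contains (tag 03, PPN 0D, valid 1)  => TLB hit, PPN = 0x0D
     *
     * PA = (PPN << 6) | VPO = (0x0D << 6) | 0x14 = 0x354
     *   CO = PA & 0x3         = 0x0     (byte offset within 4-byte block)
     *   CI = (PA >> 2) & 0xF  = 0x5     (16 cache lines)
     *   CT = PA >> 6          = 0x0D
     *   Cache line 5: tag 0D, valid 1, bytes {36, 72, F0, 1D}  => hit, byte 0x36
     */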

Address Translation Example #2
Virtual Address: 0x0B8F (binary 00 1011 1000 1111)
  VPN: 0x2E    TLBI: 0x2    TLBT: 0x0B    TLB Hit? No    Page Fault? Yes    PPN: TBD
Physical Address: not yet known (the page fault handler must first bring the page in)
  CO: –    CI: –    CT: –    Cache Hit? –    Byte: –

Address Translation Example #3
Virtual Address: 0x0020 (binary 00 0000 0010 0000)
  VPN: 0x00    TLBI: 0x0    TLBT: 0x00    TLB Hit? No    Page Fault? No    PPN: 0x28
Physical Address: 0xA20 (binary 1010 0010 0000)
  CO: 0x0    CI: 0x8    CT: 0x28    Cache Hit? No    Byte: returned from memory (Mem)

Today
• Virtual memory questions and answers
• Simple memory system example
• Bonus: Case study: Core i7/Linux memory system
• Bonus: Memory mapping

Intel Core i7 Memory System
[Diagram: processor package with 4 cores, a shared L3 cache, QuickPath interconnect, and an integrated DDR3 memory controller.]
• Per core: registers, instruction fetch, and MMU (address translation)
  - L1 d-cache: 32 KB, 8-way; L1 i-cache: 32 KB, 8-way
  - L1 d-TLB: 64 entries, 4-way; L1 i-TLB: 128 entries, 4-way
  - L2 unified cache: 256 KB, 8-way; L2 unified TLB: 512 entries, 4-way
• Shared by all cores:
  - L3 unified cache: 8 MB, 16-way
  - QuickPath interconnect: 4 links @ 25.6 GB/s each, to the other cores and the I/O bridge
  - DDR3 memory controller: 3 x 64 bit @ 10.66 GB/s (32 GB/s total), to main memory

Review of Symbols
• Basic parameters
  - N = 2^n: Number of addresses in the virtual address space
  - M = 2^m: Number of addresses in the physical address space
  - P = 2^p: Page size (bytes)
• Components of the virtual address (VA)
  - TLBI: TLB index
  - TLBT: TLB tag
  - VPO: Virtual page offset
  - VPN: Virtual page number
• Components of the physical address (PA)
  - PPO: Physical page offset (same as VPO)
  - PPN: Physical page number
  - CO: Byte offset within cache line
  - CI: Cache index
  - CT: Cache tag

End-to-end Core i7 Address Translation
[Diagram: the CPU issues a 32/64-bit virtual address; the 48-bit VA used for translation splits into a 36-bit VPN and a 12-bit VPO. The VPN is looked up in the L1 TLB (16 sets, 4 entries/set; 4-bit TLBI, 32-bit TLBT). On a TLB miss, CR3 points to the first of four page tables, indexed by VPN1-VPN4 (9 bits each), producing a 40-bit PPN. The PPN plus the 12-bit PPO form the physical address, which the L1 d-cache (64 sets, 8 lines/set) interprets as a 40-bit CT, 6-bit CI, and 6-bit CO; L1 misses go to L2, L3, and main memory.]

Core i7 Level 1-3 Page Table Entries
[Format when P=1: bit 63 XD; bits 62-52 unused; bits 51-12 page table physical base address; bits 11-9 unused; bit 8 G; bit 7 PS; bit 5 A; bit 4 CD; bit 3 WT; bit 2 U/S; bit 1 R/W; bit 0 P. When P=0, the remaining bits are available to the OS (e.g., the page table's location on disk).]
Each entry references a 4 KB child page table.
• P: Child page table present in physical memory (1) or not (0)
• R/W: Read-only or read-write access permission for all reachable pages
• U/S: User or supervisor (kernel) mode access permission for all reachable pages
• WT: Write-through or write-back cache policy for the child page table
• CD: Caching disabled or enabled for the child page table
• A: Reference bit (set by MMU on reads and writes, cleared by software)
• PS: Page size, either 4 KB or 4 MB (defined for Level 1 PTEs only)
• G: Global page (don't evict from TLB on task switch)
• Page table physical base address: 40 most significant bits of the physical page table address (forces page tables to be 4 KB aligned)

Core i7 Level 4 Page Table Entries
[Format when P=1: bit 63 XD; bits 62-52 unused; bits 51-12 page physical base address; bits 11-9 unused; bit 8 G; bit 6 D; bit 5 A; bit 4 CD; bit 3 WT; bit 2 U/S; bit 1 R/W; bit 0 P. When P=0, the remaining bits are available to the OS (e.g., the page's location on disk).]
Each entry references a 4 KB child page.
• P: Child page is present in memory (1) or not (0)
• R/W: Read-only or read-write access permission for the child page
• U/S: User or supervisor mode access
• WT: Write-through or write-back cache policy for this page
• CD: Cache disabled (1) or enabled (0)
• A: Reference bit (set by MMU on reads and writes, cleared by software)
• D: Dirty bit (set by MMU on writes, cleared by software)
• G: Global page (don't evict from TLB on task switch)
• Page physical base address: 40 most significant bits of the physical page address (forces pages to be 4 KB aligned)
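A rough C sketch of decoding a level-4 entry according to the bit positions above. The field positions follow the slide; the struct and function names are my own, and a real kernel manipulates these entries with raw masks rather than a decoded struct.

    #include <stdint.h>
    #include <stdbool.h>

    /* Decoded view of an x86-64 level-4 (last level) PTE, per the slide. */
    typedef struct {
        bool     present;     /* bit 0:  P   */
        bool     writable;    /* bit 1:  R/W */
        bool     user;        /* bit 2:  U/S */
        bool     write_thru;  /* bit 3:  WT  */
        bool     cache_off;   /* bit 4:  CD  */
        bool     accessed;    /* bit 5:  A   */
        bool     dirty;       /* bit 6:  D   */
        bool     global;      /* bit 8:  G   */
        bool     no_exec;     /* bit 63: XD  */
        uint64_t page_base;   /* bits 51..12: physical base address of the page */
    } pte4_t;

    static pte4_t decode_pte4(uint64_t raw)
    {
        pte4_t p;
        p.present    = raw & (1ull << 0);
        p.writable   = raw & (1ull << 1);
        p.user       = raw & (1ull << 2);
        p.write_thru = raw & (1ull << 3);
        p.cache_off  = raw & (1ull << 4);
        p.accessed   = raw & (1ull << 5);
        p.dirty      = raw & (1ull << 6);
        p.global     = raw & (1ull << 8);
        p.no_exec    = raw & (1ull << 63);
        p.page_base  = raw & 0x000FFFFFFFFFF000ull;   /* 40 bits, 4 KB aligned */
        return p;
    }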

Core i7 Page Table Translation
[Diagram: CR3 holds the physical address of the L1 page table (page global directory). VPN1 (9 bits) indexes the L1 table, where each entry covers a 512 GB region; the L1 PTE gives the 40-bit address of the L2 table (page upper directory), indexed by VPN2 (1 GB per entry); the L2 PTE points to the L3 table (page middle directory), indexed by VPN3 (2 MB per entry); the L3 PTE points to the L4 page table, indexed by VPN4 (4 KB per entry). The L4 PTE supplies the 40-bit PPN, which is combined with the 12-bit VPO (the offset into both the physical and virtual page) to form the physical address.]

Cute Trick for Speeding Up L1 Access
[Diagram: address translation changes the 36-bit VPN into the 36-bit PPN (the cache tag, CT) but leaves the 12-bit page offset unchanged; the 6-bit CI and 6-bit CO come entirely from that offset.]
• Observation
  - The bits that determine CI are identical in the virtual and physical address
  - Can index into the cache while address translation is taking place
  - Generally we hit in the TLB, so the PPN bits (CT bits) are available next
  - "Virtually indexed, physically tagged"
  - Cache carefully sized to make this possible
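The "carefully sized" remark reduces to one inequality: the index and block-offset bits must fit entirely inside the page offset, otherwise the set index would depend on translated bits. A one-line C check using the Core i7 numbers quoted above (64 sets, 64-byte lines, 4 KB pages):

    #include <assert.h>

    #define CI_BITS        6    /* log2(64 sets)       */
    #define CO_BITS        6    /* log2(64-byte lines) */
    #define PAGE_OFF_BITS 12    /* log2(4 KB pages)    */

    /* Virtually indexed, physically tagged is safe only if the set index
     * comes entirely from the untranslated page-offset bits.             */
    static_assert(CI_BITS + CO_BITS <= PAGE_OFF_BITS,
                  "cache index must be contained in the page offset");

This is also one reason L1 caches tend to grow by adding ways rather than sets: more sets would push the index into the translated bits.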

Today
• Virtual memory questions and answers
• Simple memory system example
• Bonus: Case study: Core i7/Linux memory system
• Bonus: Memory mapping

Memory Mapping
• VM areas are initialized by associating them with disk objects
  - This process is known as memory mapping
• An area can be backed by (i.e., get its initial values from):
  - A regular file on disk (e.g., an executable object file)
    - Initial page bytes come from a section of the file
  - An anonymous file (e.g., nothing)
    - The first fault allocates a physical page full of 0's (demand-zero page)
    - Once the page is written to (dirtied), it is like any other page
• Dirty pages are copied back and forth between memory and a special swap file

Demand paging
• Key point: no virtual pages are copied into physical memory until they are referenced!
  - Known as demand paging
• Crucial for time and space efficiency

User-Level Memory Mapping
    void *mmap(void *start, int len, int prot, int flags, int fd, int offset)
• Map len bytes starting at offset offset of the file specified by file descriptor fd, preferably at address start
  - start: may be 0 (NULL) for "pick an address"
  - prot: PROT_READ, PROT_WRITE, ...
  - flags: MAP_ANON, MAP_PRIVATE, MAP_SHARED, ...
• Returns a pointer to the start of the mapped area (which may not be start)
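For example, a private demand-zero area (the kind used for the heap and .bss) can be requested directly from mmap with an anonymous mapping. This is standard POSIX usage rather than code from the slides; error handling is kept minimal.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 1 << 20;                       /* 1 MB of demand-zero pages */

        /* MAP_ANONYMOUS: no backing file (fd = -1); pages read as 0 on first touch.
         * MAP_PRIVATE:   writes stay private to this process (copy-on-write).      */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            exit(1);
        }

        strcpy(p, "hello from a mapped page");      /* first write faults the page in */
        printf("%s\n", p);

        munmap(p, len);
        return 0;
    }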

User-Level Memory Mapping
    void *mmap(void *start, int len, int prot, int flags, int fd, int offset)
[Diagram: len bytes of the disk file identified by descriptor fd, beginning at byte offset, are mapped into process virtual memory as len bytes starting at start (or at an address chosen by the kernel).]

Using mmap to Copy Files
• Copying without transferring data to user space.

    #include "csapp.h"

    /*
     * mmapcopy - uses mmap to copy file fd to stdout
     */
    void mmapcopy(int fd, int size)
    {
        /* Ptr to memory-mapped VM area */
        char *bufp;

        bufp = Mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
        Write(1, bufp, size);
        return;
    }

    /* mmapcopy driver */
    int main(int argc, char **argv)
    {
        struct stat stat;
        int fd;

        /* Check for required cmdline arg */
        if (argc != 2) {
            printf("usage: %s <filename>\n", argv[0]);
            exit(0);
        }

        /* Copy the input arg to stdout */
        fd = Open(argv[1], O_RDONLY, 0);
        Fstat(fd, &stat);
        mmapcopy(fd, stat.st_size);
        exit(0);
    }

Virtual Memory of a Linux Process
[Diagram: kernel virtual memory sits above the process's user space. The kernel portion contains process-specific data structures (page tables, task and mm structs, kernel stack), which differ for each process, plus a mapping of physical memory and the kernel code and data, which are identical for each process. The user portion contains, from top to bottom: the user stack (top at %esp), the memory-mapped region for shared libraries, the runtime heap (grown by malloc up to brk), uninitialized data (.bss), initialized data (.data), and program text (.text), which begins at 0x08048000 (32-bit) or 0x00400000 (64-bit); address 0 is at the bottom.]

Linux Organizes VM as Collection of "Areas"
[Diagram: a process's task_struct has an mm field pointing to its mm_struct; the mm_struct's pgd field holds the page global directory address and its mmap field heads a linked list of vm_area_struct's, one per area (text, data, shared libraries, ...), each with vm_start, vm_end, vm_prot, vm_flags, and vm_next fields.]
• pgd: Page global directory address; points to the L1 page table
• vm_prot: Read/write permissions for this area
• vm_flags: Pages shared with other processes or private to this process

Linux Page Fault Handling
[Diagram: three faulting accesses checked against the process's list of vm_area_struct's (shared libraries, data, text).]
1. A read of an address that lies in no area: segmentation fault (accessing a non-existent page)
2. A write to a read-only (text) page: protection exception, e.g., violating permissions by writing to a read-only page (Linux reports this as a segmentation fault)
3. A read of a valid but non-resident data page: normal page fault
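The handler's decision procedure on this slide can be summarized in a few lines of C. The field names mirror the slide (vm_start, vm_end, vm_prot, vm_next), but the rest, including the VM_WRITE bit, is a simplified sketch and not actual Linux kernel code.

    /* Simplified sketch of the Linux page fault decision. */
    #define VM_WRITE 0x2                             /* stand-in permission bit */

    struct vm_area_struct {
        unsigned long vm_start, vm_end;              /* area spans [vm_start, vm_end) */
        unsigned long vm_prot;                       /* read/write permissions        */
        struct vm_area_struct *vm_next;
    };

    enum fault_result { SEGFAULT, PROT_FAULT, NORMAL_FAULT };

    enum fault_result handle_fault(struct vm_area_struct *areas,
                                   unsigned long addr, int is_write)
    {
        for (struct vm_area_struct *a = areas; a; a = a->vm_next) {
            if (addr >= a->vm_start && addr < a->vm_end) {
                if (is_write && !(a->vm_prot & VM_WRITE))
                    return PROT_FAULT;               /* case 2: write to read-only page */
                return NORMAL_FAULT;                 /* case 3: page it in, then retry  */
            }
        }
        return SEGFAULT;                             /* case 1: address lies in no area */
    }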

The execve Function Revisited
[Diagram: the new process image. User stack: private, demand-zero. Memory-mapped region for shared libraries (libc.so .data and .text): shared, file-backed. Runtime heap (via malloc): private, demand-zero. Uninitialized data (.bss): private, demand-zero. Initialized data (.data) and program text (.text) from a.out: private, file-backed.]
To load and run a new program a.out in the current process using execve:
• Free the vm_area_struct's and page tables for the old areas
• Create vm_area_struct's and page tables for the new areas
  - Program text and initialized data are backed by object files
  - .bss and stack are backed by anonymous files
• Set the PC to the entry point in .text
  - Linux will fault in code and data pages as needed