Demand Paged Virtual Memory
Andy Wang
Operating Systems, COP 4610 / CGS 5765
Up to this point…
- We assumed that a process loads its entire address space before running (e.g., 0x0 to 0xFFFF)
- Observation: 90% of execution time is spent in 10% of the code
Demand Paging
- Demand paging: only pages that are actively referenced are loaded into memory
  - Remaining pages stay on disk
  - Provides the illusion of infinite physical memory
Demand Paging Mechanism
- Page table entries sometimes need to point to disk locations (as opposed to memory locations)
- Each table entry needs a present (valid) bit
  - Present means the page is in memory
  - Not present means a reference to the page triggers a page fault
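A page table entry like this can be sketched as a C bit-field. The layout below is purely illustrative (not any particular architecture's format); the point is that one word holds the flag bits plus either a frame number or a disk location, depending on the present bit.

```c
#include <stdint.h>

/* A hypothetical 32-bit page table entry. When `present` is 1,
 * `frame` holds a physical frame number; when it is 0, the same
 * bits can hold a disk block index for the paged-out page. */
typedef struct {
    uint32_t present  : 1;   /* 1 = page is in memory                 */
    uint32_t writable : 1;   /* 0 = read-only (e.g., code pages)      */
    uint32_t used     : 1;   /* set by hardware on each reference     */
    uint32_t modified : 1;   /* set by hardware on each write         */
    uint32_t frame    : 20;  /* frame number, or disk block if absent */
} pte_t;
```

Bit-field ordering is implementation-defined in C, so real kernels usually use explicit masks and shifts; the struct form is just easier to read on a slide.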
Page Fault
- A hardware trap
- The OS performs the following steps while running other processes (analogy: firing and hiring someone):
  1. Choose a page to evict
  2. If the page has been modified, write its contents to disk
  3. Change the corresponding page table entry and TLB entry
  4. Load the new page into memory from disk
  5. Update the page table entry
  6. Continue the thread
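The steps above can be acted out on a toy machine where "disk" and "memory" are just arrays. The sketch below uses a FIFO victim choice and invented names (`handle_fault`, `resident`, `NFRAMES`); it is an illustration of the sequence of steps, not a real kernel routine.

```c
#include <string.h>

#define NVPAGES 4   /* virtual pages in the toy address space */
#define NFRAMES 2   /* physical frames                        */

static char disk[NVPAGES][8];                 /* backing store          */
static char memory[NFRAMES][8];               /* physical memory        */
static int  resident[NFRAMES] = { -1, -1 };   /* virtual page per frame */
static int  modified[NFRAMES];                /* dirty bit per frame    */
static int  next_victim = 0;                  /* FIFO eviction pointer  */

/* The slide's steps on a toy machine: pick a victim, write it back
 * if modified, load the new page, update the mapping. */
static int handle_fault(int vpage) {
    int f = next_victim;                        /* 1. choose a page       */
    next_victim = (next_victim + 1) % NFRAMES;
    if (resident[f] >= 0 && modified[f])        /* 2. write back if dirty */
        memcpy(disk[resident[f]], memory[f], 8);
    /* 3. here the page table and TLB entries for the evicted
     *    page would be marked not-present                       */
    memcpy(memory[f], disk[vpage], 8);          /* 4. load new page       */
    resident[f] = vpage;                        /* 5. update page table   */
    modified[f] = 0;
    return f;                                   /* 6. continue the thread */
}
```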
Transparent Page Faults
- Page faults are transparent (invisible) mechanisms
  - A process does not know one happened
- The OS needs to save the processor state and the faulting instruction
More on Transparent Page Faults
- An instruction may have side effects
  - Hardware needs to either unwind or finish off those side effects

  ld r1, x   // page fault
More on Transparent Page Faults
- Hardware designers need to understand virtual memory
  - Unwinding instructions is not always possible
  - Example: a block transfer instruction whose source region (source begin to source end) overlaps its destination region (dest begin to dest end); once the transfer has partially overwritten the source, the instruction cannot simply be restarted
Page Replacement Policies
- Random replacement: replace a random page
  + Easy to implement in hardware (e.g., TLB)
  - May toss out useful pages
- First in, first out (FIFO): toss out the oldest page
  + Fair to all pages
  - May toss out pages that are heavily used
More Page Replacement Policies
- Optimal (MIN): replace the page that will not be used for the longest time
  + Optimal
  - Requires knowing the future
- Least recently used (LRU): replace the page that has not been used for the longest time
  + Good if past use predicts future use
  - Tricky to implement efficiently
More Page Replacement Policies
- Least frequently used (LFU): replace the page that is used least often
  - Tracks a usage count for each page
  + Good if past use predicts future use
  - Pages with high counts are difficult to replace, even after they stop being used
Example
- A process makes references to 4 pages: A, B, E, and R
  - Reference stream: B E E R B A R E B E A R
- Physical memory size: 3 pages
FIFO
- 7 page faults; 4 of them are compulsory misses (the first references to B, E, R, and A)
- A letter marks the page loaded on a fault; * marks a hit

  Ref     B  E  E  R  B  A  R  E  B  E  A  R
  Page 1  B           *  A              *  R
  Page 2     E  *              *  B
  Page 3           R        *        E
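The fault count can be checked mechanically with a small FIFO simulator. This is a sketch: `fifo_faults` is a helper name invented here, and the reference string is the one from the example.

```c
/* Count page faults for FIFO replacement on a reference string. */
static int fifo_faults(const char *refs, int nframes) {
    char frames[16];
    int oldest = 0, used = 0, faults = 0;
    for (int i = 0; refs[i]; i++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) {
            frames[used++] = refs[i];        /* free frame available  */
        } else {
            frames[oldest] = refs[i];        /* evict the oldest page */
            oldest = (oldest + 1) % nframes;
        }
    }
    return faults;
}
```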
MIN
- 6 page faults (B, E, R, A, B, R); a letter marks a fault, * marks a hit

  Ref     B  E  E  R  B  A  R  E  B  E  A  R
  Page 1  B           *  A              *  R
  Page 2     E  *              *     *
  Page 3           R        *     B
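MIN can be simulated too, since the whole reference string is known in advance. The sketch below (helper name `min_faults` invented here) evicts the resident page whose next use lies farthest in the future.

```c
#include <string.h>

/* MIN: evict the page whose next use is farthest in the future. */
static int min_faults(const char *refs, int nframes) {
    int n = (int)strlen(refs);
    char frames[16];
    int used = 0, faults = 0;
    for (int t = 0; t < n; t++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[t]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) { frames[used++] = refs[t]; continue; }
        int victim = 0, farthest = -1;
        for (int f = 0; f < nframes; f++) {
            int next = n;                    /* n means "never used again" */
            for (int j = t + 1; j < n; j++)
                if (refs[j] == frames[f]) { next = j; break; }
            if (next > farthest) { farthest = next; victim = f; }
        }
        frames[victim] = refs[t];
    }
    return faults;
}
```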
LRU
- 8 page faults (B, E, R, A, E, B, A, R); a letter marks a fault, * marks a hit

  Ref     B  E  E  R  B  A  R  E  B  E  A  R
  Page 1  B           *        E     *
  Page 2     E  *        A        B        R
  Page 3           R        *           A
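Perfect LRU can be simulated by stamping each frame with the time of its last use. A sketch (helper name `lru_faults` invented here):

```c
/* LRU: evict the page whose last use is furthest in the past. */
static int lru_faults(const char *refs, int nframes) {
    char frames[16];
    int last[16];                            /* time of last use per frame */
    int used = 0, faults = 0;
    for (int t = 0; refs[t]; t++) {
        int hit = -1;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[t]) { hit = f; break; }
        if (hit >= 0) { last[hit] = t; continue; }
        faults++;
        if (used < nframes) {
            frames[used] = refs[t]; last[used] = t; used++;
        } else {
            int victim = 0;
            for (int f = 1; f < nframes; f++)
                if (last[f] < last[victim]) victim = f;
            frames[victim] = refs[t]; last[victim] = t;
        }
    }
    return faults;
}
```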
LFU
- 7 page faults (B, E, R, A, R, A, R); a letter marks a fault, * marks a hit
- B and E build up high usage counts (3 and 4), so the third frame keeps bouncing between A and R

  Ref     B  E  E  R  B  A  R  E  B  E  A  R
  Page 1  B           *           *
  Page 2     E  *              *     *
  Page 3           R     A  R           A  R
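An LFU simulator needs a usage count per resident page. The sketch below (helper name `lfu_faults` invented here) resets a page's count when it is reloaded; that is one simple design choice, and on this reference string the fault count comes out the same either way.

```c
/* LFU: evict the page with the smallest usage count. */
static int lfu_faults(const char *refs, int nframes) {
    char frames[16];
    int count[16];                           /* usage count per frame */
    int used = 0, faults = 0;
    for (int t = 0; refs[t]; t++) {
        int hit = -1;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[t]) { hit = f; break; }
        if (hit >= 0) { count[hit]++; continue; }
        faults++;
        if (used < nframes) {
            frames[used] = refs[t]; count[used] = 1; used++;
        } else {
            int victim = 0;
            for (int f = 1; f < nframes; f++)
                if (count[f] < count[victim]) victim = f;
            frames[victim] = refs[t];
            count[victim] = 1;               /* count resets on reload */
        }
    }
    return faults;
}
```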
Does adding RAM always reduce misses?
- Yes for LRU and MIN
  - The memory content of X pages is a subset of the content of X + 1 pages
- No for FIFO
  - Due to modulo math
  - Belady’s anomaly: getting more page faults by increasing the memory size
Belady’s Anomaly
- FIFO, 3 memory pages, 9 page faults; a letter marks a fault, * marks a hit

  Ref     A  B  C  D  A  B  E  A  B  C  D  E
  Page 1  A           D        E              *
  Page 2     B           A        *     C
  Page 3        C           B        *     D
Belady’s Anomaly
- FIFO, 4 memory pages, 10 page faults: more memory, more faults

  Ref     A  B  C  D  A  B  E  A  B  C  D  E
  Page 1  A           *     E           D
  Page 2     B           *     A           E
  Page 3        C                 B
  Page 4           D                 C
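The anomaly can be demonstrated directly with a FIFO fault counter on this reference string. A self-contained sketch (`fifo_faults` is a helper name invented here):

```c
/* Count page faults for FIFO replacement on a reference string. */
static int fifo_faults(const char *refs, int nframes) {
    char frames[16];
    int oldest = 0, used = 0, faults = 0;
    for (int i = 0; refs[i]; i++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) {
            frames[used++] = refs[i];        /* free frame available  */
        } else {
            frames[oldest] = refs[i];        /* evict the oldest page */
            oldest = (oldest + 1) % nframes;
        }
    }
    return faults;
}
```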
Implementing LRU
- Perfect LRU requires a timestamp on each reference to a cached page
  - Too expensive
- Common practice: approximate the LRU behavior
Clock Algorithm
- Replaces an old page, but not necessarily the oldest page
- Arranges physical pages in a circle, with a clock hand
- Each page has a used bit
  - Set to 1 on reference
- On a page fault, sweep the clock hand:
  - If the used bit == 1, set it to 0 and advance
  - If the used bit == 0, pick that page for replacement
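The sweep can be sketched in a few lines of C. The names (`clock_victim`, `used_bit`, `NPAGES`) are invented for the illustration; a real kernel would keep the used bit in the page table entry, where the hardware sets it.

```c
#define NPAGES 6

static int used_bit[NPAGES];   /* set to 1 when the page is referenced */
static int hand = 0;           /* position of the clock hand           */

/* Sweep on a page fault: clear used bits until a page with
 * used == 0 is found; that page is the victim. */
static int clock_victim(void) {
    for (;;) {
        if (used_bit[hand] == 0) {
            int victim = hand;
            hand = (hand + 1) % NPAGES;
            return victim;
        }
        used_bit[hand] = 0;    /* second chance: clear the bit, move on */
        hand = (hand + 1) % NPAGES;
    }
}
```

The loop always terminates: every pass either finds a clear bit or clears one, so after at most one full revolution some page has used == 0.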
[Figure: the clock hand sweeps around a circle of pages, clearing used bits from 1 to 0; the first page it finds with used bit 0 is replaced.]
Clock Algorithm
- The clock hand cannot sweep indefinitely
  - Each used bit is eventually cleared
- Slow-moving hand: few page faults
- Quick-moving hand: many page faults
Nth Chance Algorithm
- A variant of the clock algorithm
  - A page has to be swept N times before being replaced
  - As N → ∞, the Nth chance algorithm approaches LRU
- Common implementation:
  - N = 2 for modified pages
  - N = 1 for unmodified pages
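A minimal sketch of the variant, extending the clock sweep with a per-page sweep counter (all names invented for the illustration):

```c
#define NPAGES 4

static int used_bit[NPAGES];   /* set to 1 when the page is referenced   */
static int sweeps[NPAGES];     /* consecutive sweeps with used bit clear */
static int hand = 0;

/* Nth chance: a page is replaced only after the hand has passed it
 * N times without the page being referenced in between. */
static int nth_chance_victim(int N) {
    for (;;) {
        int f = hand;
        hand = (hand + 1) % NPAGES;
        if (used_bit[f]) {
            used_bit[f] = 0;   /* referenced recently: reset its count */
            sweeps[f] = 0;
        } else if (++sweeps[f] >= N) {
            sweeps[f] = 0;
            return f;
        }
    }
}
```

With a larger N for modified pages, dirty pages survive extra sweeps, giving the OS more chances to write them back before eviction.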
States for a Page Table Entry
- Used bit: set when a page is referenced; cleared by the clock algorithm
- Modified bit: set when a page is modified; cleared when the page is written to disk
- Valid bit: set when a program can legitimately use this entry
- Read-only bit: set for pages a program may read but not modify (e.g., code pages)
Thrashing
- Occurs when the memory is overcommitted
  - Pages that are still needed are tossed out
- Example:
  - A process needs 50 memory pages
  - The machine has only 40 memory pages
  - Pages must constantly move between memory and disk
Thrashing Avoidance
- Programs should minimize their maximum memory requirement at any given time
  - e.g., matrix multiplication can be broken into submatrix multiplications
- The OS figures out the memory needed by each process
  - Runs only the computations that can fit in RAM
Working Set
- The set of pages that was referenced in the previous T seconds
  - As T → ∞, the working set grows toward the size of the entire process
- Observation: beyond a certain threshold, more memory only slightly reduces the number of page faults
Working Set
- LRU, 3 memory pages, 12 page faults; a letter marks a fault, * marks a hit

  Ref     A  B  C  D  A  B  C  D  E  F  G  H
  Page 1  A           D        C        F
  Page 2     B           A        D        G
  Page 3        C           B        E        H
Working Set
- LRU, 4 memory pages, 8 page faults

  Ref     A  B  C  D  A  B  C  D  E  F  G  H
  Page 1  A           *           E
  Page 2     B           *           F
  Page 3        C           *           G
  Page 4           D           *           H
Working Set
- LRU, 5 memory pages, 8 page faults: the fifth page buys nothing

  Ref     A  B  C  D  A  B  C  D  E  F  G  H
  Page 1  A           *              F
  Page 2     B           *              G
  Page 3        C           *              H
  Page 4           D           *
  Page 5                        E
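The threshold effect can be checked by running an LRU fault counter over increasing memory sizes. A self-contained sketch (`lru_faults` is a helper name invented here; the reference string is the one from this example):

```c
/* LRU: evict the page whose last use is furthest in the past. */
static int lru_faults(const char *refs, int nframes) {
    char frames[16];
    int last[16];                            /* time of last use per frame */
    int used = 0, faults = 0;
    for (int t = 0; refs[t]; t++) {
        int hit = -1;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[t]) { hit = f; break; }
        if (hit >= 0) { last[hit] = t; continue; }
        faults++;
        if (used < nframes) {
            frames[used] = refs[t]; last[used] = t; used++;
        } else {
            int victim = 0;
            for (int f = 1; f < nframes; f++)
                if (last[f] < last[victim]) victim = f;
            frames[victim] = refs[t]; last[victim] = t;
        }
    }
    return faults;
}
```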
Global and Local Replacement Policies
- Global replacement policy: all pages are in a single pool (e.g., UNIX)
  - A process that needs more memory grabs it from a process that needs less
  + Flexible
  - One process can drag down the entire system
- Per-process replacement policy: each process has its own pool of pages