
CMPT 300 Operating System I Chapter 4 Memory Management

Why Memory Management?

- Why money management? Because there is never enough money. The same holds for memory: there is never enough.
- Parkinson's law: programs expand to fill the memory available to hold them.
  - "640 KB memory is enough for everyone" - Bill Gates
- Programmers' ideal: an infinitely large, infinitely fast, nonvolatile memory.
- Reality: a memory hierarchy, from registers to cache to main memory to magnetic disk to magnetic tape.

What Is Memory Management?

- Memory manager: the part of the OS that manages the memory hierarchy.
  - Keeps track of which parts of memory are in use and which are not.
  - Allocates and de-allocates memory to processes.
  - Manages swapping between main memory and disk.
- Basic memory management: every program is placed in main memory and run as a whole.
- Swapping & paging: move processes back and forth between main memory and disk.

Outline

- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation

Monoprogramming

- One program runs at a time, sharing memory with the OS.
- The OS loads the program from disk into memory.
- Three variations (figure): (a) the OS in RAM at the bottom of memory with the user program above it; (b) the OS in ROM at the top of memory (up to address 0xFFF…) with the user program below; (c) device drivers in ROM at the top, the user program in the middle, and the OS in RAM at the bottom.

Multiprogramming With Fixed Partitions

- Advantages of multiprogramming?
- Scenario: multiple programs at a time. Problem: how to allocate memory?
- Divide memory into n partitions; a partition can hold at most one program (process) at a time.
  - Equal partitions vs. unequal partitions; the division can be done manually when the system is brought up.
- Each partition has a job queue: when a job arrives, put it into the input queue of the smallest partition large enough to hold it (a minimal sketch follows below).
- Any space in a partition not used by its job is lost.
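
A minimal sketch, not from the slides, of routing an arriving job to the smallest partition that can hold it. The partition sizes mirror the figure on the next slide; the job sizes are illustrative assumptions.

    /* Pick the smallest fixed partition large enough for a job. */
    #include <stdio.h>

    #define NPART 4

    /* Partition sizes in KB (assumed, matching the example figure). */
    static const int part_size[NPART] = {100, 200, 300, 100};

    /* Returns the index of the smallest adequate partition, or -1. */
    int pick_partition(int job_kb) {
        int best = -1;
        for (int i = 0; i < NPART; i++)
            if (part_size[i] >= job_kb &&
                (best < 0 || part_size[i] < part_size[best]))
                best = i;
        return best;
    }

    int main(void) {
        int jobs[] = {50, 250, 120, 900};
        for (int i = 0; i < 4; i++) {
            int p = pick_partition(jobs[i]);
            if (p >= 0)
                printf("%d KB job -> partition %d (%d KB)\n",
                       jobs[i], p + 1, part_size[p]);
            else
                printf("%d KB job fits no partition\n", jobs[i]);
        }
        return 0;
    }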

Example: Multiprogramming With Fixed Partitions

Figure: memory is divided at 0, 100 K, 200 K, 400 K, 700 K and 800 K. The OS occupies 0-100 K; partition 1 is 100-200 K, partition 2 is 200-400 K, partition 3 is 400-700 K, and partition 4 is 700-800 K. Each partition has its own input queue of waiting jobs.

Single Input Queue

- Disadvantage of multiple input queues: small jobs may wait in a busy queue while a partition with larger memory stands empty.
- Solution: a single input queue shared by all partitions.

Figure: the same partition layout as before, but with one input queue feeding all partitions.

How to Pick Jobs?

- Pick the first job in the queue that fits an empty partition.
  - Fast, but may waste a large partition on a small job.
- Pick the largest job that fits an empty partition.
  - Memory efficient, but the smallest jobs may be interactive ones needing the best service, and this policy serves them slowly.
- Policies for efficiency and fairness:
  - Keep at least one small partition around.
  - A job may not be skipped more than k times.

A Naïve Model for Multiprogramming

- Goal: determine the number of processes to keep in main memory so that the CPU stays busy.
  - Multiprogramming improves CPU utilization.
- If, on average, a process computes 20% of the time it sits in memory, then 5 processes can keep the CPU busy all the time (5 × 20% = 100%).
- This assumes the processes never all wait for I/O at the same time. Too optimistic!

A Probabilistic Model

- A process spends a fraction p of its time waiting for I/O to complete (0 < p < 1).
- With n processes in memory at once, the probability that all n processes are waiting for I/O simultaneously is p^n.
- CPU utilization = 1 - p^n.
- This assumes the processes are independent of each other.
  - Not true in reality: a process may have to wait for another process to give up the CPU. A precise analysis would use queueing theory.

CPU Utilization

Figure: CPU utilization 1 - p^n as a function of the degree of multiprogramming n, for several values of the I/O wait fraction p.
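
A minimal sketch, not from the slides, that tabulates the utilization formula above; the sample values of p are illustrative assumptions.

    /* Tabulate CPU utilization = 1 - p^n for a few I/O wait fractions. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double waits[] = {0.2, 0.5, 0.8}; /* assumed values of p */
        for (int i = 0; i < 3; i++)
            for (int n = 1; n <= 10; n++)
                printf("p=%.1f n=%2d utilization=%.3f\n",
                       waits[i], n, 1.0 - pow(waits[i], n));
        return 0;
    }

For p = 0.8, even 10 processes reach only about 89% utilization, which is why heavily I/O-bound workloads need a high degree of multiprogramming.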

Memory Management for Multiprogramming

- Relocation
  - When a program is compiled, it assumes its starting address is 0 (logical addresses).
  - When it is loaded into memory, it may start at any address (physical addresses).
  - How do we map logical addresses to physical addresses?
- Protection
  - A program's accesses should be confined to its proper area.

Relocation & Protection

- Logical addresses are what programs use, e.g., "call the procedure at logical address 100".
- Physical addresses are what execution uses: if that procedure is in partition 1, which starts at physical address 100 K, the procedure is actually at 100 K + 100.
- Relocation problem: translating between logical addresses and physical addresses.
- Protection problem: a malicious program can jump into space belonging to other users, e.g., by generating an instruction on the fly that reads or writes any word in memory.

Relocation/Protection Using Registers

- Base register: holds the start address of the partition.
  - The content of the base register is added to every memory address generated, e.g., with base register 100 K, CALL 100 becomes CALL 100 K + 100.
- Limit register: holds the length of the partition.
  - Every address is checked against the limit register.
- Disadvantage: an addition and a comparison on every memory reference (a minimal sketch follows below).
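
A minimal sketch, not from the slides, of what base/limit translation does on each reference; the register values are illustrative assumptions.

    /* Base/limit relocation and protection for one partition. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static const uint32_t base_reg  = 100 * 1024; /* partition start  */
    static const uint32_t limit_reg = 100 * 1024; /* partition length */

    static uint32_t translate(uint32_t logical) {
        if (logical >= limit_reg) {      /* protection: out of bounds */
            fprintf(stderr, "protection fault at %u\n", logical);
            exit(EXIT_FAILURE);
        }
        return base_reg + logical;       /* relocation */
    }

    int main(void) {
        printf("logical 100 -> physical %u\n", translate(100));
        return 0;
    }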

Outline

- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation

In Time-sharing/Interactive Systems…

- There is not enough main memory to hold all currently active processes.
  - Intuition: excess processes must be kept on disk and brought in to run dynamically.
- Swapping: bring each process in in its entirety.
  - Assumption: each process fits in main memory but cannot finish in one run.
- Virtual memory: allow programs to run even when they are only partially in main memory.
  - No assumption about program size.

Swapping

Figure: memory allocation changing over time as processes A, B, C and D are swapped in and out; the holes between processes appear, move and change size (e.g., A is swapped out to make room for D, then B is swapped out).

Swapping vs. Fixed Partitions

- With swapping, the number, location and size of partitions vary dynamically.
  - Flexibility and improved memory utilization.
  - But allocating, de-allocating and keeping track of memory becomes more complicated.
- Memory compaction: combine the "holes" in memory into one big hole.
  - Makes allocation more efficient.
  - Requires a lot of CPU time, so it is rarely used in real systems.

Enlarging Memory for a Process

- Fixed-size processes: easy.
- Growing processes:
  - Expand into the adjacent hole, if there is one.
  - Otherwise, wait, or swap some processes out to create a large enough hole.
  - If the swap area on disk is also full, the process must wait or be killed.
- Alternatively, allocate extra room for growth whenever a process is swapped in or moved.

Handling Growing Processes

Figure: (a) processes A and B with room for growth reserved above each one, for processes with a single growing data segment; (b) each process laid out as program, data and stack, with the data segment growing upward and the stack growing downward into a shared area of room for growth.

Memory Management With Bitmaps

- Two ways to keep track of memory usage: bitmaps and free lists.
- Bitmaps:
  - Memory is divided into allocation units.
  - One bit per unit: 0 = free, 1 = occupied.

Figure: a stretch of memory holding segments A through E with holes between them, and the corresponding bitmap.

Size of Allocation Units

- Example: with 4-byte units, 1 bit in the map covers 32 bits of memory, so the bitmap takes up 1/33 of memory.
- Trade-off between allocation unit size and memory utilization:
  - A smaller allocation unit means a larger bitmap.
  - A larger allocation unit means a smaller bitmap, but on average half of the last unit of each process is wasted.
- To bring a k-unit process into memory, we need a hole of k units: search the map for k consecutive 0 bits, potentially scanning the entire map (a minimal sketch follows below).
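
A minimal sketch, not from the slides, of the consecutive-zero search; for readability it stores one unit per byte instead of packing 8 units per byte as a real bitmap would.

    /* Find k consecutive free units in a bitmap (0=free, 1=occupied). */
    #include <stdio.h>

    int find_hole(const unsigned char *map, int nunits, int k) {
        int run = 0;
        for (int i = 0; i < nunits; i++) {
            run = (map[i] == 0) ? run + 1 : 0;
            if (run == k)
                return i - k + 1;  /* first unit of the hole */
        }
        return -1;                 /* no hole of k units */
    }

    int main(void) {
        /* Layout is an illustrative assumption. */
        unsigned char map[] = {1,1,1,0,0,0,1,1,0,0,0,0,1};
        printf("hole of 4 units starts at unit %d\n",
               find_hole(map, (int)sizeof map, 4));
        return 0;
    }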

Memory Management With Linked Lists

- Each list entry describes either a hole (H) or a process (P), recording its start address and its length, e.g., "P, starts at 20, length 6".
- The list is kept sorted by address.

Figure: a memory layout containing processes A through E with holes between them, and the corresponding list: (P,0,5), (H,5,3), (P,8,6), (P,14,4), (H,18,2), (P,20,6), (P,26,3), (H,29,3).

Updating Linked Lists

- When a process terminates, combine adjacent holes if possible.
  - (No merging is necessary with bitmaps.)

Figure: the four cases when process X terminates, depending on whether each neighbor is a process or a hole; any adjacent holes are merged with the freed space.

Allocating Memory for New Processes

- First fit: use the first hole that fits the request.
  - Break the hole into two pieces: a process entry plus a smaller hole.
- Next fit: like first fit, but start each search where the last one left off.
  - Empirical evidence: slightly worse performance than first fit.
- Best fit: take the smallest hole that is adequate.
  - Slower, and it generates tiny useless holes.
- Worst fit: always take the largest hole.

A minimal first-fit sketch follows below.
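
A minimal sketch, not from the slides, of first fit over an address-sorted segment list; the node layout is an assumption made for illustration.

    /* First fit: turn (part of) the first adequate hole into a process. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct seg {
        int is_hole;         /* 1 = hole (H), 0 = process (P) */
        int start, len;      /* in allocation units */
        struct seg *next;
    } seg;

    seg *first_fit(seg *head, int len) {
        for (seg *s = head; s != NULL; s = s->next) {
            if (!s->is_hole || s->len < len)
                continue;
            if (s->len == len) { /* exact fit: hole becomes process */
                s->is_hole = 0;
                return s;
            }
            /* Split: s becomes the process, a new node keeps the rest. */
            seg *rest = malloc(sizeof *rest);
            rest->is_hole = 1;
            rest->start = s->start + len;
            rest->len = s->len - len;
            rest->next = s->next;
            s->is_hole = 0;
            s->len = len;
            s->next = rest;
            return s;
        }
        return NULL;             /* no hole large enough */
    }

    int main(void) {
        seg hole = {1, 0, 10, NULL};
        seg *p = first_fit(&hole, 4);
        if (p != NULL)
            printf("P at %d len %d; hole now at %d len %d\n",
                   p->start, p->len, p->next->start, p->next->len);
        return 0;
    }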

Using Distinct Lists

- Keep distinct lists for processes and holes.
  - The list of holes can be sorted on size, which makes best fit faster.
  - Problem: when a process is freed, merging holes becomes very costly.
- Quick fit: group holes by size, with different lists for different sizes.
  - E.g., list 1 for 4 KB holes, list 2 for 8 KB holes, and so on. (Where does a 5 KB hole go?)
  - Speeds up searching, but merging holes is still costly.

Outline

- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation

Why Virtual Memory?

- If the program is too big to fit in memory…
  - Split the program into pieces (overlays), and swap overlays in and out.
  - Problem: the programmer must do the work of splitting the program into pieces.
- Virtual memory: the OS takes care of everything.
  - The program may be larger than the available physical memory.
  - Keep the parts currently in use in memory, and put the other parts on disk.

Virtual and Physical Addresses

- Virtual addresses (VA) are used and generated by programs.
  - Each process has its own virtual address space. E.g., in MOV REG, 1000, the 1000 is a VA.
- Physical addresses (PA) are used in execution.
- The MMU, inside the CPU package, maps VAs to PAs on the way to the memory bus.

Figure: the CPU and MMU in the CPU package, connected over the bus to memory and the disk controller.

Paging

- The virtual address space is divided into pages; physical memory is divided into page frames.
  - Pages and page frames are always the same size, usually from 512 B to 64 KB.
  - Memory is allocated in units of pages.
- Typically #pages > #page frames: on a 32-bit PC, the VA space can be as large as 4 GB while physical memory may be less than 1 GB.
- In hardware, a present/absent bit keeps track of which pages are physically present in memory.
- Page fault: an unmapped page is requested.
  - The OS picks a little-used page frame and writes its content back to the disk.
  - It then fetches the wanted page into the page frame just freed.

Paging: An Example

- Page size 4 KB; 16 virtual pages (a 64 KB virtual address space) and 8 page frames (32 KB of physical memory).
- Page table (virtual page range -> page frame, X = unmapped):
  0-4 K -> 2, 4-8 K -> 1, 8-12 K -> 6, 12-16 K -> 0, 16-20 K -> 4, 20-24 K -> 3, 36-40 K -> 5, 44-48 K -> 7, all others -> X
- Worked translations:
  - VA 0 lies in page 0 (0-4095), mapped to page frame 2, so PA = 8192.
  - VA 8192 lies in page 2 (8192-12287), mapped to page frame 6, so PA = 24576.
  - VA 8199 is page 2, offset 7 -> page frame 6, offset 7 -> PA = 24576 + 7 = 24583.
  - VA 32789 lies in page 8, which is unmapped -> page fault.
- A translation sketch follows below.
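
A minimal sketch, not from the slides, reproducing the translations above; the page table array mirrors the example's mapping.

    /* VA -> PA translation with 4 KB pages; -1 marks an unmapped page. */
    #include <stdio.h>

    #define PAGE_SIZE 4096u

    static const int page_table[16] = {
        2, 1, 6, 0, 4, 3, -1, -1, -1, 5, -1, 7, -1, -1, -1, -1
    };

    static int translate(unsigned va) {
        unsigned page = va / PAGE_SIZE, off = va % PAGE_SIZE;
        if (page >= 16 || page_table[page] < 0) {
            printf("VA %u: page %u unmapped -> page fault\n", va, page);
            return -1;
        }
        return (int)(page_table[page] * PAGE_SIZE + off);
    }

    int main(void) {
        unsigned vas[] = {0, 8192, 8199, 32789};
        for (int i = 0; i < 4; i++) {
            int pa = translate(vas[i]);
            if (pa >= 0)
                printf("VA %u -> PA %d\n", vas[i], pa);
        }
        return 0;
    }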

The Magic in the MMU

Figure: the internal operation of the MMU; the incoming virtual address is split into a page number, which indexes the page table, and an offset, which is copied unchanged; the page frame number from the table entry is combined with the offset to form the outgoing physical address.

Page Table

- Maps virtual pages onto page frames.
  - A VA is split into a page number and an offset.
  - Each page number has one entry in the page table.
- The page table can be extremely large.
  - With 32-bit virtual addresses and 4 KB pages: 2^32 / 2^12 = 2^20, about 1 M pages. What about 64-bit VAs?
- Each process needs its own page table.

Typical Page Table Entry

- Entry size: usually 32 bits.
- Page frame number: the goal of the page mapping.
- Present/absent bit: is the page in memory?
- Protection: what kinds of access are permitted.
- Modified bit (the "dirty bit"): has the page been written? If so, it must be written back to disk later.
- Referenced bit: has the page been referenced?
- Caching disabled bit: always read from the device rather than a cached copy?

A sketch of such an entry as a C bitfield follows below.
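
A minimal sketch, not from the slides, of such an entry as a C bitfield; the field widths are illustrative assumptions (real hardware fixes its own layout).

    /* Illustrative 32-bit page table entry; widths are assumptions. */
    #include <stdint.h>
    #include <stdio.h>

    struct pte {
        uint32_t frame      : 20; /* page frame number */
        uint32_t present    : 1;  /* page is in memory */
        uint32_t protection : 3;  /* e.g., read/write/execute */
        uint32_t modified   : 1;  /* dirty: written since load */
        uint32_t referenced : 1;  /* touched since R was cleared */
        uint32_t cache_dis  : 1;  /* caching disabled */
        uint32_t unused     : 5;
    };

    int main(void) {
        printf("sizeof(struct pte) = %zu bytes\n", sizeof(struct pte));
        return 0;
    }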

Fast Mapping

- The virtual-to-physical mapping must be fast: an instruction may make several page table references.
- Keeping the entire page table in main memory alone is unacceptable (every mapping would cost extra memory references), so we have to seek hardware solutions.

Two Simple Designs for the Page Table

- Use fast hardware registers for the page table.
  - A single physical page table in the MMU: an array of fast registers, one entry per virtual page.
  - Requires no memory reference during mapping.
  - But the registers must be reloaded at every process switch: expensive hardware plus context-switching overhead if the page table is large.
- Put the whole table in main memory.
  - Only one register is needed, pointing to the start of the table, so process switching is fast.
  - But mapping now costs several memory references per instruction.
- A pure memory solution is slow and a pure register solution is expensive, so…

Translation Lookaside Buffers (TLBs)

- Observation: most programs tend to make a large number of references to a small number of pages.
- Idea: keep that heavily used fraction of the mappings in the TLB, a small associative memory inside the MMU.
- Each virtual address is first checked against the TLB; only on a miss is the page table consulted (a minimal sketch follows below).
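
A minimal sketch, not from the slides, of a TLB lookup in front of the page table walk; the tiny linear-scan TLB and the stand-in page table walk are illustrative assumptions (a real TLB searches all entries in parallel in hardware).

    /* Check the TLB first; fall back to the page table on a miss. */
    #include <stdio.h>

    #define PAGE_SIZE 4096
    #define TLB_SIZE  4

    struct tlb_entry { int valid, page, frame; };
    static struct tlb_entry tlb[TLB_SIZE];

    /* Stand-in for the full page table walk (identity map assumed). */
    static int page_table_lookup(int page) { return page; }

    static int tlb_translate(unsigned va) {
        int page = (int)(va / PAGE_SIZE), off = (int)(va % PAGE_SIZE);
        for (int i = 0; i < TLB_SIZE; i++)           /* TLB hit? */
            if (tlb[i].valid && tlb[i].page == page)
                return tlb[i].frame * PAGE_SIZE + off;
        int frame = page_table_lookup(page);         /* TLB miss */
        tlb[0] = (struct tlb_entry){1, page, frame}; /* naive refill */
        return frame * PAGE_SIZE + off;
    }

    int main(void) {
        printf("PA = %d (miss)\n", tlb_translate(8199));
        printf("PA = %d (hit)\n",  tlb_translate(8199));
        return 0;
    }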

Outline

- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation

Page Replacement

- When a page fault occurs and all page frames are full, choose one page to remove.
  - If the chosen page has been modified (a "dirty page"), its disk copy must be updated first.
  - Better to choose an unmodified page; better to choose a rarely used page.
- Many similar problems in computer systems:
  - Block replacement in memory caches.
  - Cached web page replacement in a web server.
- Revisit: the page table entry.

Typical Page Table Entry (Revisited)

- The replacement algorithms below rely on two of the entry's fields: the Referenced bit (has the page been used recently?) and the Modified/dirty bit (would evicting it require a write-back to disk?).

Optimal Algorithm

- Label each page in main memory with the number of instructions that will be executed before that page is next referenced.
  - E.g., a page labeled "1" will be referenced by the very next instruction.
- Remove the page with the highest label.
  - This puts off the next page fault as long as possible.
- Unrealizable! Why? The OS cannot know future references; compare SJF process scheduling and the Banker's Algorithm for deadlock avoidance, which need similar future knowledge.
- Still useful as a benchmark against which to compare realizable algorithms (a simulation sketch follows below).
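
A minimal offline sketch, not from the slides, of the optimal policy on a known reference string; the reference string and frame count are illustrative assumptions.

    /* Optimal replacement: evict the resident page whose next use is
     * farthest in the future (only possible when the future is known). */
    #include <stdio.h>

    #define NFRAMES 3

    int main(void) {
        int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3}, n = 10;
        int frames[NFRAMES], used = 0, faults = 0;

        for (int t = 0; t < n; t++) {
            int hit = 0;
            for (int f = 0; f < used; f++)
                if (frames[f] == refs[t]) hit = 1;
            if (hit) continue;
            faults++;
            if (used < NFRAMES) { frames[used++] = refs[t]; continue; }
            int victim = 0, farthest = -1;   /* pick farthest next use */
            for (int f = 0; f < NFRAMES; f++) {
                int next = n;                /* n means never used again */
                for (int u = t + 1; u < n; u++)
                    if (refs[u] == frames[f]) { next = u; break; }
                if (next > farthest) { farthest = next; victim = f; }
            }
            frames[victim] = refs[t];
        }
        printf("page faults: %d\n", faults);
        return 0;
    }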

Remove Not Recently Used Pages (NRU)

- The R and M bits are initially 0.
  - Hardware sets R when a page is referenced and M when a page is modified.
  - The OS clears the R bits periodically in software (e.g., on each clock interrupt), so R means "referenced recently".
- On a page fault, every page falls into one of four classes:
  - Class 0 (R=0, M=0): not referenced, not modified
  - Class 1 (R=0, M=1): not referenced recently, but modified
  - Class 2 (R=1, M=0): referenced, not modified
  - Class 3 (R=1, M=1): referenced and modified
- NRU removes a page at random from the lowest-numbered nonempty class (a minimal sketch follows below).
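
A minimal sketch, not from the slides, of NRU victim selection; the page states in main are illustrative assumptions.

    /* NRU: classify pages by (R, M) into classes 0-3 and evict a
     * uniformly random page from the lowest nonempty class. */
    #include <stdio.h>
    #include <stdlib.h>

    struct page { int r, m; };

    int nru_victim(const struct page *pages, int n) {
        int best = -1, best_class = 4, count = 0;
        for (int i = 0; i < n; i++) {
            int cls = 2 * pages[i].r + pages[i].m;
            if (cls < best_class) {
                best_class = cls; best = i; count = 1;
            } else if (cls == best_class && rand() % ++count == 0) {
                best = i;   /* reservoir sampling within the class */
            }
        }
        return best;
    }

    int main(void) {
        struct page pages[] = {{1,1}, {0,1}, {1,0}, {0,1}};
        int v = nru_victim(pages, 4);
        printf("evict page %d (class %d)\n",
               v, 2 * pages[v].r + pages[v].m);
        return 0;
    }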