CSCI/CMPE 4334 Operating Systems Review: Final Exam
Review

  • Chapters 1–12 in your textbook
  • Lecture slides
  • In-class exercises 1–10 (on the course website)
  • Review slides
    – Review for the 2 mid-term exams
    – Review for the final exam

Review

  • 5 questions (100 points) + 1 bonus question (20 points)
    – Process scheduling, concurrent processing
    – Memory allocation, memory management policy
    – I/O device management
  • Question types
    – Q/A

Time & Place & Event

  • 1:15 pm – 3 pm, May 6 (Tuesday)
  • ENGR 1.290
  • Closed-book exam

Basic OS Organization

  [Diagram: the File Manager, the Process, Thread & Resource Manager, the Memory Manager, and the Device Manager layered above the hardware: Processor(s), Main Memory, and Devices]

Chapter 5: Device Management

  • Abstract all devices (and files) to a few interfaces
  • Device management strategies
  • Buffering & Spooling
  • Access time of disk devices (see lecture slides and exercises)
    – FCFS, SSTF, Scan/Look, Circular Scan/Look
    – See examples in Exercise (2)

Randomly Accessed Devices

  [Diagram: (a) a multi-surface disk, showing sectors and tracks, where a cylinder is the set of tracks at the same position on every surface; (b) a disk surface; (c) cylinders]

Optimize Access Times

Several techniques are used to try to optimize the time required to service an I/O request:

  • FCFS – first come, first served
    – Requests are serviced in the order received
    – No optimization
  • SSTF – shortest seek time first
    – Find the track "closest" to the current track
    – Could prevent distant requests from ever being serviced (starvation)

Optimize Access Times – cont.

  • Scan/Look
    – Scan starts at track 0, moving toward the last track, servicing requests for tracks as it passes them
    – Reverses direction when it reaches the last track
    – Look is a variant; it ceases the scan in a direction once the last requested track in that direction has been handled
  • Circular Scan/Look
    – Similar to Scan/Look but always moves in the same direction; doesn't reverse
    – Relies on the existence of a special homing command in the device hardware
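As a worked illustration of the first two policies, here is a minimal simulation of FCFS and SSTF seek ordering. The request queue and starting head position are made up for illustration, not taken from the course exercises:

```python
def fcfs(requests, head):
    """Service requests in arrival order; return (order, total head movement)."""
    order = list(requests)
    total = sum(abs(b - a) for a, b in zip([head] + order, order))
    return order, total

def sstf(requests, head):
    """Always service the pending request closest to the current head position."""
    pending, order, total = list(requests), [], 0
    while pending:
        nxt = min(pending, key=lambda t: abs(t - head))
        pending.remove(nxt)
        order.append(nxt)
        total += abs(nxt - head)
        head = nxt
    return order, total

# Hypothetical request queue, head starting at track 50
reqs = [95, 180, 34, 119, 11, 123, 62, 64]
print(fcfs(reqs, 50))   # total head movement: 644 tracks
print(sstf(reqs, 50))   # total head movement: 236 tracks
```

Note how SSTF reduces total seek distance by reordering, at the cost of possibly starving distant requests.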

Chapter 6: Implementing Processes, Threads, and Resources

  • Modern process
    – Concurrent processing
    – Fully utilize the CPU and I/O devices
  • Address space
  • Context switching
  • State diagram

Process Manager

  [Diagram: a program becomes a process within an abstract computing environment; the Process Manager (process description, scheduler, synchronization, protection, deadlock handling, resource manager) works alongside the File Manager, Device Manager, and Memory Manager above the hardware: CPU, memory, devices, and other H/W]

Chapter 7: Scheduling

  • Scheduling methods (see lecture slides and exercises)
    – First-come, first-served (FCFS)
    – Shortest job first (SJF), also called shortest job next (SJN)
    – Highest-priority job first
    – Job with the closest deadline first

Process Model and Metrics

  • P is a set of processes p0, p1, . . . , pn-1
  • S(pi) is the state of pi: {running, ready, blocked}
  • τ(pi), the service time – the amount of time pi needs to be in the running state before it is completed
  • W(pi), the waiting time – the time pi spends in the ready state before its first transition to the running state
  • TTRnd(pi), the turnaround time – the amount of time between the moment pi first enters the ready state and the moment the process exits the running state for the last time
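With these definitions, the metrics for non-preemptive FCFS can be computed directly. A sketch with made-up service times, assuming all processes arrive at time 0 (as in the usual in-class exercises):

```python
def fcfs_metrics(service_times):
    """For FCFS with all arrivals at t = 0:
    W(pi)      = sum of service times of the jobs ahead of pi
    TTRnd(pi)  = W(pi) + tau(pi)"""
    waiting, turnaround, clock = [], [], 0
    for tau in service_times:
        waiting.append(clock)        # time spent ready before first run
        clock += tau
        turnaround.append(clock)     # completion time == turnaround when arrival is 0
    return waiting, turnaround

# Hypothetical service times tau(p0) .. tau(p3)
W, T = fcfs_metrics([350, 125, 475, 250])
print(W)   # [0, 350, 475, 950]
print(T)   # [350, 475, 950, 1200]
```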

Chapter 8: Basic Synchronization

  • Critical sections
    – Ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section; this property is called mutual exclusion
  • Requirements for critical-section solutions
    – Mutual exclusion
    – Progress
    – Bounded waiting

Chapter 8: Basic Synchronization (cont'd)

  • Semaphore and its P() and V() functions
    – Definition
    – Usage

The Critical-Section Problem – cont.

  • Structure of process Pi:

        repeat
            entry section
            critical section
            exit section
            remainder section
        until false;

Requirements for Critical-Section Solutions

  • Mutual Exclusion – If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
  • Progress – If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely.
  • Bounded Waiting – A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Dijkstra Semaphore

A semaphore, s, is a nonnegative integer variable that can only be changed or tested by these two indivisible (atomic) functions:

    V(s): [s = s + 1]
    P(s): [while (s == 0) {wait}; s = s - 1]

Solving the Canonical Problem

    semaphore mutex = 1;

    proc_0() {
        while (TRUE) {
            <compute section>;
            P(mutex);
            <critical section>;
            V(mutex);
        }
    }

    proc_1() {
        while (TRUE) {
            <compute section>;
            P(mutex);
            <critical section>;
            V(mutex);
        }
    }

    fork(proc_0, 0);
    fork(proc_1, 0);

Shared Account Balance Problem

    semaphore mutex = 1;

    proc_0() {
        . . .
        /* Enter the CS */
        P(mutex);
        balance += amount;
        V(mutex);
        . . .
    }

    proc_1() {
        . . .
        /* Enter the CS */
        P(mutex);
        balance -= amount;
        V(mutex);
        . . .
    }

    fork(proc_0, 0);
    fork(proc_1, 0);
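The same pattern can be run in any language with real semaphores. A minimal Python sketch (the thread count and amounts are made up) showing that wrapping the updates in P()/V() keeps the shared balance consistent:

```python
import threading

balance = 0
mutex = threading.Semaphore(1)   # plays the role of 'semaphore mutex = 1'

def proc_0(amount, times):
    global balance
    for _ in range(times):
        mutex.acquire()          # P(mutex): enter the critical section
        balance += amount
        mutex.release()          # V(mutex): exit the critical section

def proc_1(amount, times):
    global balance
    for _ in range(times):
        mutex.acquire()
        balance -= amount
        mutex.release()

t0 = threading.Thread(target=proc_0, args=(10, 100_000))
t1 = threading.Thread(target=proc_1, args=(10, 100_000))
t0.start(); t1.start()
t0.join(); t1.join()
print(balance)   # 0: equal deposits and withdrawals cancel out
```

Without the semaphore, the read-modify-write of `balance` could interleave between the threads and lose updates.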

Chapter 9: High-Level Synchronization

  • Simultaneous semaphore: Psimultaneous(S1, . . . , Sn)
  • Event
    – wait()
    – signal()
  • Monitor
    – What is a condition? What is its usage?
    – What are the 3 operations on a condition?

Chapter 10: Deadlock

  • 4 necessary conditions for deadlock
    – Mutual exclusion
    – Hold and wait
    – No preemption
    – Circular wait
  • How to deal with deadlock in an OS
    – Prevention
    – Avoidance
    – Recovery

Chapter 11: Memory Management

  • Functions of the memory manager
    – Abstraction
    – Allocation
    – Address translation
    – Virtual memory
  • Memory allocation (see lecture slides and exercises)
    – Fixed partition method
    – Variable partition method
    – Allocation policies: First-fit, Next-fit, Best-fit, Worst-fit

Chapter 11: Memory Management (cont'd)

  • Dynamic address space binding
    – Static binding
    – Dynamic binding

Primary & Secondary Memory

  • Primary memory (executable memory), e.g. RAM
    – CPU can load/store directly
    – Control unit executes code from this memory
    – Transient storage
  • Secondary memory, e.g. disk or tape
    – Accessed using I/O operations
    – Persistent storage
  • Information can be loaded statically or dynamically

Functions of Memory Manager

  • Allocate primary memory space to processes
  • Map the process address space into the allocated portion of primary memory
  • Minimize access times using a cost-effective amount of primary memory
  • May use static or dynamic techniques

Memory Manager

  • Only a small number of interface functions provided – usually calls to:
    – Request/release primary memory space
    – Load programs
    – Share blocks of memory
  • Provides the following:
    – Memory abstraction
    – Allocation/deallocation of memory
    – Memory isolation
    – Memory sharing

Memory Abstraction

  • Process address space
    – Allows a process to use an abstract set of addresses to reference physical primary memory
  [Diagram: the process address space is mapped to hardware primary memory; some addresses map to objects other than memory]

Issues in a Memory Allocation Algorithm

  • Memory layout / organization: how to divide the memory into blocks for allocation?
    – Fixed partition method: divide the memory once, before any bytes are allocated
    – Variable partition method: divide it up as you are allocating the memory
  • Memory allocation: select which piece of memory to allocate to a request
  • Memory organization and memory allocation are closely related

Fixed-Partition Memory Mechanism

  [Diagram: process pi needs ni units; the operating system has divided memory into fixed regions 0–3 of sizes N0–N3, and pi is placed into one of the regions]

Which Free Block to Allocate

How to satisfy a request of size n from a list of free blocks:

  • First-fit: allocate the first hole that is big enough
  • Next-fit: starting from where the previous search left off, choose the next block that is large enough
  • Best-fit: allocate the smallest hole that is big enough; must search the entire list unless it is ordered by size; produces the smallest leftover hole
  • Worst-fit: allocate the largest hole; must also search the entire list; produces the largest leftover hole
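A compact way to compare three of these policies is to run them over the same free list. The hole sizes below are made up for illustration:

```python
def first_fit(holes, n):
    """Index of the first hole that can hold n, or None."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Index of the smallest hole that can hold n, or None."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Index of the largest hole that can hold n, or None."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(fits)[1] if fits else None

holes = [300, 600, 350, 200, 750, 125]   # hypothetical free-list sizes
print(first_fit(holes, 150))   # 0 -> the 300-unit hole (first that fits)
print(best_fit(holes, 150))    # 3 -> the 200-unit hole (smallest that fits)
print(worst_fit(holes, 150))   # 4 -> the 750-unit hole (largest)
```

The three answers differ for the same request, which is exactly the point of the comparison: each policy trades search cost against the size of the leftover hole.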

Dynamic Address Relocation

Performed automagically by the processor:

  • The CPU issues the relative address 0x02010 (e.g. load R1, 0x02010); the relocation register holds 0x10000, so the MAR receives 0x02010 + 0x10000 = 0x12010
  • Program loaded at 0x10000 → relocation register = 0x10000
  • Program loaded at 0x04000 → relocation register = 0x04000
  • We never have to change the load module addresses!
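The relocation step itself is a single addition, which can be sketched directly (register values taken from the slide):

```python
def relocate(relative_addr, relocation_reg):
    """Hardware dynamic relocation: physical = relative + relocation register."""
    return relative_addr + relocation_reg

# The same load module address works at two different load points
print(hex(relocate(0x02010, 0x10000)))   # 0x12010
print(hex(relocate(0x02010, 0x04000)))   # 0x6010
```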

Swapping – cont.

  [Diagram: the image for pi is swapped out of primary memory to secondary storage, and the image for pj is swapped in]

Virtual Memory

  • A characteristic of programs that is very important to the strategy used by virtual memory systems is spatial reference locality
    – Refers to the implicit partitioning of code and data segments due to the functioning of the program (a portion for initializing data, another for reading input, others for computation, etc.)
    – Can be used to select which parts of the process should be loaded into primary memory

Chapter 12: Virtual Memory

  • Address translation between virtual memory and physical memory
  • Terms
    – Page frame
  • Paging algorithms
    – Static allocation
    – Dynamic allocation

Chapter 12: Virtual Memory (cont'd)

  • Fetch policy – when a page should be loaded
    – On-demand paging
  • Replacement policy (see lecture slides and exercises) – which page is unloaded
    – Policies: Random, Belady's optimal algorithm*, LRU*, LFU, FIFO, LRU approximation
  • Placement policy – where a page should be loaded
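A small simulator makes the replacement policies concrete. The reference string and frame count below are made up, in the style of the in-class exercises:

```python
def count_faults(refs, frames, evict):
    """Run a reference string through `frames` page frames; `evict` picks a victim."""
    mem, faults = [], 0
    for i, page in enumerate(refs):
        if page in mem:
            continue                      # hit: page already resident
        faults += 1
        if len(mem) == frames:
            mem.remove(evict(mem, refs, i))
        mem.append(page)
    return faults

def fifo(mem, refs, i):
    return mem[0]                         # oldest loaded page

def lru(mem, refs, i):
    # Evict the resident page whose most recent use is furthest in the past
    return min(mem, key=lambda p: max(j for j in range(i) if refs[j] == p))

def belady(mem, refs, i):
    """Belady's optimal: evict the page used farthest in the future (or never)."""
    def next_use(p):
        for j in range(i + 1, len(refs)):
            if refs[j] == p:
                return j
        return float('inf')
    return max(mem, key=next_use)

refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
print(count_faults(refs, 3, fifo))     # 9 faults
print(count_faults(refs, 3, lru))      # 7 faults
print(count_faults(refs, 3, belady))   # 6 faults
```

Belady's algorithm needs the future of the reference string, so it is unrealizable in practice; it serves as the lower bound the starred policies on the slide are compared against.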

Chapter 12: Virtual Memory (cont'd)

  • Thrashing
  • Load control

Virtual Memory Organization

  [Diagram: the image for pi spans secondary memory, with portions mapped into primary memory]

Virtual Memory – cont.

Since the binding changes with time, use a dynamic virtual address map, Yt

  [Diagram: the virtual address space is mapped through Yt to primary memory]

Windows NT Paging System

  [Diagram of a page reference:]
  1. Reference to address k in page i (user space)
  2. Look up (page i, addr k) in the virtual address space
  3. Supervisor space translates (page i, addr k) to (page frame j, addr k)
  4. Reference (page frame j, addr k) in primary memory; pages are backed by the paging disk (secondary memory)

Size of Blocks of Memory

  • The virtual memory system transfers "blocks" of the address space to/from primary memory
  • Fixed-size blocks: system-defined pages are moved back and forth between primary and secondary memory
  • Variable-size blocks: programmer-defined segments – corresponding to logical fragments – are the unit of movement
  • Paging is the commercially dominant form of virtual memory today

Paging

  • A page is a fixed-size block of 2^h virtual addresses
  • A page frame is a fixed-size block of 2^h physical memory (the same size as a page)
  • When a virtual address x in page i is referenced by the CPU:
    – If page i is loaded at page frame j, the virtual address is relocated to page frame j
    – If the page is not loaded, the OS interrupts the process and loads the page into a page frame
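Under these definitions, translation just splits the h low-order bits off the virtual address. A sketch with a hypothetical page table and h = 12 (4 KB pages); the table contents are made up:

```python
H = 12                                # page size = 2^h = 4096 bytes
PAGE_TABLE = {0: 7, 1: 3, 2: None}    # page # -> frame # (None = not loaded)

def translate(vaddr):
    """Split vaddr into (page #, line #) and rebuild the physical address."""
    page, line = vaddr >> H, vaddr & ((1 << H) - 1)
    frame = PAGE_TABLE.get(page)
    if frame is None:
        # The OS would handle this fault by loading the page into a frame
        raise LookupError(f"page fault on page {page}")
    return (frame << H) | line

print(hex(translate(0x1ABC)))   # page 1 maps to frame 3 -> 0x3ABC
```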

Address Translation (cont)

  [Diagram: a virtual address of g page-number bits and h line-number bits is translated through the page table Yt (which may signal a missing page) into a physical address of j frame-number bits plus the same h line-number bits, which the CPU places in the MAR]

Paging Algorithms

Two basic types of paging algorithms:
  • Static allocation
  • Dynamic allocation

Three basic policies in defining any paging algorithm:
  • Fetch policy – when a page should be loaded
  • Replacement policy – which page is unloaded
  • Placement policy – where a page should be loaded

Fetch Policy

  • Determines when a page should be brought into primary memory
  • We usually don't have prior knowledge about what pages will be needed
  • The majority of paging mechanisms use a demand fetch policy
    – A page is loaded only when the process references it

Replacement Policy

  • When there is no empty page frame in memory, we need to find one to replace
    – Write it out to the swap area if it has been changed since it was read in from the swap area
  • Dirty bit (modified bit)
    – Pages that have been changed are referred to as "dirty"
    – These pages must be written out to disk because the disk version is out of date
    – This is called "cleaning" the page
  • Which page to remove from memory to make room for a new page?
    – We need a page replacement algorithm

Thrashing Diagram

There are too many processes in memory and no process has enough memory to run. As a result, the page-fault rate is very high and the system spends all of its time handling page-fault interrupts.

Good Luck! Q/A