Unit III. Concurrency: Mutual Exclusion and Synchronization

Operating system design is concerned with the management of processes and threads in:
• Multiprogramming
• Multiprocessing
• Distributed processing

• Multiple applications: invented to allow processing time to be shared among active applications
• Structured applications: an extension of modular design and structured programming
• Operating system structure: the OS itself is implemented as a set of processes or threads

Concurrency & Shared Data
• Concurrent processes may share data to support communication, information exchange, etc.
• Threads in the same process can share the global address space
• Concurrent sharing may cause problems, for example lost updates

Table 5.1 Some Key Terms Related to Concurrency

• Interleaving and overlapping:
  • can be viewed as examples of concurrent processing
  • both present the same problems
• In multiprogramming, the relative speed of execution of processes cannot be predicted; it depends on:
  • the activities of other processes
  • the way the OS handles interrupts
  • the scheduling policies of the OS

Difficulties of Concurrency
• Sharing of global resources
• It is difficult for the OS to manage the allocation of resources optimally
• It is difficult to locate programming errors, as results are not deterministic and reproducible

Race Condition
• Occurs when multiple processes or threads read and write shared data items
• The final result depends on the order of execution; the "loser" of the race is the process that updates last and determines the final value of the variable

Operating System Concerns
Design and management issues raised by the existence of concurrency. The OS must:
• be able to keep track of the various processes
• allocate and de-allocate resources for each active process
• protect the data and physical resources of each process against interference by other processes
• ensure that the results of processes and their outputs are independent of relative processing speed

Process Interaction

Resource Competition
• Concurrent processes come into conflict when they use the same resource (competitively or shared), for example I/O devices, memory, processor time, the clock
• Three control problems must be faced:
  • the need for mutual exclusion
  • deadlock
  • starvation
• Processes that share data also need to address coherence

Need for Mutual Exclusion
• If there is no controlled access to shared data, processes or threads may get an inconsistent view of this data
• The result of concurrent execution will depend on the order in which instructions are interleaved
• Errors are timing-dependent and usually not reproducible

A Simple Example
• Assume P1 and P2 are executing this code and share the variable a
• Processes can be preempted at any time
• Assume P1 is preempted after the input statement, and P2 then executes entirely
• The character echoed by P1 will be the one read by P2!

    static char a;

    void echo() {
        cin >> a;
        cout << a;
    }

What's the Problem?
• This is an example of a race condition
• Individual processes (threads) execute sequentially in isolation, but concurrency causes them to interact
• We need to prevent concurrent execution by processes when they are changing the same data; we need to enforce mutual exclusion (a sketch follows)
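A minimal sketch of how mutual exclusion repairs the echo example, using C++ threads. Here std::mutex stands in for the OS mechanisms (semaphores, monitors) developed later in this unit; the thread setup is illustrative, not from the slides:

    #include <iostream>
    #include <mutex>
    #include <thread>

    static char a;          // shared data: the critical resource
    static std::mutex m;    // guards the critical section below

    void echo() {
        std::lock_guard<std::mutex> lock(m);  // entry section: acquire the lock
        std::cin >> a;                        // read into the shared variable
        std::cout << a;                       // echo it before anyone can overwrite it
    }                                         // exit section: lock released

    int main() {
        std::thread p1(echo), p2(echo);
        p1.join();
        p2.join();
    }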

The Critical Section Problem
• When a process executes code that manipulates shared data (or resources), we say that the process is in its critical section (CS) for that shared data
• We must enforce mutual exclusion on the execution of critical sections
• Only one process at a time can be in its CS (for that shared data or resource)

The Critical Section Problem
• Enforcing mutual exclusion guarantees that related CSs will be executed serially instead of concurrently
• The critical section problem is how to provide mechanisms that enforce mutual exclusion, so that the actions of concurrent processes won't depend on the order in which their instructions are interleaved

The Critical Section Problem
• Processes/threads must request permission to enter a CS, and signal when they leave the CS
• Program structure (sketched below):
  • entry section: requests entry to the CS
  • exit section: notifies that the CS is completed
  • remainder section (RS): code that does not involve shared data and resources
• The CS problem exists on multiprocessors as well as on uniprocessors
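A minimal skeleton of this structure, in the pseudo-C style of the slides (the section function names are placeholders):

    while (true) {
        entry_section();    /* request permission to enter the CS */
        /* critical section: code that manipulates shared data */
        exit_section();     /* notify that the CS is completed */
        /* remainder section: no shared data or resources */
    }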

Mutual Exclusion and Data Coherence
• Mutual exclusion ensures data coherence if properly used
• Critical resource (CR): a shared resource such as a variable, file, or device
• Data coherence: the final value or state of a CR shared by concurrently executing processes is the same as the final value or state would be if each process executed serially, in some order

Deadlock and Starvation
• Deadlock: two or more processes are blocked permanently because each is waiting for a resource held in a mutually exclusive manner by one of the others
• Starvation: a process is repeatedly denied access to some resource that is protected by mutual exclusion, even though the resource periodically becomes available

Figure 5.1 Illustration of Mutual Exclusion

Requirements for Mutual Exclusion
• Mutual exclusion must be enforced
• Non-interference: a process that halts must not interfere with other processes
• No deadlock or starvation
• Progress: a process must not be denied access to a critical section when there is no other process using it
• No assumptions are made about relative process speeds or the number of processes
• A process remains inside its critical section for a finite time only

Interrupt Disabling
• On a uniprocessor system, disabling interrupts guarantees mutual exclusion (sketched below)
• The efficiency of execution could be noticeably degraded
• This approach will not work in a multiprocessor architecture
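In the same pseudo-C style, where disable_interrupts/enable_interrupts stand for the privileged machine operations and are placeholders:

    while (true) {
        disable_interrupts();  /* no interrupt, hence no preemption, can occur */
        /* critical section */
        enable_interrupts();   /* interrupts (and preemption) allowed again */
        /* remainder */
    }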

Special Machine Instructions
• Compare&Swap instruction:
  • also called a "compare and exchange instruction"
  • a compare is made between a memory value and a test value
  • if the old memory value = test value, a new value is swapped into the memory location
  • the old memory value is always returned
  • carried out atomically in the hardware

Compare&Swap Instruction
• Pseudo-code definition of the hardware instruction:

    int compare_and_swap(int *word, int test_val, int new_val) {
        int old_val = *word;
        if (old_val == test_val)
            *word = new_val;
        return old_val;    /* always return the old memory value */
    }

• Call with word = &bolt, test_val = 0, new_val = 1
• If bolt is 0 when the C&S is executed, the while condition is false and P enters its critical section (leaving bolt = 1)
• If bolt is 1 when the C&S executes, P continues to execute the while loop: it is busy waiting (or spinning), as the sketch below shows
Figure 5.2 Hardware Support for Mutual Exclusion
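A hedged reconstruction of the busy-wait loop the figure depicts, using the compare_and_swap defined above (assumed atomic; on real hardware one would use an atomic primitive such as a C++ std::atomic compare-exchange):

    int bolt = 0;    /* 0 = free, 1 = held */

    void process() {
        while (true) {
            while (compare_and_swap(&bolt, 0, 1) == 1)
                ;               /* busy wait until bolt was 0 */
            /* critical section */
            bolt = 0;           /* release: one spinning process may now enter */
            /* remainder */
        }
    }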

Exchange Instruction (Figure 5.2 Hardware Support for Mutual Exclusion)
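The exchange-based variant is not reproduced in this transcription. A hedged sketch of its usual form (exchange is assumed to be executed atomically by the hardware):

    void exchange(int *reg, int *mem) {   /* atomic in hardware */
        int tmp = *mem;
        *mem = *reg;
        *reg = tmp;
    }

    int bolt = 0;

    void process() {
        int key = 1;
        while (true) {
            do { exchange(&key, &bolt); } while (key != 0);  /* spin until we took bolt */
            /* critical section */
            exchange(&key, &bolt);   /* swap the 1 in key back, freeing bolt */
            /* remainder */
        }
    }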

Special Machine Instructions: Advantages
• Applicable to any number of processes on either a single processor or multiple processors sharing main memory
• Simple and easy to verify
• Can be used to support multiple critical sections; each critical section can be defined by its own variable

Special Machine Instructions: Disadvantages
• Busy-waiting is employed: while a process is waiting for access to a critical section, it continues to consume processor time
• Starvation is possible when a process leaves a critical section and more than one process is waiting
• Deadlock is possible if priority-based scheduling is used

Semaphore
A variable that has an integer value upon which only three operations are defined:
1) It may be initialized to a nonnegative integer value
2) The semWait operation decrements the value
3) The semSignal operation increments the value
There is no way to inspect or manipulate semaphores other than these three operations.

Consequences
• There is no way to know before a process decrements a semaphore whether it will block or not
• There is no way to know which process will continue immediately on a uniprocessor system when two processes are running concurrently
• When you signal a semaphore, you don't know whether another process is waiting, so the number of unblocked processes may be zero or one

Semaphore Primitives
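The primitives themselves appear only as a figure in the original deck. A hedged pseudo-C reconstruction of the standard counting-semaphore definition (after Stallings; queueType and the block/unblock steps abstract OS internals):

    struct semaphore {
        int count;
        queueType queue;   /* processes blocked on this semaphore */
    };

    void semWait(semaphore s) {
        s.count--;
        if (s.count < 0) {
            /* place this process in s.queue and block it */
        }
    }

    void semSignal(semaphore s) {
        s.count++;
        if (s.count <= 0) {
            /* remove a process P from s.queue and place it on the ready list */
        }
    }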

Binary Semaphore Primitives
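Likewise for the binary semaphore, whose value is restricted to zero and one (again a hedged pseudo-C sketch after Stallings):

    struct binary_semaphore {
        enum { ZERO, ONE } value;
        queueType queue;
    };

    void semWaitB(binary_semaphore s) {
        if (s.value == ONE)
            s.value = ZERO;
        else {
            /* place this process in s.queue and block it */
        }
    }

    void semSignalB(binary_semaphore s) {
        if (/* s.queue is empty */)
            s.value = ONE;
        else {
            /* remove a process P from s.queue and place it on the ready list */
        }
    }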

Strong/Weak Semaphores
• A queue is used to hold processes waiting on the semaphore
• Strong semaphores: the process that has been blocked the longest is released from the queue first (FIFO)
• Weak semaphores: the order in which processes are removed from the queue is not specified

Example of Semaphore Mechanism

Producer/Consumer Problem
General situation:
• one or more producers are generating data and placing these in a buffer
• a single consumer is taking items out of the buffer one at a time
• only one producer or consumer may access the buffer at any one time
The problem: ensure that the producer can't add data to a full buffer, and the consumer can't remove data from an empty buffer

Buffer Structure

Figure 5.9 An Incorrect Solution to the Infinite-Buffer Producer/Consumer Problem Using Binary Semaphores

Figure 5.10 A Correct Solution to the Infinite-Buffer Producer/Consumer Problem Using Binary Semaphores

Solution Using Semaphores
Figure 5.13 A Solution to the Bounded-Buffer Producer/Consumer Problem Using Semaphores
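The figure's code is not reproduced in this transcription. A hedged pseudo-C reconstruction of the standard bounded-buffer solution with counting semaphores (after Stallings Figure 5.13; produce/append/take/consume abstract the buffer details):

    const int sizeofbuffer = /* buffer size */;
    semaphore s = 1;               /* mutual exclusion on the buffer */
    semaphore n = 0;               /* number of items in the buffer */
    semaphore e = sizeofbuffer;    /* number of empty slots */

    void producer() {
        while (true) {
            produce();
            semWait(e);      /* wait for an empty slot */
            semWait(s);      /* enter critical section */
            append();
            semSignal(s);    /* leave critical section */
            semSignal(n);    /* one more item available */
        }
    }

    void consumer() {
        while (true) {
            semWait(n);      /* wait for an item */
            semWait(s);      /* enter critical section */
            take();
            semSignal(s);    /* leave critical section */
            semSignal(e);    /* one more empty slot */
            consume();
        }
    }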

Implementation of Semaphores
• It is imperative that the semWait and semSignal operations be implemented as atomic primitives
• They can be implemented in hardware or firmware
• Software schemes such as Dekker's or Peterson's algorithms can be used
• Alternatively, use one of the hardware-supported schemes for mutual exclusion

Review
• Concurrent processes, threads
• Access to shared data/resources
• Need to enforce mutual exclusion
• Hardware mechanisms have limited usefulness
• Semaphores: an OS mechanism for mutual exclusion and other synchronization issues
• Standard/counting semaphore
• Binary semaphore
• Producer/consumer problem

Monitors
• A programming-language construct that provides functionality equivalent to that of semaphores and is easier to control
• Implemented in a number of programming languages, including Concurrent Pascal, Pascal-Plus, Modula-2, Modula-3, and Java
• Has also been implemented as a program library
• A software module consisting of one or more procedures, an initialization sequence, and local data

Monitor Characteristics
• Local data variables are accessible only by the monitor's procedures and not by any external procedure
• A process enters the monitor by invoking one of its procedures
• Only one process may be executing in the monitor at a time

Synchronization
• Achieved by the use of condition variables that are contained within the monitor and accessible only within the monitor
• Condition variables are operated on by two functions:
  • cwait(c): suspend execution of the calling process on condition c
  • csignal(c): resume execution of some process blocked after a cwait on the same condition

Figure 5.15 Structure of a Monitor

Figure 5.16 A Solution to the Bounded-Buffer Producer/Consumer Problem Using a Monitor
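The monitor code is likewise only pictured. A hedged pseudo-code reconstruction of the bounded-buffer monitor (after Stallings Figure 5.16), using the cwait/csignal operations defined above; N is the buffer capacity:

    monitor boundedbuffer {
        char buffer[N];                   /* space for N items */
        int nextin = 0, nextout = 0;      /* buffer pointers */
        int count = 0;                    /* number of items in the buffer */
        cond notfull, notempty;           /* condition variables */

        void append(char x) {
            if (count == N)
                cwait(notfull);           /* wait until the buffer is not full */
            buffer[nextin] = x;
            nextin = (nextin + 1) % N;
            count++;
            csignal(notempty);            /* resume a waiting consumer */
        }

        char take() {
            if (count == 0)
                cwait(notempty);          /* wait until the buffer is not empty */
            char x = buffer[nextout];
            nextout = (nextout + 1) % N;
            count--;
            csignal(notfull);             /* resume a waiting producer */
            return x;
        }
    }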

Readers/Writers Problem
• A data area is shared among many processes
• Some processes only read the data area (readers) and some only write to it (writers)
• Conditions that must be satisfied:
  1. any number of readers may simultaneously read the file
  2. only one writer at a time may write to the file
  3. if a writer is writing to the file, no reader may read it

Solution: Readers Have Priority
Figure 5.22 A Solution to the Readers/Writers Problem Using Semaphores: Readers Have Priority
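The figure's code is not reproduced here. A hedged pseudo-C reconstruction of the readers-have-priority solution (after Stallings Figure 5.22; READUNIT/WRITEUNIT abstract the actual data access):

    int readcount = 0;
    semaphore x = 1;       /* protects readcount */
    semaphore wsem = 1;    /* exclusive access to the data area */

    void reader() {
        while (true) {
            semWait(x);
            readcount++;
            if (readcount == 1)
                semWait(wsem);       /* first reader locks out writers */
            semSignal(x);
            READUNIT();
            semWait(x);
            readcount--;
            if (readcount == 0)
                semSignal(wsem);     /* last reader readmits writers */
            semSignal(x);
        }
    }

    void writer() {
        while (true) {
            semWait(wsem);
            WRITEUNIT();
            semSignal(wsem);
        }
    }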

Solution: Writers Have Priority
Figure 5.23 A Solution to the Readers/Writers Problem Using Semaphores: Writers Have Priority

State of the Process Queues

Message Passing
Figure 5.24 A Solution to the Readers/Writers Problem Using Message Passing

Deadlock
• The permanent blocking of a set of processes that either compete for system resources or communicate with each other
• A set of processes is deadlocked when each process in the set is blocked awaiting an event that can be triggered only by another blocked process in the set
• It is permanent
• There is no efficient general solution

Potential Deadlock
(Illustration: four processes, each needing two resources: "I need quad C and D", "I need quad D and A", "I need quad B and C", "I need quad A and B")

Actual Deadlock
(Illustration: each of the four now halts waiting on another: "HALT until D is free", "HALT until A is free", "HALT until C is free", "HALT until B is free")

Joint Progress Diagram

No Deadlock Example

Resource Categories
• Reusable: can be safely used by only one process at a time and is not depleted by that use; examples: processors, I/O channels, main and secondary memory, devices, and data structures such as files, databases, and semaphores
• Consumable: one that can be created (produced) and destroyed (consumed); examples: interrupts, signals, messages, and information in I/O buffers

Reusable Resources Example

Example 2: Memory Request
• Space is available for allocation of 200 Kbytes, and the following sequence of events occurs:

    P1:                   P2:
    Request 80 Kbytes;    Request 70 Kbytes;
    . . .                 . . .
    Request 60 Kbytes;    Request 80 Kbytes;

• Deadlock occurs if both processes progress to their second request: after the first requests are granted, only 200 - 80 - 70 = 50 Kbytes remain, which can satisfy neither the 60-Kbyte nor the 80-Kbyte request

Consumable Resources Deadlock
• Consider a pair of processes, in which each process attempts to receive a message from the other process and then send a message to the other process (sketched below)
• Deadlock occurs if the receive is blocking: each process waits for a message that only the other, equally blocked, process could send
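A sketch of the exchange just described (pseudo-code; Receive is assumed to block until a message arrives):

    /* P1 */                    /* P2 */
    Receive(P2);                Receive(P1);
    Send(P2, M1);               Send(P1, M2);

    /* Both processes block in Receive; neither ever reaches Send. */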

Deadlock Detection, Prevention, and Avoidance

Resource Allocation Graphs

Conditions for Deadlock
• Mutual exclusion: only one process may use a resource at a time
• Hold-and-wait: a process may hold allocated resources while awaiting assignment of others
• No preemption: no resource can be forcibly removed from a process holding it
• Circular wait: a closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain

Dealing with Deadlock
Three general approaches exist for dealing with deadlock:
• Prevent deadlock: adopt a policy that eliminates one of the conditions
• Avoid deadlock: make the appropriate dynamic choices based on the current state of resource allocation
• Detect deadlock: attempt to detect the presence of deadlock and take action to recover

Deadlock Prevention
• Design a system in such a way that the possibility of deadlock is excluded
• Two main methods:
  • Indirect: prevent the occurrence of one of the three necessary conditions
  • Direct: prevent the occurrence of a circular wait

• Mutual exclusion: if access to a resource requires mutual exclusion, then it must be supported by the OS
• Hold and wait: require that a process request all of its required resources at one time, blocking the process until all requests can be granted simultaneously

• No preemption:
  • if a process holding certain resources is denied a further request, that process must release its original resources and request them again
  • the OS may preempt the second process and require it to release its resources
• Circular wait: prevent by defining a linear ordering of resource types

Deadlock Avoidance
• A decision is made dynamically whether the current resource allocation request will, if granted, potentially lead to a deadlock
• Requires knowledge of future process requests

Deadlock Avoidance
• Resource allocation denial: do not grant an incremental resource request to a process if this allocation might lead to deadlock
• Process initiation denial: do not start a process if its demands might lead to deadlock

Resource Allocation Denial
• Referred to as the banker's algorithm
• The state of the system reflects the current allocation of resources to processes
• A safe state is one in which there is at least one sequence of resource allocations to processes that does not result in a deadlock
• An unsafe state is a state that is not safe

Determination of a Safe State
• State of a system consisting of four processes and three resources
• Allocations have been made to the four processes
• (The accompanying figure shows the amount of existing resources and the resources available after allocation)

P3 Runs to Completion
• Thus, the state defined originally is a safe state (a sketch of the underlying safe-state test follows)
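A compact C++ sketch of the safe-state test this determination performs: repeatedly find a process whose remaining claim fits within the available vector, assume it runs to completion, and reclaim its allocation. All names are illustrative; claim and alloc follow the claim/allocation-matrix convention of the figures:

    #include <vector>
    using Matrix = std::vector<std::vector<int>>;

    bool isSafe(const Matrix& claim, const Matrix& alloc, std::vector<int> avail) {
        size_t n = claim.size(), m = avail.size();
        std::vector<bool> done(n, false);
        for (size_t finished = 0; finished < n; ) {
            bool progress = false;
            for (size_t i = 0; i < n; ++i) {
                if (done[i]) continue;
                bool fits = true;                     // does i's remaining need fit?
                for (size_t j = 0; j < m; ++j)
                    if (claim[i][j] - alloc[i][j] > avail[j]) { fits = false; break; }
                if (fits) {                           // process i can run to completion
                    for (size_t j = 0; j < m; ++j)
                        avail[j] += alloc[i][j];      // reclaim its resources
                    done[i] = true; ++finished; progress = true;
                }
            }
            if (!progress) return false;              // no candidate: state is unsafe
        }
        return true;                                  // all processes could finish
    }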

Deadlock Avoidance Logic

Advantages of Deadlock Avoidance
• It is not necessary to preempt and roll back processes, as in deadlock detection
• It is less restrictive than deadlock prevention

Restrictions on Deadlock Avoidance
• The maximum resource requirement for each process must be stated in advance
• The processes under consideration must be independent, with no synchronization requirements
• There must be a fixed number of resources to allocate
• No process may exit while holding resources

Deadlock Strategies
• Deadlock prevention strategies are very conservative: they limit access to resources by imposing restrictions on processes
• Deadlock detection strategies do the opposite: resource requests are granted whenever possible

Deadlock Detection Algorithms
• A check for deadlock can be made as frequently as each resource request, or less frequently, depending on how likely it is for a deadlock to occur
• Advantages of checking at each request:
  • it leads to early detection
  • the algorithm is relatively simple
• Disadvantage:
  • frequent checks consume considerable processor time

Recovery Strategies
• Abort all deadlocked processes
• Back up each deadlocked process to some previously defined checkpoint and restart all processes
• Successively abort deadlocked processes until deadlock no longer exists
• Successively preempt resources until deadlock no longer exists

Deadlock Approaches

Scheduling

Processor Scheduling
• The aim is to assign processes to be executed by the processor in a way that meets system objectives, such as response time, throughput, and processor efficiency
• Broken down into three separate functions:
  • long-term scheduling
  • medium-term scheduling
  • short-term scheduling

Scheduling and Process State Transitions

Figure 9.2 Nesting of Scheduling Functions (referencing Figure 3.9b)

Queuing Diagram

Long-Term Scheduler
• Determines which programs are admitted to the system for processing
• Controls the degree of multiprogramming:
  • the more processes that are created, the smaller the percentage of time that each process can be executed
  • may limit the degree of multiprogramming to provide satisfactory service to the current set of processes
• Creates processes from the queue when it can, but must decide:
  • when the operating system can take on one or more additional processes
  • which jobs to accept and turn into processes: first come first served, or by priority, expected execution time, or I/O requirements

Medium-Term Scheduling
• Part of the swapping function
• Swapping-in decisions are based on the need to manage the degree of multiprogramming
• Considers the memory requirements of the swapped-out processes

Short-Term Scheduling
• Known as the dispatcher
• Executes most frequently
• Makes the fine-grained decision of which process to execute next
• Invoked when an event occurs that leads to the blocking of the current process, or that may provide an opportunity to preempt a currently running process in favor of another
• Examples of such events: clock interrupts, I/O interrupts, operating system calls, signals (e.g., semaphores)

Short-Term Scheduling Criteria
• The main objective is to allocate processor time so as to optimize certain aspects of system behavior
• A set of criteria is needed to evaluate the scheduling policy
• User-oriented criteria:
  • relate to the behavior of the system as perceived by the individual user or process (such as response time in an interactive system)
  • important on virtually all systems
• System-oriented criteria:
  • focus on effective and efficient utilization of the processor (e.g., the rate at which processes are completed)
  • generally of minor importance on single-user systems

Short-Term Scheduling Criteria: Performance
Criteria can be classified into:
• Performance-related: quantitative and easily measured; examples: response time, throughput
• Non-performance-related: qualitative and hard to measure; example: predictability

Table 9.2 Scheduling Criteria

Priority Queuing

Selection Function
• Determines which Ready process is dispatched next
• May be based on priority, resource requirements, or the execution characteristics of the process
• If based on execution characteristics, some factors to consider are:
  • w = time spent in system so far, waiting
  • e = time spent in execution so far
  • s = total service time required by the process, including e (estimated by the system or the user)

Decision Mode
• When, and under what circumstances, is the selection function exercised?
• Two categories: nonpreemptive and preemptive

• Nonpreemptive: once a process is in the running state, it will continue until it terminates or blocks itself for I/O
• Preemptive: the currently running process may be interrupted and moved to the ready state by the OS; preemption may occur when a new process arrives, on an interrupt, or periodically

Alternative Scheduling Policies

Table 9.4 Process Scheduling Example

Table 9.5 Comparison of Scheduling Policies (assumes no process blocks itself, for I/O or other event wait)

First-Come-First-Served (FCFS)
• Simplest scheduling policy
• Also known as first-in-first-out (FIFO) or a strict queuing scheme
• When the current process ceases to execute, the process that has been in the Ready queue the longest is selected
• Performs much better for long processes than short ones
• Tends to favor processor-bound processes over I/O-bound processes

Round Robin (RR)
• Uses preemption based on a clock
• Also known as time slicing, because each process is given a slice of time before being preempted
• The principal design issue is the length of the time quantum, or slice, to be used
• Particularly effective in a general-purpose time-sharing system or transaction processing system
• One drawback is its relative treatment of processor-bound and I/O-bound processes (see the sketch below)
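A minimal, self-contained C++ sketch of round-robin dispatching with quantum q = 1. The five (arrival, service) pairs are illustrative stand-ins for a process scheduling example like Table 9.4, not data from this deck:

    #include <algorithm>
    #include <cstdio>
    #include <queue>
    #include <vector>

    struct Proc { int id; int arrival; int remaining; };

    int main() {
        std::vector<Proc> procs = { {1,0,3}, {2,2,6}, {3,4,4}, {4,6,5}, {5,8,2} };
        const int q = 1;                   // time quantum
        std::queue<int> ready;             // indices into procs
        int t = 0, finished = 0;
        size_t next = 0;                   // next arrival to admit
        while (finished < (int)procs.size()) {
            while (next < procs.size() && procs[next].arrival <= t) ready.push(next++);
            if (ready.empty()) { ++t; continue; }          // CPU idle until an arrival
            int i = ready.front(); ready.pop();
            int run = std::min(q, procs[i].remaining);     // run one quantum (or less)
            t += run;
            procs[i].remaining -= run;
            while (next < procs.size() && procs[next].arrival <= t) ready.push(next++);
            if (procs[i].remaining > 0)
                ready.push(i);                             // preempted: back of the queue
            else {
                ++finished;
                std::printf("P%d finishes at time %d\n", procs[i].id, t);
            }
        }
        return 0;
    }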

Figure 9.6a Effect of Size of Preemption Time Quantum

Figure 9.6b Effect of Size of Preemption Time Quantum

Virtual Round Robin (VRR)

Shortest Process Next (SPN)
• Nonpreemptive policy in which the process with the shortest expected processing time is selected next
• A short process will jump to the head of the queue
• Possibility of starvation for longer processes
• One difficulty is the need to know, or at least estimate, the required processing time of each process
• If the programmer's estimate is substantially under the actual running time, the system may abort the job

SPN Issues
• Problem: estimating execution time; the OS may collect statistics and use process history to estimate run time, e.g., for processes in a production environment (one common estimator is noted below)
• Problem: avoiding starvation for long processes
• Problem: not suitable for time-sharing or transaction processing, because there is no preemption
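One standard history-based estimator for this purpose (not shown in this deck) is exponential averaging: S[n+1] = a * T[n] + (1 - a) * S[n], where T[n] is the measured execution time of the nth instance, S[n] is the previous estimate, and the constant a (0 < a <= 1) weights recent behavior more heavily the larger it is.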

Shortest Remaining Time (SRT)
• Preemptive version of SPN
• The scheduler always chooses the process that has the shortest expected remaining processing time
• Risk of starvation of longer processes
• Should give superior turnaround-time performance to SPN, because a short job is given immediate preference over a running longer job
• Still depends on having accurate service-time estimates

Highest Response Ratio Next (HRRN)
• Chooses the next process with the greatest value of the response ratio R = (w + s) / s, where w is the time spent waiting so far and s is the expected service time
• Attractive because it accounts for the age of the process
• While shorter jobs are favored, aging without service increases the ratio, so a longer process will eventually get past competing shorter jobs

Multilevel Feedback Scheduling
• Useful when there is no information about the relative length of various jobs, but you would like to favor short jobs
• Scheduling is similar to RR: FCFS with a time quantum; however, when a process blocks or is preempted, it is "fed back" into the next lower-level queue
• Once it reaches the lowest-level queue, a process is served by RR until it terminates
• A process is dispatched from the highest-priority non-empty queue
• Result: new processes are favored over long, older processes
• The basic algorithm may starve long processes or I/O-bound processes; modifications address starvation and I/O-bound processes

Feedback Scheduling

Feedback Performance

Feedback Queue Modifications
• To avoid unreasonably long waits for long processes, give processes in lower-priority queues longer quanta
• To avoid starvation, let a process that has not executed for a certain amount of time move to a higher-level queue
• To lessen the penalty on I/O-bound processes, use some version of virtual RR

Performance Comparison
• Any scheduling discipline that chooses the next item to be served independent of service time obeys the relationship Tr/Ts = 1/(1 - ρ), where Tr is the turnaround (residence) time, Ts is the average service time, and ρ is the processor utilization

Table 9.6 Formulas for Single-Server Queues with Two Priority Categories

Overall Normalized Response Time

Normalized Response Time for Shorter Processes

Normalized Response Time for Longer Processes

Simulation Results

Fair-Share Scheduling
• Scheduling decisions are based on sets of processes, rather than on individual processes
• Each user is assigned a share of the processor
• The objective is to monitor usage so as to give fewer resources to users who have had more than their fair share, and more to those who have had less than their fair share

Fair-Share Scheduler

Traditional UNIX Scheduling
• Used in both SVR3 and 4.3 BSD UNIX; these systems were primarily targeted at the time-sharing interactive environment
• Designed to provide good response time for interactive users while ensuring that low-priority background jobs do not starve
• Employed multilevel feedback using round robin within each of the priority queues
• Made use of one-second preemption
• Priority is based on process type and execution history

Scheduling Formula
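The formulas themselves are not reproduced in this transcription. For reference, the traditional UNIX recomputation (as given in Stallings) runs once per second for each process j:

    CPU_j(i) = CPU_j(i-1) / 2                      /* exponential decay of measured processor use */
    P_j(i)   = Base_j + CPU_j(i) / 2 + nice_j      /* smaller value = higher priority */

where CPU_j(i) is the processor-utilization measure of process j in interval i, Base_j is the base priority of j's band, and nice_j is the user-controllable adjustment factor.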

Bands
• Used to optimize access to block devices and to allow the operating system to respond quickly to system calls
• In decreasing order of priority:
  • Swapper
  • Block I/O device control
  • File manipulation
  • Character I/O device control
  • User processes

Example of Traditional UNIX Process Scheduling