Lecture 5: Inter-process Communication and Synchronization
(Slides based on Silberschatz, Galvin and Gagne © 2005)
Contents
- Where is the problem
- Race Condition and Critical Section
- Possible Solutions
- Semaphores
- Deadlocks
- Classical Synchronization Tasks
- Monitors
- Examples
Example
- Concurrent access to shared data may result in data inconsistency.
- Maintaining data consistency requires mechanisms that ensure the orderly execution of cooperating processes.
- Suppose we want to solve the producer-consumer problem: there is a bounded buffer of N items; the producer puts data into the buffer and the consumer takes data out of it.
  - We can keep an integer count of the number of occupied buffer entries. Initially, count is 0.
  - The producer increments count after inserting a new item into the buffer; the consumer decrements it after consuming an item.
  - (Figure: a circular buffer b[0] … b[N-1], with "in" marking where the producer inserts the next item and "out" marking where the consumer removes the next item.)
Producer & Consumer Problem

Shared data:

```c
#define BUF_SZ 20
typedef struct { … } item;
item buffer[BUF_SZ];
int count = 0;
```

Producer:

```c
void producer() {
    int in = 0;
    item nextProduced;
    while (1) {
        /* Generate new item into nextProduced */
        while (count == BUF_SZ)
            ;   /* do nothing */
        buffer[in] = nextProduced;
        in = (in + 1) % BUF_SZ;
        count++;
    }
}
```

Consumer:

```c
void consumer() {
    int out = 0;
    item nextConsumed;
    while (1) {
        while (count == 0)
            ;   /* do nothing */
        nextConsumed = buffer[out];
        out = (out + 1) % BUF_SZ;
        count--;
        /* Process nextConsumed */
    }
}
```

- This is a naive solution that does not work.
Race Condition
- count++ could be implemented as

      reg1 = count
      reg1 = reg1 + 1
      count = reg1

- count-- could be implemented as

      reg2 = count
      reg2 = reg2 - 1
      count = reg2

- Consider this execution interleaving with count = 5 initially:

      S0: producer executes  reg1 = count        {reg1 = 5}
      S1: producer executes  reg1 = reg1 + 1     {reg1 = 6}
      S2: consumer executes  reg2 = count        {reg2 = 5}
      S3: consumer executes  reg2 = reg2 - 1     {reg2 = 4}
      S4: consumer executes  count = reg2        {count = 4}
      S5: producer executes  count = reg1        {count = 6}

- The variable count represents a shared resource; the final value depends on the interleaving (a sketch demonstrating this follows).
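The effect described above is easy to reproduce. The following minimal sketch (not part of the original slides; names are illustrative) runs two POSIX threads that increment and decrement a shared counter without any synchronization; because count++ and count-- compile to separate load/add/store steps, updates are lost and the final value is usually not 0.

```c
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static volatile int count = 0;       /* shared resource, intentionally unprotected */

static void *incrementer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        count++;                      /* load, add, store - not atomic */
    return NULL;
}

static void *decrementer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        count--;                      /* interleaves with the increments */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, incrementer, NULL);
    pthread_create(&t2, NULL, decrementer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count = %d (expected 0)\n", count);
    return 0;
}
```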
Critical-Section Problem
What is a CRITICAL SECTION? A part of the code in which a process accesses a particular resource shared with other processes; we speak about a critical section related to that resource. A solution must satisfy three requirements:
1. Mutual Exclusion – if process Pi is executing in its critical section, then no other process may be executing in its critical section related to that resource.
2. Progress – if no process is executing in its critical section and some processes wish to enter their critical sections, then one of them must be allowed to enter as soon as possible.
3. Bounded Waiting – there must be a bound on the number of times other processes are allowed to enter their critical sections after a process has requested entry to its critical section and before that request is granted.
- Assume that each process executes at a nonzero speed.
- No assumption may be made about the relative speed of the N processes.
Critical Section Solution
A critical section is bracketed by two basic operations: enter_CS and leave_CS. Possible implementations of these operations:
- purely in software at the application layer
- with hardware support for the operations
- in software with the support of the OS
SW solution for 2 processes
- Have a variable turn whose value indicates which process may enter the critical section: if turn == 0 then P0 may enter, if turn == 1 then P1 may.

```c
/* P0 */
while (TRUE) {
    while (turn != 0)
        ;   /* wait */
    critical_section();
    turn = 1;
    noncritical_section();
}

/* P1 */
while (TRUE) {
    while (turn != 1)
        ;   /* wait */
    critical_section();
    turn = 0;
    noncritical_section();
}
```

- However: suppose P0 finishes its critical section quickly and sets turn = 1; both processes are now in their non-critical parts. If P0 is also quick in its non-critical part and wants to re-enter the critical section, it has to wait because turn == 1, even though the critical section is free.
  - Requirement #2 (Progress) is violated.
  - Moreover, the behaviour inadmissibly depends on the relative speed of the processes.
Peterson’s Solution
- A two-process solution from 1981.
- Assume that the LOAD and STORE instructions are atomic, i.e. they cannot be interrupted.
- The two processes share two variables:
  - int turn;
  - boolean flag[2];
- The variable turn indicates whose turn it is to enter the critical section.
- The flag array indicates whether a process is ready to enter the critical section: flag[i] == TRUE means that process Pi is ready (i = 0, 1).

Structure of process Pi:

```c
j = 1 - i;
flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j)
    ;   /* wait */
// CRITICAL SECTION
flag[i] = FALSE;
```
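A note on modern hardware: plain int/boolean variables do not give the "atomic, never reordered LOAD and STORE" behaviour the slide assumes. A minimal sketch (not from the slides) using C11 sequentially consistent atomics restores that assumption; enter_critical and leave_critical are illustrative names.

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool flag[2];   /* flag[i]: process i wants to enter */
static atomic_int  turn;      /* whose turn it is to yield */

void enter_critical(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;   /* busy wait */
}

void leave_critical(int i) {
    atomic_store(&flag[i], false);
}
```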
Synchronization Hardware
- Many systems provide hardware support for critical-section code.
- Uniprocessors – could disable interrupts
  - Currently running code would execute without preemption
  - Dangerous to disable interrupts at the application level
    - Disabling interrupts is usually unavailable in CPU user mode
  - Generally too inefficient on multiprocessor systems
    - Operating systems using this approach are not broadly scalable
- Modern machines provide special atomic hardware instructions (atomic = non-interruptible):
  - Test a memory word and set its value
  - Swap the contents of two memory words
TestAndSet Instruction
- Semantics:

```c
boolean TestAndSet(boolean *target) {
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
```

- Shared boolean variable lock, initialized to FALSE.
- Solution:

```c
while (TestAndSet(&lock))
    ;            /* active waiting */
// critical section
lock = FALSE;
// remainder section
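For reference, C11 exposes exactly this primitive as atomic_flag_test_and_set in <stdatomic.h>: it atomically reads the old value and sets the flag. A minimal sketch of the same lock (acquire/release are illustrative names):

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;   /* active waiting until the previous value was false */
}

void release(void) {
    atomic_flag_clear(&lock);                 /* lock = FALSE */
}
```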
Swap Instruction
- Semantics:

```c
void Swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
```

- Shared boolean variable lock initialized to FALSE; each process has a local boolean variable key.
- Solution:

```c
key = TRUE;
while (key == TRUE) {   /* waiting */
    Swap(&lock, &key);
}
// critical section
lock = FALSE;
// remainder section
```
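The same idea maps onto the C11 atomic exchange operation, which stores a new value and returns the old one in a single atomic step. A minimal sketch (names are illustrative, not part of the slides):

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool lock2;   /* false = unlocked */

void acquire_swap(void) {
    bool key = true;
    while (key)                           /* keep swapping until we read false */
        key = atomic_exchange(&lock2, true);
}

void release_swap(void) {
    atomic_store(&lock2, false);
}
```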
Synchronization without active waiting
- Active waiting wastes CPU time
  - It can even lead to failure if a high-priority process actively waits for a low-priority process
- Solution: blocking by system functions
  - sleep() – the calling process becomes inactive
  - wakeup(process) – wake up a process after leaving the critical section

```c
void producer() {
    while (1) {
        if (count == BUFFER_SIZE)
            sleep();                 /* no space left - wait (sleep) */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        count++;
        if (count == 1)
            wakeup(consumer);        /* there is something to consume */
    }
}

void consumer() {
    while (1) {
        if (count == 0)
            sleep();                 /* nothing to do - wait (sleep) */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        count--;
        if (count == BUFFER_SIZE - 1)
            wakeup(producer);        /* now there is space for a new item */
    }
}
```
Synchronization without active waiting (2)
- The presented code is not a good solution:
  - Access to the shared variable count together with the sleep() call is itself an unprotected critical section:
    - The consumer reads count == 0 and is then preempted before it calls sleep().
    - The producer inserts a new item into the buffer and, since count == 1, tries to wake up the consumer – but the consumer is not sleeping yet!
    - Execution switches back to the consumer, which continues by calling sleep().
    - When the producer eventually fills the buffer, it also calls sleep() – both processes sleep forever.
- Better solution: semaphores
Semaphore
- A synchronization tool that does not require busy waiting
  - Busy waiting wastes CPU time
- Semaphore S – a system object
  - Each semaphore has an associated waiting queue; each entry in the waiting queue has two data items:
    - value (of type integer)
    - pointer to the next record in the list
  - Two standard operations modify S: wait() and signal()

```c
wait(S) {
    value--;
    if (value < 0) {
        /* add the caller to the waiting queue */
        block(P);
    }
}

signal(S) {
    value++;
    if (value <= 0) {
        /* remove a process P from the waiting queue */
        wakeup(P);
    }
}
```
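For comparison, POSIX provides this abstraction directly through sem_init, sem_wait and sem_post in <semaphore.h>. A minimal sketch (the worker function and variable names are illustrative, not from the slides):

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t s;
static int shared = 0;

static void *worker(void *arg) {
    sem_wait(&s);        /* wait(S): blocks if the value would drop below 0 */
    shared++;            /* critical section */
    sem_post(&s);        /* signal(S): wakes one waiter, if any */
    return NULL;
}

int main(void) {
    sem_init(&s, 0, 1);  /* binary semaphore, initial value 1 */
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    worker(NULL);
    pthread_join(t, NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&s);
    return 0;
}
```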
Semaphore as General Synchronization Tool
- Counting semaphore – the integer value can range over an unrestricted domain
- Binary semaphore – the integer value can be only 0 or 1
  - Also known as a mutex lock
- A counting semaphore S can be implemented using binary semaphores
- Provides mutual exclusion (mutex):

```c
Semaphore S;   // initialized to 1
wait(S);
// critical section
signal(S);
```
Spin-lock
- A spin-lock is a general (counting) semaphore that uses busy waiting instead of blocking
  - Blocking and switching between threads and/or processes may cost much more time than the waste caused by short-term busy waiting
  - One CPU busy-waits while another CPU executes the code that clears away the reason for waiting
- Used on multiprocessors to implement short critical sections
  - Typically inside the OS kernel
- Used in many multiprocessor operating systems
  - Windows 2k/XP, Linux, ...
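User-level code can use the same idea through POSIX spin locks. A minimal sketch (function names are illustrative); the critical section must be kept very short, otherwise the busy waiting dominates:

```c
#include <pthread.h>

static pthread_spinlock_t sl;
static long counter;

void spin_example_init(void) {
    pthread_spin_init(&sl, PTHREAD_PROCESS_PRIVATE);
}

void spin_example_update(void) {
    pthread_spin_lock(&sl);     /* spins (busy-waits) until the lock is free */
    counter++;                  /* very short critical section */
    pthread_spin_unlock(&sl);
}
```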
Deadlock and Starvation
- Overlapping critical sections related to different resources
- Deadlock – two or more processes wait indefinitely for an event that can be caused only by one of the waiting processes
- Let S and Q be two semaphores, both initialized to 1:

      P0              P1
      wait(S);        wait(Q);
      wait(Q);        wait(S);
      ...             ...
      signal(S);      signal(Q);
      signal(Q);      signal(S);

  If P0 is preempted after wait(S) and P1 then executes wait(Q), both processes block on their second wait – a deadlock (a lock-ordering remedy is sketched below).
- Starvation – indefinite blocking: a process may never be removed from the semaphore queue in which it is suspended.
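One common remedy (not given on the slide) is to acquire the semaphores in the same global order in every process, so a circular wait cannot arise. A minimal sketch using POSIX semaphores; the names follow the slide, p0/p1/init_sq are illustrative:

```c
#include <semaphore.h>

static sem_t S, Q;

void init_sq(void) {
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
}

void p0(void) {
    sem_wait(&S);        /* global order: always S before Q */
    sem_wait(&Q);
    /* ... use both resources ... */
    sem_post(&Q);
    sem_post(&S);
}

void p1(void) {
    sem_wait(&S);        /* same order as P0, not Q first */
    sem_wait(&Q);
    /* ... use both resources ... */
    sem_post(&Q);
    sem_post(&S);
}
```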
Classical Problems of Synchronization
- Bounded-Buffer Problem
  - Passing data between two processes
- Readers and Writers Problem
  - Concurrent reading and writing of data (in databases, ...)
- Dining-Philosophers Problem (from 1965)
  - An interesting illustrative problem for studying deadlocks
    - Five philosophers sit around a table; they either think or eat
    - They eat slippery spaghetti and each needs two sticks (forks)
    - What happens if all five philosophers pick up their right-hand stick? "They will die of hunger."
Bounded-Buffer Problem using Semaphores
- Three semaphores:
  - mutex – mutually exclusive access to the buffer – initialized to 1
  - used – counting semaphore indicating the number of items in the buffer – initialized to 0
  - free – number of free slots – initialized to BUF_SZ

```c
void producer() {
    while (1) {
        /* Generate new item into nextProduced */
        wait(free);
        wait(mutex);
        buffer[in] = nextProduced;
        in = (in + 1) % BUF_SZ;
        signal(mutex);
        signal(used);
    }
}

void consumer() {
    while (1) {
        wait(used);
        wait(mutex);
        nextConsumed = buffer[out];
        out = (out + 1) % BUF_SZ;
        signal(mutex);
        signal(free);
        /* Process the item from nextConsumed */
    }
}
```
Readers and Writers
- The task: several processes access shared data
  - Some processes read the data – readers
  - Other processes need to write (modify) the data – writers
  - Concurrent reads are allowed
    - An arbitrary number of readers can access the data without limitation
  - Writing must be mutually exclusive with any other action (reading and writing)
    - At any moment, only one writer may access the data
    - Whenever a writer modifies the data, no reader may read it
- Two possible approaches
  - Priority for readers
    - No reader waits unless the shared data are locked by a writer; in other words, a reader waits only for a writer to leave the critical section
    - Consequence: writers may starve
  - Priority for writers
    - Any ready writer waits for the critical section to be freed (by a reader or a writer); in other words, a ready writer overtakes all ready readers
    - Consequence: readers may starve
Readers and Writers with Readers’ Priority
Shared data:
  semaphore wrt, readcountmutex;
  int readcount;
Initialization:
  wrt = 1; readcountmutex = 1; readcount = 0;

Writer:

```c
wait(wrt);
/* ... writer modifies the data ... */
signal(wrt);
```

Reader:

```c
wait(readcountmutex);
readcount++;
if (readcount == 1)
    wait(wrt);           /* first reader locks out writers */
signal(readcountmutex);

/* ... read shared data ... */

wait(readcountmutex);
readcount--;
if (readcount == 0)
    signal(wrt);         /* last reader lets writers in */
signal(readcountmutex);
```
Readers and Writers with Writers’ Priority
Shared data:
  semaphore wrt, rdr, readcountmutex, writecountmutex;
  int readcount, writecount;
Initialization:
  wrt = 1; rdr = 1; readcountmutex = 1; writecountmutex = 1; readcount = 0; writecount = 0;

Reader:

```c
wait(rdr);
wait(readcountmutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(readcountmutex);
signal(rdr);

/* ... read shared data ... */

wait(readcountmutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(readcountmutex);
```

Writer:

```c
wait(writecountmutex);
writecount++;
if (writecount == 1)
    wait(rdr);           /* first writer locks out new readers */
signal(writecountmutex);

wait(wrt);
/* ... modify shared data ... */
signal(wrt);

wait(writecountmutex);
writecount--;
if (writecount == 0)
    signal(rdr);         /* last writer lets readers in */
signal(writecountmutex);
```
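In practice, POSIX read-write locks package this pattern: many readers may hold the lock at once, while a writer gets exclusive access (whether readers or writers are preferred is implementation-defined). A minimal sketch with illustrative function names:

```c
#include <pthread.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static int shared_data;

int reader_gets(void) {
    pthread_rwlock_rdlock(&rw);   /* shared with other readers */
    int v = shared_data;
    pthread_rwlock_unlock(&rw);
    return v;
}

void writer_sets(int v) {
    pthread_rwlock_wrlock(&rw);   /* exclusive access */
    shared_data = v;
    pthread_rwlock_unlock(&rw);
}
```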
Dining Philosophers Problem
Shared data:
  semaphore chopStick[] = new Semaphore[5];
Initialization:
  for (i = 0; i < 5; i++) chopStick[i] = 1;

Implementation of philosopher i:

```c
do {
    chopStick[i].wait();
    chopStick[(i + 1) % 5].wait();
    eating();                          /* now eating */
    chopStick[i].signal();
    chopStick[(i + 1) % 5].signal();
    thinking();                        /* now thinking */
} while (TRUE);
```

- This solution contains NO deadlock prevention
  - A rigorous avoidance of deadlock for this task is very complicated (a simple workaround is sketched below)
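A simple deadlock-avoiding variant (not the slide's solution, and written here with POSIX semaphores rather than the slide's semaphore objects): let one philosopher pick up the sticks in the opposite order, which breaks the circular wait.

```c
#include <semaphore.h>

#define N 5
static sem_t chopStick[N];

void init_sticks(void) {
    for (int i = 0; i < N; i++)
        sem_init(&chopStick[i], 0, 1);   /* every stick initially available */
}

void philosopher(int i) {
    int first  = i;
    int second = (i + 1) % N;
    if (i == N - 1) {                    /* last philosopher reverses the order */
        first  = (i + 1) % N;
        second = i;
    }
    for (;;) {
        sem_wait(&chopStick[first]);
        sem_wait(&chopStick[second]);
        /* eating() */
        sem_post(&chopStick[second]);
        sem_post(&chopStick[first]);
        /* thinking() */
    }
}
```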
Monitors
- A high-level abstraction that provides a convenient and effective mechanism for process synchronization
- Only one process may be active within the monitor at a time

```c
monitor monitor_name {
    // shared variable declarations
    condition x, y;        // condition variable declarations

    procedure P1(...) { .... }
    ...
    procedure Pn(...) { ...... }

    initialization_code(....) { ... }
}
```

- Two operations on a condition variable:
  - x.wait() – the process that invokes the operation is suspended
  - x.signal() – resumes one of the processes (if any) that invoked x.wait()
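C has no monitor construct, but a mutex plus a condition variable from Pthreads gives the same wait/signal behaviour. A minimal sketch of a one-flag "monitor" (function and variable names are illustrative, not from the slides):

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  x = PTHREAD_COND_INITIALIZER;
static bool ready = false;

void monitor_wait_for_ready(void) {
    pthread_mutex_lock(&m);          /* enter the monitor */
    while (!ready)
        pthread_cond_wait(&x, &m);   /* x.wait(): releases m while suspended */
    pthread_mutex_unlock(&m);        /* leave the monitor */
}

void monitor_set_ready(void) {
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_signal(&x);         /* x.signal(): resume one waiter, if any */
    pthread_mutex_unlock(&m);
}
```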
Monitor with Condition Variables
(Figure: a monitor with its entry queue and the waiting queues associated with its condition variables.)
Dining Philosophers with Monitors

```c
monitor DP {
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
```
Synchronization Examples
- Windows XP synchronization
  - Uses interrupt masks to protect access to global resources on uniprocessor systems
  - Uses spinlocks on multiprocessor systems
  - Also provides dispatcher objects, which may act as either mutexes or semaphores
  - Dispatcher objects may also provide events
    - An event acts much like a condition variable
- Linux synchronization
  - Disables interrupts to implement short critical sections
  - Provides semaphores and spin locks
- Pthreads synchronization
  - The Pthreads API is OS-independent; the detailed implementation depends on the particular OS
  - As defined by POSIX, it provides
    - mutex locks
    - condition variables (monitors)
    - read-write locks (for long critical sections)
    - spin locks
End of Lecture 5. Questions?