
Chapter 6: Concurrent Processes
CIS 106 Microcomputer Operating Systems
Gina Rue, CIS Faculty, Ivy Tech State College, Northwest Region

Introduction: Concurrent Processes
• Multiprocessing systems have more than one CPU
– the problems that occur in single-processor systems also apply to multiprocessing in general
• a single processor with two or more processes
• more than one processor, each with multiple processes
See illustration, p. 125

What Is Parallel Processing?
– Parallel processing, also called multiprocessing, is a mode of operation in which two or more processors work in unison
– Two or more CPUs execute instructions simultaneously
– Each CPU can have a process in the RUNNING state at the same time
– The Process Manager must coordinate the activity of each processor and synchronize the interaction among the CPUs

What Is Parallel Processing?
– Synchronization is the key to the system's success, because many things can go wrong in a multiprocessing system
– The system won't work unless every processor communicates and cooperates with every other processor
– Since the mid-1980s, declining CPU hardware costs have increased the use of multiprocessors in business environments

What Is Parallel Processing?
– Two major forces behind multiprocessing development:
• the need to enhance throughput
• the need to increase computing power
– Two primary benefits:
• increased reliability
• faster processing

Typical Multiprocessing Configurations
Much depends on how the multiple processors are configured. Three typical configurations are:
– master/slave
– loosely coupled
– symmetric

Master/Slave Configuration
• A single processor (the master) manages additional (slave) processors
– the master processor manages the entire system
– well suited for environments serving both front-end (interactive) and back-end (batch) users
– its advantage is simplicity
– its disadvantages:
• reliability is no higher than that of a single-processor system
• it can lead to poor use of resources
• it increases the number of interrupts, because every slave must interrupt the master
See Fig. 6.1, p. 128

Loosely Coupled Configuration
• Several complete computer systems, each with its own memory, I/O devices, CPU, and operating system
– each processor controls its own resources
– each processor can communicate and cooperate with the others
– a new job may be assigned to the processor with the lightest load or the best combination of output devices
– if one processor fails, the others can continue working independently, although it can be difficult to detect which processor has failed
See Fig. 6.2, p. 128

Symmetric Configuration
• Processor scheduling is decentralized
• Best implemented when the processors are all of the same type
– four advantages over loosely coupled:
• more reliable
• uses resources effectively
• balances the load well
• degrades gracefully in the event of system failure
See Fig. 6.3, p. 129

Process Synchronization Software
• Success hinges on the capability of the OS to make a resource unavailable to other processes while it is being used by one of them
• A resource in use must be locked away from other processes until it is released; a critical region allows the process using it to finish before another takes over
• A mistake in synchronization could leave a job waiting indefinitely

Test-And-Set
• A single, indivisible machine instruction known as TS
• Introduced by IBM for its System 360/370 computers
• In one machine cycle, it tests whether the key is available or unavailable
• The key is a single bit in a storage location that can contain a zero (free) or a one (busy)
A sketch of the idea follows below.
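
As a sketch of how a test-and-set lock behaves, the C program below protects a critical region with the C11 atomic_flag type, whose atomic_flag_test_and_set call performs the test and the set in one indivisible step, just as the TS instruction does in one machine cycle. The function names enter_region and leave_region are illustrative, not from the text.

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    atomic_flag lock_bit = ATOMIC_FLAG_INIT;  /* the single key bit: clear = free, set = busy */
    long counter = 0;

    void enter_region(void) {
        /* reads the old value and sets the bit in one indivisible step */
        while (atomic_flag_test_and_set(&lock_bit))
            ;  /* busy waiting: spin until the bit was found free */
    }

    void leave_region(void) {
        atomic_flag_clear(&lock_bit);  /* set the key back to zero (free) */
    }

    void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            enter_region();   /* critical region begins */
            counter++;        /* only one thread at a time touches counter */
            leave_region();   /* critical region ends */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);  /* 200000 with the lock in place */
        return 0;
    }

Compile with -pthread; without enter_region/leave_region the two threads would race and the final count would usually fall short.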

Test-And-Set
• Simple to implement, and works well for a small number of processes
• Two drawbacks:
• starvation can occur when many processes are waiting to enter the critical region
• busy waiting: the waiting processes remain in unproductive, resource-consuming wait loops

WAIT and SIGNAL
• A modification of TS designed to remove busy waiting
• The two operations are mutually exclusive:
• WAIT is activated when the process encounters a busy condition code; the process is blocked rather than left spinning
• SIGNAL is activated when a process exits the critical region and the condition code is set to "free"
One possible realization is sketched below.
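
A minimal sketch of how WAIT and SIGNAL might be realized without busy waiting, using a POSIX mutex and condition variable so that a blocked process sleeps in a queue instead of spinning. The text describes the operations abstractly, so this pairing with the pthread API is an assumption, and the names wait_op and signal_op are illustrative.

    #include <pthread.h>
    #include <stdbool.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  freed = PTHREAD_COND_INITIALIZER;
    bool busy = false;   /* the condition code: false = free, true = busy */

    void wait_op(void) {             /* WAIT: block instead of spinning */
        pthread_mutex_lock(&m);
        while (busy)
            pthread_cond_wait(&freed, &m);  /* sleep in a queue; no CPU consumed */
        busy = true;                 /* claim the resource */
        pthread_mutex_unlock(&m);
    }

    void signal_op(void) {           /* SIGNAL: mark free and wake one waiter */
        pthread_mutex_lock(&m);
        busy = false;
        pthread_cond_signal(&freed);
        pthread_mutex_unlock(&m);
    }

    int main(void) {
        wait_op();     /* first caller finds the code "free" and claims it */
        /* ... critical region ... */
        signal_op();   /* sets it back to "free" and wakes any waiter */
        return 0;
    }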

Semaphores
• A nonnegative integer variable that is used as a flag
• It signals if and when a resource is free and can be used by a process
• Dijkstra (1965) introduced two operations to overcome process synchronization problems:
• P (from proberen, to test)
• V (from verhogen, to increment)
See Table 6.1, p. 133 (a P/V sketch follows below)
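
A minimal sketch of P and V using the POSIX semaphore API, in which sem_wait plays the role of P and sem_post the role of V. The operations are as the text describes; mapping them onto this particular API is an assumption for illustration.

    #include <semaphore.h>

    int main(void) {
        sem_t s;
        sem_init(&s, 0, 1);   /* semaphore starts at 1: the resource is free */

        sem_wait(&s);         /* P: test; decrements to 0, or blocks if already 0 */
        /* ... critical region: the resource is ours ... */
        sem_post(&s);         /* V: increments back to 1, freeing the resource */

        sem_destroy(&s);
        return 0;
    }

On Linux, compile with -pthread.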

Semaphores
• P and V are executed by the OS in response to calls issued by any one process naming a semaphore as a parameter
• The traditional name for a mutual-exclusion semaphore is mutex (MUTual EXclusion)
• Mutual exclusion is necessary to keep two operations from attempting to execute at the same time

Semaphores
• In sequential computations, mutual exclusion is achieved automatically because each operation is handled in order, one at a time
• In parallel computations, the order of execution can change, so mutual exclusion must be explicitly stated and maintained

Process Cooperation
Several processes can work together to complete a common task. Two classic examples:
• producers and consumers
• readers and writers
Each requires both mutual exclusion and synchronization, and both are implemented using semaphores.

Producers and Consumers
Classic problem:
• One process produces data that another process consumes later
• It can be expanded to several pairs of producers and consumers
• It can be extended to buffers that hold records or other data, wherever process-to-process communication is required
See Fig. 6.5, p. 135 (a bounded-buffer sketch follows below)
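
A minimal bounded-buffer sketch with one producer and one consumer, using three semaphores: two counting semaphores track empty and full slots, and a binary semaphore (mutex) guards the buffer itself. The buffer size and item counts are illustrative.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define SIZE 4
    int buffer[SIZE];
    int in = 0, out = 0;            /* next slot to fill / to empty */
    sem_t empty_slots, full_slots;  /* counting semaphores */
    sem_t mutex;                    /* binary semaphore guarding the buffer */

    void *producer(void *arg) {
        for (int item = 1; item <= 10; item++) {
            sem_wait(&empty_slots);   /* P: wait for a free slot */
            sem_wait(&mutex);         /* P: enter the critical region */
            buffer[in] = item;
            in = (in + 1) % SIZE;
            sem_post(&mutex);         /* V: leave the critical region */
            sem_post(&full_slots);    /* V: announce a full slot */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < 10; i++) {
            sem_wait(&full_slots);    /* P: wait for data to arrive */
            sem_wait(&mutex);
            int item = buffer[out];
            out = (out + 1) % SIZE;
            sem_post(&mutex);
            sem_post(&empty_slots);   /* V: the slot is free again */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        sem_init(&empty_slots, 0, SIZE);  /* all slots start empty */
        sem_init(&full_slots, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

The two counting semaphores make the producer wait when the buffer is full and the consumer wait when it is empty, which is exactly the synchronization the problem requires.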

Readers and Writers
Classic problem:
• Two kinds of processes need to access a shared resource such as a file or database: readers, which do not change it, and writers, which do
• A combination priority policy is used to prevent the starvation of either group
• Readers must call two procedures:
– one checks whether the resource can be granted for reading immediately
– one checks whether there are any writers waiting
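
A minimal sketch of the bookkeeping involved, using the simple readers-preference variant rather than the text's combination priority policy: the first reader locks writers out, and the last reader to leave lets them back in. All names are illustrative.

    #include <pthread.h>
    #include <semaphore.h>

    sem_t resource;      /* guards the shared file or database */
    sem_t count_mutex;   /* guards reader_count */
    int reader_count = 0;

    void start_read(void) {
        sem_wait(&count_mutex);
        if (++reader_count == 1)    /* first reader locks writers out */
            sem_wait(&resource);
        sem_post(&count_mutex);
    }

    void end_read(void) {
        sem_wait(&count_mutex);
        if (--reader_count == 0)    /* last reader lets writers back in */
            sem_post(&resource);
        sem_post(&count_mutex);
    }

    void start_write(void) { sem_wait(&resource); }  /* writers need exclusive access */
    void end_write(void)   { sem_post(&resource); }

    int main(void) {
        sem_init(&resource, 0, 1);
        sem_init(&count_mutex, 0, 1);
        /* reader threads call start_read/end_read; writers call start_write/end_write */
        return 0;
    }

Note that this variant can starve writers when readers keep arriving, which is why the text's policy also checks whether any writers are waiting.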

Concurrent Programming
Multiprocessing can also refer to one job using several processors to execute sets of instructions in parallel
– it requires a programming language that can express parallelism
– it requires a computer system that supports this type of construct

Applications of Concurrent Programming
Monoprogramming (sequential) languages:
• instructions are executed one at a time
• sufficient for most computational purposes
– easy to implement
– fast enough for most users
See Table 6.2, p. 138

Applications of Concurrent Programming
By using a language that allows concurrent processing, arithmetic expressions can be processed differently
• COBEGIN and COEND
– indicate to the compiler which instructions can be processed concurrently
A sketch of the idea follows below.
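
For instance, in an expression such as A = (x + y) * (z - w), the two parenthesized terms are independent and could run on different processors. The C sketch below makes that concurrency explicit with threads standing in for COBEGIN/COEND; the expression and all names are illustrative.

    #include <pthread.h>
    #include <stdio.h>

    int x = 6, y = 2, z = 9, w = 4;
    int t1, t2;   /* temporaries for the two independent subexpressions */

    /* COBEGIN: the two subexpressions have no dependency on each other,
       so they can be processed concurrently */
    void *add_part(void *arg) { t1 = x + y; return NULL; }
    void *sub_part(void *arg) { t2 = z - w; return NULL; }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, add_part, NULL);
        pthread_create(&b, NULL, sub_part, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* COEND: both parts are done; the final multiply depends on both */
        printf("A = %d\n", t1 * t2);   /* (6+2) * (9-4) = 40 */
        return 0;
    }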

Applications of Concurrent Programming
When operations are performed at the same time, we increase computation speed, but we also increase the complexity of the programming language and the hardware
• explicit parallelism
– the programmer states which instructions can be executed in parallel
• implicit parallelism
– the compiler automatically detects which instructions can be performed in parallel
See Table 6.3, p. 138

Ada - Augusta Ada Byron
• In the early 1970s, the U.S. Department of Defense commissioned the design of an original language for embedded computer systems
• Ada, a high-level programming language, was made available to the public in 1980
– its modules support "information hiding"
– it implements concurrent programming
– its design makes it easy to verify the correctness of a program

Ada - Modular Programming
An Ada program contains one or more program units that can be compiled separately. Each unit is composed of:
– a specification part, which holds all the information that must be visible to other units (the argument list)
– a body part, made up of implementation details that don't need to be visible to other units

Ada - Modular Programming
Program units fall into one of three types:
– subprograms, which are executable algorithms
– packages, which are collections of entities (such as procedures or functions)
– tasks, which are concurrent computations
• tasking is the heart of the language's parallel processing ability
• the key is synchronization of the tasks

Ada - The Wave of the Future?
A landmark language:
– researchers find it helpful because of its parallel processing power
– its modular design appeals to application programmers and systems analysts
– its tasking capabilities appeal to designers of database systems and other applications that require parallel processing
– some universities offer Ada courses for students majoring in computer systems

Summary
• Multiprocessing systems have two or more CPUs that must be synchronized by the Process Manager
• Each processor must communicate and cooperate with the others
• Multiprocessor configurations:
– master/slave
– loosely coupled
– symmetric

Summary
• Multiprocessing can also occur in single-processor systems when interacting processes obtain control of the CPU at different times
• Success depends on the system's ability to synchronize the processors or processes and the system's other resources

Summary
• Mutual exclusion helps keep processes that have been allocated resources from becoming deadlocked:
– test-and-set
– WAIT and SIGNAL
– semaphores (P, proberen; V, verhogen; and mutex)

Summary
• Both hardware and software are used to synchronize processes
• Synchronization problems include:
– missed waiting customers
– synchronization of producers and consumers
– mutual exclusion of readers and writers