On the expressive power of synchronization primitives in the π-calculus


On the expressive power of synchronization primitives in the π-calculus

Catuscia Palamidessi, INRIA Saclay, France
14 October 2009, BASICS'09, Shanghai

Focus on the π-calculus

Contents
- The π-calculus with mixed choice (π)
- Expressive power of the π-calculus, and problems with its fully distributed implementation
- The asynchronous π-calculus (πa)
- The π hierarchy
- Towards a randomized fully distributed implementation of π:
  - The probabilistic asynchronous π-calculus (πpa)
  - Encoding π into πpa using a generalized dining philosophers algorithm

The π-calculus

- Proposed by [Milner, Parrow, Walker '92] as a formal language to reason about concurrent systems
  - Concurrent: several processes running in parallel
  - Asynchronous cooperation: every process proceeds at its own speed
  - Synchronous communication: handshaking, input and output prefix
- Mixed guarded choice: input and output guards, as in CSP and CCS. The implementation of guarded choice is also known as the binary interaction problem
- Dynamic generation of communication channels
- Scope extrusion: a channel name can be communicated, and its scope extended to include the recipient

[Figure: scope extrusion — a channel name z of P is communicated to Q over x, extending its scope]

π: the π-calculus with mixed choice

Syntax

  g ::= x(y) | x^y | t        prefixes (input, output, silent)
  P ::= Σi gi.Pi              mixed guarded choice
      | P | P                 parallel
      | (x) P                 new name
      | rec A. P              recursion
      | A                     procedure name

Operational semantics

- Transition system: P --a--> Q
- Rules:

  Choice:  Σi gi.Pi --gi--> Pi

  Open:    P --x^y--> P'
           ------------------   (y ≠ x)
           (y) P --x^(y)--> P'

Operational semantics

- Rules (continued):

  Com:    P --x(y)--> P'    Q --x^z--> Q'
          --------------------------------
          P | Q --t--> P'[z/y] | Q'

  Close:  P --x(y)--> P'    Q --x^(z)--> Q'
          -----------------------------------
          P | Q --t--> (z) (P'[z/y] | Q')

  Par:    P --g--> P'
          --------------------   fn(Q) and bn(g) disjoint
          Q | P --g--> Q | P'

Features which make π very expressive (and cause difficulty in its distributed implementation)

- (Mixed) guarded choice
  - Symmetric solution to certain distributed problems involving distributed agreement
- Link mobility
  - Network reconfiguration
  - It allows expressing higher-order features (e.g. the λ-calculus) in a natural way
  - In combination with guarded choice, it allows solving more distributed problems than those solvable by guarded choice alone

The expressive power of π

Example of distributed agreement: the leader election problem in a symmetric network
- Two symmetric processes must elect one of them as the leader
- In a finite amount of time
- The two processes must agree

  P = x.Pwins + y^.Ploses      Q = y.Qwins + x^.Qloses

  P | Q  --t-->  Pwins | Qloses   or   Ploses | Qwins

[Figure: P and Q connected by channels x and y]
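To see why mixed choice resolves this symmetrically, note that the single handshake between P and Q decides both outcomes at once. A minimal Python sketch of that intuition, modeling the handshake as one atomic test-and-set (the `Election` class and all names here are ours, purely illustrative, not part of the calculus):

```python
import threading

class Election:
    def __init__(self):
        self._lock = threading.Lock()
        self.winner = None  # name of the process whose output fired first

    def try_win(self, name):
        # Atomic test-and-set: the first process to get here plays the
        # output side of the handshake and wins; the other plays the
        # input side and loses. Both outcomes are decided together.
        with self._lock:
            if self.winner is None:
                self.winner = name
                return True
            return False

def run_election():
    election = Election()
    outcomes = {}
    def process(name):
        outcomes[name] = 'wins' if election.try_win(name) else 'loses'
    threads = [threading.Thread(target=process, args=(n,)) for n in ('P', 'Q')]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return outcomes
```

Both processes run identical code from identical initial states, yet exactly one wins: the symmetry is broken by the atomicity of the synchronization, which is precisely what separate-choice calculi cannot express.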

Example of a network where the leader election problem cannot be solved by guarded choice alone

For the following network there is no (fully distributed and symmetric) solution in CCS or in CSP

[Figure: the network]

A solution to the leader election problem in π

[Figure: a network of processes electing a winner and a loser]

Approaches to the implementation of guarded choice in the literature

- [Parrow and Sjödin 92], [Knabe 93], [Tsai and Bagrodia 94]: asymmetric solutions based on introducing an order on processes
- Other asymmetric solutions based on differentiating the initial state
- Plenty of centralized solutions
- [Joung and Smolka 98] proposed a randomized solution to the multiway interaction problem, but it works only under an assumption of partial synchrony among processes

In this talk we propose an implementation which is fully distributed, symmetric, and uses no synchrony assumptions.

State of the art in π

- Formalisms able to express distributed agreement are difficult to implement in a distributed fashion
- For this reason, the field has evolved towards variants of π which retain mobility but have no guarded choice
- One example of such a variant is the asynchronous π-calculus proposed by [Honda, Tokoro '91] and [Boudol '92] (asynchronous = asynchronous communication)

πa: the asynchronous version of π [Amadio, Castellani, Sangiorgi '97]

Syntax

  g ::= x(y) | t              prefixes (input, silent)
  P ::= Σi gi.Pi              input-guarded choice
      | x^y                   output action
      | P | P                 parallel
      | (x) P                 new name
      | rec A. P              recursion
      | A                     procedure name

Characteristics of πa

- Asynchronous communication:
  - we can't write a continuation after an output, i.e. no x^y.P, only x^y | P
  - so P will proceed without waiting for the actual delivery of the message
- Input-guarded choice: only input prefixes are allowed in a choice
  - Note: the original asynchronous π-calculus did not contain a choice construct. However, the version presented here was shown by [Nestmann and Pierce '96] to be equivalent to the original asynchronous π-calculus
- It can be implemented in a fully distributed fashion (see for instance PiLib, a project of Odersky's group)
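The asynchronous discipline can be pictured with a buffered channel: an output is a non-blocking deposit, and only inputs block. A small Python sketch of this idea (illustrative only, not the PiLib API):

```python
import queue

# The channel x, modeled as an unbounded buffer.
x = queue.Queue()

def sender():
    x.put('y')                    # x^y: emit the message and return at once;
                                  # no continuation is attached to the output
    return 'sender continued'     # the rest of P proceeds without waiting
                                  # for the message to be delivered

def receiver():
    return x.get()                # x(y): an input, which blocks until a
                                  # message is available on the buffer
```

Writing `x^y.P` would correspond to `x.put('y')` followed by waiting for an acknowledgement before continuing, which is exactly what πa forbids at the syntax level.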

The π hierarchy

- We can relate the various sublanguages of π by using encodings
  - preserving certain observable properties of runs
  - here we will consider as observable properties the presence/absence of certain actions
- The existence of such an encoding is represented by an arrow in the diagrams on the next slides

The π hierarchy

[Diagram: sublanguages of π ordered by encodability — mixed choice at the top; below it value-passing CCS, internal mobility, and separate choice; then input-guarded choice and output prefix; the asynchronous calculus at the bottom]

The π hierarchy

[Diagram: the same hierarchy, annotated with the separation results — the separation of mixed choice from the lower languages is due to Palamidessi, and the results relating separate choice, input-guarded choice, output prefix, and the asynchronous calculus are due to Nestmann]

Separation result 1

- It is not possible to encode mixed-choice π into separate-choice π
  - homomorphically wrt |
  - preserving 2 distinct observable actions
- This result is based on a sort of confluence property, which holds for the separate-choice π but not for the mixed-choice π
- The proof proceeds by showing that the separate-choice π cannot solve the leader election problem for 2 nodes

Separation result 2

- It is not possible to encode mixed-choice π into value-passing CCS or π with internal mobility
  - homomorphically wrt |
  - without introducing extra channels
  - preserving 2 distinct observable actions
- The proof proceeds by showing that these languages cannot solve the leader election problem for certain kinds of graphs

Towards a fully distributed implementation of π

- The results of the previous pages show that a fully distributed implementation of π must necessarily be randomized
- A two-step approach:

    π  --[[ ]]-->  probabilistic asynchronous π  --<< >>-->  distributed machine

- Advantage: the correctness proof is easier, since [[ ]] (which is the difficult part of the implementation) is between two similar languages

πpa: the probabilistic asynchronous π-calculus

Syntax

  g ::= x(y) | t              prefixes
  P ::= Σi pi gi.Pi           probabilistic input-guarded choice, Σi pi = 1
      | x^y                   output action
      | P | P                 parallel
      | (x) P                 new name
      | rec A. P              recursion
      | A                     procedure name

The operational semantics of πpa

- Based on the probabilistic automata of Segala and Lynch
- Distinction between
  - nondeterministic behavior (choice of the scheduler), and
  - probabilistic behavior (choice of the process)
- Scheduling policy: the scheduler chooses the group of transitions
- Execution: the process chooses probabilistically the transition within the group

[Figure: a tree of transition groups, with probabilities such as 1/2, 1/3, 2/3 labeling the branches within each group]

The operational semantics of πpa

- Representation of a group of transitions:  P { --gi--> pi Pi }i
- Rules:

  Choice:  Σi pi gi.Pi { --gi--> pi Pi }i

  Par:     P { --gi--> pi Pi }i
           -----------------------------
           Q | P { --gi--> pi Q | Pi }i

The operational semantics of πpa

  Com:  P { --xi(yi)--> pi Pi }i    Q { --x^z--> 1 Q' }
        -------------------------------------------------------------------
        P | Q { --t--> pi Pi[z/yi] | Q' }xi=x ∪ { --xi(yi)--> pi Pi | Q }xi≠x

  Res:  P { --xi(yi)--> pi Pi }i
        -------------------------------------   (qi renormalized)
        (x) P { --xi(yi)--> qi (x) Pi }xi≠x
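The renormalization in the Res rule can be made concrete: transitions on the restricted name are dropped from the group, and the surviving probabilities are rescaled so they again sum to 1, i.e. qi = pi / Σ{pj : xj ≠ x}. A tiny illustrative Python helper, assuming the group is given as channel/probability pairs (the representation is ours, not part of the calculus):

```python
def renormalize(transitions, restricted):
    """Drop transitions on the restricted name and rescale the rest.

    transitions: list of (channel, probability) pairs forming one group.
    restricted:  the channel name bound by the restriction (x).
    """
    surviving = [(ch, p) for ch, p in transitions if ch != restricted]
    total = sum(p for _, p in surviving)
    return [(ch, p / total) for ch, p in surviving]
```

For example, restricting x in a group with probabilities 1/2 on x, 1/4 on y, and 1/4 on z leaves y and z with probability 1/2 each.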

Implementation of πpa

- Compilation into a distributed machine, << >> : πpa → DM
  - Distributed: << P | Q >> = << P >>.start() | << Q >>.start()
  - Compositional: << P op Q >> = << P >> op << Q >> for all op
- Channels are buffers with test-and-set (synchronized) methods for input and output
- The input-guarded choice selects probabilistically one of the channels with available data
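A rough Python sketch of these two ingredients — buffered channels with a test-and-set style input, and a probabilistic input-guarded choice over the channels that currently hold data. Class and function names are illustrative, not the actual PiLib interface:

```python
import random
import threading

class Channel:
    """A channel as a synchronized buffer."""
    def __init__(self):
        self._lock = threading.Lock()
        self._buf = []

    def put(self, msg):
        # Output: deposit the message and return immediately.
        with self._lock:
            self._buf.append(msg)

    def try_get(self):
        # Test-and-set style input: atomically take a message if
        # one is available, otherwise return None.
        with self._lock:
            return self._buf.pop(0) if self._buf else None

def input_choice(branches, rng=random):
    """Probabilistic input-guarded choice.

    branches: list of (channel, continuation) pairs; a branch is
    enabled when its channel currently holds data.
    """
    ready = [b for b in branches if b[0]._buf]  # unsynchronized peek, fine for a sketch
    if not ready:
        return None                             # no branch enabled
    chan, cont = rng.choice(ready)              # uniform choice among enabled branches
    msg = chan.try_get()
    return cont(msg) if msg is not None else None
```

In a real implementation the peek-then-take would itself need to be a single synchronized step (or retried on failure); the sketch only shows the shape of the selection.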

Encoding π into πpa

- [[ ]] : π → πpa
- Fully distributed: [[ P | Q ]] = [[ P ]] | [[ Q ]]
- Preserves the communication structure: [[ Pσ ]] = [[ P ]]σ
- Compositional: [[ P op Q ]] = Cop[ [[ P ]], [[ Q ]] ]
- Correct wrt a notion of probabilistic testing semantics:

    P must O   iff   [[ P ]] must [[ O ]] with probability 1

Encoding π into πpa

- Idea (from an idea of Uwe Nestmann):
  - every mixed choice is translated into a parallel composition of processes corresponding to the branches, plus a lock f
  - the input processes compete to acquire both their own lock and the lock of the partner
  - the input process which succeeds first establishes the communication; the other alternatives are discarded
- The problem is reduced to a dining philosophers problem: each lock is a fork, and each input process is a philosopher which enters a competition to get its adjacent forks
- The winners of the competition can synchronize, which corresponds to eating in the DP. There can be more than one winner
- Generalized DP: each fork can be adjacent to more than two philosophers

[Figure: branch processes Pi, Qi, Ri, Si sharing locks f]

Dining philosophers: classic case

Each fork is shared by exactly two philosophers

Dining philosophers: generalized case

Each fork can be shared by more than two philosophers

Intended properties of the solution

- Deadlock freedom (aka progress): if there is a hungry philosopher, some philosopher will eventually eat
- Starvation freedom: every hungry philosopher will eventually eat (but we won't consider this property here)
- Robustness wrt a large class of schedulers: a scheduler decides who makes the next move, not necessarily in cooperation with the program, maybe even against it
- Fully distributed: no centralized control or memory
- Symmetric: all philosophers run the same code and are in the same initial state; the same holds for the forks

The dining philosophers: a brief history

- Problem proposed by Edsger Dijkstra in 1965 (the popular formulation is actually due to Tony Hoare)
- Many solutions had been proposed for the DP, but none of them satisfied all requirements
- In 1981, Lehmann and Rabin proved that
  - there is no "deterministic" solution satisfying all requirements
  - they proposed a randomized solution and proved that it satisfies all requirements. Progress is satisfied in the probabilistic sense, i.e. with probability 1 a philosopher will eventually eat
- Meanwhile, in 1980, Francez and Rodeh had come out with a solution to the DP written in CSP
- The controversy was solved by Lehmann and Rabin, who proved that CSP (with guarded choice) is not implementable in a distributed fashion (deterministically)

The algorithm of Lehmann and Rabin

  1) think;
  2) choose probabilistically first_fork in {left, right};
  3) if not taken(first_fork) then take(first_fork) else goto 3;
  4) if not taken(second_fork) then take(second_fork)
     else { release(first_fork); goto 2 }
  5) eat;
  6) release(second_fork);
  7) release(first_fork);
  8) goto 1
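The algorithm above can be sketched as a runnable simulation for the classic ring (each fork shared by exactly two philosophers), using non-blocking lock acquisition for the take/taken test. The code and its naming are our own sketch, not the talk's:

```python
import random
import threading

def philosopher(i, forks, meals, rounds=3, rng=random):
    n = len(forks)
    left, right = forks[i], forks[(i + 1) % n]
    while meals[i] < rounds:
        # step 2: choose the first fork probabilistically
        first, second = (left, right) if rng.random() < 0.5 else (right, left)
        # step 3: busy-wait until the first fork is free, then take it
        while not first.acquire(blocking=False):
            pass
        # step 4: try the second fork once; on failure, give up the first
        if second.acquire(blocking=False):
            meals[i] += 1            # step 5: eat
            second.release()         # step 6
            first.release()          # step 7
        else:
            first.release()          # back to step 2, re-flip the coin

def run_table(n=3, rounds=3):
    forks = [threading.Lock() for _ in range(n)]
    meals = [0] * n
    threads = [threading.Thread(target=philosopher, args=(i, forks, meals, rounds))
               for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return meals
```

The key point is the re-flip at step 2 after a failed second acquisition: this random retry is what makes deadlock have probability 0 on a ring, whereas the deterministic "always grab left first" strategy can deadlock.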

Problems

With respect to our encoding goal, the algorithm of Lehmann and Rabin has two problems:
1. It only works for the classical case (not for the generalized one)
2. It works only for fair schedulers

Conditions on the graph

- Theorem: the algorithm of Lehmann and Rabin is deadlock-free if and only if all cycles are pairwise disconnected
- There are essentially three ways in which two cycles can be connected:

[Figure: the three ways in which two cycles can be connected]

Proof of the theorem

- If part) Each cycle can be considered separately. On each of them the classic algorithm is deadlock-free. Some additional care must be taken for the arcs that are not part of the cycle.
- Only if part) By analysis of the three possible cases. They are all similar; we illustrate the first case.

[Figure: two connected cycles, with forks marked as committed or taken]

Proof of the theorem

- The initial situation has probability p > 0
- The scheduler forces the processes to loop
- Hence the system has a deadlock (livelock) with probability p
- Note that this scheduler is not fair. However, we can define even a fair scheduler which induces an infinite loop with probability > 0. The idea is to have a scheduler that "gives up" after n attempts when the process keeps choosing the "wrong" fork, but that increases (by f) its "stubbornness" at every round. With a suitable choice of n and f, the probability of a loop is p/4

Solution for the generalized DP

- As we have seen, the algorithm of Lehmann and Rabin does not work on general graphs
- However, it is easy to modify the algorithm so that it works in general
- The idea is to reduce the problem to the pairwise-disconnected-cycles case:
  - each fork is initially associated with one token
  - each philosopher needs to acquire a token in order to participate in the competition
  - after this initial phase, the algorithm is the same as Lehmann and Rabin's
- Theorem: the competing philosophers determine a graph in which all cycles are pairwise disconnected
- Proof: by case analysis. To have a situation with two connected cycles we would need a node with two tokens.
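The token phase can be sketched as follows, assuming each philosopher scans its adjacent forks and grabs the first available token; since every fork hands out at most one token, no node can end up serving two competing cycles. Names and the data layout are illustrative:

```python
def acquire_tokens(philosophers, fork_tokens):
    """One pass of the token-acquisition phase.

    philosophers: {name: list of adjacent fork ids}
    fork_tokens:  {fork id: True if its token is still available}
    Returns {name: fork id of the token acquired} for the philosophers
    that managed to enter the competition.
    """
    competing = {}
    for name, adjacent in philosophers.items():
        for fork in adjacent:
            if fork_tokens.get(fork):
                fork_tokens[fork] = False   # take the fork's unique token
                competing[name] = fork
                break                       # one token is enough to compete
    return competing
```

Each fork id can appear at most once among the acquired tokens, which is the invariant the theorem's proof relies on: a node with two tokens would be needed to connect two cycles, and that cannot happen.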

Generalized philosophers

- The other problem we had to face: the solution of Lehmann and Rabin works only for fair schedulers, while πpa does not provide any guarantee of fairness
- Fortunately, it turns out that fairness is required only to avoid a busy-waiting livelock at instruction 3. If we replace busy-waiting with suspension, then the algorithm works for any scheduler
- This result was also achieved independently by [Duflot, Fribourg, Picaronny 02]

The algorithm of Lehmann and Rabin, modified so as to avoid the need for fairness

  1) think;
  2) choose probabilistically first_fork in {left, right};
  3) if not taken(first_fork) then take(first_fork) else wait;
  4) if not taken(second_fork) then take(second_fork)
     else { release(first_fork); goto 2 }
  5) eat;
  6) release(second_fork);
  7) release(first_fork);
  8) goto 1

(The only change from the original algorithm is at step 3: instead of busy-waiting with "goto 3", the philosopher suspends until the first fork becomes available.)

The encoding

- [[ (x) P ]] = (x) [[ P ]]
- [[ P | Q ]] = [[ P ]] | [[ Q ]]
- [[ Σi gi.Pi ]] = the translation we have just seen
- Theorem: for every P, [[ P ]] and P are testing-equivalent. Namely, for every test T:

    inf Prob(succ, [[ P ]] | [[ T ]]) = inf Prob(succ, P | T)
    sup Prob(succ, [[ P ]] | [[ T ]]) = sup Prob(succ, P | T)

Conclusion

- We have provided an encoding of the π-calculus into a probabilistic version of its asynchronous fragment, which is
  - fully distributed
  - compositional
  - correct wrt a notion of testing semantics
- Advantages:
  - high-level solutions to distributed algorithms
  - easier to prove correct (no reasoning about randomization required)