DISTRIBUTED MUTEX EE 324 Lecture 11 Vector Clocks
Vector Clocks

- Vector clocks overcome a shortcoming of Lamport logical clocks: L(e) < L(e’) does not imply e happened before e’.
- Goal: an ordering that matches causality, i.e., V(e) < V(e’) if and only if e → e’.
- Method: label each event e with a vector V(e) = [c1, c2, …, cn], where ci is the number of events in process i that causally precede e.
Vector Clock Algorithm

- Initially, all vectors are [0, 0, …, 0].
- For any event on process i, increment own entry ci.
- Label each message sent with the sender's local vector.
- When process j receives a message carrying vector [d1, d2, …, dn]:
  - Set each local entry k to max(ck, dk).
  - Then increment own entry cj.
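The steps above can be sketched as a small class (a minimal sketch; the class and method names are illustrative, not from the lecture):

```python
# Sketch of the vector clock algorithm described above.
class VectorClock:
    def __init__(self, pid, n):
        self.pid = pid       # index of this process (0-based)
        self.v = [0] * n     # one counter per process, initially all zero

    def local_event(self):
        # Any event on process i increments its own entry c_i.
        self.v[self.pid] += 1
        return tuple(self.v)

    def send(self):
        # Sending is itself an event: increment, then piggyback the vector.
        self.local_event()
        return tuple(self.v)

    def receive(self, d):
        # Merge rule: set each entry k to max(c_k, d_k), then increment own entry.
        self.v = [max(c, dk) for c, dk in zip(self.v, d)]
        self.v[self.pid] += 1
        return tuple(self.v)
```

For example, at p1 event a gives (1, 0, 0), the send of m1 gives (2, 0, 0), and p2's receipt of m1 yields (2, 1, 0), matching the worked example on the next slide.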
Vector Clocks

- Vector clocks overcome a shortcoming of Lamport logical clocks: L(e) < L(e’) does not imply e happened before e’.
- Vector timestamps are used to timestamp local events.
- They are applied in schemes for replication of data.
Vector Clocks

- At p1: a occurs at (1, 0, 0); b occurs at (2, 0, 0); piggyback (2, 0, 0) on m1.
- At p2, on receipt of m1: take max((0, 0, 0), (2, 0, 0)) = (2, 0, 0), then add 1 to p2's own element, giving (2, 1, 0).
- Meaning of =, <=, max, etc. for vector timestamps: compare elements pairwise.
Vector Clocks

- Note that e → e’ implies V(e) < V(e’). The converse is also true.
- Can you see a pair of parallel events?
  - c || e (parallel), because neither V(c) <= V(e) nor V(e) <= V(c).
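The pairwise comparison and the concurrency test can be sketched as follows (the function names are mine; the sample timestamps for c and e follow the running example):

```python
# Pairwise comparison of vector timestamps.
def vc_leq(a, b):
    # V <= V' iff every entry of V is <= the matching entry of V'.
    return all(x <= y for x, y in zip(a, b))

def vc_lt(a, b):
    # V < V' iff V <= V' and they differ somewhere; this holds exactly
    # when the event stamped V happened before the event stamped V'.
    return vc_leq(a, b) and a != b

def concurrent(a, b):
    # e || e' when neither vector dominates the other.
    return not vc_leq(a, b) and not vc_leq(b, a)
```

For instance, with V(c) = (2, 1, 0) and V(e) = (0, 0, 1), `concurrent` returns True, since neither vector is pairwise <= the other.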
Figure 14.6 Lamport timestamps for the events shown in Figure 14.5 (Instructor’s Guide for Coulouris, Dollimore, Kindberg and Blair, Distributed Systems: Concepts and Design, 5th edn., © Pearson Education 2012)
Figure 14.7 Vector timestamps for the events shown in Figure 14.5
Limits of Logical Clocks

- Logical clocks, including vector clocks, do not capture everything: they cannot account for out-of-band communication (causality established through channels outside the system).
Distributed Mutex (Reading: CDK5 15.2)

- We learned about mutexes, semaphores, and condition variables within a single system.
  - What do they have in common? They require shared state, which we kept in memory.
- Distributed mutex: there is no shared memory, so how do we implement it? Message passing.
  - Challenges: messages can be dropped and processes can fail.
Distributed Mutex (Reading: CDK5 15.2)

- Entering/leaving a critical section:
  - enter() --- block if necessary
  - resourceAccesses() --- access the shared resource (inside the CS)
  - leave()
- Goals:
  - Safety: at most one process may execute in the CS at a time.
  - Liveness: requests to enter/exit the CS eventually succeed (no deadlock or starvation).
  - Ordering: if one entry request happened before another, then entry to the CS must happen in that order.
Distributed Mutex (Reading: CDK5 15.2)

- Ordering: example explained in class.
- Other performance objectives:
  - Reduce the number of messages.
  - Minimize synchronization delay.
Mutual Exclusion: A Centralized Algorithm

1. Process 1 asks the coordinator for permission to access a shared resource. Permission is granted.
2. Process 2 then asks permission to access the same resource. The coordinator does not reply (it queues the request).
3. When process 1 releases the resource, it tells the coordinator, which then replies to process 2.
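The coordinator's behavior in the three steps above can be sketched as follows (a single-machine sketch; in a real deployment the method calls would be REQUEST/GRANT/RELEASE messages over the network, and the class and method names are illustrative):

```python
from collections import deque

class Coordinator:
    def __init__(self):
        self.holder = None      # process currently holding the resource
        self.waiting = deque()  # queued requesters, in arrival order

    def request(self, pid):
        # Grant immediately if the resource is free; otherwise queue the
        # request and send no reply, so the requester blocks.
        if self.holder is None:
            self.holder = pid
            return True         # GRANT
        self.waiting.append(pid)
        return False            # no reply yet

    def release(self, pid):
        # The holder releases; the coordinator grants to the next waiter.
        assert self.holder == pid
        self.holder = self.waiting.popleft() if self.waiting else None
        return self.holder      # pid now granted, or None if idle
```

Replaying the steps: process 1's request is granted, process 2's request gets no reply, and process 1's release causes the coordinator to reply to process 2.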
Mutual Exclusion: A Centralized Algorithm

- Advantages:
  - Simple; small delay (one RTT) to acquire the mutex.
  - Only 3 messages required to enter and leave the critical section.
- Disadvantages:
  - Single point of failure.
  - Central performance bottleneck.
  - Does not ensure ordering (example?).
  - Must elect a master in a consistent fashion.
A Token Ring Algorithm

- Start from an unordered group of processes on a network.
- Construct a logical ring in software.
- Use the ring to pass the right to access the resource.
A Token Ring Algorithm

- Benefits: simple.
- Problems:
  - Failure recovery can be difficult: a single process failure can break the ring. However, such a failure can be recovered from by dropping the failed process from the logical ring.
  - Does not ensure ordering.
  - Long synchronization delay: may need to wait for up to N-1 messages, for N processes.
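A toy simulation makes the ordering problem concrete: only the token holder may enter the CS, so entry order follows ring position, not request time (the function and its parameters are illustrative):

```python
def token_ring(order, wants_cs, rounds=1):
    """Pass the token around the logical ring `order`; return who
    entered the critical section, in the order they entered."""
    entered = []
    for _ in range(rounds):
        for pid in order:         # the token travels along the ring
            if pid in wants_cs:   # the holder may use the resource...
                entered.append(pid)
            # ...and then forwards the token to its ring successor
    return entered
```

For example, on the ring 0 -> 1 -> 2 -> 3, if processes 3 and 1 both want the CS, process 1 enters first regardless of which one asked first, and a process just past the token waits nearly a full circuit (up to N-1 hops).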
Lamport’s Shared Priority Queue

- Maintain a global priority queue of requests for the critical section, but each process keeps its own copy of the queue.
- The ordering inside the queues is enforced by Lamport’s clock; thus, we enforce the happened-before ordering.
Lamport’s Shared Priority Queue

- Each process i locally maintains Qi, its own version of the priority queue.
- To execute the critical section, you must have replies from all other processes AND your request must be at the front of Qi.
- When you have all replies:
  - All other processes are aware of your request (because the request happens before the reply).
  - You are aware of any earlier requests (assuming messages from the same process are not reordered).
Lamport’s Shared Priority Queue

- To enter the critical section at process i:
  - Stamp your request with the current time T.
  - Add the request to Qi.
  - Broadcast REQUEST(T) to all processes.
  - Wait for all replies and for T to reach the front of Qi.
- To leave:
  - Pop the head of Qi.
  - Broadcast RELEASE to all processes.
- On receipt of REQUEST(T’) from process j:
  - Add T’ to Qi.
  - If waiting for a REPLY from j for an earlier request T, wait until j replies to you; otherwise, REPLY.
- On receipt of RELEASE:
  - Pop the head of Qi.
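The per-process state machine above can be sketched as follows. This is a single-machine sketch with illustrative names: message delivery is modeled as direct method calls, ties are broken by process id, and the delayed-REPLY rule is left to the caller (this sketch assumes peers reply immediately):

```python
import heapq

class LamportMutex:
    def __init__(self, pid, peers):
        self.pid = pid
        self.peers = set(peers)  # ids of all other processes
        self.clock = 0           # Lamport clock
        self.queue = []          # local priority queue Q_i of (T, pid)
        self.replies = set()     # peers that replied to our request
        self.ts = None           # timestamp of our outstanding request

    def request(self):
        # Stamp with the current time, add to Q_i, broadcast REQUEST(T).
        self.clock += 1
        self.ts = (self.clock, self.pid)
        heapq.heappush(self.queue, self.ts)
        self.replies = set()
        return self.ts           # caller delivers this to all peers

    def on_request(self, ts):
        # Add the remote request to Q_i and advance the Lamport clock.
        self.clock = max(self.clock, ts[0]) + 1
        heapq.heappush(self.queue, ts)

    def on_reply(self, sender):
        self.replies.add(sender)

    def can_enter(self):
        # Enter only with replies from ALL peers AND our request at the front.
        return self.replies == self.peers and self.queue[0] == self.ts

    def release(self):
        heapq.heappop(self.queue)  # pop our own request; broadcast RELEASE

    def on_release(self):
        heapq.heappop(self.queue)  # head of Q_i is the releaser's request
```

With two processes that both request, the one holding the smaller timestamp satisfies `can_enter` first; after its RELEASE is delivered, the other's request reaches the front of every queue and it may enter.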
Lamport’s Shared Priority Queue

- Advantages:
  - Fair.
  - Short synchronization delay.
- Disadvantages:
  - Very unreliable: any process failure halts progress.
  - 3(N-1) messages per entry/exit.
Announcements

- Midterm (in preparation):
  - Written part: take-home exam under the honor code. Only the textbook and lecture notes; no discussion; no Internet. Released at noon 10/24; due Friday evening 10/25.
  - Programming part: you can use the Internet, but no discussion.