FairCloud: Sharing the Network in Cloud Computing

FairCloud: Sharing the Network in Cloud Computing
Computer Communication Review (2012)
Authors: Lucian Popa, Arvind Krishnamurthy, Sylvia Ratnasamy, Ion Stoica
Presenter: 段雲鵬

Outline
• Introduction
• Challenges in sharing networks
• Properties for network sharing
• Mechanism
• Conclusion

Some concepts
• Bisection bandwidth
– Each node has a unit weight
– Each link has a unit weight
• Flow definition
– The standard five-tuple in packet headers
• Notation
– B denotes bandwidth
– T denotes traffic
– W denotes the weight of a VM

Background
• Resources in cloud computing
– Network, CPU, memory
• Network allocation
– More difficult: it involves source, destination, and cross traffic
• Tradeoff
– Payment proportionality vs. bandwidth guarantees

Introduction
• Network allocation
– Unknown to users, poor predictability
• Fairness issues
– At which granularity: flows, source-destination pairs, sources alone, or destinations alone?
• Differences from other resources
– Interdependent users
– Interdependent resources

Assumptions
• Take a per-VM viewpoint
• Be agnostic to VM placement and routing algorithms
• Operate within a single datacenter
• Be largely orthogonal to work on network topologies that improve bisection bandwidth

Traditional mechanisms
• Per-flow fairness
– Unfair: a tenant gains bandwidth simply by instantiating more flows (see the sketch below)
• Per source-destination pair
– Unfair when one VM communicates with more VMs
• Per source
– Unfair to destinations
• Asymmetric
– Fair only at the source or only at the destination
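
A minimal sketch (Python; the function names and the flow counts are illustrative assumptions, not from the paper) of why per-flow fairness is gameable on a single bottleneck link, while per source-destination-pair sharing is insensitive to how many flows a pair opens:

    # Per-flow vs. per source-destination-pair sharing on one bottleneck.
    def per_flow_share(capacity, flows_per_pair):
        # Each flow gets an equal share, so a pair's bandwidth grows
        # with the number of flows it instantiates.
        total_flows = sum(flows_per_pair.values())
        return {pair: capacity * n / total_flows
                for pair, n in flows_per_pair.items()}

    def per_pair_share(capacity, flows_per_pair):
        # Each source-destination pair gets an equal share,
        # no matter how many flows it opens.
        return {pair: capacity / len(flows_per_pair)
                for pair in flows_per_pair}

    flows = {("A", "E"): 1, ("B", "F"): 9}      # B-F opens 9 flows
    print(per_flow_share(1000, flows))          # A-E: 100.0, B-F: 900.0
    print(per_pair_share(1000, flows))          # both pairs: 500.0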

Examples
• Per source-destination pair
– If there is little traffic on A-F and B-E, then B(A) = B(B) = B(E) = B(F) = 2·B(C) = 2·B(D) = B(G) = B(H)
• Per source
– B(E) = B(F) = 0.25·B(D); in the opposite direction, B(A) = B(B) = 0.25·B(C)

Properties for network sharing (1)
• Strategy-proofness
– A tenant cannot increase its bandwidth by modifying its behavior at the application level
• Pareto efficiency
– If A-B and X-Y share a bottleneck, then when B(X-Y) increases, B(A-B) must decrease; otherwise the congestion only gets worse
[Figure: A and X send over 10 Mbps links into a shared 1 Mbps bottleneck toward B and Y]

Properties for network sharing (2)
• Non-zero flow allocation
– A strictly positive allocation B > 0 between every communicating pair is expected
• Independence
– When traffic T2 on link L2 increases, bandwidth B1 on link L1 should not be affected
[Figure: e.g., link L1 congested and link L2 not congested between endpoints A and B]
• Symmetry
– If the directions of all flows are switched, the allocation should stay the same

Network weight and user's payment
• Weight fidelity (provides incentive)
– Strict monotonicity (monotonicity)
• If W(VM) increases, then all of its traffic allocations must increase (at least not decrease)
– Proportionality
• Guaranteed bandwidth
– Admission control
• These goals conflict, so a tradeoff is needed
[Figure: subset P with weight 2/3 and subset Q with weight 1/3; no communication between P and Q]

Per Endpoint Sharing (PES)
• Can explicitly trade between weight fidelity and guaranteed bandwidth (see the sketch below)
– N_A denotes the number of VMs that A is communicating with
– W_S-D = f(W_S, W_D), with W_A-B = W_B-A
– Normalized by L1 normalization
• Drawback: a static method (out of scope for this discussion)
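
A minimal PES sketch in Python, assuming the basic combining rule implied by the slide (W_A-B = W_A/N_A + W_B/N_B, then L1-normalized); the function name is mine:

    # Basic PES: combine endpoint weights, then L1-normalize over all pairs.
    def pes_weights(vm_weight, pairs):
        # N_X = number of distinct VMs that X communicates with,
        # assuming each pair is listed exactly once.
        n = {}
        for a, b in pairs:
            n[a] = n.get(a, 0) + 1
            n[b] = n.get(b, 0) + 1
        raw = {(a, b): vm_weight[a] / n[a] + vm_weight[b] / n[b]
               for a, b in pairs}
        total = sum(raw.values())               # L1 normalization
        return {pair: w / total for pair, w in raw.items()}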

Example
• W_A-D = W_A/N_A + W_D/N_D = 1/2 + 1/2 = 1
• W_A-C = W_B-D = 1/2 + 1/1 = 1.5
• Total weight = 4 (4 VMs)
• So W_A-D = 1/4 = 0.25 and W_A-C = W_B-D = 1.5/4 = 0.375
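
Feeding the slide's setup (four unit-weight VMs; pairs A-C, A-D, B-D) into the pes_weights sketch above reproduces these numbers:

    w = {"A": 1, "B": 1, "C": 1, "D": 1}
    pairs = [("A", "C"), ("A", "D"), ("B", "D")]
    print(pes_weights(w, pairs))
    # {('A', 'C'): 0.375, ('A', 'D'): 0.25, ('B', 'D'): 0.375}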

Comparison
[Figure: comparison of the network-sharing mechanisms]

PES
• For one host, B ∝ (closer VMs) rather than (remote VMs)
• Higher guarantees for the worst case
• W_A-B = W_B-A = α·W_A/N_A + β·W_B/N_B
– α and β can be chosen to trade off bandwidth guarantees against weight fidelity (see the sketch below)
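
A one-line generalization of the earlier sketch with the slide's α and β knobs (parameter names are mine):

    # Generalized PES pair weight: alpha and beta trade off
    # bandwidth guarantees against weight fidelity.
    def pes_pair_weight(w_a, n_a, w_b, n_b, alpha=1.0, beta=1.0):
        return alpha * w_a / n_a + beta * w_b / n_b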

One-Sided PES (OSPES)
• Designed for tree-based topologies
• W_A-B = W_B-A = α·W_A/N_A + β·W_B/N_B
• On links closer to A: α = 1 and β = 0
• On links closer to B: α = 0 and β = 1 (sketched below)
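
A sketch of the one-sided rule, assuming the caller already knows which endpoint the link is nearer to (the "nearer" argument is my assumption about how that information is supplied):

    # OSPES: on links nearer A use (alpha, beta) = (1, 0); nearer B, (0, 1).
    def ospes_pair_weight(w_a, n_a, w_b, n_b, nearer="A"):
        if nearer == "A":
            return w_a / n_a      # alpha = 1, beta = 0
        return w_b / n_b          # alpha = 0, beta = 1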

OSPES
• Fair sharing for the traffic towards or from the tree root
– Resource allocation depends on the root
– Non-strict monotonicity
• Example: when W(A) = W(B) and the access link is 1 Gbps, each VM is guaranteed 500 Mbps (checked in the sketch below)
– W_A-VM1 = 1/1
– W_B-VMi = 1/10 (i = 2, 3, …, 11)
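
Checking the slide's numbers: A sends to one VM (weight 1/1) and B sends to ten VMs (weight 1/10 each) over a shared 1 Gbps access link, so each tenant's aggregate share comes out to 500 Mbps:

    link_mbps = 1000
    weights = {"A-VM1": 1 / 1}
    weights.update({f"B-VM{i}": 1 / 10 for i in range(2, 12)})
    total = sum(weights.values())                   # 1 + 10 * 0.1 = 2.0
    a_share = link_mbps * weights["A-VM1"] / total  # 500.0 Mbps for A
    b_share = link_mbps - a_share                   # 500.0 Mbps for B's ten flows
    print(a_share, b_share)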

Max-Min Fairness
• The minimum data rate that a flow achieves is maximized
– The bottleneck link is fully utilized
• Can be applied here (a water-filling sketch follows)
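
A standard water-filling sketch of max-min fairness on one link (the demands and flow names are illustrative): the smallest allocation is maximized, and capacity left over by satisfied flows is redistributed to the rest.

    def max_min_fair(capacity, demands):
        # demands: flow -> desired rate; returns the max-min allocation.
        alloc = dict.fromkeys(demands, 0.0)
        active = set(demands)
        remaining = capacity
        while active:
            share = remaining / len(active)
            done = {f for f in active if demands[f] - alloc[f] <= share}
            if not done:                      # nobody is satisfied:
                for f in active:              # split what is left equally
                    alloc[f] += share
                break
            for f in done:                    # cap satisfied flows and
                remaining -= demands[f] - alloc[f]   # free their leftover
                alloc[f] = demands[f]
            active -= done
        return alloc

    print(max_min_fair(10, {"f1": 2, "f2": 5, "f3": 8}))
    # {'f1': 2, 'f2': 4.0, 'f3': 4.0}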

Conclusion
• Problem: sharing the network within a cloud computing datacenter
• Tradeoff between payment proportionality and bandwidth guarantees
• A mechanism that navigates the tradeoff between these conflicting requirements

Thanks!!
