Internet Server Clusters. Jeff Chase, Duke University, Department of Computer Science. CPS 212: Distributed Information Systems.

Using Clusters for Scalable Services
Clusters are a common vehicle for improving scalability and availability at a single service site in the network. Are network services the "Killer App" for clusters?
• incremental scalability: just wheel in another box...
• excellent price/performance: high-end PCs are commodities (high-volume, low margins)
• fault-tolerance: "simply a matter of software"
• high-speed cluster interconnects are on the market: SANs + Gigabit Ethernet... cluster nodes can coordinate to serve requests with low latency
• "shared nothing"

[Fox/Brewer]: SNS, TACC, and All That
[Fox/Brewer 97] proposes a cluster-based reusable software infrastructure for scalable network services ("SNS"), such as:
• TranSend: scalable, active proxy middleware for the Web. Think of it as a dial-up ISP in a box, in use at Berkeley; it distills/transforms pages based on user request profiles.
• Inktomi/HotBot search engine: core technology for Inktomi Inc., today with a $15B market cap, "bringing parallel computing technology to the Internet".
Potential services are based on Transformation, Aggregation, Caching, and Customization (TACC), built above SNS.

TACC Vision: deliver "the content you want" by viewing HTML content as a dynamic, mutable medium.
1. Transform Internet content according to:
• network and client needs/limitations, e.g., on-the-fly compression/distillation [ASPLOS 96], packaging Web pages for PalmPilots, encryption, etc.
• directed by a user profile database
2. Aggregate content from different back-end services or resources.
3. Cache content to reduce the cost/latency of delivery.
4. Customize (see Transform).

TranSend Structure (diagram): front ends face the Internet; a high-speed SAN and a utility (10baseT) coordination bus connect the front ends to cache partitions, datatype-specific distillers (html, gif, jpg), a profiles database, and a control panel. [adapted from Armando Fox (via http://ninja.cs.berkeley.edu/pubs)]

SNS/TACC Philosophy
1. Specify services by plugging generic programs into the TACC framework, and compose them as needed: sort of like CGI with pipes, run by long-lived worker processes that serve request queues; allows multiple languages, etc.
2. Worker processes in the TACC framework are loosely coordinated, independent, and stateless. ACID vs. BASE: they serve independent requests from multiple users; a narrow view of a "service": one-shot read-only requests, and stale data is OK.
3. Handle bursts with a designated overflow pool of machines.
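
A minimal sketch of the worker model described above, assuming invented names (distill_image and the queue wiring) rather than the paper's actual TranSend code; it shows a stateless worker serving a request queue, with each request carrying its own customization profile:

```python
# Minimal sketch of a stateless TACC-style worker (illustrative only; real
# TranSend workers are C/Perl/Java programs managed by the SNS layer).
import queue
import threading

def distill_image(data: bytes, quality: int) -> bytes:
    """Hypothetical datatype-specific transform (stand-in for lossy compression)."""
    return data[: max(1, len(data) * quality // 100)]

def worker_loop(requests: queue.Queue, results: queue.Queue) -> None:
    # Stateless: every request carries all the worker needs, including the
    # user's customization profile as key/value pairs.
    while True:
        req = requests.get()
        if req is None:          # shutdown sentinel
            break
        out = distill_image(req["data"], req["profile"].get("quality", 50))
        results.put({"id": req["id"], "data": out})

if __name__ == "__main__":
    reqs, res = queue.Queue(), queue.Queue()
    threading.Thread(target=worker_loop, args=(reqs, res), daemon=True).start()
    reqs.put({"id": 1, "data": b"x" * 1000, "profile": {"quality": 20}})
    print(len(res.get()["data"]), "bytes after distillation")   # 200 bytes
```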

TACC Examples [Fox]
HotBot search engine: • query the crawler's DB • cache recent searches • customize UI/presentation
TranSend transformation proxy: • on-the-fly lossy compression of inline images (GIF, JPG, etc.) • cache original and transformed versions • the user specifies aggressiveness, "refinement" UI, etc.
(The original diagram tags each step with the TACC stage it uses: Transform, Aggregate, Cache, Customize.)

(Worker) Ignorance Is Bliss
What workers don't need to know about:
• data sources/sinks
• user customization (key/value pairs)
• access to the cache
• communication with other workers by name
Common case: stateless workers. C, Perl, and Java are supported.
• Recompilation is often unnecessary.
• Useful tasks are possible in <10 lines of (buggy) Perl. [Fox]

Questions
1. What are the research contributions of the paper? The system architecture decouples SNS concerns from content; the TACC programming model composes stateless worker modules; validation using two real services, with measurements.
2. How is this different from clusters for parallel computing?
3. What are the barriers to scale in SNS/TACC?
4. How are requests distributed to caches, FEs, workers?
5. What can we learn from the quantitative results?
6. What about services that allow client requests to update shared data? e.g., message boards, calendars, mail, ...

SNS/TACC Functional Issues
1. What about fault-tolerance?
• Service restrictions allow simple, low-cost mechanisms. Primary/backup process replication is not necessary with the BASE model and stateless workers.
• Uses a process-peer approach to restart failed processes: processes monitor each other's health and restart peers as necessary. Workers and the manager find each other with "beacons" on well-known ports.
2. Load balancing?
• The manager gathers load information and distributes it to the front-ends.
• How are incoming requests distributed to the front-ends?
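
A rough sketch of the process-peer/beacon idea (the port number, message format, and restart command are assumptions for illustration, not the SNS implementation):

```python
# Sketch: each worker beacons on a well-known UDP port; a peer restarts it
# when beacons stop. Port, timeout, and restart command are made up.
import socket
import subprocess
import time

BEACON_PORT = 9999      # hypothetical well-known port
TIMEOUT = 3.0           # seconds without a beacon before declaring failure

def beacon(dest_host: str, interval: float = 1.0) -> None:
    """Worker side: periodically announce liveness."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(b"alive", (dest_host, BEACON_PORT))
        time.sleep(interval)

def monitor(restart_cmd: list) -> None:
    """Peer side: restart the worker if its beacons stop arriving."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", BEACON_PORT))
    sock.settimeout(TIMEOUT)
    while True:
        try:
            sock.recv(64)                    # beacon received: peer is alive
        except socket.timeout:
            # Live-but-not-necessarily-accurate detection (see the failure
            # detector slides later): assume the worker died and restart it.
            subprocess.Popen(restart_cmd)
```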

[Saito] Porcupine: A Highly Available Cluster-based Mail Service. Yasushi Saito, Brian Bershad, Hank Levy. University of Washington, Department of Computer Science and Engineering, Seattle, WA. http://porcupine.cs.washington.edu/

[Saito] Why Email?
• Mail is important: real demand.
• Mail is hard: write intensive, low locality.
• Mail is easy: well-defined API, large parallelism, weak consistency.
How much of Porcupine is reusable for other services? Can we use the SNS/TACC framework for this?

[Saito] Goals
Use commodity hardware to build a large, scalable mail service. Three facets of scalability:
• Performance: linear increase with cluster size
• Manageability: react to changes automatically
• Availability: survive failures gracefully

[Saito] Conventional Mail Solution
Static partitioning: SMTP/IMAP/POP front ends, with user mailboxes (Ann's, Bob's, Joe's, Suzy's mboxes) statically assigned to NFS servers.
• Performance problems: no dynamic load balancing
• Manageability problems: manual data-partitioning decisions
• Availability problems: limited fault tolerance

[Saito] Key Techniques and Relationships (diagram)
• Framework: functional homogeneity, "any node can perform any task".
• Techniques: replication, automatic reconfiguration, load balancing.
• Goals: availability, manageability, performance.

[Saito] Porcupine Architecture (diagram)
Every node (Node A, Node B, ..., Node Z) runs the same components: SMTP server, POP server, IMAP server, load balancer, user map, membership manager, RPC manager, replication manager, mailbox storage, user profile, and mail map.

Porcupine Operations [Saito] (diagram; phases: protocol handling, user lookup, load balancing, message store)
1. "Send mail to bob": the message arrives at a node chosen by DNS-RR selection.
2. Who manages bob? (consult the user map)
3. "Verify bob" (contact bob's manager node).
4. "OK, bob has msgs on C and D."
5. Pick the best node(s) to store the new msg.
6. "Store msg."

[Saito] Basic Data Structures (diagram)
• User map: apply a hash function to "bob" to select an entry in the user map (e.g., B C A C A B A C); the entry names the node that manages bob.
• Mail map / user info: per-user fragment list, e.g., bob: {A, C}, suzy: {A, C}, ann: {B}, joe: {B}.
• Mailbox storage: the listed nodes (A, B, C) hold the mailbox fragments containing each user's messages.
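
A toy sketch of these two structures (the slot count, hash function, and node names are invented; the real user map and mail map are replicated soft state rebuilt after membership changes):

```python
# Illustrative sketch of Porcupine's user map and mail map (not the real code).
import hashlib

NODES = ["A", "B", "C"]
USER_MAP_SLOTS = 8            # hypothetical size of the small replicated table

def user_map_slot(user: str) -> int:
    return int(hashlib.md5(user.encode()).hexdigest(), 16) % USER_MAP_SLOTS

# user_map[slot] -> node that currently manages users hashing to that slot
user_map = [NODES[i % len(NODES)] for i in range(USER_MAP_SLOTS)]

# Mail map, kept on each user's manager node:
# user -> set of nodes holding that user's mailbox fragments
mail_map = {"bob": {"A", "C"}, "suzy": {"A", "C"}, "ann": {"B"}, "joe": {"B"}}

def lookup(user: str):
    manager = user_map[user_map_slot(user)]    # who manages this user?
    fragments = mail_map.get(user, set())      # where are the messages?
    return manager, fragments

print(lookup("bob"))   # prints bob's manager node and fragment set
```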

[Saito] Porcupine Advantages
• Optimal resource utilization
• Automatic reconfiguration and task redistribution upon node failure/recovery
• Fine-grain load balancing
Results: better availability, better manageability, better performance.

[Saito] Availability
Goals: maintain function after failures; react quickly to changes regardless of cluster size; graceful performance degradation/improvement.
Strategy: two complementary mechanisms.
• Hard state (email messages, user profiles): optimistic fine-grain replication.
• Soft state (user map, mail map): reconstruction after a membership change.

[Saito] Soft-state Reconstruction (timeline diagram)
1. The membership protocol runs, and the user map is recomputed: hash buckets owned by departed nodes are reassigned among the surviving nodes.
2. Distributed disk scan: each node scans its local mailbox fragments to rebuild the mail map entries (e.g., bob: {A, C}, suzy: {A, B}, ann: {B}, joe: {C}) for the users it now manages.
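
A minimal sketch of that two-step reconstruction, assuming a simple slot-reassignment rule and in-memory stand-ins for the disk scan (both invented for illustration):

```python
# Sketch (not Porcupine's actual code): after a membership change, reassign
# user-map slots owned by departed nodes, then rebuild the mail map from a
# "disk scan" of each surviving node's local mailbox fragments.

def recompute_user_map(old_map, live_nodes):
    """Reassign slots whose owner left; keep surviving assignments stable."""
    live = sorted(live_nodes)
    return [owner if owner in live_nodes else live[slot % len(live)]
            for slot, owner in enumerate(old_map)]

def rebuild_mail_map(local_fragments_by_node, live_nodes):
    """Each surviving node reports the users whose fragments it stores."""
    mail_map = {}
    for node in live_nodes:
        for user in local_fragments_by_node.get(node, []):
            mail_map.setdefault(user, set()).add(node)
    return mail_map

old_map = ["A", "B", "C", "A", "B", "C", "A", "B"]
live = {"A", "C"}                                   # node B has failed
print(recompute_user_map(old_map, live))
print(rebuild_mail_map({"A": ["bob", "suzy"], "C": ["bob", "joe"]}, live))
```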

How does Porcupine React to Configuration Changes? [Saito]

[Saito] Hard-state Replication
Goals: keep serving hard state after failures; handle unusual failure modes.
Strategy: exploit Internet semantics.
• Optimistic, eventually consistent replication
• Per-message, per-user-profile replication
• Efficient during normal operation
• Small window of inconsistency
How will Porcupine behave in a partition failure?

More on Porcupine Replication
To add/delete/modify a message:
• Find and update any replica of the mailbox fragment. Do whatever it takes: make a new fragment if necessary... pick a new replica if the chosen replica does not respond.
• The replica asynchronously transmits updates to the other fragment replicas: continuous reconciling of replica states.
• Log/force the pending update state and the target nodes that are to receive the update; on recovery, continue transmitting updates where you left off.
• Order updates by loosely synchronized physical clocks. Clock skew should be less than the inter-arrival gap for a sequence of order-dependent requests... use the node ID to break ties.
• How many node failures can Porcupine survive? What happens if nodes fail "forever"?
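
A sketch of the (clock, node ID) ordering described above, applied last-writer-wins at a replica (the class and field names are invented; logging and retransmission are omitted):

```python
# Order updates by (loosely synchronized clock, node ID); apply only if newer.
from dataclasses import dataclass, field

@dataclass(order=True)
class Timestamp:
    clock: float      # loosely synchronized physical clock
    node_id: int      # tie-breaker when clocks collide

@dataclass
class Replica:
    # msg_id -> (Timestamp, payload); payload=None encodes a delete
    messages: dict = field(default_factory=dict)

    def apply(self, msg_id, ts, payload):
        """Apply an update only if it is newer than what this replica holds."""
        current = self.messages.get(msg_id)
        if current is None or ts > current[0]:
            self.messages[msg_id] = (ts, payload)

r = Replica()
r.apply("m1", Timestamp(10.0, 2), b"hello")
r.apply("m1", Timestamp(10.0, 1), b"stale")   # loses the tie-break: ignored
print(r.messages["m1"][1])                    # b'hello'
```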

[Saito] How Efficient is Replication? (throughput graph; annotated values: 68 m/day and 24 m/day)

[Saito] How Efficient is Replication? (throughput graph, continued; annotated values: 68 m/day, 33 m/day, and 24 m/day)

[Saito] Load Balancing: deciding where to store messages
Goals: handle skewed workloads well; support hardware heterogeneity; no voodoo parameter tuning.
Strategy: spread-based load balancing.
• Spread: a soft limit on the number of nodes per mailbox.
• Large spread: better load balance. Small spread: better affinity.
• Load is balanced within the spread.
• Use the number of pending I/O requests as the load measure.
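
One way to read the spread rule, as a sketch (the spread value, node names, and load numbers are made up; the real policy has more machinery):

```python
# Sketch of spread-based message placement: prefer nodes that already hold
# fragments (affinity), widen to any node only while under the soft spread
# limit, and pick the least-loaded candidate (pending I/O count) within it.
def choose_storage_node(user, fragments, live_nodes, load, spread=2):
    """fragments: nodes already holding this user's mailbox fragments.
    load: node -> number of pending I/O requests (lower is better)."""
    candidates = [n for n in fragments if n in live_nodes]
    if len(candidates) < spread:
        # Under the soft limit: any live node may take a new fragment.
        candidates = list(live_nodes)
    return min(candidates, key=lambda n: load[n])

live = {"A", "B", "C", "D"}
load = {"A": 7, "B": 1, "C": 3, "D": 0}
print(choose_storage_node("bob", {"A"}, live, load))        # under spread -> 'D'
print(choose_storage_node("bob", {"A", "C"}, live, load))   # within spread -> 'C'
```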

Questions
• How to select the front-end node to handle the request? Does it matter which one we choose?
• Don't we already know how to build big mail servers (e.g., Earthlink; Christenson, USITS 97)? Why do we need Porcupine?
• What properties of the mail "data model" allow this approach, with weaker consistency guarantees than a database?
• How does the system leverage/exploit the weaker semantics?
• Can the architecture accommodate new features, e.g., Pachyderm-like storage/indexing of large mail collections?
• Could I run Porcupine on the same cluster with other applications?
• Could this have been built on Microsoft's MSCS? How much application effort would have been saved?

Clusters: A Broader View
MSCS ("Wolfpack") is designed as basic infrastructure for commercial applications on clusters.
• "A cluster service is a package of fault-tolerance primitives."
• The service handles startup, resource migration, failover, and restart.
• But: apps may need to be "cluster-aware". Apps must participate in recovery of their internal state, using facilities for logging, checkpointing, replication, etc.
• The service and node OS support uniform naming and virtual environments: preserve continuity of access to migrated resources, and preserve continuity of the environment for migrated resources.

Wolfpack: Resources
• The components of a cluster are nodes and resources. Shared nothing: each resource is owned by exactly one node.
• Resources may be physical or logical: disks, servers, databases, mailbox fragments, IP addresses, ...
• Resources have types, attributes, and expected behavior.
• (Logical) resources are aggregated in resource groups; each resource is assigned to at most one group.
• Some resources/groups depend on other resources/groups. An admin-installed registry lists the resources and the dependency tree.
• Resources can fail; the cluster service/resource managers detect failures.
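
A minimal sketch of the resource/group/dependency model (the class names and the start-up helper are illustrative, not the MSCS API):

```python
# Nodes own resources ("shared nothing"); logical resources form groups and
# may depend on other resources, so they come online dependencies-first.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    owner_node: str                        # exactly one owner
    depends_on: list = field(default_factory=list)

@dataclass
class ResourceGroup:
    name: str
    resources: list                        # each resource is in at most one group

def start_order(group: ResourceGroup) -> list:
    """Order resources so that dependencies come online first."""
    done, order = set(), []
    def visit(r: Resource):
        if r.name in done:
            return
        for dep in r.depends_on:
            visit(dep)
        done.add(r.name)
        order.append(r.name)
    for r in group.resources:
        visit(r)
    return order

ip = Resource("ip-address", "node1")
disk = Resource("shared-disk", "node1")
db = Resource("database", "node1", depends_on=[disk, ip])
print(start_order(ResourceGroup("sql-group", [db, disk, ip])))
# ['shared-disk', 'ip-address', 'database']
```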

Fault-Tolerant Systems: The Big Picture (layered diagram)
Layers shown: application, service (e.g., mail service, database), cluster service, file/storage system, messaging system, redundant hardware. Fault-tolerance mechanisms appear at each level: replication, logging, checkpointing, voting, RAID, parity, checksum, ack/retransmission, ECC.
Note the dependencies: there can be redundancy at any/each/every level. What failure semantics does each level present to the level above?

Wolfpack: Resource Placement and Migration
The cluster service detects component failures and responds by restarting resources or migrating resource groups.
• Restart the resource in place if possible...
• ...else find another appropriate node and migrate/restart.
Ideally, migration/restart/failover is transparent.
• Logical resources (processes) execute in virtual environments: a uniform name space for files, registry, and OS objects (NT modifications).
• Node physical clocks are loosely synchronized, with clock drift less than the minimal time for recovery/migration/restart; this guarantees that a migrated resource sees monotonically increasing clocks.
• Resource requests are routed to the node hosting the resource.
• Is the failure visible to other resources that depend on the resource?

Membership 101
Cluster nodes must agree on the set of cluster members (the view), in order to:
• distribute resource ownership effectively, shifting resources on node failures or additions
• eliminate dangerous/expensive interactions with faulty nodes
• "keep everyone in the loop" on updates and events, e.g., multicast groups and group communication
The literature on group membership is tangled up with the problem of ordered multicast (e.g., "CATOCS").
• What are the ordering guarantees for message delivery, especially with respect to membership changes?
• Ordered group communication is controversial, but everyone needs a solution for the separate but related membership problem.

Failure Detectors
First problem: how to detect that a member has failed?
• pings, timeouts, beacons, heartbeats
• recovery notifications: "I was gone for a while, but now I'm back."
Is the failure detector accurate? Is the failure detector live?
In an asynchronous system, a failure detector can be accurate or live, but not both.
• As it turns out, it is impossible for an asynchronous system to agree on anything with both accuracy and liveness!
• But this is academic...

Failure Detectors in Real Systems
Common solution:
• Use a failure detector that is live but not accurate. Assume bounded processing delays and delivery times; a timeout with multiple retries detects failure accurately with high probability. If a "failed" site turns out to be alive, then kill it (fencing).
• Use a recovery detector that is accurate but not live: "I'm back... hey, did anyone hear me?"
What do we assume about communication failures? How much pinging is enough: 1-to-N, N-to-N, ring? What about network partitions?
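
A sketch of the live-but-not-accurate pattern (the TCP connect probe, port, and retry count are placeholders; real systems typically use dedicated heartbeat messages):

```python
# Timeout with multiple retries: false positives become unlikely, but a slow
# or partitioned peer can still be declared failed, so the detector is live
# but not accurate (a surviving "failed" node would then be fenced).
import socket

def probe(host: str, port: int, timeout: float) -> bool:
    """Return True if the peer answered within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def is_failed(host: str, port: int, retries: int = 3, timeout: float = 1.0) -> bool:
    """Declare failure only after several consecutive missed probes."""
    return not any(probe(host, port, timeout) for _ in range(retries))

if __name__ == "__main__":
    print("failed?", is_failed("127.0.0.1", 9, retries=2, timeout=0.5))
```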

Membership Service
Second problem: how to propagate knowledge of failure/recovery events to the other nodes?
• Surviving nodes should agree on the new view (regrouping).
• Convergence should be rapid.
• The regrouping protocol should itself be tolerant of message drops, message reorderings, and failures (liveness and accuracy again).
• The regrouping protocol should be scalable.
• The protocol should handle network partitions.
• The behavior of the messaging system (e.g., group multicast) across membership changes must be well-specified and understood.

Example: Wombat
• Wombat is a new membership protocol, an outgrowth of Porcupine (Gretta Bartels, University of Washington, Duke '98).
• Wombat is empirically more efficient/scalable than competing algorithms such as Three Round.
• But: Wombat makes no guarantees about the relative ordering of membership events and messages. Adherents of group communication would not accept it as a "real" membership protocol.
• Wombat's assumptions have not been formally defined, and its properties have not been proven. If you can't prove that it works, you can't believe that it works.
• Disclaimer: Wombat is a promising work in progress.

Wombat Basics (diagram)
• Nodes are ranked by unique IDs; node IDs are permanent.
• The highest-ranked node is the leader; all other nodes are minions.
• Node i pings predecessor(i); each node determines its predecessor from the leader's view.
• The leader periodically broadcasts its view to all known minions (physical broadcast).
• Minions adopt the leader's view.

Node Arrival/Recovery in Wombat
If node i joins the cluster:
1. i waits for the leader's next beacon.
2. i detects that the leader's view does not include i.
3. i notifies the leader: "I'm here too."
4. The leader updates its view.
5. The leader broadcasts its new view.
6. Minions adopt the leader's view.

Node Failure in Wombat
If a node i fails:
1. Its successor notifies the leader: "Node i has failed."
2. The leader updates its view.
3. The leader broadcasts its view.
4. Minions adopt the leader's new view.
5. Life goes on.

Leader Failure in Wombat
If the leader fails:
1. The successor detects the failure.
2. The successor knows that the failed node was the leader.
3. The successor broadcasts as leader: "I am in control."
4. Minions adopt the new leader's view.
5. Life goes on.

Multiple Failures in Wombat
If the leader and its successor(s) fail, the next-ranking node must assume command on its own: "I must be in control."
1. Each node has a broadcast timer; if the timer goes off, the node broadcasts as leader.
2. Each node's timer is set by its rank: if i < j then timer(i) < timer(j).
3. Reset the timer on each beacon.
4. The leader's timer value is adaptive: go faster if things are changing.
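
The rank-based timer rule can be sketched as follows (a toy, single-process illustration with invented constants; in the real protocol each node runs its own timer and may fail independently):

```python
# If the leader and its successors are silent, the best surviving node's
# broadcast timer fires first and it assumes command.
BASE_TIMEOUT = 1.0        # hypothetical leader beacon interval

def broadcast_timer(node_id, rank_of):
    # Higher-ranked nodes time out sooner: if i < j then timer(i) < timer(j).
    return BASE_TIMEOUT * (1 + rank_of[node_id])

def next_leader(all_nodes, failed):
    """The surviving node whose timer fires first broadcasts as leader."""
    ranked = sorted(all_nodes)                     # rank by permanent node ID
    rank_of = {n: i for i, n in enumerate(ranked)}
    survivors = [n for n in all_nodes if n not in failed]
    return min(survivors, key=lambda n: broadcast_timer(n, rank_of))

nodes = [1, 2, 3, 4, 5]                   # node 1 is the leader, node 2 its successor
print(next_leader(nodes, failed={1, 2}))  # node 3 assumes command
```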

Suppressing False Leaders
If a node falsely broadcasts as leader ("I must be in control."):
1. All nodes that know of a better leader recognize the usurper as such.
2. The real leader recognizes that it is a better leader than the usurper: "I don't think so."
3. The real leader broadcasts the union of its view and the usurper's view.
4. The usurper shuts up and adopts the real leader's view.
What if the "real leader" is dead?

Partitions in Wombat (diagram: a leader in each partition)
If a network failure partitions the cluster:
1. The old partition continues.
2. The leader of the new partition eventually broadcasts its view.
3. Minions accept the new leader's view.

Healing a Partition (diagram: dominating leader vs. partition leader)
When the partition heals, either:
1. The dominating partition leader hears a false broadcast, and...
2. ...corrects it by broadcasting the union of the views.
or:
1. The dominating partition leader broadcasts first, and...
2. ...minions respond "I'm here".

Wombat: Wrinkles
1. What are the assumptions about the network? About clocks?
2. Are these reasonable/realistic assumptions?
3. How to ensure a single cluster view in the event of a partition?
4. How long does it take for the view to converge after a partition?
5. How do we start a cluster? What if a node starts or recovers but never receives a beacon?
6. What about the ordering of messages and membership events?
7. How do minions come to accept a new leader?
8. What about "message storms"?