Congestion Avoidance

Outline:
• Overview
• Queuing Disciplines
• TCP Congestion Control
• Combined Techniques

Congestion Avoidance
• TCP congestion control strategy:
  – Increase load until congestion occurs, then back off from that point
  – Must create losses to determine the connection's bandwidth
• Alternative:
  – Predict when congestion is about to happen, then reduce host sending rates just before packets start being dropped
  – Not widely adopted at this time

DECbit
• Designed for the Digital Network Architecture (DNA)
  – Connectionless network with a connection-oriented transport protocol (sound familiar?)
• General idea:
  – Router monitors its load and sets a binary congestion bit when congestion is imminent
  – Receiver copies the congestion bit into the ACK it sends back
  – Sender cuts its sending rate

DECbit Details
• Router measures the average queue length over the previous busy+idle cycle, plus the current busy cycle
• If this average is >= 1, set the congestion bit
  – A value of 1 seems to optimize power
  – Tradeoff between higher throughput and longer delay
• Host maintains a congestion window (see the sketch after this list)
  – If less than 50% of the last window's worth of packets have the congestion bit set, increase the window by one packet
  – Else decrease the window to 0.875 times its current value
• Note: additive increase/multiplicative decrease
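
A minimal sketch of the host-side DECbit policy above, assuming a congestion window counted in packets; the function name and arguments are illustrative, not the actual DNA implementation.

    def decbit_adjust_window(window, acks_with_bit_set, acks_total):
        """Run once per window of ACKs; returns the new congestion window (packets)."""
        if acks_with_bit_set < 0.5 * acks_total:
            return window + 1                # additive increase: one packet
        return max(1.0, 0.875 * window)      # multiplicative decrease to 7/8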

Random Early Detection (RED)
• Similar to DECbit
• Invented by Sally Floyd and Van Jacobson, early 90s
• Designed to be used with TCP
• Two differences between RED and DECbit:
  – RED implicitly notifies the source of imminent congestion by dropping a packet, thus causing a timeout or duplicate ACK
  – When RED drops a packet, and how it decides which packet to drop (DECbit just drops when the queue fills)

RED Philosophy
• Philosophy: drop a few packets before the buffer is exhausted, in the hope that this avoids having to drop lots of packets later (note: packets could simply have been marked instead of dropped)
• Queuing philosophy: early random drop
  – Drop an arriving packet with some drop probability whenever the queue length exceeds some drop level
• The algorithm defines:
  – How to monitor the queue length
  – When to drop a packet

RED (cont.)
• Compute the average queue length much as TCP computes its timeout estimate:

    AvgLen = (1 − Weight) × AvgLen + Weight × SampleLen,   where 0 < Weight < 1

• Effectively a low-pass filter, to handle the bursty nature of traffic (see the sketch below)
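
A minimal sketch of this exponentially weighted moving average; the default Weight value is illustrative only.

    def update_avg_len(avg_len, sample_len, weight=0.002):
        """Low-pass filter over instantaneous queue-length samples (0 < weight < 1)."""
        return (1 - weight) * avg_len + weight * sample_len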

More RED
• Two parameters: MinThreshold, MaxThreshold

    if (AvgLen <= MinThreshold) {
        queue_packet();
    } else if (MinThreshold < AvgLen < MaxThreshold) {
        calculate probability P;
        drop arriving packet with probability P;
    } else if (AvgLen >= MaxThreshold) {
        drop arriving packet;
    }

Still More RED
• Rationale: if AvgLen reaches MaxThreshold, the gentle approach isn't working (though research has indicated that a smoother transition to complete dropping might be more appropriate)

    P = MaxP × (AvgLen − MinThreshold) / (MaxThreshold − MinThreshold)

More RED than you can shake a stick at
• A problem: as is, packet drops are not well distributed in time
  – They occur in clusters
  – Because packets from a single connection tend to arrive in bursts, this clustering causes multiple drops within one connection
  – Bad: only one drop per round trip is needed to slow a connection down, whereas lots of drops could send it into slow start

RED just won't go away…
• Solution: make P a function of both AvgLen and how long it has been since the last packet was dropped:

    TempP = MaxP × (AvgLen − MinThreshold) / (MaxThreshold − MinThreshold)
    P = TempP / (1 − count × TempP)

• count: how many arriving packets have been queued while AvgLen has stayed between the two thresholds
• Note that a larger count => a larger P
• Spreads out the occurrence of drops (see the sketch below)
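
A minimal sketch pulling the last few slides together into one drop decision; the threshold and MaxP values are illustrative, and the class name is made up for this example.

    import random

    class RedQueue:
        def __init__(self, min_thresh=5.0, max_thresh=10.0, max_p=0.02):
            self.min_thresh = min_thresh
            self.max_thresh = max_thresh
            self.max_p = max_p
            self.count = 0   # packets queued since last drop while between thresholds

        def should_drop(self, avg_len):
            """Return True if the arriving packet should be dropped."""
            if avg_len <= self.min_thresh:
                self.count = 0
                return False                 # plenty of room: always queue
            if avg_len >= self.max_thresh:
                self.count = 0
                return True                  # gentle approach failed: always drop
            # Between the thresholds: drop with probability P.
            temp_p = self.max_p * (avg_len - self.min_thresh) / (self.max_thresh - self.min_thresh)
            denom = 1 - self.count * temp_p
            p = 1.0 if denom <= 0 else temp_p / denom
            if random.random() < p:
                self.count = 0
                return True
            self.count += 1
            return False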

RED again
• Because packet drops are random, flows that use more bandwidth have a higher probability of packet drop, so a sense of fairness is built in (sort of)
• At times the instantaneous queue length will exceed MaxThreshold (even though AvgLen may not). The queue needs extra space above MaxThreshold to handle these bursts without forcing the router into tail-drop mode

Tuning RED
• If traffic is bursty, MinThreshold should be large enough to keep link utilization fairly high
• MaxThreshold − MinThreshold should be larger than the typical increase in the calculated queue length during one RTT (a common rule: set MaxThreshold to twice MinThreshold)
• The time from when the router drops a packet to when it sees relief is at least one RTT, so it makes no sense to respond to congestion on time scales shorter than one RTT (100 ms is a good rule of thumb). Choose Weight so that changes on time scales shorter than an RTT are filtered out
• Caveat: all of these depend on the traffic mix (i.e., network load). Active area of research

Source-Based Congestion Avoidance
• Key: watch for clues that router queues are building up
• Scheme 1: the congestion window increases as in TCP, but every two round-trip delays the source checks whether the current RTT is greater than the average of the minimum and maximum RTTs observed so far. If so, it decreases the window by one-eighth (see the sketch below)
• Scheme 2: every RTT, increase the window by one packet. Compare the throughput achieved to the throughput when the window was one packet smaller (i.e., find the slope of the throughput-vs-window curve). If the difference is less than half the throughput achieved when only one packet is in the network, decrease the window by one packet. (Throughput is calculated as (bytes outstanding in network)/RTT)
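
A minimal sketch of Scheme 1 under the stated assumptions (window counted in packets; the normal TCP window growth happens elsewhere); the function name is illustrative.

    def scheme1_check(window, current_rtt, min_rtt, max_rtt):
        """Run once every two round-trip delays; returns the possibly reduced window."""
        if current_rtt > (min_rtt + max_rtt) / 2:
            return window * 7 / 8   # decrease window by one-eighth
        return window               # otherwise keep growing as in normal TCP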

TCP Vegas
[Figure: traces of the congestion window, average sending rate (throughput), and average queue size at the bottleneck router]

TCP Vegas
• Metaphor: driving on ice. The speedometer (window size) says you're going 30 mph, but you know (observed throughput) you're only going 10. The extra energy is absorbed by the tires (buffers)
• TCP Vegas idea: measure and control the amount of "extra" data in the network (i.e., data the source would not have transmitted if it were trying to match the available bandwidth)
  – Too much extra data => delay and congestion
  – Too little extra data => slow response to transient increases in available bandwidth

TCP Vegas
• BaseRTT: the RTT of a packet when the flow is not congested (set to the minimum observed RTT)
• ExpectedRate = CongestionWindow / BaseRTT (CongestionWindow is TCP's, assumed here to equal the number of bytes in transit)
• ActualRate: record the RTT for a distinguished packet, count the bytes sent between that packet's transmission and the return of its ACK, and divide by the RTT. Done once per round trip
• Compare ActualRate to ExpectedRate and adjust the window accordingly

TCP Vegas
• Diff = ExpectedRate − ActualRate
  – Must be nonnegative, or we need to change BaseRTT
• Two thresholds α and β, with α < β
  – α corresponds roughly to too little extra data in the network
  – β corresponds roughly to too much extra data in the network
• If Diff < α, increase the window linearly during the next RTT
• If Diff > β, decrease the window linearly during the next RTT
• If α < Diff < β, leave the window alone (see the sketch below)
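
A minimal sketch of the Vegas adjustment above, with the window in bytes and alpha/beta expressed as rates (bytes per second); all names and the MSS value are illustrative.

    def vegas_update(window, base_rtt, actual_rate, alpha, beta, mss=1460):
        """Run once per RTT; returns the new congestion window in bytes."""
        expected_rate = window / base_rtt    # rate the window implies with no queuing
        diff = expected_rate - actual_rate   # should be nonnegative
        if diff < alpha:
            return window + mss              # too little extra data: grow linearly
        if diff > beta:
            return window - mss              # too much extra data: shrink linearly
        return window                        # between thresholds: leave window alone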

Intuition
• The farther actual throughput gets from expected throughput, the more congestion there is in the network, and sending should be reduced
• If actual throughput gets too close to expected throughput, the connection is in danger of underutilizing the available bandwidth
• The goal is to keep between α and β extra bytes in the network

TCP Vegas
[Figure: congestion window trace, with ExpectedRate (colored line) and ActualRate (black line); the shaded area is the region between the α and β thresholds]

TCP Vegas
• α and β are compared to throughput rates, so they are typically given in KBps
• Intuition: how many extra buffers the connection is occupying in the network
  – Example: BaseRTT = 100 ms, packet size 1 KB, α = 30 KBps, β = 60 KBps. So in one RTT the connection keeps between 3 KB and 6 KB of extra data in the network (i.e., 3 to 6 packets, or equivalently 3 to 6 extra buffers); see the worked version below
  – In practice, setting α to one buffer and β to three buffers works well
• TCP Vegas decreases its window linearly (so why isn't it unstable?)
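
A worked version of the example above, converting the rate thresholds into extra buffers at the bottleneck; the numbers are those from the slide.

    base_rtt = 0.100       # seconds (100 ms)
    packet_size = 1000     # bytes (1 KB)
    alpha = 30000          # bytes/sec (30 KBps)
    beta = 60000           # bytes/sec (60 KBps)

    extra_low = alpha * base_rtt      # 3000 bytes of extra data per RTT
    extra_high = beta * base_rtt      # 6000 bytes of extra data per RTT

    print(extra_low / packet_size, "to", extra_high / packet_size, "extra buffers")
    # prints: 3.0 to 6.0 extra buffers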