Using Edge-to-Edge Feedback Control to Make Assured Service More Assured in DiffServ Networks
K. R. R. Kumar, A. L. Ananda, Lillykutty Jacob
Centre for Internet Research, School of Computing, National University of Singapore
Outline
– Introduction: need for QoS; solutions
– TCP over DiffServ: issues
– CATC: key observations; design considerations; topology; edge-to-edge feedback architecture; marking algorithm
– Simulation details
– Results and analysis
– Deployment
– Inferences and future work
Introduction
Need for QoS
– The exponential growth in traffic has led to a deterioration of QoS.
– Over-provisioning of networks could be a solution.
– A better solution: an intelligent network service with better resource allocation and management methods.
Solutions
Integrated Services
– Per-flow QoS.
– Not scalable.
Differentiated Services
– QoS for aggregated flows.
– Scalable.
– The philosophy: keep the core simple (AQM), push complexity to the edges.
DiffServ
[Figure: logical view of a packet classifier and traffic conditioner — Classifier, Meter, Marker, Shaper/Dropper; packets are forwarded or dropped.]
DiffServ cont’d.
Per-hop behaviours
– Expedited Forwarding: deterministic QoS.
– Assured Forwarding: statistical QoS.
Traffic conditioner components: Classifier, Meter, Marker, Shaper/Dropper.
– Metering: Token Bucket (TB), Time Sliding Window (TSW).
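A token-bucket meter, one of the two metering options named above, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the class and label names are assumptions.

```python
import time

class TokenBucketMeter:
    """Minimal token-bucket meter sketch: packets that fit within
    the committed rate and burst are 'in-profile', others are
    'out-of-profile' (to be handled by the marker/dropper)."""
    def __init__(self, cir_bps, burst_bytes):
        self.rate = cir_bps / 8.0        # committed rate, bytes/sec
        self.depth = burst_bytes         # bucket depth, bytes
        self.tokens = burst_bytes        # bucket starts full
        self.last = time.monotonic()

    def meter(self, pkt_len):
        # Replenish tokens at the committed rate, capped at the depth.
        now = time.monotonic()
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return "in-profile"
        return "out-of-profile"
```

A TSW meter would instead maintain a decaying rate estimate over a sliding window rather than a token count.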
TCP over DiffServ
Recent measurements show TCP flows in the majority (approx. 95% of the byte share).
TCP flows are much more sensitive to transient congestion.
Unresponsive flows such as UDP starve TCP traffic.
Bandwidth assurance is affected by the size of the target rate.
Biased against
– longer RTTs
– smaller window sizes
Congestion Aware Traffic Conditioner (CATC)
Key observations
– Markers, one of the major building blocks of a traffic conditioner, help in resource allocation.
– A proper understanding of transient congestion in the network helps.
– Edge routers have a better understanding of the domain traffic.
– An early indication of congestion in the network helps to prioritize packets in advance.
– Existing feedback mechanisms are end-to-end, e.g. ECN.
CATC cont’d.
Design considerations: markers should
– be least sensitive to marker or TCP parameters;
– be transparent to end hosts;
– maintain optimum marking;
– minimize synchronization;
– be fair to different target rates;
– be congestion aware.
Topology
Edge-to-Edge Feedback Architecture
Two edge routers: the control sender (CS) and the control receiver (CR).
Upstream:
– At CS: the CS sends control packets (CPs) at a regular interval of time, the control packet interval (cpi). CPs are given the highest priority.
– At the core: core routers maintain the drop status of best-effort packets, held as a status flag for at most one cpi. The CP’s congestion notification (CN) bit is set or reset based on the status flag.
– At CR: the CR responds to an incoming CP with the CN bit set by setting the congestion echo (CE) bit of the outgoing acknowledgement.
Feedback arch. cont’d.
Downstream:
– At CS: maintains a parameter, the congestion factor (cf). cf is set to 1 or 0 based on the status of the CE bit in the received acknowledgement.
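The CS/core/CR loop above can be sketched as three small state machines. This is a schematic illustration of the described control flow, not the actual router code; all class and field names are assumptions.

```python
class CoreRouter:
    """Core keeps a drop-status flag for best-effort packets and
    stamps it into the CN bit of passing control packets. The flag
    is held for at most one control-packet interval (cpi); here it
    is cleared once a CP has carried it out."""
    def __init__(self):
        self.dropped_be = False
    def on_be_drop(self):
        self.dropped_be = True
    def stamp_cp(self, cp):
        cp["CN"] = cp.get("CN", False) or self.dropped_be
        self.dropped_be = False
        return cp

class ControlReceiver:
    """CR echoes the CP's CN bit back as the CE bit of the ack."""
    def ack(self, cp):
        return {"CE": cp["CN"]}

class ControlSender:
    """CS derives the congestion factor cf from the echoed CE bit."""
    def __init__(self):
        self.cf = 0
    def on_ack(self, ack):
        self.cf = 1 if ack["CE"] else 0
```

One round of the loop: the CS emits a CP, each core router stamps it, the CR echoes CN as CE, and the CS updates cf for the marker to consume.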
Marking Algorithm
For each packet arrival:
if avg_rate <= cir then
  mp = mp + (1 - avg_rate/cir) * (1 + cf * (cir/cir_max))
  mark the packet using:
    cp11 w.p. mp (marked packets)
    cp00 w.p. (1 - mp) (unmarked packets)
Marking Algo. cont’d.
else if avg_rate > cir then
  mp = mp + (1 - avg_rate/cir) * (1 - cf * (cir/cir_max))
  mark the packet using:
    cp11 w.p. mp (marked packets)
    cp00 w.p. (1 - mp) (unmarked packets)
Marking Algo. cont’d.
where
avg_rate = the rate estimate on each packet arrival
mp = marking probability (<= 1)
cir = committed information rate (target rate)
cf = congestion factor
cir_max = maximum committed information rate
Also, cp denotes ‘codepoint’ and w.p. denotes ‘with probability’.
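The avg_rate above is a per-arrival rate estimate; since the deck names TSW as a metering option, a Time Sliding Window estimator is a natural fit. This is a generic TSW-style sketch under that assumption, not the paper's estimator; names are illustrative.

```python
class TSWRateEstimator:
    """Time Sliding Window rate estimator sketch: on each packet
    arrival, blend the bytes implied by the current estimate over
    the window with the new packet, then renormalize over the
    window plus the inter-arrival gap."""
    def __init__(self, win_len, initial_rate=0.0):
        self.win_len = win_len        # averaging window, seconds
        self.avg_rate = initial_rate  # bytes/sec
        self.t_front = 0.0            # time of last arrival

    def update(self, pkt_len, now):
        bytes_in_win = self.avg_rate * self.win_len + pkt_len
        self.avg_rate = bytes_in_win / (now - self.t_front + self.win_len)
        self.t_front = now
        return self.avg_rate
```

With a steady arrival pattern the estimate converges to the true sending rate, giving the marker a smoothed avg_rate to compare against cir.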
Algo cont’d.
Marking probability computation is based on:
– cir
– avg_rate
– cf
– cir_max among all cirs
Algo. cont’d.
The effect on mp:
– i) The flow component (1 - avg_rate/cir) constantly compares the observed average rate with the target rate, to keep the rate close to the target.
– ii) The network component cf*(cir/cir_max) provides a dynamic indication of the congestion level in the network. The marking-probability increment is made in proportion to the target rate, by multiplying cf with the weight factor cir/cir_max, to mitigate the impact of differing target rates.
Simulation Details
ns-2 (2.1b7a) simulator on Red Hat 7.0.
Modified Nortel’s DiffServ module for our architecture implementation.
Core routers use a RIO-like mechanism.
FTP bulk data transfer for TCP traffic.
Simulation Parameters
Simulation details cont’d.
Experiments conducted:
– Assured service (AS) for aggregates: AS in the under- and well-subscribed cases; AS in the oversubscribed case.
– Protection from BE UDP flows.
– Effect of UDP flows with assured (target) rates.
R&A: under- and well-subscribed
R&A: over-subscribed
R&A: Goodput vs. Time Graph (2/6 Mbps target rate)
Analysis
CATC
– achieves the target rates in the under- and well-subscribed cases;
– maintains the achieved rate close to the target rate;
– keeps total link utilization more or less constant throughout.
R&A: AS in presence of BE UDP and TCP
R&A: AS in presence of AS UDP and BE TCP
Analysis
CATC
– achieves goodput close to the target rates;
– succeeds in taking its share from the BE TCP and UDP flows in the worst-case scenario;
– keeps the average link utilization good;
– the AS UDP flow gets its assured rate.
Deployment
– MPLS over DiffServ.
– Marker can be placed anywhere (due to its lack of sensitivity to marker parameters).
Inferences and Future Work
The architecture is transparent to TCP sources and hence requires no modifications at the end hosts.
The edge-to-edge feedback control loop lets the marker take proactive measures to maintain the assured service effectively, especially during periods of congestion.
A single feedback control loop serves an aggregated flow; hence the architecture scales to any number of flows between the two edge gateways.
The architecture adapts to changes in load and network conditions.
The marking algorithm handles bursts in the flows.
Future Work
– Extend the present architecture to handle drops in priority queues.
– A new algorithm to incorporate this.
Q&A
Thank You!