SIP Server Overload Control
Henning Schulzrinne & Charles Shen, Columbia University

Outline
§ Introduction to SIP
§ SIP Overload Control Motivation and Overview
§ Filter-based SIP Overload Control
§ Overload Control for SIP over UDP
§ Overload Control for SIP over TCP
§ Summary

SIP Proxying
Caller → SIP Proxy A → SIP Proxy B → Callee: the INVITE is forwarded hop by hop, each hop answering with 100 Trying; 180 Ringing and 200 OK travel back from the callee, and the ACK completes the handshake. Media then flows directly between the endpoints until a BYE / 200 OK exchange tears the session down.
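
For concreteness, this is roughly what the session-initiating request looks like on the wire (a minimal INVITE modeled on RFC 3261; all names and addresses are illustrative, not from the slides):

    INVITE sip:callee@example.com SIP/2.0
    Via: SIP/2.0/UDP pc33.example.org;branch=z9hG4bK776asdhds
    Max-Forwards: 70
    To: Callee <sip:callee@example.com>
    From: Caller <sip:caller@example.org>;tag=1928301774
    Call-ID: a84b4c76e66710@pc33.example.org
    CSeq: 314159 INVITE
    Contact: <sip:caller@pc33.example.org>
    Content-Type: application/sdp
    Content-Length: 142

Each proxy adds its own Via header before forwarding, which is how the responses retrace the path shown above.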

SIP Server Overload: Motivation - When does it happen?
§ Legitimate short-term load spikes
• viewer-voting TV shows or ticket giveaways
• special holidays
• natural disaster areas
• mass phone reboot after power failure and recovery
§ Maliciously caused overload
• denial-of-service attacks

SIP Server Overload: Real Data - Kobe Earthquake of Jan. 17, 1995
[Chart: nationwide hourly call volume to Kobe, Jan. 17-19, 1995, against a regular day; a horizontal line marks the capacity of the NTT facility (max usage on normal days). Courtesy of NTT.]
• The earthquake struck at 5:46 am; traffic rose sharply right after it, from about 5:50 am.
• Hourly calls to Kobe from nationwide averaged about 20 times a normal day; the peak was more than 50 times normal, and volume was still about 7 times normal two days later.
• The congestion continued until the next Monday.

SIP Server Overload Categorized - Proxy-to-Proxy Overload
[Figure: SIP Proxies A, B, C, D exchanging traffic.]
§ Known overload caller/callee scope
• human callee destination limits: ticket give-away
• malicious caller injecting traffic
§ Unknown overload caller/callee scope
• proxy server capacity limits
• disaster aftermath: regional callee scope
• holiday greetings: global callee scope

SIP Server Overload Categorized - UA-to-Registrar Overload
§ Avalanche restart after sudden power failure and recovery
• registration server capacity limits
• usually affects a local or regional registrant scope
• recent real-life example covering a fairly wide area in the east of the UK
§ Different scenarios call for different approaches

SIP Server Overload: Existing Overload Control Layering
Application-layer protocol overload control is NOT new, e.g., HTTP.
User level:
§ user is given a busy signal
§ delayed responses reduce usage
§ reduced functionality (e.g., no images)
§ user behavior is beyond protocol control
Application protocol level:
§ HTTP/SIP server busy indications
§ protocol specific
Transport or lower layers:
§ TCP flow control
§ TCP congestion control
§ refusing TCP connections or UDP packets
§ cross-layer, specific to the lower-layer protocols

SIP Server Overload: Borrow from HTTP Overload Control?
Similarities:
• both run over TCP (and, rarely, SCTP)
• both often have a single server hop
Differences:
• SIP also runs over UDP
• SIP has multiple hops (proxies)
• SIP has protocol-specific retransmissions
• SIP requests are more time sensitive (ms vs. seconds)
Conclusion: sufficiently different to require different solutions.

SIP Server Overload: Transport-Layer Overload Control for SIP?
§ TCP flow control: avoids receiver buffer overflow
§ TCP congestion control: avoids link congestion
§ Refusing TCP connections / dropping UDP packets:
• may be good for the UA-to-registrar/proxy scenario
• not preferable for proxy-to-proxy connections
• SIP retransmissions may cause adverse effects
• different transport layers offer different overload-control options
Conclusion: application-layer SIP overload control is needed.

SIP Server Overload: The Family Tree
SIP server overload
• proxy-to-proxy overload
  - known overload source/destination scope
  - diffuse overload source/destination scope
    - UDP-based
    - TCP-based
• UA-to-proxy overload
• UA-to-registrar (registrar server) overload

SIP Server Overload: The Family Tree
SIP server overload
• proxy-to-proxy overload
  - known overload source/destination scope → application-layer filter-based SIP overload control
  - diffuse overload source/destination scope
    - UDP-based
    - TCP-based
• UA-to-proxy overload
• UA-to-registrar (registrar server) overload

SIP Server Overload: Filter-based Control - Push Control to the Edge
Scenario: a hotline callee at +1-212-555-1234 is active 9 am-10 am on 2009-01-01 in Enterprise Network A. Each upstream network installs a filter of the form
  Filter Spec
    ID: ...
    To: +1-212-555-1234
    Time: 9 am-10 am, 2009-01-01
    Act: accept rate = N
where the granted rate N is N_EPA at Enterprise Network A, N_SPA at Service Provider A, N_SPB at Service Provider B, and N_max at Enterprise Network B.
Charles Shen, Henning Schulzrinne, Arata Koike, A Session Initiation Protocol (SIP) Load Control Event Package, draft-shen-sipping-load-control-event-package-01.txt, IETF SIPPING Working Group, Work in Progress, Nov. 3, 2008.
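
The draft distributes such filters through a SIP event package; purely to illustrate the semantics (this is not the draft's actual encoding, and all names are assumed), a filter and its "accept rate=N" action can be modeled as a paced admission test:

    class LoadFilter:
        """Illustrative model of one load-control filter (names assumed)."""

        def __init__(self, to, t_start, t_end, rate):
            self.to = to                      # target, e.g. "+1-212-555-1234"
            self.t_start, self.t_end = t_start, t_end  # validity window (epoch secs)
            self.rate = rate                  # granted call rate N (calls/sec)
            self._next_ok = 0.0

        def matches(self, request_to, now):
            return request_to == self.to and self.t_start <= now < self.t_end

        def accept(self, now):
            # "Act: accept rate=N": admit at most one call per 1/N seconds.
            if now >= self._next_ok:
                self._next_ok = now + 1.0 / self.rate
                return True
            return False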

SIP Server Overload: The Family Tree
SIP server overload
• proxy-to-proxy overload
  - known overload source/destination scope → application-layer filter-based SIP overload control
  - unknown overload source/destination scope
    - UDP-based
    - TCP-based
• UA-to-proxy overload
• UA-to-registrar (registrar server) overload

SIP Server Overload: The Family Tree
SIP server overload
• proxy-to-proxy overload
  - known overload source/destination scope → application-layer filter-based SIP overload control
  - unknown overload source/destination scope
    - UDP-based → application-layer feedback-based SIP overload control
    - TCP-based
• UA-to-proxy overload
• UA-to-registrar (registrar server) overload

SIP over UDP: Simple Dropping upon Overload?
§ SIP over UDP is dominant in current practice.
§ Simply dropping requests on overload generates more messages due to retransmission, e.g., Timer A for INVITE retransmission: Proxy A resends the INVITE to Proxy B after T1, then 2·T1, then 4·T1, and so on.
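
A sketch of the Timer A/Timer B schedule from RFC 3261 (T1 defaults to 500 ms; the retransmit interval doubles until Timer B, 64·T1, ends the transaction):

    def timer_a_schedule(t1=0.5):
        """Yield the delays (seconds) between successive INVITE retransmissions."""
        delay, elapsed = t1, 0.0
        while elapsed + delay < 64 * t1:    # Timer B caps the transaction
            yield delay
            elapsed += delay
            delay *= 2                      # exponential backoff

    # list(timer_a_schedule()) -> [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
    # i.e., an unanswered INVITE is sent 7 times in ~32 s - dropping it
    # under overload multiplies the load instead of shedding it.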

SIP over UDP: Message Rejection upon Overload?
§ Rejecting excessive requests upon overload?
• the SIP 503 (Service Unavailable) response code rejects an individual request
§ Trying an alternative server?
• it may soon be overloaded too → cascading failure!
[Figure: a) Proxy A rejects the caller's INVITE with 503 Service Unavailable (or 500 Server Internal Error) / ACK; b) the INVITE is retried through Proxies B and C.]

Feedback-based SIP Overload Control
The Sending Entity (SE) regulates its incoming load, forwarding the regulated load to the Receiving Entity (RE) and throttling the rest. The RE's feedback algorithm computes feedback information from its output load and returns it to the SE's feedback enforcement over a feedback channel (in-band or out-of-band).

Absolute Rate Feedback
The RE's feedback algorithm computes an absolute acceptable rate λ' and sends it over the feedback channel (in-band or out-of-band). The SE forwards λ' of its offered load λ, so the RE serves μ = λ', and throttles the excess λ - λ'.

Relative Rate Feedback
The RE computes a rejection fraction p and feeds it back over the feedback channel (in-band or out-of-band). The SE forwards (1-p)λ, so the RE serves μ = (1-p)λ, and throttles pλ.

Window Feedback
The RE grants the SE a window win of sessions that may be outstanding, fed back over the feedback channel (in-band or out-of-band). The SE forwards a new session only while win > 0 (serving μ = λ') and throttles the remainder λ - λ' once the window is exhausted.
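
A minimal sketch of the three enforcement disciplines on the SE side (class and method names are illustrative, not from the slides):

    import random

    class AbsoluteRateThrottle:
        """Forward at most `rate` new sessions per second (simple pacing)."""
        def __init__(self, rate):
            self.rate, self._next_ok = rate, 0.0
        def admit(self, now):
            if self.rate > 0 and now >= self._next_ok:
                self._next_ok = now + 1.0 / self.rate
                return True
            return False

    class RelativeRateThrottle:
        """Reject each new session with RE-fed probability p."""
        def __init__(self, p):
            self.p = p
        def admit(self, now=None):
            return random.random() >= self.p

    class WindowThrottle:
        """Forward only while the RE-granted window has free slots."""
        def __init__(self, win):
            self.win = win
        def admit(self, now=None):
            if self.win > 0:
                self.win -= 1     # slot consumed; RE feedback replenishes win
                return True
            return False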

Feedback-based SIP Overload Control: Design Requirements
§ approach ideal performance*
§ minimize the number of control parameters
Performance metrics:
§ throughput + response time → goodput: a call counts only if the INVITE → ACK (of the INVITE's 200 OK) exchange completes within 10 s
§ fairness
[Figure: the seven-message call flow INVITE / 100 Trying / 180 Ringing / 200 OK / ACK / BYE / 200 OK.]
* All load and goodput values in this presentation are normalized by server capacity.

Window-based Feedback Control Algorithms
§ Window size decrement (all three algorithms): upon receiving a session request (INVITE)
§ Window size adjustment algorithm
• win-disc: update the acceptable number of sessions every control interval
• win-cont: update the current number of available session slots on every message arrival
• win-auto: increment the window after processing a NEW INVITE request
§ Tuning parameters
• win-disc: budget queuing delay, control interval, measurement interval
• win-cont: budget queuing delay, measurement interval
• win-auto: none
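
A sketch of a win-disc-style window update. The slide fixes only the inputs (budget queuing delay, control interval, measured service rate); the exact formula below, new window = what the RE can serve within the interval plus the delay budget, minus the current backlog, is one plausible reading, not the paper's verbatim definition:

    def win_disc_window(service_rate, backlog, delay_budget, control_interval):
        """Sessions to grant for the next control interval (win-disc style).

        service_rate     measured session service rate (sessions/sec)
        backlog          sessions currently queued at the RE
        delay_budget     budget queuing delay (sec)
        control_interval control interval length (sec)
        """
        capacity = service_rate * (control_interval + delay_budget)
        return max(0, int(capacity - backlog))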

Rate-based Feedback Control Algorithms
§ Rate adjustment algorithm
• rate-abs: update the acceptable (absolute) rate every control interval
• rate-occ: update the request acceptance ratio every control interval
§ Tuning parameters
• rate-abs: budget queuing delay, control interval, measurement interval
• rate-occ: budget CPU occupancy, control interval, measurement interval, plus the OCC tuning parameters
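
For rate-occ, a sketch of the classic multiplicative occupancy-based (OCC-style) update; the clamp bounds f_min and f_max are illustrative tuning parameters, not values from the slides:

    def rate_occ_update(accept_ratio, measured_occ, budget_occ,
                        f_min=0.02, f_max=1.0):
        """Adjust the request acceptance ratio once per control interval."""
        if measured_occ > 0:
            # Scale by how far measured CPU occupancy is from its budget.
            accept_ratio *= budget_occ / measured_occ
        return min(f_max, max(f_min, accept_ratio))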

Simulation Assumptions and Metrics
§ Simulator: RFC 3261-compatible simulator built on OPNET
§ Call model: exponential inter-arrival times with the standard seven-message call flow
§ RE service capacity: 72 cps; rejection capacity: 3000 cps
§ UAs and SEs have infinite capacity
§ UDP transport, no link delay or loss
§ Piggybacked feedback
[Topology: SIP Proxies A, B, C feeding SIP Proxy D.]

SIP Overload Performance without Feedback Control
Simple Drop scenario: messages are dropped when the queue is full.
Threshold Rejection scenario: the queue length is configured with a high and a low threshold value.
Similar congestion collapse, but for different reasons:
§ Simple Drop: only one third of the INVITEs arrive at the callee, and all 180 Ringing and most 200 OK responses are also dropped due to queue overflow: ghost ring!
§ Threshold Rejection: no INVITE reaches the callee; the RE does nothing but send rejection messages.

Feedback-based SIP Overload Control: Best Performance Comparison across Algorithms
All algorithms except rate-occ reach unit goodput:
§ no retransmissions at all
§ the server is always busy processing messages
§ every single message is part of a successful session
rate-occ does not operate at unit goodput:
§ not simply due to the artificial 85% CPU limit
§ occupancy is inherently not as direct a metric as needed
§ an extremely small control interval improves performance at heavy load, but brings many problems

SIP Overload Control: Fairness
Basic fairness, assuming sources of equal importance:
§ User-centric fairness
• equal success probability for each individual user
• implementation: divide RE capacity proportionally to the original SE load arrivals
• applicability example: "free ticket to the third caller"
§ Provider-centric fairness
• each provider (SE) gets the same aggregate share of the total RE capacity
• implementation: divide RE capacity equally among SEs
• applicability example: equal-share SLA
Customized fairness, assuming sources of different importance:
§ any allocation pre-specified by an SLA, etc.
§ penalizing specific sources in denial-of-service attacks

Dynamic Load Performance w/ Provider-Centric Fairness
§ short periods of bulk load with source arrivals and departures
§ example result using the rate-abs algorithm
§ each upstream SE gets a close-to-equal share of RE capacity
§ fast dynamic transitions

Dynamic Load Performance w/ User-Centric Fairness
§ double-feed architecture
§ example using the win-cont algorithm
§ upstream SEs share RE capacity proportionally to load
§ fast dynamic transitions

Feedback-based SIP Overload Control: Conclusions
[Table: win-disc, win-cont, win-auto, rate-abs, and rate-occ compared on optimal steady-state performance, optimal dynamic performance, user-centric fairness, and provider-centric fairness; win-cont and rate-abs achieve user-centric fairness with the double-feed architecture.]
Number of tuning parameters: win-disc 3, win-cont 2, win-auto 0, rate-abs 3, rate-occ 5.
Charles Shen, Henning Schulzrinne, Erich Nahum, SIP Server Overload Control: Design and Evaluation, pp. 149-173, Vol. 5310/2008, Principles, Systems and Applications of IP Telecommunications. Services and Security for Next Generation Networks.

SIP Server Overload: The Family Tree
SIP server overload
• proxy-to-proxy overload
  - known overload source/destination scope → application-layer filter-based SIP overload control
  - unknown overload source/destination scope
    - UDP-based → application-layer feedback-based SIP overload control
    - TCP-based
• UA-to-proxy overload
• UA-to-registrar (registrar server) overload

SIP Server Overload: The Family Tree
SIP server overload
• proxy-to-proxy overload
  - known overload source/destination scope → application-layer filter-based SIP overload control
  - unknown overload source/destination scope
    - UDP-based → application-layer feedback-based SIP overload control
    - TCP-based → does TCP with a small send buffer solve the overload problem? How about fairness?
• UA-to-proxy overload
• UA-to-registrar (registrar server) overload

SIP over TCP
§ Why?
• security: TLS runs over TCP
• NAT/firewall traversal
• larger message sizes, especially in NGN
• flow/congestion control?
§ Main differences from UDP:
• far fewer retransmissions, but some timers remain; e.g., the callee retransmits the 200 OK after T1, 2·T1, 4·T1, ... until the ACK arrives
• transport-level flow control

TCP Flow Control: Preventing Receiver Overflow
The sender sends no more than the receiver's available buffer space. With the sender tracking LastByteWritten, LastByteSent, and LastByteAcked in its send buffer, and the receiver tracking LastByteRcvd and LastByteRead in its receive buffer:
  AdvertisedWindow = MaxRcvBuffer - (LastByteRcvd - LastByteRead)
  EffectiveWindow = AdvertisedWindow - (LastByteSent - LastByteAcked)
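
The same arithmetic as a tiny self-checking sketch (variable names follow the slide; the example numbers are made up):

    def advertised_window(max_rcv_buffer, last_byte_rcvd, last_byte_read):
        # Free space left in the receive buffer, advertised to the sender.
        return max_rcv_buffer - (last_byte_rcvd - last_byte_read)

    def effective_window(advertised, last_byte_sent, last_byte_acked):
        # What the sender may still put on the wire: advertised space
        # minus bytes already sent but not yet acknowledged.
        return advertised - (last_byte_sent - last_byte_acked)

    adv = advertised_window(65536, 30000, 13616)  # 64 KB buffer, 16 KB unread -> 49152
    eff = effective_window(adv, 20000, 11808)     # 8 KB in flight -> 40960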

SIP over TCP Testbed
§ SIPOL benchmarking suite
• automates running experiments and analyzing results
• extensible and reproducible
§ OpenSIPS (formerly the Open SIP Express Router): a standard open-source SIP proxy
§ SIPp: a standard open-source SIP workload generator
§ Hardware testbed
• up to 22 Intel Pentium 4 or Xeon 3 GHz machines with 1 GB/2 GB memory (for SE/RE/UAC/UAS)
• lightly loaded 100 Mb/s or 1 Gb/s LAN
• Linux kernel 2.6.24
§ Workload: standard seven-message signaling flow

SIP over TCP: Default Performance
UAC → SE → RE → UAS topology: clear congestion collapse after overload.

Default SIP over TCP Internals: Final Screen Log (case study: L=150, C=65)
§ UAS: only 3821 of the 25899 INVITEs received an ACK; all the rest timed out
§ UAC: sent 10106 ACKs, but 6285 of them arrived at the UAS too late

Default SIP over TCP Internals: Post-Dial Delay (case study: L=150, C=65)
PDD: the delay between the INVITE being sent and a 180/200 being received. Even without considering whether the ACK is delivered successfully:
§ 73% of PDDs fall between 8 and 16 s
§ 24% of PDDs fall between 4 and 8 s

SIP over TCP Overload Control: Observations and Solutions
§ Non-atomic application session load makes default transport-level flow admission control ineffective:
• an INVITE is much more expensive than other messages
• in-session messages arrive on different connections at different times
• solution: split the connections
§ Default system buffers lead to unacceptable delays:
• SE socket send buffer
• RE socket receive buffer
• RE server application buffer
• solution: minimize the buffers

SIP over TCP Overload Control Mechanism: DC+MB
Dual Connection with Minimized Buffer (DC+MB): physically split the SE-RE connection.
§ Session-start INVITE requests go to RE port 5059, over a connection with a minimized TCP socket send buffer (SE side) and a minimized TCP socket receive buffer plus a minimized SIP server application buffer (RE side).
§ Other in-session requests (ACK etc.) go to RE port 5060, over a connection with the default send, receive, and application buffers.
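
A sketch of the SE-side connection setup (the port numbers are from the slide; the address and the "minimized" size are illustrative, and the kernel may round buffer sizes):

    import socket

    RE_HOST = "re.example.net"   # hypothetical RE address
    MIN_SNDBUF = 2048            # illustrative minimized size, about one INVITE

    def connect(port, minimized):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        if minimized:
            # Shrink the send buffer before connecting so it takes effect.
            s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, MIN_SNDBUF)
        s.connect((RE_HOST, port))
        return s

    invite_conn = connect(5059, minimized=True)      # session-start INVITEs
    insession_conn = connect(5060, minimized=False)  # ACK, BYE, etc.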

Why does DC+MB still fail, and how to fix it?
§ The pressure is merely shifted to the SE, ultimately blocking the UAC-to-SE path!
§ Solution: smart forwarding, a simple parameter-free algorithm:
• on INVITE arrival, forward the INVITE if the INVITE connection's send buffer is empty; otherwise reject it
§ Implementation: just an easy Linux ioctl call to test buffer emptiness (sketched below)
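
A sketch of that test in Python on Linux, using the TIOCOUTQ ioctl (the same value as SIOCOUTQ) to read how many bytes are still queued in a TCP socket's send buffer; smart_forward() and reject() are illustrative names:

    import fcntl
    import struct
    import termios

    def send_buffer_empty(sock):
        """True if the TCP send queue of `sock` has fully drained (Linux only)."""
        raw = fcntl.ioctl(sock.fileno(), termios.TIOCOUTQ, struct.pack("i", 0))
        return struct.unpack("i", raw)[0] == 0

    def smart_forward(invite, invite_conn, reject):
        # Parameter-free admission test: forward a new INVITE only when the
        # previous ones have drained to the RE; otherwise shed it upstream.
        if send_buffer_empty(invite_conn):
            invite_conn.sendall(invite)
        else:
            reject(invite)   # e.g., answer 503 Service Unavailable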

SIP over TCP Overload Control Mechanism: DC+MB+SF
Dual Connection with Minimized Buffer and Smart Forwarding (DC+MB+SF): as in DC+MB, the physically split connections carry session-start INVITEs (port 5059, minimized send/receive/application buffers) and other in-session requests such as ACK (port 5060, default buffers), with smart forwarding applied to the INVITE connection at the SE.

DC+MB+SF: Throughput (case study: L=150, C=65)
§ Of the 35999 INVITEs generated, 22019 were rejected; all the remaining 13980 succeeded, with no retransmissions
§ RE CPU utilization is full: full capacity achieved without waste

DC+MB+SF: Post-Dial Delay
§ 99% of delays are below 60 ms
§ no delay exceeds 70 ms

SIP over TCP Overload Control Mechanism: VDC+MB+SF
Virtual Dual Connection with Minimized Buffer and Smart Forwarding (VDC+MB+SF): all requests share a single physical SE-RE connection, and the dual-connection split becomes virtual, enforced by smart forwarding at the SE. The connection uses a minimized TCP socket send buffer and a minimized TCP socket receive buffer plus a minimized SIP server application buffer in place of the defaults.

Smart Forwarding for the Virtual Dual Connection
§ on message arrival, a non-INVITE is always forwarded
§ an INVITE is forwarded only if the send buffer is empty; otherwise it is rejected (see the sketch below)
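
The same buffer test extended to the shared connection, reusing send_buffer_empty() from the sketch above; is_invite() is an assumed helper that inspects the request method:

    def smart_forward_vdc(msg, conn, reject):
        if not is_invite(msg):
            # In-session messages (ACK, BYE, ...) are already-committed
            # work: always forward them.
            conn.sendall(msg)
        elif send_buffer_empty(conn):
            # Admit a new session only when the send queue has drained.
            conn.sendall(msg)
        else:
            reject(msg)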

VDC+MB+SF: Throughput (case study: L=150, C=65)
§ Of the 35999 INVITEs generated, 22742 were rejected; all the remaining 13257 succeeded (compared with 13980 in DC+MB+SF), with no retransmissions
§ RE CPU utilization is full: full capacity achieved without waste

VDC+MB+SF: Post-Dial Delay
Compared with DC+MB+SF:
§ 99.8% of delays are below 30 ms
§ overall better than DC+MB+SF

VDC+MB+SF: Buffer Tuning
[Figure panels: with default application buffer; with default receive buffer.]
Conclusions:
§ the send buffer size is insignificant
§ the receive buffer is more effective than the application buffer
§ only the receive buffer needs tuning! MB → MRB (Minimized Receive Buffer)

Overall Performance of VDC+MRB+SF
Using 1 SE, 3 SEs, and 10 SEs with a 2 KB receive buffer.

VDC+MRB+SF: Comparing 1 SE, 3 SEs, and 10 SEs
§ The number of outstanding calls and the PDD increase roughly proportionally, due to the increase in total buffer size.

VDC+MRB+SF: Receive Buffer Impact (10 SEs at 11x overload)
§ throughput is fine as long as the receive buffer is not too small
§ delay is clearly affected
§ recommended value: no more than one or two INVITE sizes (e.g., 1-2 KB)

VDC+MRB+SF: Fairness
§ three SEs with incoming load ratio 3:2:1
§ when the total load is below capacity, capacity is divided proportionally to load
§ at light to moderate overload, the division approaches equality
§ at heavy overload, the SEs share the same portion of capacity

Insights from Our SIP over TCP Overload Control Study
§ Traditional TCP flow control cannot prevent SIP congestion collapse.
§ Causes: special SIP load characteristics
• multiple-message, session-based, non-atomic job units distributed over different connections
• the session-start message is a commitment to the rest of the messages
• real-time requirement: a job fails if not finished within the delay threshold
§ The results generalize to the whole application space with similar load characteristics, e.g., SQL.
§ Key solution components:
• connection split, for the unequal and distributed messaging problem
• upstream smart forwarding, to release pressure in advance
• self-minimized buffers, to avoid job accumulation and so minimize delay
§ Limitation: the solution does not address the SIP overload scenario of numerous SEs overloading a single RE, which is a separate problem.

SIP Server Overload: The Family Tree
SIP server overload
• proxy-to-proxy overload
  - known overload source/destination scope → application-layer filter-based SIP overload control
  - unknown overload source/destination scope
    - UDP-based → application-layer feedback-based SIP overload control
    - TCP-based → connection split + minimized buffer + smart forwarding
• UA-to-proxy overload
• UA-to-registrar (registrar server) overload

SIP Server Overload: Conclusion
SIP server overload
• proxy-to-proxy overload
  - known overload source/destination scope → application-layer filter-based SIP overload control
  - unknown overload source/destination scope
    - UDP-based → application-layer feedback-based SIP overload control
    - TCP-based → connection split + minimized buffer + smart forwarding
• UA-to-proxy overload
• UA-to-registrar (registrar server) overload → random backoff? offline feedback?

End

Backup Slides: VoIP Basics

VoIP: A Typical Internet Multimedia Service
[Figure: UAs connected through SIP Servers A-D, with a Local Location DB and a DNS/ENUM server.]

VoIP: A Typical Internet Multimedia Service
§ DNS/ENUM lookup (DNS/ENUM server)
§ SIP registration (Local Location DB)
§ SIP proxying and session creation (SIP Servers A-D)
§ QoS support: GIST + QoS NSLP (NSIS)
§ Media transport: RTP


Backup Slides: SIP Overload over UDP

Feedback-based SIP Overload Control: Design Decisions
§ SIP session as the basic control unit
• different messages carry different weights, e.g., INVITE vs. non-INVITE
§ Characterizing SIP sessions: check the number of INVITEs accepted
§ Active source estimation: directly track each currently active SE sending incoming load
§ Dynamic session backlog estimation: count both INVITEs and non-INVITEs toward the current session backlog

Feedback-based SIP Overload Control: Sensitivity of Budget Queuing Delay and Control Interval
§ A small queuing delay budget (below half the T1 timer) avoids timeouts and gives the best results
• example results for win-disc; similar results for win-cont and rate-abs
§ Sensitivity of the control interval: the smaller the CI, the better
• example results for win-disc: at DB = 200 ms, CI <= 200 ms is sufficient to achieve unit goodput in our scenario

Backup Slides: SIP Overload over TCP

Default SIP over TCP Internals: RE Average Message Processing Rate (case study: L=150, C=65)
§ immediate 200 OK retransmissions → round-trip delay already above 500 ms
§ the load is not atomic → INVITE/BYE processing alternates; too many INVITEs are admitted, and BYEs lag behind
§ a growing number of outstanding calls → even longer delay
§ the default RE buffer holds up to 150 INVITEs at a time, about 5 times the maximum number of calls the delay budget allows in the system

Default SIP over TCP Internals: UAS Total Messages Processed (case study: L=150, C=65)
[Figure legend: INV Recv, 200 Timeout, ACK Recv, Outstanding Calls, INV Unexpected, INV Timeout.]
§ 0-38 s: the number of outstanding calls equals the number of ACKs received
§ after 38 s: outstanding calls increase with INVITE processing and decrease with ACK/BYE processing
§ after 69 s: "unexpected INVITEs" are actually ACK and BYE messages arriving after the call has timed out
§ after 101 s: call states established by those unexpected ACK and BYE messages time out
§ the number of timely received ACKs virtually stops growing after 77 s → zero throughput

DC+MB+SF: RE Average Message Processing (case study: L=150, C=65)
§ all messages are handled in time
§ the number of outstanding calls stays between 1 and 3!
UAS total message processing: each INVITE is matched by an ACK.

VDC+MB+SF: RE and UAS Message Processing (case study: L=150, C=65)
RE average message processing rate:
§ all messages are handled in time
§ the number of outstanding calls stays between 0 and 3, compared with between 1 and 3 for DC
UAS total message processing: each INVITE is matched by an ACK.

VDC+MB+SF: Relaxing the Application Buffer
§ throughput stays roughly the same
§ the number of bytes read from the socket each time is virtually unaffected by the application buffer size
[Figure panels: L=150 with default application buffer; L=750 with default application buffer.]

Relaxing Receive Buffer and Application Buffer
[Figure panels: minimized application buffer (1300 B) vs. default application buffer (64 KB), both with the default 64 KB receive buffer.]

Relaxing both Receive and Application Buffer: Post-Dial Delay
§ Though the throughputs are similar, limiting the receive buffer is significantly more effective in reducing delay, and thus in preventing retransmissions.

Relaxing the Send Buffer
[Figure panels: throughput with a 2 KB receive buffer and the minimized 1300 B application buffer; throughput with a 2 KB receive buffer and the default 64 KB application buffer.]

Relaxing the Send Buffer
§ throughput with the default 64 KB receive buffer and the minimized 1300 B application buffer
§ send buffer occupancy observed at load 750, with the default 16 KB send buffer, default 64 KB application buffer, and 2 KB receive buffer
§ with a minimized number of active calls in the system, the send buffer size does not matter
Conclusion: only the receive buffer needs tuning! MB → MRB (Minimized Receive Buffer)

Overall Performance: DC+MRB+SF vs. VDC+MRB+SF
§ using 1 SE, 3 SEs, and 10 SEs with a 2 KB receive buffer
§ overall overload throughput is very similar
§ DC underload performance is slightly better than VDC in the single-SE case
§ however, VDC delay is smaller than DC; e.g., 200 OK retransmissions appear in the 10 SE, L=750 scenario for DC but not for VDC