Ch. 7: Internet Transport Protocols 1


Transport Layer

Our goals:
• understand the principles behind transport layer services:
  - multiplexing / demultiplexing of the data streams of several applications
  - reliable data transfer
  - flow control
  - congestion control

Chapter 6: rdt principles
Chapter 7:
• multiplexing / demultiplexing
• Internet transport layer protocols:
  - UDP: connectionless transport
  - TCP: connection-oriented transport (connection setup, data transfer, flow control, congestion control)

Transport vs. network layer

  Transport Layer                                     Network Layer
  logical communication between processes            logical communication between hosts
  exists only in hosts                                exists in hosts and in routers
  ignores how the network routes data                 routes data through the network
  port #s used for routing in the destination host    IP addresses used for routing in the network

• The transport layer uses network layer services and adds more value to these services.

Multiplexing & Demultiplexing

Multiplexing / demultiplexing

Multiplexing at the sending host:
• gather data from multiple sockets, envelop the data with headers (later used for demultiplexing), pass to L3

Demultiplexing at the receiving host:
• receive segments from L3, deliver each received segment to the correct socket

[Figure: three hosts, each with application / transport / network / link / physical layers; processes P1-P4 are attached to sockets, and the transport layer delivers each segment to the right socket.]

How demultiplexing works

• host receives IP datagrams
  - each datagram has the source IP address and destination IP address in its header
  - each datagram carries one transport-layer segment
  - each segment has source and destination port numbers in its header
• the host uses port numbers, and sometimes also IP addresses, to direct a segment to the correct socket
  - from the socket, the data gets to the relevant application process

[Figure: TCP/UDP segment format, 32 bits wide: source port #, dest port #, other header fields, application data (message); the enclosing IP datagram's L3 header carries the source and destination IP addresses.]

Connectionless demultiplexing (UDP)

• processes create sockets with port numbers
• a UDP socket is identified by a pair of numbers: (my IP address, my port number)
• the client decides to contact:
  - a server (peer IP address), and
  - an application (peer port #)
• the client puts those into the UDP packet it sends; they are written as:
  - dest IP address, in the IP header of the packet
  - dest port number, in its UDP header
• when the server receives a UDP segment, it:
  - checks the destination port number in the segment
  - directs the UDP segment to the socket with that port number (packets from different remote sockets are directed to the same socket)
  - the UDP message waits in the socket queue and is processed in its turn
  - the answer message is sent to the client's UDP socket (listed in the Source fields of the query packet)
(a minimal socket sketch follows)
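
The following is a minimal sketch, not taken from the slides, of UDP demultiplexing by destination port using Python's socket API; the port number 9999 and the payloads are made up for illustration.

    import socket

    # server side: one UDP socket, identified by (my IP, my port)
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("0.0.0.0", 9999))        # every datagram with dest port 9999 lands in this socket's queue

    # client side: the OS picks an ephemeral source port for the client socket
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(b"query", ("127.0.0.1", 9999))   # dest IP goes in the IP header, dest port in the UDP header

    data, peer = server.recvfrom(2048)    # peer = (source IP, source port): the "return address"
    server.sendto(b"reply", peer)         # the answer goes back to the client's UDP socket
    print(client.recvfrom(2048))

Run as a single script, this shows the query queued at the server socket and the reply demultiplexed back to the client socket by its port number.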

Connectionless demux (cont)

[Figure: two clients, one at IP A using port 9157 and one at IP B using port 5775, send UDP queries to the server socket (port 53, IP C); each reply carries SP = 53 and the client's own port as DP, so it reaches the right client socket.]
• Legend: SP = source port number, DP = destination port number, S-IP = source IP address, D-IP = destination IP address
• SP and S-IP provide the "return address"

Connection-oriented demux (TCP)

• a TCP socket is identified by a 4-tuple:
  - local (my) IP address
  - local (my) port number
  - remote IP address
  - remote port number
• the receiving host uses all four values to direct a segment to the appropriate socket
• a server host may support many simultaneous TCP sockets:
  - each socket is identified by its own 4-tuple
• web servers have a different socket for each connecting client
  - if you open two browser windows, you generate 2 sockets at each end
  - non-persistent HTTP will open a different socket for each request
(see the sketch below)
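
A minimal sketch (assumed example, not from the slides) of how the 4-tuple shows up in the socket API: one listening socket accepts connections, and every accept() returns a new working socket whose local and remote endpoints form the demultiplexing key. The port 8080 is hypothetical.

    import socket

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 8080))
    listener.listen()                           # the waiting socket

    while True:
        conn, remote = listener.accept()        # a new working socket per client connection
        local = conn.getsockname()              # (local IP, local port); the local port is still 8080
        print("4-tuple:", local, remote)        # all four values together identify this socket
        conn.sendall(b"hello\n")
        conn.close()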

Connection-oriented demux (cont)

[Figure: server IP C holds three working sockets, all with local port 80: one for client A (remote port 9157, remote IP A) and two for client B (remote ports 9157 and 5775, remote IP B). Because the full 4-tuple is used, the server can tell the connections apart even when two clients happen to use the same remote port number.]
• Legend: LP = local port, RP = remote port, L-IP = local IP, R-IP = remote IP; "L" = local = my, "R" = remote = peer

UDP Protocol

UDP: User Datagram Protocol [RFC 768]

• simple transport protocol
• "best effort" service; UDP segments may be:
  - lost
  - delivered out of order to the application
  with no correction by UDP
• UDP will discard bad-checksum segments if so configured by the application
• connectionless:
  - no handshaking between UDP sender and receiver
  - each UDP segment is handled independently of the others

Why is there a UDP?
• no connection establishment saves delay
• no congestion control: better delay & BW
• simple: small segment header
• typical usage: real-time applications that are loss tolerant and rate sensitive
• other uses (why?): DNS, SNMP

UDP segment structure

[Figure: 32-bit-wide segment: source port #, dest port #, length, checksum, then application data (variable length). "Length" is the total length of the segment in bytes.]

Checksum computed over:
• the whole segment
• part of the IP header: both IP addresses, the protocol field, and the UDP length

Checksum usage:
• computed at the destination to detect errors
• in case of error, UDP will discard the segment, or pass it to the application with a warning

UDP checksum

Goal: detect "errors" (e.g., flipped bits) in the transmitted segment

Sender:
• treat the segment contents as a sequence of 16-bit integers
• checksum: addition (1's complement sum) of the segment contents
• sender puts the checksum value into the UDP checksum field

Receiver:
• compute the checksum of the received segment
• check if the computed checksum equals the checksum field value:
  - NO: error detected
  - YES: no error detected
(a small sketch of the 1's complement sum follows)
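
A minimal sketch (my helper, not part of the slides) of the 16-bit one's-complement sum used by the UDP/TCP checksum; it covers only the bytes handed to it, whereas a real UDP checksum also includes the pseudo-header fields and the zeroed checksum field.

    def ones_complement_checksum(data: bytes) -> int:
        if len(data) % 2:                               # pad odd-length data with a zero byte
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]       # treat data as 16-bit big-endian words
            total = (total & 0xFFFF) + (total >> 16)    # fold the carry back in (1's complement add)
        return ~total & 0xFFFF                          # final bitwise complement

    # receiver check: summing the data plus the transmitted checksum detects no error (result 0)
    segment = b"hello world!"                           # even length keeps the word boundaries aligned
    csum = ones_complement_checksum(segment)
    assert ones_complement_checksum(segment + csum.to_bytes(2, "big")) == 0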

TCP Protocol

TCP: Overview   [RFCs: 793, 1122, 1323, 2018, 2581]

• point-to-point: one sender, one receiver, between sockets
• reliable, in-order byte stream: no "message boundaries"
• pipelined: TCP congestion and flow control set the window size
• send & receive buffers
• full duplex data:
  - bi-directional data flow in the same connection
  - MSS: maximum segment size
• connection-oriented: handshaking (exchange of control msgs) initializes sender and receiver state before data exchange
• flow controlled: the sender will not overwhelm the receiver

TCP segment structure

[Figure: 32-bit-wide segment layout]
• source port #, dest port #
• sequence number and acknowledgement number (counting by bytes of data, not segments!)
• header length (in 32-bit words), unused bits, flags U A P R S F, receiver window size (# bytes the receiver is willing to accept)
• checksum (Internet checksum, as in UDP), urgent data pointer
• options (variable length)
• application data (variable length)

Flags:
• URG: urgent data (generally not used)
• ACK: ACK # valid
• PSH: push data now (generally not used)
• RST, SYN, FIN: connection establishment (setup, teardown commands)

TCP sequence # (SN) and ACK (AN)

SN: byte-stream "number" of the first byte in the segment's data
AN:
• SN of the next byte expected from the other side
• cumulative ACK

Q: how does the receiver handle out-of-order segments?
• puts them in the receive buffer but does not acknowledge them (WHY?)

Simple data transfer scenario (some time after connection setup):
• host A sends 100 data bytes: SN=42, AN=79, 100 data bytes
• host B ACKs the 100 bytes and sends 50 data bytes of its own: SN=79, AN=142, 50 data bytes
• host A ACKs receipt of B's data and sends no data: SN=142, AN=129, no data
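
A small worked illustration (numbers taken from the scenario above; the bookkeeping names are mine): the next SN advances by the number of bytes sent, and the AN returned is the SN of the next byte expected.

    a_sn, b_sn = 42, 79                      # initial sequence numbers after connection setup

    a_sn_next = a_sn + 100                   # A sent 100 bytes, so its next SN is 142
    b_an = a_sn + 100                        # B's cumulative ACK: "I expect byte 142 next"
    b_sn_next = b_sn + 50                    # B sent 50 bytes, so its next SN is 129
    a_an = b_sn + 50                         # A's cumulative ACK: "I expect byte 129 next"
    print(a_sn_next, b_an, b_sn_next, a_an)  # 142 142 129 129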

Connection Setup: Objective

• agree on initial sequence numbers
  - a sender should not reuse a seq # before it is sure that all packets carrying that seq # are purged from the network
    - the network guarantees that a packet that is too old will be purged: it bounds the lifetime of each packet
  - to avoid waiting for old seq #s to disappear, start a new session with a seq # far away from the previous one
    - this needs connection setup, so that the sender tells the receiver its initial seq #
• agree on other initial parameters
  - e.g. Maximum Segment Size

TCP Connection Management

Setup: establish a connection between the hosts before exchanging data segments
• called: 3-way handshake
• initialize TCP variables: seq. #s; buffers, flow control info (e.g. RcvWindow)
• client: the connection initiator; opens a socket and commands the OS to connect it to the server
• server: contacted by the client; has a waiting socket, accepts the connection, generates a working socket

Three-way handshake:
• Step 1: client host sends a TCP SYN segment to the server
  - specifies the client's initial seq #
  - no data
• Step 2: server host receives the SYN, replies with a SYNACK segment (also no data)
  - allocates buffers
  - specifies the server's initial SN & window size
• Step 3: client receives the SYNACK, replies with an ACK segment, which may contain data

Teardown: end of connection (we skip the details)

TCP Three-Way Handshake (TWH)

[Figure: hosts A and B exchange three segments and record the agreed numbers in their send and receive buffers]
1. A -> B: SYN, SN = X
2. B -> A: SYN ACK, SN = Y, AN = X+1
3. A -> B: ACK, SN = X+1, AN = Y+1

Connection Close

• objective of the closure handshake:
  - each side can release its resources and remove state about the connection
  - close the socket

[Figure: the client initiates the close ("I am done. Are you done too?"); the server answers ("I am done too. Goodbye!"); each side then releases its resources.]

Ch. 7: Internet Transport Protocols, Part B

TCP reliable data transfer

• TCP creates a reliable data-transfer service on top of IP's unreliable service
• pipelined segments
• cumulative ACKs
• single retransmission timer
• the receiver accepts out-of-order segments but does not acknowledge them
• retransmissions are triggered by timeout events
• initially consider a simplified TCP sender:
  - ignore flow control and congestion control

TCP sender events

data received from the application:
• create a segment with a seq #
• the seq # is the byte-stream number of the first data byte in the segment
• start the timer if it is not already running (think of the timer as belonging to the oldest unACKed segment)
• expiration interval: TimeoutInterval

timeout:
• retransmit the segment that caused the timeout
• restart the timer

ACK received:
• if it acknowledges previously unACKed segments:
  - update what is known to be ACKed
  - start the timer if there are outstanding segments

TCP sender (simplified)

    NextSeqNum = InitialSeqNum
    SendBase = InitialSeqNum

    loop (forever) {
        switch(event)

        event: data received from application above
            create TCP segment with sequence number NextSeqNum
            if (timer currently not running)
                start timer
            pass segment to IP
            NextSeqNum = NextSeqNum + length(data)

        event: timer timeout
            retransmit not-yet-acknowledged segment with smallest sequence number
            start timer

        event: ACK received, with ACK field value of y
            if (y > SendBase) {
                SendBase = y
                if (there are currently not-yet-acknowledged segments)
                    start timer
            }
    } /* end of loop forever */

Comment:
• SendBase - 1 is the last cumulatively ACKed byte
• example: SendBase - 1 = 71 and y = 73, so the receiver wants byte 73 and beyond; y > SendBase, so new data is ACKed

TCP receiver: actions on events

data received from IP:
• if the checksum fails, ignore the segment
• if the checksum is OK, then:
  - if the data came in order: AN grows by the number of new in-order bytes; WIN decreases by the same number
  - if the data is out of order: put it in the buffer, but don't count it for AN / WIN

application takes data:
• free the room in the buffer
• give the freed cells new numbers (circular numbering)
• update AN + WIN: WIN increases by the number of bytes taken

TCP: retransmission scenarios

[Figure: timing diagrams between host A and host B]
A. Normal scenario: A sends SN=92 (8 bytes of data) and starts the timer for SN 92; B replies AN=100; A stops the timer. A then sends SN=100 (20 bytes) and starts the timer for SN 100; B replies AN=120; A stops the timer.
B. Lost ACK + retransmission: A sends SN=92 (8 bytes) and starts the timer; B's AN=100 is lost; the timer expires (TIMEOUT); A retransmits SN=92 (8 bytes) and starts a new timer; B replies AN=100 again; A stops the timer.

TCP retransmission scenarios (more)

[Figure: timing diagrams between host A and host B]
C. Lost ACK, NO retransmission: A sends SN=92 (8 bytes) and SN=100 (20 bytes); B's AN=100 is lost, but the cumulative AN=120 arrives before the timeout, so nothing is retransmitted.
D. Premature timeout: A sends SN=92 (8 bytes) and SN=100 (20 bytes); the ACKs are delayed and the timer for SN 92 expires, so A retransmits SN=92; AN=100 and AN=120 then arrive, and B answers the retransmission with a redundant ACK (AN=120).

TCP ACK generation [RFC 1122, RFC 2581]

• Event: arrival of an in-order segment with the expected seq #; all data up to the expected seq # already ACKed
  Action: delayed ACK. Wait up to 500 ms for the next segment; if no data segment arrives, send the ACK.
• Event: arrival of an in-order segment with the expected seq #; one other segment has an ACK pending
  Action: immediately send a single cumulative ACK, ACKing both in-order segments.
• Event: arrival of an out-of-order segment with a higher-than-expected seq #; gap detected
  Action: immediately send a duplicate ACK, indicating the seq # of the next expected byte (this ACK carries no data and no new WIN).
• Event: arrival of a segment that partially or completely fills the gap
  Action: immediately send an ACK, provided that the segment starts at the lower end of the gap.
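
A minimal sketch (my simplification, not from the slides) of the receiver's decision among the four cases above; 'expected' is the next in-order byte the receiver is waiting for, 'ack_pending' means a delayed ACK is already waiting, and 'gap' means out-of-order data is buffered.

    def ack_action(seg_seq: int, expected: int, ack_pending: bool, gap: bool) -> str:
        if seg_seq > expected:                    # out-of-order arrival: a gap is (still) open
            return f"send duplicate ACK for byte {expected} immediately"
        if gap:                                   # in-order segment that fills (part of) the gap
            return "send ACK immediately"
        if ack_pending:                           # second in-order segment while an ACK is pending
            return "send one cumulative ACK for both segments now"
        return "delay the ACK up to 500 ms"       # first in-order segment, nothing pending

    print(ack_action(seg_seq=1500, expected=1000, ack_pending=False, gap=False))  # duplicate ACK for 1000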

Fast Retransmit

• the timeout period is often relatively long:
  - causes a long delay before resending a lost packet
• detect lost segments via duplicate ACKs:
  - the sender often sends many segments back-to-back
  - if a segment is lost, there will likely be many duplicate ACKs for that segment
• if the sender receives 3 ACKs for the same data, it assumes that the segment after the ACKed data was lost:
  - fast retransmit: resend the segment before the timer expires

[Figure: host A sends segments with seq #s x1 ... x5; x2 is lost. Each later arrival at B triggers another ACK for x2; after the third duplicate ACK, A resends x2 before x2's timeout expires.]

Fast retransmit algorithm

    event: ACK received, with ACK field value of y
        if (y > SendBase) {
            SendBase = y
            if (there are currently not-yet-acknowledged segments)
                start timer
        }
        else {                                        /* a duplicate ACK for an already ACKed segment */
            if (segment carries no data and doesn't change WIN)
                increment count of dup ACKs received for y
            if (count of dup ACKs received for y == 3)
                resend segment with sequence number y /* fast retransmit */
        }

TCP: Flow Control

TCP Flow Control

• the receive side of the TCP connection at B has a receive buffer
• the application process at B may be slow at reading from the buffer
• flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast
• flow control matches the send rate of A to the receiving application's drain rate at B
• the receive buffer size is set by the OS at connection init
• WIN = window size = the number of bytes A may send, starting at AN

[Figure: B's receive buffer holds data already taken by the application, TCP data in the buffer, and spare room; AN marks the start of the buffered data and WIN the spare room; data arrives from IP (sent by TCP at A).]

TCP Flow control: how it works

Formulas:
• AN = first byte not received yet (sent to A in the TCP header)
• AckedRange = AN - FirstByteNotReadByAppl = # bytes received in sequence and not yet taken by the application
• WIN = RcvBuffer - AckedRange = spare room
• AN and WIN are sent to A in the TCP header
• data received out of sequence is considered part of the "spare room" range (buffered, but not ACKed)

Procedure:
• the receiver advertises the "spare room" by including the value of WIN in its segments
• sender A is allowed to send at most WIN bytes in the range starting with AN
  - this guarantees that the receive buffer doesn't overflow
(a small sketch follows)

[Figure: the receive buffer with data taken by the application, ACKed data in the buffer, spare room, and non-ACKed out-of-order data (ignored when computing WIN).]
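
A minimal sketch (variable names assumed) of the receiver-side bookkeeping behind the advertised window, following the formulas above; the buffer size and byte numbers are made up.

    RCV_BUFFER = 64 * 1024        # hypothetical buffer size, set by the OS at connection init

    def advertised_window(an: int, first_byte_not_read: int) -> int:
        acked_range = an - first_byte_not_read      # in-sequence bytes not yet taken by the application
        return RCV_BUFFER - acked_range             # spare room = WIN advertised back to the sender

    # example: the application has read up to byte 1000, the receiver has ACKed up to byte 9000
    print(advertised_window(an=9000, first_byte_not_read=1000))   # 65536 - 8000 = 57536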

TCP: setting timeouts

TCP Round Trip Time and Timeout

Q: how to set the TCP timeout value?
• longer than the RTT
  - note: the RTT will vary
• too short: premature timeout, unnecessary retransmissions
• too long: slow reaction to segment loss

Q: how to estimate the RTT?
• SampleRTT: measured time from segment transmission until ACK receipt
  - ignore retransmissions and cumulatively ACKed segments
• SampleRTT will vary; we want the estimated RTT to be "smoother"
  - use several recent measurements, not just the current SampleRTT

High-level Idea

Set timeout = average + safe margin

Estimating Round Trip Time

• SampleRTT: measured time from segment transmission until ACK receipt
• SampleRTT will vary; we want a "smoother" estimated RTT
  - use several recent measurements, not just the current SampleRTT

EstimatedRTT = (1 - α) * EstimatedRTT + α * SampleRTT

• exponential weighted moving average
• the influence of a past sample decreases exponentially fast
• typical value: α = 0.125

Setting Timeout

Problem:
• using only the average of SampleRTT will generate many timeouts due to network variations

Solution:
• EstimatedRTT plus a "safety margin"
  - a large variation in EstimatedRTT -> a larger safety margin

DevRTT = (1 - β) * DevRTT + β * |SampleRTT - EstimatedRTT|     (typically, β = 0.25)

Then set the timeout interval:

TimeoutInterval = EstimatedRTT + 4 * DevRTT

(a small sketch follows)
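
A minimal sketch (my wrapper class, not from the slides) of the two EWMA rules with the usual α = 0.125 and β = 0.25, yielding TimeoutInterval = EstimatedRTT + 4*DevRTT; the initialization of DevRTT and the sample values are assumptions.

    class RttEstimator:
        def __init__(self, first_sample: float, alpha: float = 0.125, beta: float = 0.25):
            self.alpha, self.beta = alpha, beta
            self.estimated_rtt = first_sample
            self.dev_rtt = first_sample / 2                 # a common initialization choice (assumption)

        def update(self, sample_rtt: float) -> float:
            self.dev_rtt = (1 - self.beta) * self.dev_rtt + \
                           self.beta * abs(sample_rtt - self.estimated_rtt)
            self.estimated_rtt = (1 - self.alpha) * self.estimated_rtt + self.alpha * sample_rtt
            return self.estimated_rtt + 4 * self.dev_rtt    # TimeoutInterval

    est = RttEstimator(first_sample=0.100)                  # first measurement: 100 ms
    for s in (0.110, 0.095, 0.250, 0.105):                  # hypothetical SampleRTT values, in seconds
        print(round(est.update(s), 4))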

An Example TCP Session

TCP: Congestion Control

TCP Congestion Control

• closed-loop, end-to-end, window-based congestion control
• designed by Van Jacobson in the late 1980s, based on the AIMD algorithm of Dah-Ming Chiu and Raj Jain
• has worked well so far: the bandwidth of the Internet has increased by more than 200,000 times
• many versions:
  - TCP Tahoe: a less optimized version
  - TCP Reno: many OSs today implement Reno-type congestion control
  - TCP Vegas: not currently used
• for more details: see TCP/IP Illustrated, or read http://lxr.linux.no/source/net/ipv4/tcp_input.c for the Linux implementation

TCP & AIMD: congestion

• dynamic window size [Van Jacobson]
  - initialization: MI (multiplicative increase), i.e. slow start
  - steady state: AIMD, i.e. congestion avoidance
• congestion = timeout: TCP Tahoe
• congestion = timeout || 3 duplicate ACKs: TCP Reno & TCP NewReno
• congestion = higher latency: TCP Vegas

Visualization of the Two Phases

[Figure: CongWin (in MSS) vs. time: exponential growth during slow start up to the threshold, then linear growth during congestion avoidance.]

TCP Slowstart: MI

Slowstart algorithm:

    initialize: CongWin = 1 MSS
    for (each segment ACKed)
        CongWin += MSS
    until (congestion event OR CongWin > threshold)

• exponential increase (per RTT) in window size (not so slow!)
• in case of timeout: Threshold = CongWin/2

[Figure: host A sends one segment, then two segments, then four segments in successive RTTs.]
(a small simulation sketch follows)
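
A minimal sketch (illustration only; MSS and threshold values are assumed): cwnd doubles every RTT during slow start because each ACKed segment adds one MSS.

    MSS = 1000
    ssthresh = 16 * MSS               # hypothetical threshold

    cwnd, rtt = 1 * MSS, 0
    while cwnd <= ssthresh:
        print(f"RTT {rtt}: cwnd = {cwnd // MSS} MSS")
        acked_segments = cwnd // MSS  # assume everything sent this RTT is ACKed (no loss)
        cwnd += acked_segments * MSS  # +1 MSS per ACK  ->  cwnd doubles per RTT
        rtt += 1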

TCP Tahoe Congestion Avoidance

Congestion avoidance:

    /* slowstart is over */
    /* CongWin > threshold */
    until (timeout) {                              /* loss event */
        on every ACK:
            CongWin/MSS += 1 / (CongWin/MSS)       /* ~ +1 MSS per RTT */
    }
    threshold = CongWin/2
    CongWin = 1 MSS
    perform slowstart

TCP Reno

• fast retransmit:
  - try to avoid waiting for a timeout
• fast recovery:
  - try to avoid slowstart
  - used only on a triple-duplicate-ACK event
  - a single packet drop: not too bad

TCP Reno cwnd Trace

[Figure: cwnd trace over time, alternating slow start and congestion avoidance (CA) phases; a triple duplicate ACK marks the point where the window is halved and CA resumes.]

TCP congestion control: bandwidth probing

• "probing for bandwidth": increase the transmission rate on receipt of ACKs, until eventually loss occurs, then decrease the transmission rate
  - continue to increase on ACKs, decrease on loss (since the available bandwidth is changing, depending on other connections in the network)
• Q: how fast to increase/decrease?
  - details to follow

[Figure: TCP's "sawtooth" behavior: the sending rate climbs while ACKs are being received and drops at each loss.]

TCP Congestion Control: details

• the sender limits its rate by limiting the number of unACKed bytes "in the pipeline":

    LastByteSent - LastByteAcked <= cwnd

  - cwnd differs from rwnd (how, why?)
  - the sender is limited by min(cwnd, rwnd)
• roughly: rate = cwnd / RTT  bytes/sec
• cwnd is dynamic, a function of perceived network congestion

TCP Congestion Control: more details

segment loss event: reducing cwnd
• timeout: no response from the receiver
  - cut cwnd to 1
• 3 duplicate ACKs: at least some segments are getting through (recall fast retransmit)
  - cut cwnd in half, less aggressively than on a timeout

ACK received: increase cwnd
• slowstart phase:
  - start low (cwnd = MSS)
  - increase cwnd exponentially fast (despite the name)
  - used at connection start, or following a timeout
• congestion avoidance phase:
  - increase cwnd linearly

TCP Slow Start

• when the connection begins, cwnd = 1 MSS
  - example: MSS = 500 bytes & RTT = 200 msec -> initial rate = 20 kbps
• the available bandwidth may be >> MSS/RTT
  - desirable to quickly ramp up to a respectable rate
• increase the rate exponentially until the first loss event or until the threshold is reached
  - double cwnd every RTT
  - done by incrementing cwnd by 1 MSS for every ACK received

[Figure: host A sends one segment, then two, then four in successive RTTs.]

TCP: congestion avoidance

• when cwnd > ssthresh, grow cwnd linearly
  - increase cwnd by 1 MSS per RTT
  - approach possible congestion more slowly than in slowstart
  - implementation: cwnd = cwnd + MSS*(MSS/cwnd) for each ACK received (see the sketch below)

AIMD:
• ACKs: increase cwnd by 1 MSS per RTT: additive increase
• loss: cut cwnd in half (non-timeout-detected loss): multiplicative decrease
  - true in the macro picture
  - may require slow start first to grow up to this point
• AIMD: Additive Increase, Multiplicative Decrease
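
A minimal sketch (illustration only; the starting window is arbitrary) of the per-ACK congestion-avoidance increment; summed over one window's worth of ACKs it adds roughly one MSS per RTT.

    MSS = 1000
    cwnd = 10 * MSS

    acks_in_one_rtt = cwnd // MSS        # one ACK per in-flight segment (no delayed ACKs assumed)
    for _ in range(acks_in_one_rtt):
        cwnd += MSS * MSS // cwnd        # cwnd += MSS^2 / cwnd on every ACK
    print(cwnd)                          # ~10956: about one MSS gained over the RTT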

TCP congestion control FSM: overview

[FSM overview: three states. Start in slow start; when cwnd > ssthresh, move to congestion avoidance. A loss detected by 3 duplicate ACKs moves slow start or congestion avoidance to fast recovery; a new ACK returns fast recovery to congestion avoidance; a loss detected by timeout returns to slow start.]

TCP congestion control FSM: details

slow start
• initialization: cwnd = 1 MSS, ssthresh = 64 KB, dupACKcount = 0
• new ACK: cwnd = cwnd + MSS; dupACKcount = 0; transmit new segment(s), as allowed
• duplicate ACK: dupACKcount++
• cwnd > ssthresh: move to congestion avoidance
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit the missing segment (remain in slow start)
• dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3 MSS; retransmit the missing segment; move to fast recovery

congestion avoidance
• new ACK: cwnd = cwnd + MSS*(MSS/cwnd); dupACKcount = 0; transmit new segment(s), as allowed
• duplicate ACK: dupACKcount++
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit the missing segment; move to slow start
• dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3 MSS; retransmit the missing segment; move to fast recovery

fast recovery
• duplicate ACK: cwnd = cwnd + MSS; transmit new segment(s), as allowed
• new ACK: cwnd = ssthresh; dupACKcount = 0; move to congestion avoidance
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit the missing segment; move to slow start

Popular "flavors" of TCP

[Figure: cwnd (window size in segments) vs. transmission round for TCP Tahoe and TCP Reno; ssthresh is marked on the cwnd axis.]

Summary: TCP Congestion Control

• when cwnd < ssthresh, the sender is in the slow-start phase; the window grows exponentially
• when cwnd >= ssthresh, the sender is in the congestion-avoidance phase; the window grows linearly
• when a triple duplicate ACK occurs, ssthresh is set to cwnd/2 and cwnd is set to ~ssthresh
• when a timeout occurs, ssthresh is set to cwnd/2 and cwnd is set to 1 MSS
(a small simulation sketch follows)
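
A minimal sketch (an RTT-granularity, Reno-style simulation with fast recovery omitted; all values are assumed) that evolves cwnd according to the summary above; loss events are injected by hand.

    MSS = 1000

    def next_cwnd(cwnd, ssthresh, event):
        if event == "timeout":
            return 1 * MSS, cwnd // 2                   # cwnd back to 1 MSS, ssthresh halved
        if event == "triple_dup_ack":
            return max(cwnd // 2, MSS), cwnd // 2       # cwnd set to ~ the new ssthresh
        if cwnd < ssthresh:                             # slow start: exponential growth
            return cwnd * 2, ssthresh
        return cwnd + MSS, ssthresh                     # congestion avoidance: +1 MSS per RTT

    cwnd, ssthresh = 1 * MSS, 8 * MSS
    events = ["ack"] * 6 + ["triple_dup_ack"] + ["ack"] * 3 + ["timeout"] + ["ack"] * 4
    for ev in events:
        cwnd, ssthresh = next_cwnd(cwnd, ssthresh, ev)
        print(f"{ev:>14}: cwnd = {cwnd // MSS:2d} MSS, ssthresh = {ssthresh // MSS} MSS")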

TCP throughput

• Q: what is the average throughput of TCP as a function of window size and RTT?
  - ignoring slow start
• let W be the window size when loss occurs:
  - when the window is W, the throughput is W/RTT
  - just after a loss, the window drops to W/2 and the throughput to W/(2 RTT)
  - average throughput: 0.75 W/RTT
(a short derivation follows)
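
A short derivation, not spelled out on the slide, assuming the window grows linearly from W/2 back to W between losses (the idealized sawtooth):

    \overline{W} = \frac{1}{2}\left(\frac{W}{2} + W\right) = \frac{3W}{4},
    \qquad
    \text{average throughput} \approx \frac{\overline{W}}{RTT}
      = \frac{3}{4}\cdot\frac{W}{RTT} = 0.75\,\frac{W}{RTT}.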

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.

[Figure: TCP connection 1 and TCP connection 2 pass through a bottleneck router of capacity R.]

Why is TCP fair?

Two competing sessions (Tahoe, slow start ignored):
• additive increase gives a slope of 1 as throughput increases
• multiplicative decrease reduces throughput proportionally

[Figure: phase plot of connection 1's throughput (x axis) vs. connection 2's throughput (y axis), both bounded by R, with the equal-bandwidth-share line y = x. Starting at (a, b), congestion-avoidance additive increase moves the point to (a+t, b+t), staying on the line y = x + (b-a); a loss halves both windows, landing on y = x + (b-a)/2; after the next increase-and-halve cycle the point lies on y = x + (b-a)/4, and so on, so the trajectory converges toward the equal-share line.]

Fairness (more)

Fairness and UDP:
• multimedia apps often do not use TCP
  - they do not want their rate throttled by congestion control
• instead they use UDP:
  - pump audio/video at a constant rate, tolerate packet loss

Fairness and parallel TCP connections:
• nothing prevents an app from opening parallel connections between 2 hosts
• web browsers do this
• example: a link of rate R already supporting 9 connections
  - a new app asking for 1 TCP connection gets rate R/10
  - a new app asking for 11 TCP connections gets R/2 (!)

Exercise

• MSS = 1000
• only one event per row