Transport Layer: Computer Networking: A Top Down Approach Featuring The Internet


Chapter 3

Transport Layer

Computer Networking: A Top-Down Approach Featuring the Internet,
3rd edition. Jim Kurose, Keith Ross
Addison-Wesley, July 2004.

Transport Layer 3-1


Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    segment structure
    reliable data transfer
    flow control
    connection management
3.6 Principles of congestion control
3.7 TCP congestion control

Transport Layer 3-2


TCP Flow Control
flow control: sender won't overflow the receiver's buffer by transmitting too much, too fast

receive side of TCP connection has a receive buffer:
    app process may be slow at reading from the buffer
speed-matching service: matching the send rate to the receiving app's drain rate
Transport Layer 3-3
TCP Flow control: how it works
(Suppose the TCP receiver discards out-of-order segments.)
spare room in buffer = RcvWindow
    = RcvBuffer - [LastByteRcvd - LastByteRead]
Rcvr advertises spare room by including the value of RcvWindow in segments
Sender limits unACKed data to RcvWindow
    guarantees the receive buffer doesn't overflow
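A minimal sketch (not from the slides) of the bookkeeping described above, in the same Java the slides use for their socket examples; the class and method names are illustrative and this is not a real TCP implementation:

    // Illustrative sketch of the receive-window bookkeeping described above.
    // Variable names mirror the slide (RcvBuffer, LastByteRcvd, LastByteRead).
    public class ReceiveWindow {
        private final long rcvBuffer;      // total receive buffer size in bytes
        private long lastByteRcvd = 0;     // highest byte number placed in the buffer
        private long lastByteRead = 0;     // highest byte number read by the application

        public ReceiveWindow(long rcvBufferBytes) {
            this.rcvBuffer = rcvBufferBytes;
        }

        // Spare room in buffer: RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead)
        public long rcvWindow() {
            return rcvBuffer - (lastByteRcvd - lastByteRead);
        }

        public void onSegmentArrived(long bytes) { lastByteRcvd += bytes; }
        public void onAppRead(long bytes)        { lastByteRead += bytes; }

        public static void main(String[] args) {
            ReceiveWindow rw = new ReceiveWindow(64 * 1024);
            rw.onSegmentArrived(40_000);   // 40 KB arrive from the sender
            rw.onAppRead(10_000);          // slow application has read only 10 KB
            // This is the value the receiver would advertise in the RcvWindow field
            System.out.println("advertised RcvWindow = " + rw.rcvWindow() + " bytes");
        }
    }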

Transport Layer 3-4


Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    segment structure
    reliable data transfer
    flow control
    connection management
3.6 Principles of congestion control
3.7 TCP congestion control

Transport Layer 3-5


TCP Connection Management
Recall: TCP sender and receiver establish a "connection" before exchanging data segments
    initialize TCP variables: seq. #s; buffers, flow control info (e.g. RcvWindow)
    client: connection initiator
        Socket clientSocket = new Socket("hostname", port);
    server: contacted by client
        Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:
Step 1: client host sends TCP SYN segment to server
    specifies initial seq #
    no data
Step 2: server host receives SYN, replies with SYNACK segment
    server allocates buffers
    specifies server initial seq. #
Step 3: client receives SYNACK, replies with ACK segment, which may contain data
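The slides' socket calls can be stitched into a runnable loopback sketch of who triggers what during setup; the port number (6789) is arbitrary, and the three-way handshake itself is carried out by the operating system, not by this code:

    // Sketch: application-level calls that trigger TCP connection setup.
    import java.net.ServerSocket;
    import java.net.Socket;

    public class HandshakeDemo {
        public static void main(String[] args) throws Exception {
            // server: contacted by client (starts listening)
            ServerSocket welcomeSocket = new ServerSocket(6789);

            // client: connection initiator. The SYN is sent here; the constructor
            // returns once the SYNACK has been received and the final ACK sent.
            Socket clientSocket = new Socket("localhost", 6789);

            // server side of the now-established connection
            Socket connectionSocket = welcomeSocket.accept();

            System.out.println("connected: " + clientSocket.isConnected());
            clientSocket.close();
            connectionSocket.close();
            welcomeSocket.close();
        }
    }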

Transport Layer 3-6


TCP Connection Management (cont.)

Closing a connection:
client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server
Step 2: server receives FIN, replies with ACK. Closes connection, sends FIN.

(Timeline: client sends FIN; server ACKs and then sends its own FIN; client ACKs, sits in "timed wait", then the connection is closed.)
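A hedged loopback sketch (illustrative class name and port) of how application calls drive this teardown; shutdownOutput() is the standard java.net half-close that sends a FIN while still allowing reads, and the FINs/ACKs themselves are sent by the OS:

    // Sketch: close()/shutdownOutput() drive the FIN exchange described above.
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class CloseDemo {
        public static void main(String[] args) throws Exception {
            ServerSocket welcome = new ServerSocket(6790);   // arbitrary port
            Socket client = new Socket("localhost", 6790);
            Socket server = welcome.accept();

            OutputStream out = client.getOutputStream();
            out.write("bye".getBytes());
            client.shutdownOutput();      // client's FIN: "no more data from me"

            InputStream in = server.getInputStream();
            while (in.read() != -1) { /* server reads until it sees the FIN (EOF) */ }
            server.close();               // server's FIN and teardown on its side

            client.close();               // final ACK handled by the OS; the side that
            welcome.close();              // closed first passes through "timed wait"
        }
    }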

Transport Layer 3-7


TCP Connection Management (cont.)

Step 3: client receives FIN, replies with ACK.
    Enters "timed wait" - will respond with ACK to received FINs
Step 4: server receives ACK. Connection closed.

(Timeline: server, while closing, sends its FIN; client ACKs, waits in timed wait, then both sides are closed.)

Transport Layer 3-8


TCP Connection Management (cont)

(Figures: TCP server lifecycle and TCP client lifecycle state diagrams.)

Transport Layer 3-9


Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    segment structure
    reliable data transfer
    flow control
    connection management
3.6 Principles of congestion control
3.7 TCP congestion control

Transport Layer 3-10


Principles of Congestion Control

Congestion:
informally: too many sources sending too much
data too fast for network to handle
different from flow control!
manifestations:
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)
a top-10 problem!

Transport Layer 3-11


Causes/costs of congestion: scenario 1
two senders, two receivers
one router, infinite buffers
no retransmission

(Figure: Host A and Host B each send λin of original data into one router with unlimited shared output link buffers; λout is the delivered traffic.)

throughput: λout grows with λin but never exceeds R/2, the maximum achievable per-connection throughput
delay: large delays when operating near capacity (congested)
Transport Layer 3-12
Causes/costs of congestion: scenario 2

one router, finite buffers
sender retransmission of lost packets

(Figure: Host A offers λin of original data; λ'in is original data plus retransmitted data; the router has finite shared output link buffers; λout is the traffic delivered to Host B.)

Transport Layer 3-13


Causes/costs of congestion: scenario 2
"perfect" retransmission only when loss: λ'in > λout
retransmission of a delayed (not lost) packet makes λ'in larger (than the perfect case) for the same λout

(Figure: three plots of λout versus λ'in - (a) no loss: λout reaches R/2; (b) retransmission only of lost packets: goodput stays lower, around R/3; (c) retransmission of delayed packets as well: goodput drops further, around R/4.)

costs of congestion:
    more work (retransmissions) for a given goodput
    unneeded retransmissions: the link carries multiple copies of a packet
Transport Layer 3-14
Causes/costs of congestion: scenario 3
four senders
multihop paths
timeout/retransmit

Q: what happens as λin and λ'in increase?

(Figure: Hosts A, B, C, D send over multihop paths through routers with finite shared output link buffers; λin is original data, λ'in is original plus retransmitted data.)

Transport Layer 3-15


Causes/costs of congestion: scenario 3
(Figure: λout collapses toward zero as λ'in grows large.)

Another cost of congestion:
    when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

Transport Layer 3-16


Approaches towards congestion control
Two broad approaches towards congestion control:

End-end congestion control:
    no explicit feedback from the network
    congestion inferred from end-system observed loss, delay
    approach taken by TCP

Network-assisted congestion control:
    routers provide feedback to end systems
        single bit indicating congestion (IBM SNA, DEC DECbit, TCP/IP ECN, ATM)
        explicit rate sender should send at

Transport Layer 3-17


Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
    segment structure
    reliable data transfer
    flow control
    connection management
3.6 Principles of congestion control
3.7 TCP congestion control

Transport Layer 3-18


TCP Congestion Control
end-end control (no network assistance)
sender limits transmission:
    LastByteSent - LastByteAcked <= CongWin
Roughly,
    rate = CongWin / RTT  Bytes/sec
CongWin is dynamic, a function of perceived network congestion

How does the sender perceive congestion?
    loss event = timeout or 3 duplicate ACKs
    TCP sender reduces rate (CongWin) after a loss event
three mechanisms:
    AIMD
    slow start
    conservative after timeout events
Transport Layer 3-19
TCP AIMD
multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

(Figure: sawtooth of the congestion window over time for a long-lived TCP connection, oscillating between roughly 8, 16 and 24 Kbytes.)
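A minimal sketch of the AIMD rule stated above, with illustrative names (congWin, MSS) and a made-up loss pattern just to show the sawtooth; it is not a real TCP sender:

    // Sketch of AIMD: +1 MSS per loss-free RTT, halve on a loss event.
    public class Aimd {
        static final int MSS = 1460;          // illustrative segment size, bytes
        long congWin = 10 * MSS;

        void onRttWithoutLoss() {
            congWin += MSS;                   // additive increase (probing)
        }

        void onLossEvent() {
            congWin = Math.max(MSS, congWin / 2);  // multiplicative decrease
        }

        public static void main(String[] args) {
            Aimd a = new Aimd();
            for (int rtt = 1; rtt <= 20; rtt++) {
                if (rtt % 8 == 0) a.onLossEvent();   // pretend a loss every 8th RTT
                else a.onRttWithoutLoss();
                System.out.println("RTT " + rtt + ": congWin = " + a.congWin + " bytes");
            }
        }
    }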


Transport Layer 3-20
TCP Slow Start
When the connection begins, CongWin = 1 MSS
    Example: MSS = 500 bytes & RTT = 200 msec
    initial rate is about 20 kbps
available bandwidth may be >> MSS/RTT
    desirable to quickly ramp up to a respectable rate
When the connection begins, increase rate exponentially fast until the first loss event
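The 20 kbps figure follows directly from rate = CongWin/RTT; a quick check of the arithmetic with the slide's numbers:

    \text{initial rate} \approx \frac{\text{CongWin}}{RTT} = \frac{1\,\text{MSS}}{RTT}
      = \frac{500 \times 8\ \text{bits}}{0.2\ \text{s}} = 20{,}000\ \text{bits/s} = 20\ \text{kbps}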

Transport Layer 3-21


TCP Slow Start (more)
When the connection begins, increase rate exponentially until the first loss event:
    double CongWin every RTT
    done by incrementing CongWin for every ACK received
Summary: initial rate is slow but ramps up exponentially fast

(Figure: Host A sends one segment, then two, then four in successive RTTs as Host B's ACKs arrive.)
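A small sketch of the per-ACK increment described above, showing how adding one MSS per ACK doubles the window every RTT; the names and the idealized "every segment is ACKed" loop are illustrative:

    // Sketch of slow start: CongWin grows by one MSS per ACK received,
    // which doubles it every RTT while no loss occurs.
    public class SlowStart {
        static final int MSS = 1460;
        long congWin = MSS;                   // connection begins at 1 MSS

        void onAck() { congWin += MSS; }      // one increment per ACK

        public static void main(String[] args) {
            SlowStart ss = new SlowStart();
            for (int rtt = 1; rtt <= 4; rtt++) {
                long segmentsThisRtt = ss.congWin / MSS;   // segments sent this RTT
                for (long i = 0; i < segmentsThisRtt; i++) ss.onAck(); // each ACKed
                System.out.println("after RTT " + rtt + ": congWin = "
                        + (ss.congWin / MSS) + " MSS");
            }
        }
    }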

Transport Layer 3-22


Refinement
After 3 dup ACKs:
    CongWin is cut in half
    window then grows linearly
But after a timeout event:
    CongWin instead set to 1 MSS;
    window then grows exponentially
    to a threshold, then grows linearly

Philosophy:
    3 dup ACKs indicates the network is capable of delivering some segments
    a timeout before 3 dup ACKs is "more alarming"

Transport Layer 3-23


Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.

Implementation:
    Variable Threshold
    At a loss event, Threshold is set to 1/2 of CongWin just before the loss event

(Figure: congestion window (in segments) over time, illustrating the threshold.)

Transport Layer 3-24


Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
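A sketch that combines the four rules above (and anticipates the sender-action table on the next slide) into one event handler; CongWin/Threshold naming follows the slides, the initial threshold and MSS values are arbitrary, and this is not a real TCP implementation:

    // Sketch: slow start below Threshold, congestion avoidance above it,
    // halving on triple duplicate ACK, back to 1 MSS on timeout.
    public class TcpCongestionControl {
        static final int MSS = 1460;
        long congWin = MSS;
        long threshold = 64 * 1024;           // illustrative initial threshold
        boolean slowStart = true;

        void onNewAck() {
            if (slowStart) {
                congWin += MSS;                               // exponential growth
                if (congWin > threshold) slowStart = false;   // switch to linear
            } else {
                congWin += (long) (MSS * ((double) MSS / congWin));  // ~ +1 MSS per RTT
            }
        }

        void onTripleDuplicateAck() {
            threshold = Math.max(congWin / 2, MSS);
            congWin = threshold;                  // multiplicative decrease
            slowStart = false;                    // stay in congestion avoidance
        }

        void onTimeout() {
            threshold = Math.max(congWin / 2, MSS);
            congWin = MSS;                        // restart from 1 MSS
            slowStart = true;                     // re-enter slow start
        }

        public static void main(String[] args) {
            TcpCongestionControl cc = new TcpCongestionControl();
            for (int i = 0; i < 50; i++) cc.onNewAck();
            cc.onTripleDuplicateAck();
            cc.onTimeout();
            System.out.println("congWin=" + cc.congWin + " threshold=" + cc.threshold);
        }
    }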

Transport Layer 3-25


TCP sender congestion control
Event: ACK receipt for previously unACKed data
    State: Slow Start (SS)
    TCP sender action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to Congestion Avoidance
    Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data
    State: Congestion Avoidance (CA)
    TCP sender action: CongWin = CongWin + MSS * (MSS/CongWin)
    Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
    State: SS or CA
    TCP sender action: Threshold = CongWin/2; CongWin = Threshold; set state to Congestion Avoidance
    Commentary: fast recovery, implementing multiplicative decrease. CongWin will not drop below 1 MSS.

Event: timeout
    State: SS or CA
    TCP sender action: Threshold = CongWin/2; CongWin = 1 MSS; set state to Slow Start
    Commentary: enter slow start

Event: duplicate ACK
    State: SS or CA
    TCP sender action: increment duplicate ACK count for the segment being ACKed
    Commentary: CongWin and Threshold not changed

Transport Layer 3-26


TCP throughput
What's the average throughput of TCP as a function of window size and RTT?
    Ignore slow start
Let W be the window size when loss occurs.
When the window is W, throughput is W/RTT
Just after a loss, the window drops to W/2, throughput to W/(2·RTT).
Average throughput: 0.75 W/RTT
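The 0.75 figure is the averaging step left implicit above: between losses the window ramps linearly from W/2 back up to W, so the average window is the midpoint of W/2 and W:

    \text{average throughput} = \frac{1}{RTT}\cdot\frac{W/2 + W}{2}
      = \frac{3}{4}\cdot\frac{W}{RTT} = 0.75\,\frac{W}{RTT}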

Transport Layer 3-27


TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate L:

    throughput = (1.22 · MSS) / (RTT · √L)

To reach 10 Gbps, the loss rate must be L = 2·10^-10 - an extraordinarily small value!
New versions of TCP are needed for such high-speed paths!
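Plugging the example numbers into the formula shows where the 2·10^-10 figure comes from (1500-byte = 12,000-bit segments, 100 ms RTT, 10 Gbps target):

    10^{10}\ \text{bits/s} = \frac{1.22\cdot MSS}{RTT\,\sqrt{L}}
    \;\Rightarrow\;
    \sqrt{L} = \frac{1.22\cdot 12{,}000\ \text{bits}}{0.1\ \text{s}\cdot 10^{10}\ \text{bits/s}}
      \approx 1.5\times 10^{-5}
    \;\Rightarrow\; L \approx 2\times 10^{-10}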

Transport Layer 3-28


TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

(Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R.)

Transport Layer 3-29


Why is TCP fair?
Two competing sessions:
    additive increase gives a slope of 1 as throughput increases
    multiplicative decrease cuts throughput proportionally

(Figure: phase plot of Connection 1 throughput versus Connection 2 throughput, both axes up to R, with the "equal bandwidth share" diagonal. The trajectory alternates between congestion avoidance (additive increase along slope 1) and loss (both windows decrease by a factor of 2), converging toward the equal-share line.)
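A tiny simulation (assumed units and loss model, not from the slides) of the argument above: both flows add equally each round and both halve when the bottleneck of capacity R is exceeded, so an initially unequal split equalizes:

    // Sketch of the fairness argument: two AIMD flows share a bottleneck of
    // capacity R; starting from unequal rates they converge to (nearly) equal shares.
    public class AimdFairness {
        public static void main(String[] args) {
            double R = 100.0;                 // bottleneck capacity (arbitrary units)
            double x1 = 5.0, x2 = 70.0;       // deliberately unequal starting rates

            for (int round = 1; round <= 200; round++) {
                x1 += 1.0;                    // additive increase, slope 1 for both
                x2 += 1.0;
                if (x1 + x2 > R) {            // loss: both flows halve their rate
                    x1 /= 2.0;
                    x2 /= 2.0;
                }
            }
            // After many cycles the two rates are nearly identical,
            // oscillating just below the equal share R/2.
            System.out.printf("flow 1 = %.1f, flow 2 = %.1f%n", x1, x2);
        }
    }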

Transport Layer 3-30


Fairness (more)
Fairness and UDP:
    multimedia apps often do not use TCP
        do not want their rate throttled by congestion control
    instead use UDP: pump audio/video at a constant rate, tolerate packet loss
    research area: TCP-friendly congestion control

Fairness and parallel TCP connections:
    nothing prevents an app from opening parallel connections between two hosts
    Web browsers do this
    Example: link of rate R already supporting 9 connections (see the arithmetic below)
        new app asks for 1 TCP, gets rate R/10
        new app asks for 11 TCPs, gets about R/2!
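The R/2 figure is just connection counting: the bottleneck bandwidth is split evenly among all TCP connections, and the new application holds 11 of the 20:

    \text{one connection: } \frac{1}{9+1}R = \frac{R}{10}
    \qquad
    \text{eleven connections: } \frac{11}{9+11}R = \frac{11}{20}R \approx \frac{R}{2}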

Transport Layer 3-31


Chapter 3: Summary
principles behind transport layer services:
    multiplexing, demultiplexing
    reliable data transfer
    flow control
    congestion control
instantiation and implementation in the Internet
    UDP
    TCP

Next:
    leaving the network edge (application, transport layers)
    into the network core
Transport Layer 3-32
