As we know, TCP is called a reliable, connection-oriented protocol. But why?
Basically because it keeps the data in its buffer before and after sending it, makes sure the data is delivered in sequence, and the receiver sends a confirmation (ACK) for the data it received. Otherwise the data will be re-transmitted. There are different ways to re-send the data; let's explore some of them.
So what is happening under the hood of this massive protocol (big overhead compared to the UDP protocol)?
Some ideas about TCP congestion mechanisms:
- They were created with the assumption that the small buffers (of devices) would overflow, packet loss would occur, and TCP would react. All good and cool, but today we have huge buffers, and that can cause problems: it takes time before they overflow, which creates delays.
- Very small buffers are also a problem: if we have a burst of packets and one or more of them get lost due to the small buffer, TCP can treat that as congestion. It then reduces its congestion window, and as a result the links can't be filled completely.
- Link flapping – this is common in campus networks. Link flapping or fading can trick TCP into thinking there is extreme congestion in the network, so it backs off with exponential re-transmission timeouts.
To avoid many of these bad things we have mechanisms like:
- sliding window – changing the window size depending on successfully received traffic; up to a window's worth of data can be sent before the next ACK
- stop and wait – one frame per ACK: a single segment is sent and the sender waits for its ACK before sending the next one
- cumulative ACKs – if I acknowledge packet 3, that means I also acknowledge the 2 packets I received before it.
- Go-Back-N – if a single packet is lost we re-transmit the whole outstanding window (good when there are bursts of losses); when the sender's window is larger than the receiver's, the protocol falls back to Go-Back-N.
- Selective repeat – we re-transmit only the packet that was lost, and nothing else.
- It is very important to make sure that we are not re-transmitting too early.
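The Go-Back-N vs. selective repeat difference above can be sketched in a few lines of Python (the function names are mine, purely for illustration – no real TCP stack works on Python lists):

```python
def go_back_n_retransmit(outstanding, lost_seq):
    """Go-Back-N: resend the lost segment and everything sent after it."""
    return [seq for seq in outstanding if seq >= lost_seq]

def selective_repeat_retransmit(outstanding, lost_seq):
    """Selective repeat: resend only the segment that was lost."""
    return [lost_seq] if lost_seq in outstanding else []

outstanding = [5, 6, 7, 8]  # unacknowledged sequence numbers in flight
print(go_back_n_retransmit(outstanding, 6))         # [6, 7, 8]
print(selective_repeat_retransmit(outstanding, 6))  # [6]
```

Go-Back-N wastes bandwidth on segments 7 and 8 that may have arrived fine, but it keeps the receiver dead simple: it only ever has to accept the next in-order segment.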
Mainly, TCP is an end-to-end, host-based congestion control mechanism:
- It reacts to events observable at the end host
- It uses TCP's sliding window and flow control
- It tries to figure out how many packets can safely be outstanding in the network at a time.
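"How many packets can safely be outstanding" is roughly the bandwidth-delay product of the path. A quick back-of-the-envelope calculation (the link numbers below are made up for illustration):

```python
# Bandwidth-delay product: roughly how many bytes the "pipe" holds at once.
bandwidth_bps = 100e6   # assumed: a 100 Mbit/s link
rtt_s = 0.05            # assumed: 50 ms round-trip time
bdp_bytes = bandwidth_bps / 8 * rtt_s
print(int(bdp_bytes))   # 625000 bytes can safely be in flight
```

If the window is smaller than this, the link sits idle part of each RTT; if it is much larger, packets just queue up in buffers.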
You can memorize a very simple form of the TCP congestion mechanism and build everything on top of it – at least that's what I do, I might be wrong 🙂
AIMD – Additive Increase, Multiplicative Decrease
- Basically, if a packet was received without errors and we got the ACK, we increase the window by 1/w per ACK: w = w + 1/w (roughly +1 segment per RTT).
- If a packet was dropped we use the formula w = w/2 – so after the first dropped packet we cut the window in half.
AIMD also helps us fully use the links – the window size expands according to AIMD to probe how many bytes the pipe can hold.
Summary for AIMD:
- The throughput of an AIMD flow is sensitive to the drop probability and very sensitive to the RTT (round trip time).
- With many flows, each flow follows its own AIMD rule.
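The two AIMD rules fit into one tiny update function – a sketch using the w + 1/w and w/2 formulas from the text, with the window measured in segments:

```python
def aimd_step(w, loss):
    """One AIMD update of the congestion window (in segments)."""
    if loss:
        return max(w / 2.0, 1.0)   # multiplicative decrease, never below 1
    return w + 1.0 / w             # additive increase: ~ +1 segment per RTT

w = 10.0
w = aimd_step(w, loss=False)  # slow additive growth: 10.1
w = aimd_step(w, loss=True)   # halved on a drop: 5.05
```

Iterating this over many ACKs and occasional losses produces the classic AIMD "sawtooth" of the window size over time.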
We have several TCP congestion control methods:
TCP TAHOE
- slow start (on connection startup, or after a packet timeout, to quickly find network capacity)
- the window starts at one MSS
- the window increases for each ACKed packet
- the congestion window grows exponentially to sense network capacity
- congestion avoidance state – to probe carefully when close to maximum network capacity
- triple duplicate ACKs
- fast re-transmission means: don't wait for a timeout to re-transmit a missing segment if you receive a triple duplicate ACK.
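Tahoe's two growth phases and its loss reaction can be sketched like this (a simplified model in units of MSS; the function names are mine):

```python
def tahoe_on_ack(cwnd, ssthresh):
    """Window growth per ACK: exponential in slow start,
    roughly +1 MSS per RTT in congestion avoidance."""
    if cwnd < ssthresh:
        return cwnd + 1.0        # slow start: window doubles every RTT
    return cwnd + 1.0 / cwnd     # congestion avoidance: careful probing

def tahoe_on_loss(cwnd):
    """On a timeout or triple duplicate ACK: remember half the old
    window as the new ssthresh and restart slow start from 1 MSS."""
    return 1.0, max(cwnd / 2.0, 2.0)   # (new cwnd, new ssthresh)

print(tahoe_on_ack(4.0, 8.0))   # 5.0 -> still in slow start
print(tahoe_on_loss(16.0))      # (1.0, 8.0) -> back to square one
```

The painful part is visible in `tahoe_on_loss`: no matter how the loss was detected, Tahoe collapses the window all the way back to 1 MSS.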
FSM for the Tahoe mechanism:

TCP RENO
Behaves identically to Tahoe on a timeout.
- On a triple duplicate ACK it:
- sets the threshold (ssthresh) to congestion window / 2
- sets the congestion window to congestion window / 2 (fast recovery)
- inflates the congestion window size (fast recovery)
- re-transmits the missing segment (fast retransmit)
- stays in the congestion avoidance state
- TCP Reno adds an additional optimization: three duplicate ACKs don't cause TCP to lose an RTT's worth of transmission – it waits for the missing segments to be ACKed.
FSM for the Reno mechanism:

Basically, the difference between Tahoe and Reno is fast recovery.
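That one difference is easy to see side by side (a sketch in MSS units, same simplified model as above – real stacks track more state than this):

```python
def tahoe_on_triple_dup_ack(cwnd):
    """Tahoe: treat it like a timeout and restart from 1 MSS."""
    ssthresh = max(cwnd / 2.0, 2.0)
    return 1.0, ssthresh            # (new cwnd, new ssthresh)

def reno_on_triple_dup_ack(cwnd):
    """Reno: fast recovery - halve the window instead of collapsing it."""
    ssthresh = max(cwnd / 2.0, 2.0)
    return ssthresh, ssthresh       # stay in congestion avoidance

print(tahoe_on_triple_dup_ack(16.0))  # (1.0, 8.0)
print(reno_on_triple_dup_ack(16.0))   # (8.0, 8.0)
```

Duplicate ACKs mean packets are still arriving at the receiver, so the network clearly isn't completely congested – that's why Reno dares to keep the window at half instead of starting over.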
Observation signals:
- increasing ACKs: the transfer is going well
- duplicate ACKs: something was lost or delayed
- timeout: bad stuff 🙂
In TCP we also use self-clocking – the arrival of ACKs tells the sender that packets have left the network, which naturally paces new transmissions.
Credits to Stanford University for providing such a great course – almost all info here is taken from the Networking self-paced course.