TCP vs UDP: a Gentle Introduction

TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are the two main transport protocols of the TCP/IP stack. Both operate at Layer 4 (Transport Layer) of the OSI model and manage communication between applications on different devices, but with completely opposite philosophies.
- TCP is a connection-oriented protocol that guarantees reliable, in-order delivery. Every segment is acknowledged by the receiver and retransmitted if lost.
- UDP is a connectionless protocol that sends packets with no guarantees of delivery, ordering, or error recovery. It is extremely fast and follows a "fire and forget" approach.
The choice between the two depends on the application’s requirements: reliability (TCP) versus speed and low latency (UDP).
How TCP Works
TCP is a connection-oriented and stateful protocol. Before data can be exchanged, both endpoints establish a logical connection and maintain state about the communication.
Three-Way Handshake
Before transmitting data, TCP establishes a connection through the three-way handshake:
- SYN: The client sends a SYN (synchronize) segment to the server
- SYN-ACK: The server responds with a SYN-ACK (synchronize-acknowledge)
- ACK: The client confirms with an ACK (acknowledge)
This process ensures both parties agree on:
- Initial sequence numbers
- Communication parameters
- Readiness to transmit
Only once this exchange completes is the connection established and data can begin to flow.
*Figure: TCP three-way handshake*
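The handshake is invisible to application code: in Python's socket API, for example, `connect()` on the client triggers the SYN/SYN-ACK/ACK exchange, and `accept()` on the server returns once it completes. A minimal loopback sketch (the message text and port choice are arbitrary):

```python
# Minimal sketch: socket.connect() performs the TCP three-way handshake
# (SYN, SYN-ACK, ACK) before any application data is exchanged.
import socket
import threading

def run_server(srv: socket.socket) -> None:
    conn, _addr = srv.accept()      # returns once the handshake completes
    with conn:
        conn.sendall(b"hello over TCP")

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=run_server, args=(srv,))
t.start()

# create_connection() sends the SYN and blocks until the handshake is done.
with socket.create_connection(("127.0.0.1", port)) as cli:
    data = cli.recv(1024)

t.join()
srv.close()
print(data.decode())
```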
Flow Control and Congestion Control
TCP implements a sliding window mechanism to control how much data can be sent before receiving an acknowledgment. This mechanism prevents overloading both the receiver and the network.
*Figure: TCP sliding window and flow control*
When the network becomes congested, TCP shrinks its congestion window and then regrows it gradually (slow start, congestion avoidance). This protects the network from overload, but it also introduces additional latency.
Head-of-Line Blocking
Because TCP guarantees ordered delivery, if a segment is lost, all subsequent segments must wait until it is retransmitted.
*Figure: Head-of-Line blocking*
This is known as Head-of-Line (HoL) blocking and can introduce additional latency.
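The mechanism behind HoL blocking can be sketched as a receiver-side reorder buffer: segments may only be handed to the application in sequence order, so out-of-order arrivals wait for the missing segment. A simplified illustration (sequence numbers per segment, not per byte as in real TCP):

```python
# Sketch of why ordered delivery causes Head-of-Line blocking: the
# receiver may only release segments to the application in sequence
# order, so segment 1's loss holds back segments 2 and 3 even though
# they arrived intact.
def deliver_in_order(arrivals):
    expected = 0
    buffer = {}            # out-of-order segments wait here
    delivered = []
    for seq in arrivals:
        buffer[seq] = True
        while expected in buffer:   # release the longest contiguous run
            delivered.append(expected)
            del buffer[expected]
            expected += 1
    return delivered

# Segment 1 is lost and retransmitted last: 2 and 3 are stuck behind it.
result = deliver_in_order([0, 2, 3, 1])
print(result)   # [0, 1, 2, 3] -- but 2 and 3 waited for 1's retransmission
```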
How UDP Works
UDP takes the opposite approach: it is connectionless and stateless.
There is:
- No handshake
- No retransmission
- No ordering guarantees
Each packet (called a datagram) is sent independently.
Datagram Model
UDP sends independent datagrams. Each packet travels autonomously from source to destination without any guarantee of:
- Delivery
- Ordering
- Protection against duplicates
*Figure: packet loss in UDP vs TCP*
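In socket terms, the datagram model means there is no `connect()`/`accept()` phase: each `sendto()` is an independent message. A minimal loopback sketch (the payload and port choice are arbitrary; loopback delivery happens to be reliable in practice, but UDP itself promises nothing):

```python
# Minimal sketch of the datagram model: no handshake, no connection
# state; each sendto() is an independent, self-contained message.
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"reading-1", ("127.0.0.1", port))   # fire and forget

data, _addr = recv.recvfrom(1024)    # one recvfrom() returns one datagram
send.close()
recv.close()
print(data.decode())
```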
UDP Header
The UDP header is only 8 bytes, containing:
- Source Port (2 bytes)
- Destination Port (2 bytes)
- Length (2 bytes)
- Checksum (2 bytes)
In comparison, TCP headers are at least 20 bytes (up to 60 with options).
This minimal design reduces overhead and latency.
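The four fields above can be packed and unpacked directly with Python's `struct` module; all fields are 16-bit values in network byte order (big-endian). A sketch (ports and payload are arbitrary; the checksum is left at 0 here rather than computed):

```python
# The 8-byte UDP header built and parsed with the struct module.
# "!HHHH" = four unsigned 16-bit fields in network byte order:
# source port, destination port, length (header + payload), checksum.
import struct

payload = b"hello"
src_port, dst_port = 12345, 53
length = 8 + len(payload)            # the header itself is always 8 bytes
header = struct.pack("!HHHH", src_port, dst_port, length, 0)  # checksum 0 (not computed)

fields = struct.unpack("!HHHH", header)
print(len(header), fields)           # 8 (12345, 53, 13, 0)
```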
TCP vs UDP – Key Differences
*Figure: TCP vs UDP comparison*
Comparison Table
At a high level, the differences between TCP and UDP can be summarized as follows:
| Feature | TCP | UDP |
|---|---|---|
| Connection | Connection-oriented | Connectionless |
| Reliability | Guaranteed (ACK + retransmission) | Best-effort (no guarantee) |
| Ordering | Guaranteed | Not guaranteed |
| Header Size | 20-60 bytes | 8 bytes |
| Flow Control | Yes (sliding window) | No |
| Congestion Control | Yes | No |
| Overhead | High | Low |
| Latency | Higher | Lower |
| Multicast Support | No | Yes |
Important nuance:
TCP is not inherently “slow.” In stable networks, it can achieve very high throughput. However, its reliability mechanisms can increase latency in lossy conditions.
When to Use TCP
Use TCP when:
- Data integrity is critical
- Ordering matters
- Retransmissions are acceptable
When to Use UDP
Use UDP when:
- Low latency is critical
- Occasional packet loss is acceptable
- The application can handle reliability itself
Real-World Use Cases
TCP in Action
HTTP/HTTPS (Web):
The HTTP protocol uses TCP because every resource (HTML, CSS, images) must arrive completely and in the correct order. A single missing line of CSS can break an entire page.
Historically, HTTP has run over TCP. However, HTTP/3 runs over QUIC, which itself runs over UDP.
FTP (File Transfer):
Transferring a 1GB file requires that every single byte arrives correctly. TCP guarantees this even over unstable connections.
SSH/Telnet:
When you control a remote server, every character you type must arrive in the correct order. A command like `rm -rf /` cannot afford to have its letters mixed up!
Email (SMTP/IMAP/POP3):
Emails must arrive complete. Nobody wants to receive only half of an important email.
UDP in Action
DNS (Domain Name System):
DNS queries typically use UDP on port 53 for speed. Single request-response operations don't need TCP's complexity — if a packet is lost, the client simply retries.
However, DNS automatically falls back to TCP when:
- Response size exceeds 512 bytes (e.g., large record sets)
- Using DNSSEC (security extensions)
- Zone transfers between DNS servers (AXFR/IXFR)
```shell
# DNS query via UDP (default)
dig @8.8.8.8 google.com

# Force DNS over TCP
dig @8.8.8.8 google.com +tcp
```
Video Streaming (YouTube, Netflix):
Losing a few video frames is acceptable; the human eye won't notice. But buffering caused by TCP retransmissions would ruin the experience. Modern protocols like QUIC (over UDP) are replacing TCP for streaming.
Online Gaming:
In a multiplayer FPS, your character’s position from 100ms ago is irrelevant. It’s better to receive fresh (even incomplete) data than old but complete data. UDP makes this possible.
VoIP (Voice over IP):
In Zoom/Skype/Discord calls, you’d rather have slightly distorted audio than a 2-second delay. UDP combined with codecs that handle packet loss enables smooth conversations.
IoT Sensors:
A temperature sensor sending readings every second doesn’t need TCP. If one reading is lost, the next one will arrive in a second.
Modern Protocols: The Best of Both Worlds
QUIC (Quick UDP Internet Connections):
Developed by Google, QUIC implements TCP-like features (reliability, congestion control) on top of UDP. It is used by HTTP/3 for both streaming and general web browsing; Chrome, Edge, and Firefox use it by default for websites that support it. Advantages include:
- Faster handshake (0-RTT): Reduces the communication rounds needed to establish a secure connection.
- Connection migration: Keeps the connection alive even if your IP changes, such as when switching from Wi-Fi to cellular data.
- No Head-of-Line (HoL) Blocking: In standard TCP, if a single packet is lost, the entire stream of data must "freeze," because the protocol delivers every byte in the exact original order. This is Head-of-Line Blocking: everything behind the lost packet is stuck until that packet is successfully retransmitted.
QUIC solves this by handling multiple independent streams of data over UDP. If a packet from one stream is lost, only that specific stream is affected, while others continue to flow without interruption.
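The contrast with the TCP case can be sketched by tagging each packet with a stream ID: when delivery order is enforced per stream rather than per connection, a lost packet stalls only its own stream. A toy illustration (stream names and sequence numbers are invented for the example):

```python
# Toy sketch of QUIC-style stream independence: packets carry
# (stream_id, seq), and in-order delivery is enforced per stream.
# A lost packet delays only its own stream, not the others.
def deliver_per_stream(arrivals):
    expected = {}          # next sequence number expected, per stream
    buffers = {}           # out-of-order packets wait here, per stream
    delivered = []
    for stream, seq in arrivals:
        buffers.setdefault(stream, {})[seq] = True
        nxt = expected.setdefault(stream, 0)
        while nxt in buffers[stream]:
            delivered.append((stream, nxt))
            del buffers[stream][nxt]
            nxt += 1
        expected[stream] = nxt
    return delivered

# Stream A's packet 0 is lost and arrives last; stream B keeps flowing.
trace = deliver_per_stream([("A", 1), ("B", 0), ("B", 1), ("A", 0)])
print(trace)   # B's packets are delivered immediately, A's only at the end
```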
WebRTC:
Uses UDP for real-time media streaming but adds error recovery mechanisms on top of UDP when needed.
Further Reading and Useful Resources
RFCs (Official Specifications):
- RFC 9293 – Transmission Control Protocol (TCP)
- RFC 768 – User Datagram Protocol (UDP)
- RFC 9000 – QUIC: A UDP-Based Multiplexed and Secure Transport
Tools to Experiment:
- Wireshark – Analyze network traffic and see TCP/UDP in action
```
# Filter TCP traffic only
tcp

# Filter UDP traffic only
udp

# Filter TCP handshake packets (SYN flag set)
tcp.flags.syn == 1
```
- netcat (nc) - Test TCP and UDP connections
```shell
# TCP server on port 8080
nc -l 8080

# UDP server on port 8080
nc -lu 8080

# TCP client
echo "Hello" | nc localhost 8080

# UDP client
echo "Hello" | nc -u localhost 8080
```
- iperf3 - TCP vs UDP performance measurement
```shell
# Server
iperf3 -s

# TCP client
iperf3 -c server_ip

# UDP client with a 100 Mbit/s bandwidth cap
iperf3 -c server_ip -u -b 100M
```
