Bufferbloat: TCP Belief vs Queue Reality

The default setup produces severe bufferbloat: a ~5,000-packet buffer (50× the bandwidth-delay product) managed by Tail-Drop. Press Run to watch the failure, then experiment with AQM algorithms and buffer sizes.

Network Parameters

Link rate: 10 pkts/ms (≈ 120 Mbps at 1,500 B packets)
Base RTT: 10 ms (propagation delay only, no queueing)
Buffer size: 5,012 pkts (log-scale slider, 10 → 30,000 packets)
Flows: 1 (independent AIMD senders sharing the bottleneck)
Derived: BDP = 100 pkts; Buffer / BDP = 50.1×; max queue delay = 501 ms
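The derived figures above follow directly from the slider values. A quick check, assuming (as the panel implies) that the 10 ms delay is the full base RTT; variable names are mine:

```python
# Worked check of the derived parameters (values from the panel above).
link_rate = 10        # pkts/ms
base_rtt = 10         # ms, propagation only; assumed to be the full RTT
buffer_size = 5012    # pkts

bdp = link_rate * base_rtt            # bandwidth-delay product, pkts
ratio = buffer_size / bdp             # buffer in units of BDP
max_delay = buffer_size / link_rate   # worst-case queueing delay, ms
mbps = link_rate * 1500 * 8 / 1000    # bits/ms ÷ 1000 = Mbit/s

print(bdp, round(ratio, 1), round(max_delay, 1), mbps)
# → 100 50.1 501.2 120.0
```

A queue that can hold 50 RTTs' worth of packets is exactly the bufferbloat regime: the link never starves, but delay balloons by half a second.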

Queue Management Algorithm

Reported statistics (updated over the 10 s run): peak cwnd, peak queue, peak RTT, avg throughput, link utilization, total drops.

Analysis

Model notes

Transport: fluid-model TCP AIMD. In congestion avoidance, cwnd += 1/cwnd per ACK; on loss, cwnd is halved. Slow start runs until the first loss.
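The per-ACK update above can be sketched in a few lines; this is a minimal illustration, not the simulator's actual code, and the function names are my own:

```python
def on_ack(cwnd, ssthresh):
    """One ACK arrives: slow start grows cwnd by 1 per ACK (doubling per
    RTT); congestion avoidance adds 1/cwnd per ACK (+1 pkt per RTT)."""
    if cwnd < ssthresh:
        return cwnd + 1           # slow start
    return cwnd + 1.0 / cwnd      # congestion avoidance (AIMD additive step)

def on_loss(cwnd):
    """Multiplicative decrease: halve cwnd and set ssthresh to match,
    so the sender stays in congestion avoidance afterward."""
    new_cwnd = cwnd / 2
    return new_cwnd, new_cwnd     # (cwnd, ssthresh)
```

With ssthresh initially infinite, the sender stays in slow start until the first loss, matching the note above.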

AQM: Tail-Drop = drop arrivals when the buffer is full. RED = probabilistic early drop driven by an EWMA of queue length. CoDel = drops based on packet sojourn time, with an interval/√n drop schedule. PIE = proportional-integral controller on queueing delay. FQ_CoDel = per-flow sub-queues, each running CoDel, scheduled by deficit round-robin (DRR).
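CoDel's interval/√n schedule is the least obvious of these; a simplified state machine, using CoDel's standard defaults (5 ms target, 100 ms interval) but otherwise my own naming and structure, looks roughly like this:

```python
import math

TARGET = 5.0      # ms, acceptable standing sojourn time
INTERVAL = 100.0  # ms, how long sojourn must stay high before dropping

class CoDelSketch:
    """Simplified CoDel: once sojourn time has exceeded TARGET for a full
    INTERVAL, drop a packet, then schedule the next drop INTERVAL/sqrt(n)
    later, so the drop rate ramps up while the queue stays too long."""
    def __init__(self):
        self.first_above = None   # time sojourn first exceeded TARGET
        self.dropping = False
        self.count = 0            # drops in the current dropping episode
        self.next_drop = 0.0

    def dequeue(self, now, sojourn):
        """Return True if this packet should be dropped."""
        if sojourn < TARGET:
            self.first_above = None
            self.dropping = False
            return False                     # deliver: queue is draining
        if self.first_above is None:
            self.first_above = now
        if not self.dropping and now - self.first_above >= INTERVAL:
            self.dropping = True             # enter dropping state
            self.count = 1
            self.next_drop = now + INTERVAL / math.sqrt(self.count)
            return True
        if self.dropping and now >= self.next_drop:
            self.count += 1                  # 1/sqrt(n): drops accelerate
            self.next_drop += INTERVAL / math.sqrt(self.count)
            return True
        return False
```

Because the control variable is sojourn time rather than queue length, CoDel is insensitive to buffer size, which is why it tolerates the oversized default buffer in this demo.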

Simplifications: no delayed ACKs, SACK, or pacing. Sojourn time is estimated as queue length / link rate. Qualitative behavior is faithful.