Predictions, QUIC & the Midterm
2026-04-09
A framework that only describes is a taxonomy.
This framework makes concrete, falsifiable predictions — testable claims that follow from tracing the dependency graph.
If the predictions fail, the framework is wrong. Falsifiability gives the framework empirical teeth.
Today: three predictions, each tested against a real system. Then you apply the full method yourself.
TCP serves billions of simultaneous flows on the Internet. Adding more flows reduces per-flow throughput, but the system works.
WiFi in a lecture hall with 200 laptops? Throughput collapses. Adding more stations makes everyone worse off — dramatically.
Both use shared resources. Both use distributed coordination. Why does one scale while the other doesn’t?
The prediction: Direct coupling + distributed coordination → destructive scaling.
| | TCP | WiFi |
|---|---|---|
| Shared resource | Bottleneck link capacity | Radio channel |
| What happens on “collision” | Packets queue — time is wasted, nothing destroyed | Frames collide — both destroyed, energy wasted |
| Coupling type | Indirect (queuing) | Direct (destructive interference) |
| Coordination | Distributed (AIMD) | Distributed (CSMA/CA) |
| Scales? | Yes — degradation is graceful | No — degradation is catastrophic |
The difference is the coupling mode. Indirect coupling (queuing) degrades gracefully. Direct coupling (destruction) degrades catastrophically under distributed coordination.
This is why WiFi 6 moved toward centralized scheduling — to escape the trap Prediction 1 identifies.
TCP
Result: TCP scales. Adding flows reduces per-flow throughput but doesn’t destroy packets. Queuing is wasteful but not destructive.
WiFi (802.11 DCF)
Result: WiFi does not scale. In a dense lecture hall with 200 laptops, throughput per station collapses.
The nature of coupling — indirect (queuing) vs. direct (destructive interference) — determines whether distributed coordination can scale.
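The contrast can be made quantitative with a toy model (entirely hypothetical numbers: a normalized link capacity and a fixed per-slot transmit probability, in the style of slotted ALOHA). Queuing leaves aggregate throughput at capacity; collisions destroy both frames, so success requires exactly one transmitter.

```python
# Toy model: indirect coupling (queuing) vs. direct coupling (collisions).

def queued_throughput(n_flows: int, capacity: float = 1.0) -> float:
    # Indirect coupling: excess offered load queues behind the bottleneck.
    # The link still runs at full capacity; per-flow share shrinks gracefully.
    return capacity

def collision_throughput(n_stations: int, p_tx: float = 0.1) -> float:
    # Direct coupling (slotted-ALOHA-style): a slot succeeds only when
    # exactly one station transmits; colliding frames are both destroyed.
    return n_stations * p_tx * (1 - p_tx) ** (n_stations - 1)

for n in (2, 10, 50, 200):
    print(n, queued_throughput(n), round(collision_throughput(n), 4))
```

With a fixed transmit probability, aggregate success probability collapses toward zero as stations are added, while the queued case stays pinned at capacity. Real CSMA/CA adapts its backoff, which delays but does not escape the collapse.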
The prediction: When environmental constraints shift, the invariant under greatest pressure tends to restructure first — and the change propagates through the dependency graph.
| Shift | Pressured invariant | Restructuring |
|---|---|---|
| Bandwidth scaling (Gbps links) | State — loss signal arrives too late on high-BDP paths | Cubic: cubic growth function instead of linear |
| Datacenter RTT collapse (microseconds) | State — need earlier congestion signal than loss | DCTCP: ECN marks replace loss as measurement signal |
| Hardware programmability | Time — decisions must happen at line rate | Transport offload to SmartNICs |
The test: can the framework identify the restructuring point before seeing the redesign?
Constraint shift: high bandwidth-delay product (BDP) paths — 10+ Gbps, 50–200ms RTT. The pipe can hold 100+ MB of data in flight.
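The "100+ MB in flight" figure is just the bandwidth-delay product. A quick check with assumed numbers (10 Gbps, 100 ms RTT):

```python
# Back-of-envelope BDP for a representative high-BDP path.
link_bps = 10e9            # 10 Gbit/s link
rtt_s = 0.100              # 100 ms round-trip time
bdp_bytes = link_bps * rtt_s / 8
print(f"{bdp_bytes / 1e6:.0f} MB in flight")  # 125 MB
```

Loss-based TCP must fill (and then overshoot) this much data before the network tells it anything.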
What broke: loss-based TCP (Reno, Cubic) fills buffers until a packet drops, then halves the window. On high-BDP paths with large buffers, TCP overshoots massively before getting a signal.
Framework analysis: the State invariant is under pressure — the measurement signal (loss) arrives far too late.
BBR’s restructuring:
| | Loss-based TCP | BBR |
|---|---|---|
| State (belief) | cwnd only — implicit capacity model | Explicit path model: BtlBw (bottleneck bandwidth) + RTprop (minimum RTT) |
| State (measurement) | Packet loss | Delivery rate observations + RTT measurements |
| Time | Jacobson’s SRTT | Windowed min-RTT estimator |
| Coordination | Unchanged — still distributed, endpoint-only | Unchanged |
State restructured first. Time changed as a consequence. Coordination didn’t change because the pressure was on measurement quality, not on who decides.
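The restructured State is concrete: BBR maintains two windowed filters, a max-filter over delivery-rate samples (BtlBw) and a min-filter over RTT samples (RTprop). A minimal sketch (the class name, window sizes, and sample counts are illustrative, not BBR's actual time-based windows):

```python
from collections import deque

class WindowedFilter:
    """Keep the last `window` samples; report the best (max or min)."""
    def __init__(self, window: int, best):
        self.best = best                       # max for BtlBw, min for RTprop
        self.samples = deque(maxlen=window)    # old samples age out

    def update(self, sample: float) -> float:
        self.samples.append(sample)
        return self.best(self.samples)

btl_bw = WindowedFilter(window=10, best=max)   # delivery-rate observations
rt_prop = WindowedFilter(window=40, best=min)  # RTT observations
```

BBR then paces at a gain times the BtlBw estimate and caps data in flight near BtlBw × RTprop, its estimate of the BDP. This is why Time changed as a consequence: the windowed min-RTT estimator exists to serve the new belief model.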
The prediction: Relaxing interface or coordination constraints enables tighter belief-environment coupling, which enables operation at tighter margins.
The intuition: if you can see more and coordinate more, you can operate closer to the edge without falling off.
| Environment | Coordination constraint | Measurement quality | Operating margin |
|---|---|---|---|
| Open Internet | Multi-admin, can’t mandate anything | Loss only (coarse, late) | Hundreds of ms queuing delay tolerated |
| Datacenter | Single admin, full control | ECN marks (fine-grained, early) | Target: ~0 queuing, >90% utilization |
Any system moving from multi-admin to single-admin follows this trajectory.
A datacenter is a single administrative domain — one operator controls all switches and servers.
DCTCP achieves near-zero queuing delay while maintaining >90% utilization. How?
State (measurement signal): ECN marks replace loss as congestion feedback.
On the open Internet, you cannot mandate ECN support on every router or DCTCP on every sender. The coordination constraint is what separates datacenter performance from wide-area performance.
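The fine-grained signal is what the per-RTT update exploits. A sketch of DCTCP's control law (the update rule and the g = 1/16 gain follow the DCTCP design; the function name and float-cwnd simplification are mine):

```python
def dctcp_update(cwnd: float, alpha: float, marked_frac: float,
                 g: float = 1 / 16):
    """One RTT of DCTCP: estimate congestion extent, scale the cut by it.

    marked_frac: fraction of ACKs in this RTT carrying ECN marks.
    alpha:       EWMA of that fraction (the belief about congestion extent).
    """
    alpha = (1 - g) * alpha + g * marked_frac
    if marked_frac > 0:
        cwnd = cwnd * (1 - alpha / 2)   # mild congestion -> mild cut,
    return cwnd, alpha                  # not TCP's unconditional halving
```

Because the cut is proportional to how congested the path actually is, DCTCP can keep queues near empty without sacrificing utilization.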
Step 1 — Anchor: The global namespace is too large for one server (billions of records) + administrative fragmentation (no single entity owns all names)
Step 2 — Four Invariants:
| Invariant | DNS’s Answer |
|---|---|
| State | Hierarchically distributed. Authoritative servers hold records; caching resolvers hold copies with TTL. Belief = cache |
| Time | Prescribed by authority. Each record carries a TTL set by the zone admin. Resolvers count down and re-query |
| Coordination | Hierarchical delegation. Root → TLD → authoritative. Each level delegates authority to the next |
| Interface | UDP port 53, TCP fallback for large responses. Evolving: DoH, DoT encrypt queries |
Step 3 — Dependency Graph: Scale + fragmentation → forces hierarchical coordination → hierarchy enables distributed caching → cached state requires TTL-based expiry → Interface reflects query-response pattern
Step 4 — Closed-Loop Dynamics: The loop is: query → cache → serve → TTL expires → re-query. Loop period = TTL.
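That loop fits in a few lines. A minimal sketch of a TTL-governed cache (illustrative only; real resolvers add negative caching, prefetching, and more):

```python
import time

class TtlCache:
    """query -> cache -> serve -> TTL expires -> re-query."""
    def __init__(self):
        self._store = {}   # name -> (record, absolute expiry time)

    def get(self, name, resolve, now=time.time):
        entry = self._store.get(name)
        if entry and entry[1] > now():
            return entry[0]                 # serve from belief (the cache)
        record, ttl = resolve(name)         # expiry forces re-measurement
        self._store[name] = (record, now() + ttl)
        return record
```

The zone admin's TTL is literally the loop period: it bounds how long the resolver's belief may diverge from the authoritative state.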
Step 5 — Meta-Constraints:
TCP’s headers are inspected and modified by middleboxes — firewalls, NATs, load balancers. This ossified TCP’s interface: new features are undeployable.
QUIC’s response: move transport over UDP. Encrypt all headers. Make transport opaque to the network.
The Interface invariant just changed. Now predict (2 minutes, discuss with your neighbor):
Write down your predictions before I show the answer.
| Invariant | TCP | QUIC | Why |
|---|---|---|---|
| Interface | TCP headers visible to network | UDP encapsulation, encrypted headers — opaque to middleboxes | Renegotiated to escape ossification |
| State | Connection state in kernel, tied to IP 5-tuple | Connection state in user-space. Connection migration: QUIC connection ID survives IP address changes | Interface change enables new state model |
| Time | 3-way handshake (kernel SYN processing) | 0-RTT handshake: cached credentials enable immediate resumption | New state model (cached creds) enables faster handshake |
| Coordination | Distributed — endpoint-driven | Unchanged — still distributed, still endpoint-driven | No pressure on coordination; the pressure was on interface |
The framework predicts: an interface renegotiation cascades through State and Time but leaves Coordination intact — because the environmental pressure was on interface ossification, not on who decides.
HTTP/3 runs exclusively over QUIC. What changes at the application layer compared to HTTP/2 over TCP?
The key change: HTTP/2 multiplexes streams over a single TCP connection. TCP’s head-of-line blocking means a lost packet blocks ALL streams — even those whose data arrived successfully.
HTTP/3 over QUIC: each stream is independent. A lost packet blocks only its own stream. Head-of-line blocking is eliminated.
This is a State change at the application layer that cascades from the Interface change at the transport layer. The dependency graph spans layers.
The midterm (May 5, in-class, closed-device) tests your ability to:
The exercises you’ve done in class are representative of midterm questions:
| Exercise | What it tested |
|---|---|
| ARP vs. DHCP (Lecture 2) | Four-invariant comparison, coordination reasoning |
| DNS no caching (Lecture 3) | What-if scenario, dependency tracing |
| QUIC cascade (today) | Interface renegotiation, cross-invariant cascade |
| DHCP no expiry (next exercise) | Closed-loop reasoning, time invariant |
Suppose the DHCP server grants an address permanently — no lease duration, no renewal, no reclamation.
Your task (5 minutes, work in pairs):
Hint: Think about what the lease expiry actually does in DHCP’s feedback loop.
Problem: address exhaustion. A coffee shop with 254 addresses would exhaust its pool in a day — departed clients never return their addresses.
Most affected invariant: Time
The lease duration is the mechanism that couples allocation to actual usage. Without it, the server’s belief (who holds which address) drifts permanently away from the environment (who is actually present).
The lease IS the closed-loop mechanism. Without it, DHCP has no way to correct stale state.
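A minimal sketch of that loop (hypothetical model; a real DHCP server separates OFFER/REQUEST/RENEW, handles reuse timers, etc.):

```python
class DhcpPool:
    """Lease expiry is the reclamation loop: no expiry, no correction."""
    def __init__(self, addresses, lease_s=3600):
        self.free = list(addresses)
        self.leases = {}           # addr -> (client, absolute expiry)
        self.lease_s = lease_s

    def allocate(self, client, now):
        # Closed loop: reclaim expired leases before handing out addresses.
        for addr, (c, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[addr]
                self.free.append(addr)
        if not self.free:
            raise RuntimeError("pool exhausted")
        addr = self.free.pop()
        self.leases[addr] = (client, now + self.lease_s)
        return addr
```

Delete the reclamation loop (or set `lease_s` to infinity) and every departed laptop in the coffee shop holds its address forever: exactly the exhaustion failure above.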
Same structural pattern across all three protocols:
| Protocol | Closed-loop mechanism | What it prevents |
|---|---|---|
| DHCP | Lease expiry | Address exhaustion from departed clients |
| DNS | TTL expiry | Stale cache entries pointing to decommissioned servers |
| TCP | Timeout + retransmit | Permanent stall from lost packets |
Time-based expiry is the universal mechanism that enables closed loops in soft-state distributed systems: state that isn’t refreshed is eventually discarded.
After three lectures on Chapter 2:
Vocabulary — Four structural invariants (State, Time, Coordination, Interface) with the three-layer state decomposition (environment / measurement / belief)
Reasoning tools — Three design principles (Disaggregation, Closed-Loop Reasoning, Decision Placement)
An analytical method — The five-step method and the anchored dependency graph
Predictions you can test — Three falsifiable claims about how systems scale, restructure, and operate at margins
You can now analyze any networked system by identifying its anchor and tracing the cascade. That is what the midterm tests.
Next week: Medium Access & Wireless Architecture (Ch 3 + Ch 4)
Before Tuesday: Read Ch 3: Medium Access
Take-home midterm prep — Exercise 4 from Ch 2: “What if routers could send explicit rate feedback?” Trace through all four invariants and identify the deployment obstacles.