Chapter 2: First Principles for Networked Systems

Predictions, QUIC & the Midterm

Arpit Gupta

2026-04-09

A Framework That Predicts

From Taxonomy to Framework

A framework that only describes is a taxonomy.

This framework makes concrete, falsifiable predictions — testable claims that follow from tracing the dependency graph.

If the predictions fail, the framework is wrong. Falsifiability gives the framework empirical teeth.

Today: three predictions, each tested against a real system. Then you apply the full method yourself.

Prediction 1: Destructive Scaling

A Scaling Puzzle

TCP serves billions of simultaneous flows on the Internet. Adding more flows reduces per-flow throughput, but the system works.

WiFi in a lecture hall with 200 laptops? Throughput collapses. Adding more stations makes everyone worse off — dramatically.

Both use shared resources. Both use distributed coordination. Why does one scale while the other collapses?

Prediction 1: The Nature of Coupling

The prediction: Direct coupling + distributed coordination → destructive scaling.

  • Shared resource: bottleneck link capacity (TCP) vs. radio channel (WiFi)
  • What happens on “collision”: TCP packets queue, so time is wasted but nothing is destroyed; WiFi frames collide, so both are destroyed and energy is wasted
  • Coupling type: indirect via queuing (TCP) vs. direct via destructive interference (WiFi)
  • Coordination: distributed in both, AIMD (TCP) and CSMA/CA (WiFi)
  • Scales? TCP yes, degradation is graceful; WiFi no, degradation is catastrophic

The difference is the coupling mode. Indirect coupling (queuing) degrades gracefully. Direct coupling (destruction) degrades catastrophically under distributed coordination.

This is why WiFi 6 (802.11ax) moved toward AP-driven centralized scheduling with OFDMA: to escape the trap Prediction 1 identifies.
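The two degradation modes can be sketched numerically. This is a hedged toy model, not a protocol simulation: indirect coupling is modeled as fair sharing of a fixed capacity, and direct coupling as slotted-ALOHA-style contention (a simplified stand-in for CSMA/CA), where a slot is useful only if exactly one of n stations transmits.

```python
# Toy model of the two coupling modes (illustrative, not a simulation of TCP or 802.11).
# Indirect coupling (TCP-like): n flows fair-share a fixed capacity C.
# Direct coupling (slotted-ALOHA-style contention): each of n stations transmits
# in a slot with probability p; a slot carries a frame only if exactly one transmits.

def indirect_per_flow(capacity: float, n: int) -> float:
    """Queuing coupling: the per-flow share shrinks, but aggregate stays at capacity."""
    return capacity / n

def direct_aggregate(n: int, p: float = 0.1) -> float:
    """Destructive coupling: P(slot useful) = n*p*(1-p)^(n-1).
    Peaks near n*p = 1, then collapses as n grows."""
    return n * p * (1 - p) ** (n - 1)

if __name__ == "__main__":
    for n in (2, 20, 200):
        print(f"n={n:4d}  indirect aggregate={n * indirect_per_flow(1.0, n):.3f}  "
              f"direct aggregate={direct_aggregate(n):.2e}")
```

At n=200 the indirect aggregate is still the full capacity (each flow just gets less), while the direct aggregate is vanishingly small: graceful vs. catastrophic degradation.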

Test Case: Why Does TCP Scale But WiFi Doesn’t?

TCP

  • Shared resource? Yes — bottleneck link capacity
  • Coupling: indirect — packets queue behind each other. Time is wasted (queuing delay), but nothing is destroyed
  • Coordination: distributed (AIMD)

Result: TCP scales. Adding flows reduces per-flow throughput but doesn’t destroy packets. Queuing is wasteful but not destructive.

WiFi (802.11 DCF)

  • Shared resource? Yes — radio channel
  • Coupling: direct — simultaneous transmissions on the same frequency destroy both frames
  • Coordination: distributed (CSMA/CA)

Result: WiFi does not scale. In a dense lecture hall with 200 laptops, throughput per station collapses.

The nature of coupling — indirect (queuing) vs. direct (destructive interference) — determines whether distributed coordination can scale.

Prediction 2: Pressure-Driven Restructuring

Environmental Shifts → Most-Pressured Invariant Restructures First

The prediction: When environmental constraints shift, the invariant under greatest pressure tends to restructure first — and the change propagates through the dependency graph.

  • Bandwidth scaling (Gbps links) → pressures State (the loss signal arrives too late on high-BDP paths) → Cubic replaces linear window growth with a cubic function
  • Datacenter RTT collapse (microseconds) → pressures State (a congestion signal earlier than loss is needed) → DCTCP: ECN marks replace loss as the measurement signal
  • Hardware programmability → pressures Time (decisions must happen at line rate) → transport offload to SmartNICs

The test: can the framework identify the restructuring point before seeing the redesign?

Test Case: TCP to BBR — A State Redesign

Constraint shift: high bandwidth-delay product (BDP) paths — 10+ Gbps, 50–200ms RTT. The pipe can hold 100+ MB of data in flight.

What broke: loss-based TCP (Reno, Cubic) fills buffers until a packet drops, then cuts the window multiplicatively (Reno halves it; Cubic scales it by 0.7). On high-BDP paths with large buffers, TCP overshoots massively before getting a signal.

Framework analysis: the State invariant is under pressure — the measurement signal (loss) arrives far too late.

BBR’s restructuring:

  • State (belief): loss-based TCP keeps only cwnd, an implicit capacity model; BBR keeps an explicit path model, BtlBw (bottleneck bandwidth) plus RTprop (minimum RTT)
  • State (measurement): packet loss vs. delivery-rate observations and RTT measurements
  • Time: Jacobson’s SRTT vs. a windowed min-RTT estimator
  • Coordination: unchanged in both, still distributed and endpoint-only

State restructured first. Time changed as a consequence. Coordination didn’t change because the pressure was on measurement quality, not on who decides.
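BBR’s Time restructuring, the windowed min-RTT estimator, can be sketched with a monotone deque. This is an illustrative sketch under simplifying assumptions: real BBR windows RTprop by elapsed time (roughly 10 seconds), while this version windows by sample count.

```python
from collections import deque

class WindowedMin:
    """BBR-style RTprop tracker: minimum over the last `window` samples,
    maintained with a monotone deque in O(1) amortized time per update.
    (Real BBR windows by elapsed time; counting samples is a simplification.)"""

    def __init__(self, window: int):
        self.window = window
        self.samples = deque()  # (sample index, rtt), rtts strictly increasing
        self.i = 0              # index of the next sample

    def update(self, rtt: float) -> float:
        # Drop samples that can never be the minimum again.
        while self.samples and self.samples[-1][1] >= rtt:
            self.samples.pop()
        self.samples.append((self.i, rtt))
        # Drop samples that have aged out of the window.
        while self.samples[0][0] <= self.i - self.window:
            self.samples.popleft()
        self.i += 1
        return self.samples[0][1]  # current windowed min-RTT estimate
```

The same structure with the comparison flipped gives a windowed max filter for the BtlBw estimate.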

Prediction 3: Tighter Margins

Relaxing Constraints → Tighter Belief-Environment Coupling

The prediction: Relaxing interface or coordination constraints enables tighter belief-environment coupling, which enables operation at tighter margins.

The intuition: if you can see more and coordinate more, you can operate closer to the edge without falling off.

  • Open Internet: multi-admin, can’t mandate anything; measurement is loss only (coarse, late); the operating margin tolerates hundreds of ms of queuing delay
  • Datacenter: single admin, full control; measurement is ECN marks (fine-grained, early); the target is ~0 queuing at >90% utilization

Any system moving from multi-admin to single-admin follows this trajectory.

Which Invariant Does DCTCP Restructure?

A datacenter is a single administrative domain — one operator controls all switches and servers.

DCTCP achieves near-zero queuing delay while maintaining >90% utilization. How?

State (measurement signal): ECN marks replace loss as congestion feedback.

  • Switches mark packets with CE (Congestion Experienced) when queue occupancy exceeds a threshold
  • Senders receive marks before any packet is dropped — earlier, richer signal
  • Single-admin constraint enables this: the operator can configure ECN on every switch and mandate DCTCP on every sender

On the open Internet, you cannot mandate ECN support on every router or DCTCP on every sender. The coordination constraint is what separates datacenter performance from wide-area performance.
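DCTCP’s sender-side reaction to ECN marks can be sketched in a few lines. This follows the shape of the update in RFC 8257 (alpha as an EWMA of the marked fraction, cwnd cut in proportion to alpha), but it is a simplified per-window sketch, not an implementation.

```python
def dctcp_update(cwnd: float, alpha: float, marked: int, acked: int, g: float = 1 / 16):
    """One window's DCTCP state update (shape of RFC 8257, simplified).
    F is the fraction of ECN-marked packets this window; alpha is an EWMA of F;
    cwnd is reduced in proportion to alpha instead of being halved outright."""
    F = marked / acked if acked else 0.0
    alpha = (1 - g) * alpha + g * F
    if marked:
        cwnd = cwnd * (1 - alpha / 2)
    return cwnd, alpha
```

A lightly marked window produces a small cut; only sustained heavy marking approaches TCP’s halving. That proportional response is what lets DCTCP hold queues near zero without sacrificing utilization.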

The Five-Step Method: DNS Worked Example

Applying All Five Steps to DNS

Step 1 — Anchor: The global namespace is too large for one server (billions of records) + administrative fragmentation (no single entity owns all names)

Step 2 — Four Invariants:

  • State: hierarchically distributed. Authoritative servers hold records; caching resolvers hold copies with TTLs. Belief = cache
  • Time: prescribed by authority. Each record carries a TTL set by the zone admin; resolvers count down and re-query
  • Coordination: hierarchical delegation. Root → TLD → authoritative; each level delegates authority to the next
  • Interface: UDP port 53 with TCP fallback for large responses. Evolving: DoH and DoT encrypt queries

DNS: Steps 3–5

Step 3 — Dependency Graph: Scale + fragmentation → forces hierarchical coordination → hierarchy enables distributed caching → cached state requires TTL-based expiry → Interface reflects query-response pattern

Step 4 — Closed-Loop Dynamics: The loop is: query → cache → serve → TTL expires → re-query. Loop period = TTL.

  • TTL too long → stale records (users reach decommissioned servers)
  • TTL too short → query storms on authoritative servers
  • No adaptive gain — TTL is static, not a feedback controller
  • Global synchronization possible: popular record TTL expires simultaneously across resolvers → burst

Step 5 — Meta-Constraints:

  • Incrementally deployable — new record types pass through old resolvers
  • DNS-over-HTTPS shifts trust from network path to resolver operator — a political meta-constraint
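The Step 4 loop (query → cache → serve → TTL expires → re-query) can be sketched as a TTL cache. The structure is illustrative, not a resolver implementation; the explicit `now` parameter exists only to make the example deterministic.

```python
import time

class TTLCache:
    """Minimal resolver-side cache: an entry is served until its TTL expires,
    after which the resolver must re-query the authoritative server."""

    def __init__(self):
        self.entries = {}  # name -> (value, absolute expiry time)

    def put(self, name, value, ttl, now=None):
        now = time.monotonic() if now is None else now
        self.entries[name] = (value, now + ttl)

    def get(self, name, now=None):
        now = time.monotonic() if now is None else now
        entry = self.entries.get(name)
        if entry and now < entry[1]:
            return entry[0]   # hit: serve from belief (the cache)
        return None           # miss or expired: the loop closes via re-query
```

Note what is missing: nothing adapts. The TTL is fixed by the zone admin, which is exactly the “no adaptive gain” point above.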

Generative Exercise: QUIC

QUIC: Your Turn to Predict

TCP’s headers are inspected and modified by middleboxes — firewalls, NATs, load balancers. This ossified TCP’s interface: new features are undeployable.

QUIC’s response: move transport over UDP. Encrypt all headers. Make transport opaque to the network.

The Interface invariant just changed. Now predict (2 minutes, discuss with your neighbor):

  1. What happens to State? (Where does connection state live now? What new state is possible?)
  2. What happens to Time? (Can the handshake change? How?)
  3. What happens to Coordination? (Does it change? Why or why not?)

Write down your predictions before I show the answer.

QUIC: Tracing the Cascade

  • Interface: TCP headers are visible to the network; QUIC uses UDP encapsulation with encrypted headers, opaque to middleboxes. Why: renegotiated to escape ossification
  • State: TCP keeps connection state in the kernel, tied to the IP 5-tuple; QUIC keeps it in user space, and a connection ID lets a connection survive IP address changes (connection migration). Why: the interface change enables a new state model
  • Time: TCP needs a 3-way handshake with kernel SYN processing; QUIC’s 0-RTT handshake uses cached credentials for immediate resumption. Why: the new state model (cached credentials) enables a faster handshake
  • Coordination: unchanged, still distributed and endpoint-driven. Why: there was no pressure on coordination; the pressure was on the interface

The framework predicts: an interface renegotiation cascades through State and Time but leaves Coordination intact — because the environmental pressure was on interface ossification, not on who decides.

Follow-Up: HTTP/3 Over QUIC

HTTP/3 runs exclusively over QUIC. What changes at the application layer compared to HTTP/2 over TCP?

The key change: HTTP/2 multiplexes streams over a single TCP connection. TCP’s head-of-line blocking means a lost packet blocks ALL streams — even those whose data arrived successfully.

HTTP/3 over QUIC: each stream is independent. A lost packet blocks only its own stream. Head-of-line blocking is eliminated.

This is a State change at the application layer that cascades from the Interface change at the transport layer. The dependency graph spans layers.
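The head-of-line difference can be sketched with a toy delivery model. This is illustrative only: "tcp" mode enforces one total order across all streams, "quic" mode enforces order per stream, and the function names are hypothetical.

```python
def deliverable(packets, lost: int, mode: str):
    """Packets the application can read before the lost packet is retransmitted.
    packets: list of (stream_id, data); `lost` is the index of the lost packet.
    mode "tcp": one total order, so every packet after the hole waits.
    mode "quic": per-stream order, so only the losing stream's later data waits."""
    out = []
    blocked_streams = set()
    for i, (stream, data) in enumerate(packets):
        if i == lost:
            blocked_streams.add(stream)
            continue
        if mode == "tcp" and i > lost:
            continue  # TCP head-of-line blocking: everything behind the hole waits
        if mode == "quic" and stream in blocked_streams:
            continue  # only the stream with the hole waits
        out.append((stream, data))
    return out
```

With two interleaved streams and one lost packet, "tcp" mode delivers nothing past the hole, while "quic" mode still delivers the unaffected stream’s data.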

Midterm Preview

What to Expect on the Midterm

The midterm (May 5, in-class, closed-device) tests your ability to:

  1. Identify anchors and trace dependency graphs for systems you haven’t seen before
  2. Apply the four invariants to analyze real protocols
  3. Trace what-if scenarios through the dependency graph (“what changes if X is removed/modified?”)
  4. Evaluate predictions using the three predictions as diagnostic tools

The exercises you’ve done in class are representative of midterm questions:

  • ARP vs. DHCP (Lecture 2): four-invariant comparison, coordination reasoning
  • DNS no caching (Lecture 3): what-if scenario, dependency tracing
  • QUIC cascade (today): interface renegotiation, cross-invariant cascade
  • DHCP no expiry (next exercise): closed-loop reasoning, time invariant

Exercise: Closed-Loop Reasoning

In-Class Exercise: What If DHCP Leases Never Expired?

Suppose the DHCP server grants an address permanently — no lease duration, no renewal, no reclamation.

Your task (5 minutes, work in pairs):

  1. What problems arise? Think about a coffee shop with a /24 subnet (254 usable addresses).
  2. Which invariant is most affected?
  3. Connect your answer to the closed-loop reasoning principle: what mechanism did you just remove?

Hint: Think about what the lease expiry actually does in DHCP’s feedback loop.

Exercise Discussion: DHCP Without Lease Expiry

Problem: address exhaustion. A coffee shop with 254 addresses would exhaust its pool in a day — departed clients never return their addresses.

Most affected invariant: Time

The lease duration is the mechanism that couples allocation to actual usage. Without it:

  • State: allocation table grows unbounded. Server’s belief (“this address is in use”) diverges from reality (“that device left hours ago”) — with no measurement signal to correct it
  • Time: no renewal mechanism → no way to reclaim → system is open-loop
  • Coordination: server still centralizes, but has lost the ability to reallocate

The lease IS the closed-loop mechanism. Without it, DHCP has no way to correct stale state.
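The removed mechanism can be made concrete with a toy allocator; class and method names are illustrative, not a real DHCP server. Lease expiry lives in `_reclaim`: delete that call and the pool only ever shrinks.

```python
class DhcpPool:
    """Toy DHCP address pool (illustrative names, not a real server).
    Lease expiry is the closed-loop mechanism: _reclaim returns addresses
    whose leases have lapsed. Remove it and the pool only ever shrinks."""

    def __init__(self, addresses, lease_seconds):
        self.free = list(addresses)
        self.leases = {}  # address -> absolute expiry time
        self.lease_seconds = lease_seconds

    def allocate(self, now):
        self._reclaim(now)  # close the loop: reclaim lapsed leases first
        if not self.free:
            return None  # pool exhausted
        addr = self.free.pop()
        self.leases[addr] = now + self.lease_seconds
        return addr

    def renew(self, addr, now):
        """A still-present client refreshes its lease before expiry."""
        if addr in self.leases:
            self.leases[addr] = now + self.lease_seconds

    def _reclaim(self, now):
        for addr in [a for a, exp in self.leases.items() if exp <= now]:
            del self.leases[addr]
            self.free.append(addr)
```

A two-address pool exhausts after two grants; once the leases lapse, allocation succeeds again only because `_reclaim` corrected the server’s stale belief.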

Same structural pattern across all three protocols:

  • DHCP: lease expiry prevents address exhaustion from departed clients
  • DNS: TTL expiry prevents stale cache entries pointing to decommissioned servers
  • TCP: timeout + retransmit prevents a permanent stall from lost packets

Time-based expiry is the universal mechanism that enables closed loops in networks where participants can vanish without an explicit release signal.

Module Wrap-Up

What You Now Have

After three lectures on Chapter 2:

Vocabulary — Four structural invariants (State, Time, Coordination, Interface) with the three-layer state decomposition (environment / measurement / belief)

Reasoning tools — Three design principles (Disaggregation, Closed-Loop Reasoning, Decision Placement)

An analytical method — The five-step method and the anchored dependency graph

Predictions you can test — Three falsifiable claims about how systems scale, restructure, and operate at margins

You can now analyze any networked system by identifying its anchor and tracing the cascade. That is what the midterm tests.

Looking Ahead

Next week: Medium Access & Wireless Architecture (Ch 3 + Ch 4)

  • The anchor shifts to shared wireless medium — physics you can’t change
  • Every invariant answer restructures
  • Prediction 1 (destructive scaling) will be tested directly

Before Tuesday: Read Ch 3: Medium Access

Take-home midterm prep — Exercise 4 from Ch 2: “What if routers could send explicit rate feedback?” Trace through all four invariants and identify the deployment obstacles.