---
title: "Medium Access"
---
---
## The Shared-Medium Problem: Physics as Anchor
Every wireless network solves the same physical problem: a finite, shared radio channel must carry simultaneous transmissions from many devices. Two fundamental constraints drive everything that follows. First, usable spectrum is scarce: the bands below 6 GHz that propagate well over distance and through obstacles are finite, and many devices must share them. Second, transmission is inherently broadcast — when station A transmits, every other station within range receives the signal, whether A intended it or not. These are physics-level anchors that no amount of engineering can remove.
Medium access is the component that emerges when multiple transmitters share a physical medium. Its engineering question: *how do I share a transmission medium fairly among competing transmitters?* The anchor constraint — shared-medium physics with location-dependent sensing — is inherited from the physical layer, and it shapes every invariant answer that follows.
This chapter studies how 802.11 (unlicensed WiFi) and cellular networks (licensed spectrum) solve this problem with radically different answers to the four invariants. The difference is not technological. Both use radio. Both must coordinate access. The difference is institutional: one assumes shared, unmanaged spectrum; the other assumes licensed exclusivity. This institutional anchor cascades through every invariant answer.
The shared-medium anchor creates two irreducible measurement problems. First, **a transmitter cannot hear its own collision**. When station A transmits, its own signal at its antenna is 40–50 dB stronger than any collision signal that might arrive simultaneously. The transmitter's receiver is saturated, deaf to the interference it is causing elsewhere. This forces avoidance (listen before transmitting) rather than detection (hear if you collided). Second, **a station's measurement is location-dependent**. Station A's antenna hears strong signals from nearby stations and weak or blocked signals from distant ones. The shared medium's actual state is global (all transmissions everywhere), but each station's measurement is local (energy at one point). This gap between environment and measurement is where medium access failures live.
We trace two evolutionary paths from these anchors. 802.11 chose distributed contention — every station independently decides when to transmit by listening to the medium. Cellular chose centralized scheduling — the base station observes all devices and allocates access. Neither dominates; each is optimal under its constraints. Understanding the transition from one to the other (as WiFi has evolved from 802.11b to 802.11ax) reveals how fundamental shifts in the coordination model become necessary as the dimensionality and capacity of the shared resource expand.
---
## 802.11 DCF: Distributed Contention as Coordination
CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) is 802.11's answer to distributed medium access. The protocol is simple: before transmitting, listen to the medium. If idle, wait an additional interval (DIFS, Distributed Inter-Frame Space, 34 microseconds for 802.11a/g), then apply a random backoff. Count down the backoff counter while the medium stays idle, decrementing once per slot (9 microseconds for 802.11a/g OFDM; 20 microseconds for legacy 802.11b DSSS); pause when the medium becomes busy. When the counter reaches zero, transmit. If the transmission succeeds, the receiver sends an ACK after a 16-microsecond gap (SIFS, Short Inter-Frame Space). If no ACK arrives within the ACK timeout (on the order of 50 microseconds after the frame ends), assume collision, double the contention window, and retry. @fig-dcf-protocol-state-machine traces this state machine from idle sensing through backoff, transmission, and the collision recovery path.
{{< include embeds/v01_dcf_embed.qmd >}}
This protocol solves the avoidance problem. Because the sender cannot detect its own collision, it avoids collisions probabilistically by deferring when the medium is busy and by randomizing its attempt time when contention is high. The exponential backoff creates negative feedback: collisions increase backoff windows (from a minimum CW of 31 slots to a maximum of 1023), reducing future attempt rates, which reduces future collisions. The protocol is resilient and deployable — it requires no infrastructure, no clock synchronization, and no coordination a priori. Devices can join and leave the network without registration.
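The negative-feedback rule just described is compact enough to state as code. A minimal sketch in plain JavaScript (the function names are ours, not the standard's; the constants 31/1023 follow this section):

```javascript
// Binary exponential backoff: double the contention window on failure
// (capped at CW_MAX), reset on success. CW values are 2^k - 1.
const CW_MIN = 31;
const CW_MAX = 1023;

function nextContentionWindow(cw, acked) {
  return acked ? CW_MIN : Math.min(2 * cw + 1, CW_MAX);
}

// Uniform random backoff in [0, cw] slots.
function drawBackoff(cw) {
  return Math.floor(Math.random() * (cw + 1));
}

// Two consecutive collisions escalate the window 31 -> 63 -> 127:
let cw = CW_MIN;
cw = nextContentionWindow(cw, false); // 63
cw = nextContentionWindow(cw, false); // 127
```

Doubling is written `2 * cw + 1` because CW values are one less than a power of two, so 31 doubles to 63, not 62.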
### The Four Invariants
**State**: Each station maintains three pieces of state. A **queue of pending frames** represents unsent data waiting for access. A **backoff counter** counts down by one per slot while the medium is idle; when it reaches zero, the station attempts transmission. A **contention window (CW)** sets the upper bound for random backoff; it doubles on collision (from 31 to 63 to 127 to 255 to 511 to 1023) and resets to 31 on successful transmission. The medium itself has two observable states: busy (a decodable 802.11 preamble above roughly −82 dBm, or raw energy well above the noise floor) or idle (neither condition met). The internal **belief** is each station's cached reservation of future medium availability, stored in a **NAV timer** (Network Allocation Vector). When a station overhears a frame from any sender, it reads the Duration/ID field in the MAC header and sets NAV to that duration, deferring future transmissions until the timer expires. This belief is based on overheard frames only; completely hidden transmitters are not represented in NAV. NAV is a form of distributed reservation — one station's announced intent reaches others without a centralized allocation table.
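The NAV update rule is equally small. A hedged sketch, assuming times in microseconds and illustrative helper names (not 802.11 terminology):

```javascript
// Virtual carrier sense: on every overheard frame, extend the NAV to
// cover the advertised Duration; treat the medium as busy until it expires.
function updateNav(navUs, nowUs, durationUs) {
  // NAV only extends: keep the later of the existing reservation and
  // the one just overheard.
  return Math.max(navUs, nowUs + durationUs);
}

function virtualCarrierBusy(navUs, nowUs) {
  return nowUs < navUs;
}

// Overhear a frame at t = 1000 us reserving 500 us, then a shorter one:
let nav = 0;
nav = updateNav(nav, 1000, 500); // 1500: defer until t = 1500 us
nav = updateNav(nav, 1100, 200); // still 1500: shorter reservation ignored
```

Note what the sketch cannot fix: a transmitter whose frames never reach this station contributes nothing to `nav`, which is exactly the hidden-terminal gap.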
**Time**: CSMA/CA uses prescribed time with event-driven execution. The standard defines SIFS (16 µs for 802.11a/g), a slot time (9 µs for 802.11a/g OFDM; 20 µs for legacy 802.11b DSSS), and DIFS, which is derived from them (DIFS = SIFS + 2 × slot = 34 µs for 802.11a/g). Backoff counters decrement every slot while the medium is idle. Transmissions take 1–20 milliseconds depending on frame size and data rate (ranging from 6 Mbps for OFDM to 1300+ Mbps for 802.11ac). ACK feedback arrives within tens of microseconds of the frame's end. The protocol is synchronous at the microsecond scale (slots are tightly defined by clock division from the PHY) but asynchronous at the millisecond scale (when a station decides to attempt transmission depends on its random backoff, which is not coordinated across stations).
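The inter-frame spaces are not independent constants: DIFS is derived from SIFS and the slot time, which is why the timing numbers in this section hang together. A one-function sketch:

```javascript
// 802.11 defines DIFS as SIFS plus two slot times, so a PHY's slot
// duration fixes its DIFS.
function difs(sifsUs, slotUs) {
  return sifsUs + 2 * slotUs;
}

const difs11a = difs(16, 9);  // 802.11a/g OFDM: 16 + 2*9  = 34 us
const difs11b = difs(10, 20); // 802.11b DSSS:  10 + 2*20 = 50 us
```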
**Coordination**: Fully distributed, with no central authority. Each station makes independent decisions: "Is the medium idle for DIFS? Did my backoff counter expire? If yes to both, transmit." Collisions emerge naturally when two stations' local observations diverge — both sense idle, both count down to zero, both transmit at the same moment. The protocol expects collisions and recovers from them retroactively via ACK timeout. There is no pre-transmission negotiation or synchronization. The coordination is emergent: global order (access arises across many stations) emerges from local rules (each station's decision logic).
**Interface**: Stations communicate via broadcast frames. Each frame carries a **Duration field** that advertises how long the transmission will occupy the medium. Overhearing stations read this field and set their NAV, effectively implementing a distributed reservation. The Duration field is the protocol's primary global information signal — it allows a station that did not transmit the frame to infer when the medium will be free again. Frames also carry source and destination MAC addresses, a sequence number, and a CRC (for error detection). This is weak global information (only overheard frames are visible, and hidden transmitters cannot be seen), but it is the best a distributed system can do without infrastructure. The absence of infrastructure is both the protocol's strength and its limitation.
### Closed-Loop Dynamics
CSMA/CA implements a feedback loop, but it is delayed and noisy. Collision → no ACK → timeout → backoff increase → longer wait → fewer attempts → fewer future collisions. The loop stabilizes under low load: collisions are rare, backoff stays small, attempt rates stay high. But at high load, positive feedback dominates. Many stations attempt → many collide → all increase backoff → medium appears idle (because busy periods are short) → all attempt together → collision. The system oscillates, and useful channel utilization saturates at roughly 30% of the nominal rate once collisions, backoff idle time, and protocol overhead are accounted for. Beyond this point, adding stations mostly adds airtime lost to collisions rather than delivered data. This is not a bug; it is the inherent ceiling of any CSMA/CA protocol on a single shared medium.
::: {.callout-tip}
## Interactive: CSMA/CA Throughput vs. Station Count
Use the slider below to vary the number of competing stations and observe how throughput collapses as contention increases. The simulation models the binary exponential backoff dynamics described above.
:::
```{ojs}
//| label: fig-csma-throughput
//| fig-cap: "CSMA/CA throughput as a function of competing stations. As station count grows beyond ~10, collision probability rises faster than backoff can compensate, and throughput degrades steadily."
viewof numStations = Inputs.range([1, 80], {
value: 5,
step: 1,
label: "Number of stations"
})
viewof slotTime = Inputs.range([9, 50], {
value: 20,
step: 1,
label: "Slot time (µs)"
})
{
// Analytical model: Bianchi's CSMA/CA throughput approximation
// p_tx = probability a station transmits in a slot
// For n stations with CW_min = 31:
// Approximate optimal p_tx = 1/n (but protocol uses fixed CW_min)
const CW_min = 31;
const CW_max = 1023;
const payload_bits = 8000; // 1000 bytes
const data_rate = 54e6; // 54 Mbps (802.11a/g)
const T_slot = slotTime * 1e-6;
const SIFS = 16e-6;
const DIFS = 34e-6;
const T_ack = 24e-6; // ACK duration
const T_payload = payload_bits / data_rate;
const T_success = DIFS + T_payload + SIFS + T_ack;
const T_collision = DIFS + T_payload; // no ACK, timeout
const results = [];
for (let n = 1; n <= 80; n++) {
// Bianchi fixed point: the per-slot transmit probability tau and the
// collision probability p determine each other; iterate with damping.
// tau = 2(1-2p) / ((1-2p)(W+1) + p*W*(1-(2p)^m)), with W = CW_min + 1
const W = CW_min + 1;
const m = 5; // backoff stages: 31 -> 63 -> ... -> 1023 is five doublings
let tau = 2.0 / (W + 1);
for (let iter = 0; iter < 50; iter++) {
const p = 1 - Math.pow(1 - tau, n - 1);
const denom = (1 - 2 * p) * (W + 1) + p * W * (1 - Math.pow(2 * p, m));
let next = Math.abs(denom) > 1e-9 ? (2 * (1 - 2 * p)) / denom : tau;
next = Math.min(Math.max(next, 0.001), 1);
tau = 0.5 * tau + 0.5 * next; // damping avoids oscillation near p = 0.5
}
const p_tx = 1 - Math.pow(1 - tau, n); // prob at least one transmits
const p_s = n * tau * Math.pow(1 - tau, n - 1); // prob exactly one transmits
const p_c = p_tx - p_s; // prob collision
const S = (p_s * T_payload) /
((1 - p_tx) * T_slot + p_s * T_success + p_c * T_collision);
results.push({
stations: n,
throughput: Math.max(0, S * data_rate / 1e6),
collision_prob: p_c / Math.max(p_tx, 0.001),
highlight: n === numStations
});
}
const currentResult = results.find(r => r.stations === numStations);
const width = 640;
const height = 360;
const margin = {top: 30, right: 60, bottom: 50, left: 60};
const svg = d3.create("svg")
.attr("viewBox", [0, 0, width, height])
.attr("width", width)
.attr("height", height)
.style("font-family", "system-ui, sans-serif");
const x = d3.scaleLinear()
.domain([0, 80])
.range([margin.left, width - margin.right]);
const y = d3.scaleLinear()
.domain([0, d3.max(results, d => d.throughput) * 1.1])
.range([height - margin.bottom, margin.top]);
const y2 = d3.scaleLinear()
.domain([0, 1])
.range([height - margin.bottom, margin.top]);
// Throughput line
const line = d3.line()
.x(d => x(d.stations))
.y(d => y(d.throughput))
.curve(d3.curveMonotoneX);
// Collision probability line
const collLine = d3.line()
.x(d => x(d.stations))
.y(d => y2(d.collision_prob))
.curve(d3.curveMonotoneX);
// Area under throughput
const area = d3.area()
.x(d => x(d.stations))
.y0(height - margin.bottom)
.y1(d => y(d.throughput))
.curve(d3.curveMonotoneX);
svg.append("path")
.datum(results)
.attr("fill", "#dbeafe")
.attr("d", area);
svg.append("path")
.datum(results)
.attr("fill", "none")
.attr("stroke", "#2563eb")
.attr("stroke-width", 2.5)
.attr("d", line);
svg.append("path")
.datum(results)
.attr("fill", "none")
.attr("stroke", "#dc2626")
.attr("stroke-width", 1.5)
.attr("stroke-dasharray", "5,3")
.attr("d", collLine);
// Current station marker
svg.append("circle")
.attr("cx", x(numStations))
.attr("cy", y(currentResult.throughput))
.attr("r", 6)
.attr("fill", "#2563eb")
.attr("stroke", "white")
.attr("stroke-width", 2);
// Axes
svg.append("g")
.attr("transform", `translate(0,${height - margin.bottom})`)
.call(d3.axisBottom(x).ticks(8))
.append("text")
.attr("x", (width - margin.left - margin.right) / 2 + margin.left)
.attr("y", 40)
.attr("fill", "#333")
.attr("text-anchor", "middle")
.text("Number of Competing Stations");
svg.append("g")
.attr("transform", `translate(${margin.left},0)`)
.call(d3.axisLeft(y).ticks(6).tickFormat(d => d.toFixed(0)))
.append("text")
.attr("transform", "rotate(-90)")
.attr("x", -(height - margin.top - margin.bottom) / 2 - margin.top)
.attr("y", -45)
.attr("fill", "#2563eb")
.attr("text-anchor", "middle")
.text("Throughput (Mbps)");
svg.append("g")
.attr("transform", `translate(${width - margin.right},0)`)
.call(d3.axisRight(y2).ticks(5).tickFormat(d3.format(".0%")))
.append("text")
.attr("transform", "rotate(-90)")
.attr("x", -(height - margin.top - margin.bottom) / 2 - margin.top)
.attr("y", 50)
.attr("fill", "#dc2626")
.attr("text-anchor", "middle")
.text("Collision Probability");
// Annotation
svg.append("text")
.attr("x", x(numStations))
.attr("y", y(currentResult.throughput) - 12)
.attr("text-anchor", "middle")
.attr("fill", "#1e40af")
.attr("font-size", "12px")
.attr("font-weight", "bold")
.text(`${currentResult.throughput.toFixed(1)} Mbps @ ${numStations} stations`);
// Legend
const legend = svg.append("g").attr("transform", `translate(${width - margin.right - 150}, ${margin.top + 5})`);
legend.append("line").attr("x1", 0).attr("x2", 20).attr("y1", 0).attr("y2", 0).attr("stroke", "#2563eb").attr("stroke-width", 2.5);
legend.append("text").attr("x", 25).attr("y", 4).text("Throughput").attr("font-size", "11px").attr("fill", "#333");
legend.append("line").attr("x1", 0).attr("x2", 20).attr("y1", 18).attr("y2", 18).attr("stroke", "#dc2626").attr("stroke-width", 1.5).attr("stroke-dasharray", "5,3");
legend.append("text").attr("x", 25).attr("y", 22).text("Collision prob.").attr("font-size", "11px").attr("fill", "#333");
return svg.node();
}
```
The character of this feedback is a critical limitation. A station detects a collision only by inference: the ACK fails to arrive within the ACK timeout, on the order of 50 microseconds after the frame ends. The signal is fast but ambiguous. A missing ACK cannot distinguish a collision from a frame corrupted by noise or a receiver that simply failed to decode, and the protocol's response is the same in every case: double the contention window and redraw the backoff. Under sustained contention, a frame can spend many milliseconds cycling through escalating backoff stages before it is delivered or dropped. This is why a sudden burst of traffic causes temporary throughput collapse, not gradual degradation: every station infers the same ambiguous signal, all back off together, and the system needs many retry cycles to re-converge.
---
## Hidden and Exposed Terminals: Measurement Failures
The shared-medium anchor's location-dependent measurement creates two pathological cases: hidden and exposed terminals. Both reveal the fundamental asymmetry between environment state (all transmissions everywhere) and a station's local observation (its antenna's view).
### Hidden Terminal Problem
**Setup**: Three stations: A, C (cannot hear each other), and B (hears both A and C). Station A begins transmitting to B. At the same time, station C is also transmitting to B (perhaps C started just before A, or simultaneously). A's carrier sense says "medium idle" because A cannot hear C's simultaneous transmission — C is out of range. A transmits anyway. Both A's and C's frames arrive at B's receiver, colliding and destroying both. A never learns the cause — it only experiences ACK timeout and backs off. C, likewise, does not know A is transmitting: the collision happens at B's receiver, where the two frames overlap and neither can be decoded.
Neither station learns that they interfere. A backs off and retries later. If C is still transmitting (or transmits again), A will collide again. The backoff delays but does not resolve the root cause. This is why hidden terminals cause throughput collapse: two independent stations, unaware of each other, continuously interfere, each backing off independently, neither learning to coordinate.
### Exposed Terminal Problem
**Setup**: Four stations: A, B, C, D. Station C transmits to D; A hears C's transmission (A's carrier sense says busy) and defers, respecting the medium. But A's intended transmission to B would not interfere with C's transmission to D because B is geographically separate from D. A has wasted transmission opportunity. The measurement is too conservative — it prevents interference that cannot occur.
**Why it matters**: Exposed terminals reduce throughput by causing unnecessary deferral. A device that could transmit without causing harm waits instead, wasting airtime. This is inefficient but not catastrophic — no collision occurs, just underutilization. In contrast, hidden terminals cause collisions and are far worse.
**Root cause**: Both problems stem from the same root: **the measurement signal (what one antenna hears) does not reflect the environment state (all actual transmissions and their effects)**. Electromagnetic propagation depends on location, path loss, and obstacles. A's antenna at location LA hears C's transmission strongly only if C is close or unobstructed. If C is distant or blocked, A measures nothing even though C's transmission exists.
**Location-Dependent Measurement and State Invariant Failure**: The critical insight is that carrier sense—based on energy detection at one point in space—is inherently location-dependent. When A and C cannot hear each other's simultaneous transmissions to B, A has no signal of the collision occurring at B's receiver. Conversely, when A hears C's unrelated transmission, A conservatively assumes its own transmission might interfere, even though geometry makes interference impossible. The hidden terminal reveals a State invariant failure: A's belief (medium appears idle) diverges from the environment (C is transmitting). The exposed terminal reveals excessive conservatism: A's belief (medium is busy, I must wait) prevents efficiency gains that would not cause harm. @fig-hidden-exposed-state through @fig-hidden-exposed-interface illustrate both failure modes through each of the four invariants — observe how the same carrier-sense mechanism produces opposite errors depending on spatial geometry.
{{< include embeds/v08a_hidden_exposed_state_embed.qmd >}}
{{< include embeds/v08b_hidden_exposed_time_embed.qmd >}}
{{< include embeds/v08c_hidden_exposed_coordination_embed.qmd >}}
{{< include embeds/v08d_hidden_exposed_interface_embed.qmd >}}
This spatial geometry of hidden and exposed terminals shows why distributed medium access based on local carrier sense cannot fully solve medium access without additional mechanisms. The measurement is fundamentally incomplete—a single antenna samples one point in space, but the true environment is the superposition of all transmissions everywhere. This gap is inherent to any system where coordination relies on local sensing without global knowledge.
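A toy geometric model makes both failure modes mechanical. The sketch below reduces stations to points on a line with a fixed hearing range; the 1-D geometry, the range value, and the function names are all illustrative simplifications, not a propagation model:

```javascript
// Stations are positions (meters) on a line; a station hears another
// only within RANGE. Carrier sense = "do I hear any active transmitter?"
const RANGE = 100;
const hears = (a, b) => Math.abs(a - b) <= RANGE;

// Hidden terminal: another active sender is inaudible to `sender` but
// audible at `receiver`, so the collision at the receiver goes undetected.
function hiddenFrom(sender, receiver, otherSender) {
  return !hears(sender, otherSender) && hears(receiver, otherSender);
}

// Exposed terminal: `sender` hears a transmission whose receiver is out
// of the sender's range, so deferral prevents nothing.
function exposedTo(sender, otherSender, otherReceiver) {
  return hears(sender, otherSender) && !hears(sender, otherReceiver);
}

// Chapter geometry: A at 0, B at 80, C at 160. A and C cannot hear each
// other, but both reach B, so C is hidden from A for the A -> B frame.
const hidden = hiddenFrom(0, 80, 160);   // true
// C at 160 sending to D at 260, with A at 80: A defers needlessly.
const exposed = exposedTo(80, 160, 260); // true
```

The same `hears` predicate drives both errors: hidden terminals are a false "idle", exposed terminals a false "must defer".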
### RTS/CTS: Measurement Improvement
The framework predicts that fixing measurement/environment divergence requires either improving measurement or centralizing coordination. RTS/CTS attempts measurement improvement. Instead of transmitting data immediately, A first sends a short RTS frame to B announcing intent and duration (e.g., "I will transmit for 500 microseconds"). B replies with CTS. Overhearing stations (including C) read the CTS and set NAV, deferring. The RTS and CTS frames are short (20 and 14 bytes of MAC frame, respectively) and sent at a low basic rate, increasing the probability that all nearby stations decode the reservation. If C overhears the CTS, C will defer even if C cannot hear A's actual data.
However, RTS/CTS is incomplete. If station C is completely hidden from both A and B — out of range of all their signals — C will not hear RTS/CTS and will not defer. The fundamental problem (location-dependent measurement) is not solved, only mitigated. This is why RTS/CTS is optional in 802.11: it trades overhead for robustness. Dense networks where hidden terminals are common enable RTS/CTS. Sparse deployments disable it to reduce overhead. Modern WiFi deployments in crowded conference rooms often enable RTS/CTS; rural WiFi typically disables it.
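The enable/disable trade-off can be made quantitative with a back-of-envelope airtime model. All constants below are rough, illustrative values, and the model ignores retries, preambles, and rate differences:

```javascript
// Expected airtime per delivered frame, with and without RTS/CTS.
// Without: a collision wastes a full data frame before the retry.
// With: every frame pays a fixed handshake, but a collision wastes
// only the short RTS. Times in microseconds; rough illustrative values.
function expectedAirtime(pCollision, tDataUs, useRts) {
  const SIFS = 16, T_RTS = 52, T_CTS = 44;
  if (useRts) {
    const handshake = T_RTS + SIFS + T_CTS + SIFS;
    return pCollision * T_RTS + handshake + tDataUs;
  }
  return pCollision * tDataUs + tDataUs;
}

// Long frames under heavy contention favor the handshake...
const longWith = expectedAirtime(0.3, 2000, true);     // ~2144 us
const longWithout = expectedAirtime(0.3, 2000, false); // ~2600 us
// ...short frames do not: the fixed overhead dominates.
const shortWith = expectedAirtime(0.3, 200, true);     // ~344 us
const shortWithout = expectedAirtime(0.3, 200, false); // ~260 us
```

This is the intuition behind the per-deployment RTS threshold: enable the handshake only for frames long enough that a collision costs more than the reservation.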
### Centralization as the Alternative
The alternative — centralized coordination — eliminates hidden terminals by construction. A base station knows the location and channel state of every device (via measurement feedback). It schedules transmissions and tells devices when to send and receive. There are no hidden terminals because the scheduler has perfect global information. This is the cellular approach, which we explore in §2.5.
---
## 802.11 Evolution: Movement Along the Coordination Axis
802.11's evolution from 802.11b (1999) through 802.11n (2009) to 802.11ac (2013) to 802.11ax (2021) is a systematic retreat from distributed CSMA/CA toward centralized scheduling. The driver is capacity: as channels widened from 22 MHz to 40 MHz to 80 MHz to 160 MHz, and as antenna arrays multiplied, the dimensionality of the shared medium expanded. Distributed CSMA/CA, optimized for single-antenna, narrow-band systems, cannot exploit these new dimensions.
### 802.11b (1999)
Single antenna, 22 MHz channel, DSSS (Direct-Sequence Spread Spectrum) modulation, maximum 11 Mbps data rate. Every station contends for the single shared channel via CSMA/CA with 20 microsecond slot times. A collision on the unsplit medium is catastrophic — the frame is lost. The protocol works well for sparse networks (home, small office); begins to struggle at approximately 30 devices. Above this density, collisions dominate, throughput collapses to a few Mbps, and users experience "WiFi is slow" frustration. A 22 MHz channel cannot be subdivided; there is only one shared resource.
### 802.11n (2009)
Introduces MIMO (Multiple-Input Multiple-Output) — each device may have multiple antennas, and the AP can use different spatial streams simultaneously. Also widens channels (40 MHz option). Data rates reach 600 Mbps theoretically. But CSMA/CA is unchanged — devices still compete for the medium as a single resource, unaware that multiple spatial dimensions now exist. The protocol has no mechanism to allocate devices to spatial streams. Uplink remains purely contention-based. Improvement is implicit (transmitter/receiver negotiate spatial streams dynamically), not explicit. A device that could transmit on a free spatial stream still waits if the medium appears busy due to another device using a different stream. Throughput improves somewhat due to higher modulation rates, but the contention ceiling remains near 30%.
### 802.11ac (2013)
Channels up to 80 MHz (later 160 MHz), up to 8 spatial streams, MU-MIMO downlink. Here, the AP can explicitly transmit to multiple devices simultaneously using different spatial streams in the same transmission opportunity (TXOP): separate frames, each beamformed toward a different client, share the same airtime. But this is downlink-only. Uplink still uses CSMA/CA contention. Devices compete for the medium; the AP chooses which devices to send to. The coordination is asymmetric: downlink is partially scheduled (the AP controls multiple simultaneous downlink transmissions), uplink is contended (devices still use CSMA/CA).
### 802.11ax (2021)
OFDMA (Orthogonal Frequency Division Multiple Access). The AP divides the channel into fine-grained resource units (RUs) — contiguous groups of frequency subcarriers allocated per transmission opportunity. A 20 MHz channel can be split into as many as nine 26-tone RUs (the smallest size), or fewer, larger RUs up to the full 242-tone channel; an 80 MHz channel supports up to 37 26-tone RUs. The AP explicitly allocates RUs to devices every transmission opportunity (typically 1–2 ms): it issues a trigger frame announcing who transmits when and on which RU, and both downlink and uplink can be scheduled. Within a triggered exchange, no device contends, because RUs are discrete and non-overlapping. CSMA/CA does not vanish entirely: the AP itself still contends against neighboring networks for each transmission opportunity, and contention-based access remains for unscheduled traffic. Within a BSS, however, access becomes allocation, not competition.
### Evolution as Expansion of Dimensions
The evolution mirrors increasing dimensionality and decreasing contention. CSMA/CA works for one dimension (single shared channel). When dimensions multiply (frequency subcarriers, spatial streams, time slots), the protocol cannot allocate them. A CSMA/CA transmitter makes a binary decision (transmit or defer) on a resource it now poorly understands. How many spatial streams are available? Which frequency subcarriers are free? The transmitter has no mechanism to answer these questions. The medium becomes a multi-dimensional resource that distributed binary decisions cannot effectively allocate.
Centralized scheduling, by contrast, exploits all dimensions. The AP sees all devices' channel quality on all RUs and allocates optimally (or heuristically). This requires:
1. **Measurement**: Devices must report channel state (CQI — Channel Quality Indicator) to the AP. This takes uplink bandwidth (5–10% overhead).
2. **Computation**: The scheduler must solve an allocation problem every TTI (transmission time interval, 1–2 ms). This is a combinatorial optimization; real schedulers use greedy heuristics.
3. **Infrastructure**: An AP must exist. OFDMA scheduling presumes a central scheduler; ad hoc operation is effectively abandoned in 802.11ax.
4. **Synchronization**: All devices must be synchronized to TTI boundaries. Distributed networks have no common clock; centralized networks rely on AP beacons.
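The allocation step in point 2 can be sketched in its simplest, greedy (max-CQI) form. Everything here is a toy stand-in for the real trigger-frame and CQI signaling:

```javascript
// cqi[s][r] = station s's reported channel quality on resource unit r.
// Greedy max-CQI: give each RU to whichever station reports the best
// channel on it. Cost is O(stations x RUs) per scheduling interval.
function greedySchedule(cqi) {
  const numRus = cqi[0].length;
  const grants = [];
  for (let r = 0; r < numRus; r++) {
    let best = 0;
    for (let s = 1; s < cqi.length; s++) {
      if (cqi[s][r] > cqi[best][r]) best = s;
    }
    grants.push(best); // RU r granted to station `best`
  }
  return grants;
}

// Two stations, three RUs: station 0 is strong on RU 0, station 1 elsewhere.
const grants = greedySchedule([
  [15, 3, 7],
  [4, 12, 9],
]); // [0, 1, 1]
```

Max-CQI maximizes instantaneous throughput but starves cell-edge users; real schedulers temper it with fairness terms, proportional fair being the classic compromise.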
802.11n → 802.11ax is not a feature addition; it is a fundamental shift from **distributed contention** to **centralized allocation**. The interface changes: instead of competing for undefined medium time, devices receive explicit grant-of-resource assignments. The state changes: instead of local backoff counters, the AP maintains a global allocation table. The time handling changes: instead of asynchronous contention-based access, allocation is synchronous and deterministic.
The shift reflects changing deployments. Early WiFi (802.11b/g) aspired to ad hoc capability — no infrastructure needed. Modern WiFi is virtually always AP-based. The institutional assumption (infrastructure exists and is managed) makes centralization feasible and attractive. By 802.11ax, the WiFi standard essentially requires an AP; ad hoc networks are rare and poorly supported.
---
## Cellular Access: Centralized from Day One
Cellular networks took the opposite path. From 1G (1981) to 5G (2019), they have been centralized schedulers. The anchor constraint is different: **licensed spectrum exclusivity**. A carrier owns a frequency band by regulatory grant. This ownership justifies infrastructure investment and eliminates the need for distributed contention — there are no competing unlicensed networks; the carrier has exclusive access.
The evolution of spectrum access schemes (FDMA → TDMA → CDMA → OFDMA) is not a retreat from distribution (distribution was never an option). It is a progression in **allocation granularity and measurement feedback speed**. Each generation disaggregated the shared resource into finer pieces and closed measurement-to-allocation feedback loops tighter.
### FDMA (1G, 1980s)
Divide spectrum into frequency channels (e.g., 30 kHz each). Assign one channel per active call for the call's duration (minutes to hours). The base station tracks which channels are in use. Capacity: approximately 200 users per 6 MHz cell (one channel per user). No collisions (non-overlapping channels). Disadvantage: poor spectrum utilization (a channel is "owned" by a call even when the call is silent, as in a phone conversation with pauses); cannot rapidly reassign. A user occupies a 30 kHz channel for 3 minutes even if speaking only 40% of the time; roughly 60% of that channel's capacity is wasted.
### TDMA (2G, 1990s)
Further divide each frequency channel into time slots (e.g., GSM has 8 slots per 4.615 ms frame). Assign users a (frequency, slot) pair. The base station broadcasts frame timing via a synchronization channel so devices can synchronize to the frame boundary. Capacity: with GSM's 200 kHz carriers, a 6 MHz cell holds 30 carriers × 8 slots = 240 simultaneous users; the larger gain over analog FDMA comes from digital speech coding, which fits a call into one-eighth of a carrier. Advantage: higher utilization (slots are reused across frames; a user may transmit in slot 3 of every frame, but slot 1 is available for another user). Disadvantage: requires tight synchronization; handoff between cells is complex because slot timing must align across base stations. If cell A has frame timing offset from cell B, a device moving between them must resynchronize, causing brief disconnection.
### CDMA (3G, 2000s)
All users share the same wideband frequency but are assigned distinct, (near-)orthogonal spreading codes. User A's data is spread with a high-rate code (e.g., [+1, -1, +1, +1, -1, -1, +1, -1]); user B's data is spread with a different code. Both transmit simultaneously on the same frequency. The receiver decodes by correlating the received signal with its own code, recovering only its own data (and seeing other users' signals as noise-like interference). Capacity: the spreading factor bounds how many code channels fit (factors of 16 to 128 allow 16–128 simultaneous users per carrier, though interference keeps practical loading below that). Advantage: soft handoff (a device can be on multiple cells at once; the network combines their signals). Disadvantage: **near-far problem** — a strong signal (user near the base station) drowns out weak signals (user far away); IS-95 runs closed-loop power control at 800 commands per second (one every 1.25 ms). A mobile 100 meters from the base station transmits at +20 dBm; a mobile 1 km away (10× farther) experiences roughly 20 dB more free-space path loss and must transmit at +40 dBm to be received at the same power. Unchecked, that extra power is interference to every other user. Power control loops run continuously to keep all users' signals at comparable strength at the base station.
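The code-separation trick is small enough to demonstrate end to end. A sketch using length-4 Walsh codes (real systems use far longer codes, and uplink codes are only approximately orthogonal):

```javascript
// Two users transmit simultaneously on the same frequency, separated by
// orthogonal spreading codes; despreading correlates the summed signal
// against one user's code.
const codeA = [1, 1, 1, 1];
const codeB = [1, -1, 1, -1]; // dot(codeA, codeB) = 0: orthogonal

const spread = (bit, code) => code.map((c) => bit * c);

function despread(signal, code) {
  const corr = signal.reduce((sum, v, i) => sum + v * code[i], 0);
  return corr > 0 ? 1 : -1; // sign of the correlation recovers the bit
}

// User A sends +1 and user B sends -1 at the same time:
const txA = spread(1, codeA);
const txB = spread(-1, codeB);
const channel = txA.map((v, i) => v + txB[i]); // superposition on the air

const bitA = despread(channel, codeA); //  1: A's bit survives
const bitB = despread(channel, codeB); // -1: B's bit survives
```

If the codes arrive at very different powers, the weaker user's correlation can be swamped, which is the near-far problem in miniature.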
### OFDMA (4G/5G, 2010s–present)
Divide spectrum into small subcarriers (15 kHz spacing in LTE; 15–120 kHz in 5G NR) and short scheduling intervals (1 ms in LTE; as short as 0.125 ms in 5G NR). A resource block (RB) is a group of 12 subcarriers (180 kHz in LTE) for one scheduling interval. The scheduler allocates RBs to users. Capacity: tens of thousands of users per 20 MHz cell (LTE has 100 RBs per 20 MHz; each can be allocated to a different user and reused across intervals, so the same RB can serve different users in different slots). Advantage: highest utilization, fine-grained allocation, fast (per-TTI) adaptation to channel changes. Disadvantage: requires frequent channel feedback (each user can report CQI as often as every 1 ms), a powerful scheduler, and high signaling overhead. A mobile device must measure its channel quality across the band (up to 100 RB-granularity measurements per 20 MHz) and report to the base station (roughly 100 × 4 bits = 400 bits per ms, or ~400 Kbps at full granularity). This feedback overhead is not negligible, but it is justified by the allocation gains.
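The feedback-overhead arithmetic above, made explicit (illustrative function name; LTE-style constants from this section):

```javascript
// CQI feedback rate = (RBs reported) x (bits per report) / reporting period.
function cqiOverheadBps(numRbs, bitsPerCqi, periodSeconds) {
  return (numRbs * bitsPerCqi) / periodSeconds;
}

// 100 RBs x 4 bits every 1 ms = 400 bits/ms = 400 kbps of uplink overhead.
const overheadBps = cqiOverheadBps(100, 4, 1e-3);
```

Real systems compress this (wideband CQI, subband reporting, aperiodic triggers), but the scaling with bandwidth and reporting rate is the point.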
### Progression Explained
The progression is from static allocation (FDMA, channels held for call duration) to dynamic allocation (OFDMA, RBs reallocated every 1 ms). This requires tighter measurement loops:
- **FDMA**: No feedback (channel quality is static for a call).
- **CDMA**: Power control feedback (the base station measures each user's received power and sends up/down commands 800 times per second, i.e., every 1.25 ms in IS-95). **Feedback content: one bit about one scalar (transmit power) per 1.25 ms.**
- **OFDMA**: CQI feedback for scheduling (users measure their channel on each RB and report it, quantized to 4 bits, as often as every 1 ms). **Feedback content: per-RB channel state every 1 ms.** The loop is not merely fast; it is frequency-selective, telling the scheduler *where* in the band each user's channel is good rather than only whether to nudge power up or down.
As measurement feedback grows richer, schedulers can adapt in more dimensions, improving spectral efficiency. A mobile entering a shadow can be reallocated away from the faded frequencies within 1 ms (OFDMA), merely commanded to raise its power (CDMA), or not helped at all (FDMA).
### Framework Analysis
The anchor constraint is **spectrum scarcity + licensed exclusivity**. A carrier has regulatory authority over a frequency band and the obligation to manage it efficiently. This enables centralization: the carrier operates the base station scheduler, and devices comply with allocations. No distributed contention is legal or necessary.
The measurement problem in cellular is different from WiFi. There are no hidden terminals (the base station is central, seeing all devices). Instead, the measurement challenge is **channel quality**: each device's data rate depends on its distance from the base station, fading, interference, and other factors. The scheduler needs to know each device's channel quality to allocate efficiently. CDMA solved this via power control feedback (the base station measures each user's received signal strength and sends power up/down commands). OFDMA solved it via CQI reporting (users measure their own channel quality on each RB and report it explicitly, usually quantized to 4 bits per RB per report). The scheduler observes all CQI reports and allocates RBs to maximize throughput.
This measurement-allocation loop is a **closed-loop system**: observe CQI → allocate RBs → users transmit → throughput observed → next CQI-based allocation. It is faster than 802.11 (1 ms loop vs. 50 ms timeout) and more accurate (explicit channel information vs. inferred via ACK absence).
---
## Contrast: WiFi vs. Cellular
The two paths represent different answers to the same problem. The choice is not technical; it is institutional.
| **Dimension** | **802.11 (Unlicensed)** | **Cellular (Licensed)** |
|---|---|---|
| **Spectrum** | Shared, unmanaged (2.4, 5, 6 GHz) | Licensed, exclusive to operator |
| **Infrastructure** | Optional (ad hoc possible) | Required (base station necessary) |
| **Coordination** | Distributed (CSMA/CA → OFDMA) | Centralized from day one |
| **Time handling** | Asynchronous contention | Synchronous, TTI-based scheduling |
| **State** | Local (per-device queue, backoff) | Global (base station allocation table) |
| **Measurement** | Distributed (carrier sense, NAV) | Centralized (CQI reports) |
| **Throughput limit** | ~30% utilization (CSMA/CA) | >70% utilization (OFDMA, with scheduling) |
| **Latency** | Variable (backoff delay, can reach 100s ms) | Predictable (scheduling delay ~1 ms) |
| **Deployment cost** | Low (no base station needed) | High (base stations, spectrum license) |
| **Scaling** | Poor (collisions increase w/ density) | Good (RB allocation scales) |
Both reach similar spectral efficiency limits when optimized (OFDMA for cellular, OFDMA for WiFi 802.11ax), but the paths diverge. WiFi started distributed and has gradually centralized as capacity demands have grown. Cellular was always centralized because the regulatory and economic model (licensed exclusive spectrum) made centralization inevitable and justified the infrastructure cost.
---
## Last-Mile Access Technologies: A Taxonomy
The shared-medium problem appears throughout access networks. The physical medium (radio spectrum, cable plant, fiber) determines the coordination model. We classify last-mile technologies by their medium and the coordination requirement it imposes.
### WiFi (802.11)
**Medium**: Shared radio spectrum (2.4 GHz, 5 GHz, 6 GHz).
**Evolution**: CSMA/CA (802.11b/g/n) → partially centralized (802.11ac) → fully centralized OFDMA (802.11ax).
**Scalability**: Tens of devices (802.11b/g, poor above ~30 devices due to collision dominance); improving with 802.11ax OFDMA (feasible for hundreds in dense deployment).
**Advantage**: Low infrastructure cost; flexible deployment (AP-based or ad hoc); high throughput in modern versions.
**Disadvantage**: Shared spectrum (unlicensed, coexists with other networks); performance degrades with density; vulnerable to hidden terminal problem; affected by interference from microwave ovens, cordless phones.
### Cellular (4G/5G)
**Medium**: Licensed radio spectrum (2–6 GHz, operator-managed).
**Evolution**: FDMA → TDMA → CDMA → OFDMA (centralized throughout).
**Scalability**: Thousands of devices per cell; frequency division by RB allocation scales.
**Advantage**: Predictable performance; QoS guarantees; high spectral efficiency; mobility across cells (handoff).
**Disadvantage**: High infrastructure cost (base stations, spectrum licenses, network operations); scheduler complexity; measurement overhead (CQI feedback).
### Cable (DOCSIS 3.1)
**Medium**: Hybrid — downstream is broadcast (fiber+coax, all modems receive all data; filtering at modem); upstream is shared (coax, many modems contend for request slots).
**Coordination**: Asymmetric. Downstream: broadcast, no contention. Upstream: contention-based request slots (closer to slotted ALOHA than CSMA/CA, since modems cannot hear each other on the upstream); a CMTS (Cable Modem Termination System) centralizes upstream grant allocation.
**Scalability**: Good (thousands of modems per CMTS; spatial diversity via multiple CMTS in network).
**Advantage**: Leverages existing cable infrastructure; asymmetric (downstream > upstream) matches typical usage (video delivery is downlink-heavy).
**Disadvantage**: Upstream request latency (~10–20 ms for request processing + grant); shared upstream contention; less flexible than dedicated fiber.
### Fiber (FTTH)
**Medium**: Dedicated, point-to-point connections (one fiber per home, dedicated wavelength or timeslot).
**Coordination**: Minimal (no sharing, no arbitration needed).
**Scalability**: Excellent (no shared resource limits; scales with fiber deployment density).
**Advantage**: Lowest latency; highest throughput; no contention; symmetrical upstream/downstream.
**Disadvantage**: Highest cost (civil works, trenching, fiber deployment); requires right-of-way agreements; takes 5–10 years to deploy at city scale.
### Latency Breakdown
WiFi (CSMA/CA): Contention delay (median ~5 ms, 95th percentile ~50 ms) + transmission time (frame size / data rate, typically 1–10 ms).
Cellular (OFDMA): Scheduling delay (TTI cycle, ~1 ms) + transmission time (typically 1–10 ms). More predictable due to centralized scheduling.
Cable (DOCSIS): Request latency (~10 ms) + grant latency (~10 ms) + transmission time.
Fiber: Just transmission time (no contention, lowest latency).
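The breakdown can be expressed as a small latency-budget calculation. The component values are the text's typical figures, with an assumed 5 ms transmission time standing in for the 1–10 ms range; real deployments vary widely:

```python
# Rough per-technology latency budgets (milliseconds), per the breakdown above.
def total_latency_ms(components):
    return sum(components.values())

budgets = {
    "wifi_csma_ca":   {"contention_median": 5, "transmission": 5},
    "cellular_ofdma": {"scheduling_tti": 1, "transmission": 5},
    "cable_docsis":   {"request": 10, "grant": 10, "transmission": 5},
    "fiber":          {"transmission": 5},   # no contention component at all
}
for tech, parts in budgets.items():
    print(f"{tech}: {total_latency_ms(parts)} ms")
```

The ordering (fiber < cellular < WiFi median < cable) follows directly from which coordination components each medium forces into the budget.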
---
## Closed-Loop Dynamics Across Systems
The framework's closed-loop reasoning principle applies to all medium access systems. We compare the feedback structures and their consequences.
### CSMA/CA
**Loop**: Collision → ACK timeout (50 ms) → backoff increase → reduced attempt rate → fewer collisions.
**Feedback latency**: 50 ms (the timeout is conservative to avoid false positives due to power variability).
**Stability**: Unstable under high load (positive feedback dominates: all defer → medium idle → all attempt → collision). Oscillations and throughput collapse above ~30% utilization.
**Timescale for adaptation**: hundreds of milliseconds to seconds under sustained congestion (a station that has backed off to CW=1023 accumulates tens of milliseconds of backoff plus a 50 ms timeout per failed attempt; repeated retry series stretch this toward seconds).
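The backoff dynamics can be sketched with the text's parameters (50 ms ACK timeout, CW doubling up to 1023) plus an assumed 20 µs slot time, which is not given in the text. A single retry series accumulates a few hundred milliseconds; under sustained load the station re-enters backoff repeatedly, stretching the stall further:

```python
SLOT_US = 20          # assumed slot time (not specified in the text)
ACK_TIMEOUT_MS = 50   # the text's ACK timeout

cw = 31               # CWmin for 802.11 DCF
total_ms = 0.0
for attempt in range(7):                  # typical retry limit
    mean_backoff_ms = (cw / 2) * SLOT_US / 1000   # average backoff at this stage
    total_ms += ACK_TIMEOUT_MS + mean_backoff_ms
    cw = min(2 * cw + 1, 1023)            # binary exponential backoff, capped at CWmax
print(f"one full retry series: ~{total_ms:.0f} ms (repeats under sustained load)")
```

Note that the fixed 50 ms timeouts, not the backoff slots themselves, dominate the total: the feedback latency is the bottleneck, which is exactly the chapter's point.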
### 802.11ax OFDMA
**Loop**: Device channel quality measured (CQI report) → AP scheduler allocates RUs → devices transmit on allocated RUs → throughput observed → next allocation (1 ms loop).
**Feedback latency**: 1 ms (one TTI).
**Stability**: Stable within a scheduled exchange because the AP controls all allocation: clients transmit only on assigned, non-overlapping RUs, so collisions among the AP's own scheduled clients are eliminated by design (contention persists only for unscheduled traffic and between neighboring networks).
**Timescale for adaptation**: milliseconds. A device entering a fade can be reallocated within 1–2 ms.
### Cellular OFDMA
**Loop**: Device reports CQI (1 ms) → scheduler allocates RBs based on CQI + queue depth + fairness policy → users transmit → success/failure observed → next allocation.
**Feedback latency**: 1 ms (per-TTI scheduling).
**Stability**: Stable; centralized control, no contention.
**Timescale for adaptation**: milliseconds; similar to 802.11ax.
### Cable DOCSIS
**Loop**: Modem sends request → CMTS receives request → CMTS grants upstream slot → modem transmits → success observed → next request.
**Feedback latency**: ~10–20 ms (request processing + grant signaling delay).
**Stability**: Stable (CMTS controls upstream allocation, no contention on granted slots).
**Timescale for adaptation**: 10–20 ms; slower than wireless scheduling but faster than 802.11 CSMA/CA.
### Pattern: Tighter Loops Enable Higher Utilization
**The prediction**: Tighter feedback loops enable stable operation at higher utilization. CSMA/CA (50 ms feedback) stabilizes at ~30% utilization. OFDMA (1 ms feedback) stabilizes at >70%. The measurement speed and decision speed determine throughput ceiling.
This is why 802.11ax WiFi (with CQI feedback and OFDMA) can operate at far higher capacity than 802.11n (CSMA/CA). The measurement/decision loop is 50× faster, enabling the scheduler to keep all devices occupied without collisions. Conversely, any system with slow feedback (e.g., satellite systems with 250 ms round-trip latency) cannot use tight feedback loops and must resort to mechanisms that operate at coarser timescales (e.g., explicit reservation protocols, not CSMA/CA).
---
## Generative Exercises
**Exercise 1: Satellite Medium Access**
Geostationary satellites have approximately 250 ms round-trip latency. Propose a medium access protocol for multiple ground stations transmitting to a satellite. Which coordination model (distributed contention or centralized scheduling) is feasible? What happens to the closed-loop feedback timescale if you use CSMA/CA with a 250 ms timeout?
How would you redesign? Consider three alternatives:
1. **Pure CSMA/CA with long timeout**: Each ground station listens, waits for DIFS, applies random backoff (where backoff slot time would need to be ~100 ms to accommodate the 250 ms RTT). What is the contention window? Is it practical?
2. **Centralized scheduler with CQI reporting**: The satellite maintains a reservation table and allocates "transmission windows" to ground stations. Ground stations report their channel quality via feedback channel. What is the minimum control overhead? How does the 250 ms latency affect the tightness of the scheduler loop?
3. **Reservation-based protocol**: Ground stations reserve future transmission slots by sending a request to the satellite, which grants slots. How long is the request-grant cycle? Can you achieve >30% utilization?
Trace the dependency graph: latency anchor → feasible measurement signals → coordination choices → closed-loop stability → throughput ceiling.
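For alternative 3, a rough utilization model shows why large reservation windows or pipelined requests are needed to beat 30%. The window sizes and the pipelining variant are illustrative assumptions, not part of the exercise statement:

```python
RTT_MS = 250   # GEO round-trip latency

def utilization(window_ms, pipelined):
    # One request-grant handshake (1 RTT) reserves window_ms of transmission time.
    if pipelined:   # request the next window while transmitting the current one
        return window_ms / max(window_ms, RTT_MS)
    return window_ms / (window_ms + RTT_MS)   # stop-and-wait reservation

for w_ms in (10, 100, 500):
    print(w_ms, round(utilization(w_ms, False), 2), round(utilization(w_ms, True), 2))
```

Small windows are hopeless either way; with windows of several hundred ms, stop-and-wait clears 30% and pipelined requests approach full utilization, at the cost of committing capacity far in advance.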
**Exercise 2: WiFi at Scale**
A large enterprise WiFi network deploys 802.11n (CSMA/CA) APs at high density (one AP every 30 meters). The network has 1,000 devices distributed across the building. Predict the performance degradation compared to a deployment with the same devices on 802.11ax (OFDMA) APs.
What invariant answers change between 802.11n and 802.11ax?
- **State**: 802.11n has per-device backoff counters and NAV timers; 802.11ax has centralized RU allocation table at the AP.
- **Time**: 802.11n is asynchronous contention (random backoff); 802.11ax is synchronous scheduling (every 1–2 ms, AP allocates RUs).
- **Coordination**: 802.11n is fully distributed; 802.11ax is fully centralized (at AP).
- **Interface**: 802.11n frames carry Duration field; 802.11ax trigger frames carry explicit RU allocations.
How does the measurement problem (hidden terminals, carrier sense accuracy) change with density?
In a dense 802.11n deployment (APs every 30 m), each device hears many AP and device signals, but not all (hidden terminals are common in vertical deployments and between buildings). Carrier sense becomes unreliable. With 802.11ax, the local AP schedules all its devices, eliminating hidden terminal collisions from devices served by the same AP (though inter-AP interference remains).
Sketch the throughput curves (number of devices vs. aggregate and per-device throughput) for both. For 802.11n, expect aggregate throughput to rise roughly linearly at low device counts (low contention, minimal collisions), then fall off a sharp cliff (above ~30 devices per AP, collisions dominate). For 802.11ax, expect a gradual rise plateauing at ~70–80% utilization (limited by AP scheduler capacity and CQI overhead), not a cliff.
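A crude numerical model reproduces the cliff-versus-plateau shape. This is not Bianchi's full analysis: the fixed per-slot attempt probability, the 100 Mbps link rate, and the 75% OFDMA utilization are all assumptions for illustration.

```python
LINK_MBPS = 100.0   # assumed aggregate PHY rate
P_ATTEMPT = 0.05    # assumed fixed per-slot attempt probability (no backoff dynamics)

def csma_per_device(n):
    # A slot succeeds only if exactly one of n stations attempts.
    success = n * P_ATTEMPT * (1 - P_ATTEMPT) ** (n - 1)
    return success * LINK_MBPS / n

def ofdma_per_device(n):
    # Scheduler keeps the medium ~75% utilized and splits it evenly.
    return 0.75 * LINK_MBPS / n

for n in (5, 30, 100):
    print(n, round(csma_per_device(n), 2), round(ofdma_per_device(n), 2))
```

Even this toy model shows the qualitative difference: CSMA/CA's per-device throughput collapses as collisions dominate, while OFDMA degrades gracefully as 1/n.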
**Exercise 3: Cellular Scheduler Design**
A cellular base station scheduler must allocate RBs to devices. You have:
- 100 RBs available (e.g., 20 MHz LTE with 100 RBs)
- 50 active devices, each with CQI feedback (how many bits/RB each can reliably decode, ranging from 2 bits/RB at bad SNR to 8 bits/RB at good SNR)
- A fairness requirement (no device starves; each device gets at least one RB per scheduling round)
- A latency requirement (allocate within 1 ms)
Design a greedy allocation algorithm:
```python
def schedule(cqi, pending, fair_share, num_rbs):
    # cqi[d][rb]: bits/RB device d can decode; pending[d]: device has queued data
    used = {d: 0 for d in cqi}           # RBs granted to each device this round
    alloc = {}
    for rb in range(num_rbs):
        eligible = [d for d in cqi if pending[d] and used[d] < fair_share]
        if eligible:
            best = max(eligible, key=lambda d: cqi[d][rb])  # highest CQI wins
            alloc[rb] = best
            used[best] += 1
    return alloc
```
This algorithm is simple, greedy (no global optimization), but works in practice.
What measurement signals drive this?
- **CQI reports** from each device (explicit signal about channel quality)
- **Queue depth** at each device (implicit signal about data availability; in uplink, devices report buffer occupancy; in downlink, the BS observes traffic queues)
- **Fair share** counter (number of RBs already allocated to each device in this round)
What happens if CQI reports are delayed (e.g., 5 ms old)? The scheduler is allocating based on stale information. A device that was at bad SNR 5 ms ago but has moved to good SNR now will be allocated fewer RBs than optimal. Conversely, a device that was at good SNR but has now faded will receive RB allocation and may fail to decode (retransmission required). Throughput and latency degrade. This is why cellular systems use frequent CQI reports (every 1–5 ms in LTE; down to 250 µs in 5G for time-division duplex systems where channel reciprocity is high).
How does the measurement latency affect scheduler quality? Shorter latency enables adaptation to faster fading (mobility-induced or multipath). High-speed mobility (trains, cars on highways) causes channel fading on timescales of 10–100 ms. A scheduler with 5 ms feedback loop can track this. A scheduler with 50 ms feedback loop oscillates (allocates, observes bad result, reallocates). A scheduler with 500 ms feedback loop fails (channel changes too fast to track).
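The three regimes above can be encoded as a simple rule of thumb. The thresholds here are an illustrative reading of the text's numbers (10–100 ms fading, feedback loops of 5/50/500 ms), not standard definitions:

```python
def regime(fading_fast_ms, fading_slow_ms, feedback_ms):
    # "tracks": the loop is comfortably faster than even the fastest fading
    if feedback_ms <= fading_fast_ms / 2:
        return "tracks"
    # "oscillates": the loop is comparable to the fading; decisions chase stale state
    if feedback_ms <= fading_slow_ms:
        return "oscillates"
    return "fails"   # channel decorrelates completely between reports

for fb_ms in (5, 50, 500):
    print(fb_ms, regime(10, 100, fb_ms))
```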
---
## Summary: Anchor Constraints and Coordination Models
Every medium access system is constrained by physics (shared vs. dedicated medium, spectrum licensing, latency). The anchor constraint determines feasible coordination models. The coordination model, combined with the measurement mechanisms available, determines throughput, latency, and scalability.
**802.11**: Anchor is shared, unlicensed spectrum + local sensing. Coordination evolved from fully distributed (DCF, CSMA/CA) to centralized (OFDMA). Throughput limit improved from ~30% (CSMA/CA) to >70% (OFDMA). The evolution was driven by capacity demands and the maturation of AP-based deployments.
**Cellular**: Anchor is licensed spectrum + centralization justified by exclusive operator ownership. Coordination is centralized from day one. Feedback loops tightened (FDMA → TDMA → CDMA → OFDMA) to exploit new dimensions and improve utilization. Modern cellular achieves >80% utilization under optimal conditions.
**Cable**: Anchor is hybrid medium (broadcast downstream, shared upstream). Coordination is asymmetric: downstream broadcast (no contention), upstream centralized (CMTS grants). Mirrors the asymmetry of typical usage (video dominates downstream).
**Fiber**: Anchor is dedicated, point-to-point medium. No sharing, hence no coordination problem. Simplest architecture, highest throughput, highest cost.
The framework predicts that as a shared medium's dimensionality expands (from 1D frequency to 2D frequency+time to 3D frequency+time+space), distributed contention becomes increasingly inefficient, and centralized scheduling becomes necessary. This prediction is borne out by WiFi's evolution: 802.11b (1D, CSMA/CA) → 802.11n (3D with MIMO but still CSMA/CA) → 802.11ax (3D with OFDMA).
---
## References
- Abramson, N. (1970). "The ALOHA System — Another Alternative for Computer Communications." *Proc. AFIPS Fall Joint Computer Conference*.
- Bianchi, G. (2000). "Performance Analysis of the IEEE 802.11 Distributed Coordination Function." *IEEE Journal on Selected Areas in Communications*, 18(3), 535–547.
- Clark, D. (1988). "The Design Philosophy of the DARPA Internet Protocols." *Proc. ACM SIGCOMM*.
- DOCSIS 3.1. "Data Over Cable Service Interface Specifications." CableLabs.
- Jacobson, V. (1988). "Congestion Avoidance and Control." *Proc. ACM SIGCOMM*.
- IEEE 802.11-2020. "Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications." IEEE Standards Association.
- 3GPP TS 36.300. "Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved Universal Terrestrial Radio Access Network (E-UTRAN); Overall Description." 3GPP TSG RAN.
- 3GPP TS 38.300. "NR: NR and NG-RAN Overall Description." 3GPP TSG RAN.
- Jain, R., Chlamtac, I., Hagouel, J., and Shen, C. (1985). "A Quantitative Measure of Fairness and Discrimination for Resource Allocation in Shared Computer Systems." DEC Technical Report.
- McKeown, N. et al. (2008). "OpenFlow: Enabling Innovation in Campus Networks." *ACM SIGCOMM Computer Communication Review*, 38(2).
- Nichols, K. and Jacobson, V. (2012). "Controlling Queue Delay." *ACM Queue*, 10(5).
- Ramakrishnan, K., Floyd, S., and Black, D. (2001). "The Addition of Explicit Congestion Notification (ECN) to IP." *RFC 3168*.
- Saltzer, J.H., Reed, D.P., and Clark, D.D. (1984). "End-to-End Arguments in System Design." *ACM Trans. Computer Systems*, 2(4).
- Wiener, N. (1948). *Cybernetics: Or Control and Communication in the Animal and the Machine*. MIT Press.
---
*This chapter is part of "A First-Principles Approach to Networked Systems" by Arpit Gupta, UC Santa Barbara, licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).*