Channel Capacity: Shannon's Ultimate Speed Limit for Communication


Formula

C = B \cdot \log_2(1 + \text{SNR})
\eta = \frac{C}{B} = \log_2(1 + \text{SNR}) \quad \text{bits/s/Hz}
\text{SNR}_{\text{dB}} = 10 \cdot \log_{10}\left(\frac{S}{N}\right)
C_{\text{BSC}} = 1 - H(p) = 1 + p\log_2(p) + (1-p)\log_2(1-p)
How fast can you communicate? Not how fast you can speak or type, but what is the fundamental physical limit on the rate of reliable information transfer through a noisy channel? Claude Shannon answered this question definitively in 1948 with the channel capacity theorem.

The Shannon-Hartley formula C = B·log₂(1 + S/N) is breathtakingly elegant. Channel capacity C (in bits per second) depends on just two physical parameters: the bandwidth B (how wide a frequency range the channel occupies) and the signal-to-noise ratio S/N (how strong the signal is relative to the noise). Double the bandwidth and you double the capacity. Double the SNR and, at high SNR, you gain roughly one extra bit per second per hertz.

The theorem has two parts, and both are essential. The achievability result says that for any rate R < C, there exists a coding scheme that achieves arbitrarily low error probability. The converse says that for any rate R > C, no coding scheme can drive the error probability to zero. Together, they establish C as a sharp threshold between the possible and the impossible.

This simulator visualizes both the theoretical limit and practical modulation schemes. The left panel shows how capacity scales with SNR, with the Shannon limit as a smooth curve and practical modulations as stepped lines below it. The gap between a modulation scheme and the Shannon curve represents the efficiency lost by using a finite constellation.

The right panel shows the constellation diagram, the geometric representation of the modulation scheme. Each dot represents a possible transmitted symbol. At high SNR, the noise clouds around each point are tight and well separated. As SNR decreases, the clouds expand and begin to overlap, making it impossible for the receiver to distinguish between symbols. This is the geometric intuition behind the capacity limit: you can only pack as many distinguishable symbols as the noise allows.
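The Shannon-Hartley computation is short enough to sketch directly. The numbers below (a 20 MHz channel at 30 dB SNR) are purely illustrative and are not taken from the simulator:

```python
import math

def db_to_linear(snr_db: float) -> float:
    """Convert an SNR given in decibels to a linear power ratio."""
    return 10 ** (snr_db / 10)

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit C = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: a 20 MHz channel at 30 dB SNR.
B = 20e6
snr = db_to_linear(30.0)   # 30 dB -> 1000x power ratio
C = shannon_capacity(B, snr)
eta = C / B                # spectral efficiency, log2(1 + SNR) bits/s/Hz
print(f"C = {C / 1e6:.1f} Mbit/s, eta = {eta:.2f} bits/s/Hz")
```

Note how eta depends only on SNR: at 30 dB it is just under 10 bits/s/Hz, which is why even 1024-QAM (10 bits/symbol) cannot be sustained reliably at that SNR without some residual error rate.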

FAQ

What is Shannon's channel capacity theorem?

The Shannon-Hartley theorem (1948) states that the maximum rate of reliable communication over a band-limited channel with additive white Gaussian noise, bandwidth B, and signal-to-noise ratio S/N is C = B·log₂(1 + S/N) bits per second. This is an absolute limit: below C, error-free communication is possible with appropriate coding; above C, it is mathematically impossible regardless of the coding scheme used.
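The formula section above also lists the discrete-channel counterpart, the binary symmetric channel with capacity C = 1 − H(p). A minimal sketch of that computation:

```python
import math

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel with crossover probability p:
    C = 1 - H(p), in bits per channel use, where H is the binary entropy."""
    if p == 0.0 or p == 1.0:
        return 1.0  # deterministic channel: one full bit per use
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h

print(bsc_capacity(0.0))   # 1.0 -- noiseless
print(bsc_capacity(0.11))  # ~0.5 -- half a bit survives per use
print(bsc_capacity(0.5))   # 0.0 -- a coin flip carries no information
```

A crossover probability of 0.5 is the worst case: the output is independent of the input, so the capacity is zero. A channel that flips every bit (p = 1) is as good as a noiseless one, since the receiver can simply invert its output.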

What is a constellation diagram?

A constellation diagram plots the possible transmitted symbols as points on a two-dimensional plane with in-phase (I) and quadrature (Q) axes. BPSK has 2 points, QPSK has 4 points in a square, 16-QAM has 16 points in a 4×4 grid, and 64-QAM has 64 points in an 8×8 grid. Channel noise smears each point into a cloud. When clouds overlap, the receiver cannot distinguish symbols, causing errors.
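The grid geometry and the noise clouds described above can be sketched in a few lines. This is an illustrative model, not the simulator's implementation; the noise standard deviation is chosen arbitrarily:

```python
import numpy as np

def square_qam(m: int) -> np.ndarray:
    """Ideal symbol positions for square M-QAM as complex I + jQ points;
    e.g. m=16 gives a 4x4 grid at odd-integer coordinates."""
    k = int(round(np.sqrt(m)))
    if k * k != m:
        raise ValueError("square QAM requires m to be a perfect square")
    levels = 2 * np.arange(k) - (k - 1)   # k=4 -> [-3, -1, 1, 3]
    i, q = np.meshgrid(levels, levels)
    return (i + 1j * q).ravel()

# Simulate the noise cloud: each transmitted symbol is smeared by
# independent Gaussian noise on the I and Q axes (sigma is illustrative).
rng = np.random.default_rng(seed=1)
symbols = rng.choice(square_qam(16), size=500)
received = symbols + rng.normal(scale=0.3, size=500) \
                   + 1j * rng.normal(scale=0.3, size=500)
```

With the grid spacing fixed at 2, raising sigma toward 1 makes adjacent clouds overlap, which is exactly the regime where a denser constellation stops being decodable.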

How does SNR affect data rate in practice?

SNR determines which modulation schemes can operate reliably. At low SNR, only simple modulations like BPSK work (1 bit/symbol). As SNR increases, denser modulations become viable: QPSK at moderate SNR (2 bits/symbol), 16-QAM at higher SNR (4 bits/symbol), 64-QAM at high SNR (6 bits/symbol). Modern systems like Wi-Fi 6 and 5G adaptively switch modulation based on measured channel conditions.
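The adaptive switching described above amounts to a lookup from measured SNR to a modulation scheme. A toy sketch of such a link-adaptation table follows; the thresholds are illustrative and are not taken from any real standard:

```python
def pick_modulation(snr_db: float) -> tuple[str, int]:
    """Toy link adaptation: map a measured SNR (dB) to (scheme, bits/symbol).
    Thresholds are hypothetical, for illustration only."""
    if snr_db < 7:
        return ("BPSK", 1)    # sparse constellation survives heavy noise
    if snr_db < 12:
        return ("QPSK", 2)
    if snr_db < 20:
        return ("16-QAM", 4)
    return ("64-QAM", 6)      # dense constellation needs a clean channel

for snr in (3, 10, 15, 25):
    print(snr, pick_modulation(snr))
```

Real systems refine this idea by also varying the code rate, giving a two-dimensional grid of modulation-and-coding schemes rather than a single threshold per constellation.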

Why is the Shannon limit important for 5G and beyond?

The Shannon limit defines the theoretical maximum throughput for any wireless system. 5G technologies approach this limit through massive MIMO (increasing effective SNR), millimeter-wave bands (increasing bandwidth), and advanced LDPC/polar codes (approaching capacity with practical decoding). The remaining gap to Shannon's limit is typically less than 1 dB in modern systems.

Sources

View source on GitHub