Quantum computing exploits two phenomena with no classical analog — superposition and entanglement — to perform certain calculations far faster than any known classical algorithm; for a few problems, such as factoring, the speedup is believed to be exponential. The mathematics requires only linear algebra over the complex numbers, but the physical interpretation is deeply strange.
Qubits: The Basic Unit
A classical bit is 0 or 1. A qubit can be in any linear combination of both:
|psi> = alpha*|0> + beta*|1>
where alpha and beta are complex numbers satisfying |alpha|^2 + |beta|^2 = 1. The notation |·> (called a ket) comes from Dirac's bra-ket formalism. The basis states are column vectors:
|0> = [1, 0]^T,  |1> = [0, 1]^T
When we measure the qubit, we get 0 with probability |alpha|^2 and 1 with probability |beta|^2. Measurement is irreversible — it collapses the superposition to a definite outcome. This is the central weirdness: the state has genuine indefiniteness until the moment of observation.
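A few lines of NumPy make the measurement rule concrete. The amplitudes alpha and beta below are arbitrary example values, not anything canonical:

```python
import numpy as np

# An example single-qubit state: alpha|0> + beta|1>, with arbitrary amplitudes.
alpha, beta = 1 / np.sqrt(3), np.sqrt(2 / 3)
state = np.array([alpha, beta], dtype=complex)
assert np.isclose(np.linalg.norm(state), 1.0)  # |alpha|^2 + |beta|^2 = 1

# Measurement in the computational basis: outcome k occurs with
# probability |amplitude_k|^2 (the Born rule).
probs = np.abs(state) ** 2
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10_000, p=probs)

# Empirical frequencies approach |alpha|^2 = 1/3 and |beta|^2 = 2/3.
print(np.mean(outcomes == 0), np.mean(outcomes == 1))
```

Note that the simulation stores both amplitudes, but each simulated measurement yields only a single classical bit — mirroring the irreversibility described above.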
The Bloch Sphere
Any single-qubit pure state can be written as:
|psi> = cos(theta/2)*|0> + exp(i*phi)*sin(theta/2)*|1>
This parametrization maps every qubit state to a point on a unit sphere — the Bloch sphere. The north pole is |0>, the south pole is |1>, and points on the equator are equal superpositions with different relative phases. Mixed states (statistical mixtures due to decoherence) live inside the sphere, with the maximally mixed state at the center.
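As a quick sanity check (the angles theta and phi below are arbitrary example values), the Bloch coordinates of a state are the expectation values of the three Pauli matrices, and for any pure state they land exactly on the unit sphere:

```python
import numpy as np

# Bloch-sphere parametrization with arbitrary example angles.
theta, phi = np.pi / 3, np.pi / 4
state = np.array([np.cos(theta / 2),
                  np.exp(1j * phi) * np.sin(theta / 2)])

# Cartesian Bloch coordinates = Pauli expectation values <X>, <Y>, <Z>.
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
bloch = [np.real(state.conj() @ P @ state) for P in (X, Y, Z)]

# A pure state satisfies x^2 + y^2 + z^2 = 1; mixed states give norm < 1.
print(bloch, np.linalg.norm(bloch))
```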
Quantum Gates
Quantum gates are unitary matrices — transformations that preserve the norm of the state vector (required so probabilities always sum to 1). Unitarity also means every quantum gate is reversible, unlike many classical logic gates.
Hadamard (H): Creates equal superposition from a basis state.
H = (1/sqrt(2)) * [[1, 1], [1, -1]]
H|0> = (|0> + |1>)/sqrt(2)
H|1> = (|0> - |1>)/sqrt(2)
Pauli-X: The quantum NOT gate — flips |0> and |1>.
X = [[0, 1], [1, 0]]
Phase gates (S, T): Rotate the phase without changing measurement probabilities in the Z basis.
S = [[1, 0], [0, i]],  T = [[1, 0], [0, exp(i*pi/4)]]
These gates correspond to rotations of the Bloch sphere. The Hadamard and T gates together can approximate any single-qubit rotation to arbitrary accuracy, and adding a two-qubit entangling gate such as CNOT yields a universal gate set for quantum computation. (Note that S = T^2, so S is included for free.)
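A short NumPy sketch verifies the unitarity of these matrices, the relation between S and T, and the action of H on a basis state:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
S = np.array([[1, 0], [0, 1j]])
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])

# Every gate is unitary: U^dagger U = I, so state norms (and hence
# total probability) are preserved -- and every gate is reversible.
for U in (H, X, S, T):
    assert np.allclose(U.conj().T @ U, np.eye(2))

# Two T gates in a row make an S gate.
assert np.allclose(T @ T, S)

# H sends |0> to the equal superposition (|0> + |1>)/sqrt(2).
ket0 = np.array([1, 0])
print(H @ ket0)
```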
Entanglement and Multi-Qubit Systems
An n-qubit system lives in a 2^n-dimensional complex Hilbert space. Two qubits:
|psi> = a00*|00> + a01*|01> + a10*|10> + a11*|11>
The CNOT gate (Controlled-NOT) acts on two qubits — it flips the second (target) if and only if the first (control) is |1>:
CNOT = [[1,0,0,0],
[0,1,0,0],
[0,0,0,1],
[0,0,1,0]]
Applying H to the first qubit, then CNOT, starting from |00>:
|00> --H--> (|0>+|1>)/sqrt(2) ⊗ |0> --CNOT--> (|00>+|11>)/sqrt(2)
This is a Bell state — a maximally entangled pair. Measuring the first qubit immediately determines the outcome for the second, regardless of the distance between them (though no usable signal is transmitted, since each local outcome is itself random). Bell's theorem (1964) proved mathematically that no local hidden variable model can reproduce these correlations. Entanglement is a genuine non-classical resource.
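The Bell-state circuit above is easy to reproduce as a statevector calculation; applying H to the first qubit of a two-qubit register means applying the tensor (Kronecker) product H ⊗ I:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, apply H to the first qubit (H tensor I), then CNOT.
ket00 = np.array([1, 0, 0, 0])
bell = CNOT @ np.kron(H, I2) @ ket00

# The result is (|00> + |11>)/sqrt(2): only the correlated
# outcomes 00 and 11 have nonzero amplitude.
print(bell)  # [0.707..., 0, 0, 0.707...]
```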
Quantum Interference
The real power of quantum computing comes from interference. A quantum algorithm is designed so paths leading to wrong answers cancel (destructive interference) while paths to correct answers reinforce (constructive interference).
Grover's algorithm exploits this geometrically. Starting from a uniform superposition over all N = 2^n states, the algorithm alternately applies an oracle reflection (marking the target state with a phase flip) and an inversion-about-average operation. Each pair of operations rotates the state vector by angle 2*arcsin(1/sqrt(N)) toward the target. After approximately (pi/4)*sqrt(N) iterations, a measurement reveals the target with high probability.
The speedup: O(sqrt(N)) versus O(N) for classical search. For N = 2^100, classical search takes 2^100 steps; Grover's takes about 2^50 — still enormous, but quadratically faster, halving the exponent.
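A statevector simulation shows the amplification at work. This is not a real circuit — the "oracle" below simply knows the target index, which is an arbitrary choice for illustration:

```python
import numpy as np

# Grover's algorithm as a statevector sketch for n = 4 qubits (N = 16).
n = 4
N = 2 ** n
target = 11  # arbitrary marked item for this illustration

state = np.full(N, 1 / np.sqrt(N))      # uniform superposition over all N states
iterations = int(round(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    state[target] *= -1                  # oracle: phase-flip the marked state
    state = 2 * state.mean() - state     # diffusion: inversion about the average

# After ~(pi/4)*sqrt(N) iterations, measuring gives the target
# with high probability.
print(np.argmax(np.abs(state)), np.abs(state[target]) ** 2)
```

For N = 16 this takes only 3 iterations, and the target's measurement probability exceeds 0.95 — versus 1/16 before amplification.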
Shor's Algorithm: Factoring in Polynomial Time
Peter Shor's 1994 algorithm factors large integers in polynomial time — O((log N)^3) — compared to the best known classical algorithm, the general number field sieve, which runs in sub-exponential time exp(O((log N)^(1/3) * (log log N)^(2/3))). This would break RSA encryption, which relies on factoring being computationally hard.
The key step is quantum phase estimation: finding the period r of the modular exponentiation function f(x) = a^x mod N. Once r is known (and is even), classical number theory takes over: with high probability, gcd(a^(r/2) - 1, N) and gcd(a^(r/2) + 1, N) are nontrivial factors of N. Phase estimation uses the Quantum Fourier Transform (QFT):
QFT|j> = (1/sqrt(M)) * sum_k exp(2*pi*i*j*k/M) * |k>
where M = 2^n is the dimension of the n-qubit register. The QFT can be implemented in O(n^2) quantum gates — exponentially fewer than the classical FFT's O(M log M) operations — though the transformed amplitudes cannot be read out directly; an algorithm can only sample measurement outcomes from them.
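For small registers the QFT can be written out as an explicit matrix, checked for unitarity, and compared against NumPy's FFT. This illustrates the definition only, not the efficient gate decomposition:

```python
import numpy as np

# QFT on n qubits as an explicit M x M matrix, M = 2^n.
n = 3
M = 2 ** n
j, k = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
QFT = np.exp(2j * np.pi * j * k / M) / np.sqrt(M)

# The QFT matrix is unitary, like every quantum gate...
assert np.allclose(QFT.conj().T @ QFT, np.eye(M))

# ...and matches NumPy's inverse DFT up to normalization
# (np.fft uses the opposite sign convention in the exponent).
v = np.random.default_rng(1).standard_normal(M)
assert np.allclose(QFT @ v, np.fft.ifft(v) * np.sqrt(M))
```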
Complexity Classes and Limitations
Quantum computing does not make all hard problems easy. Current understanding:
- BQP (bounded-error quantum polynomial time) is believed to contain some problems outside classical P, including factoring
- BQP is widely believed NOT to contain NP-complete problems — quantum computers probably cannot solve satisfiability problems efficiently
- Grover's quadratic speedup is provably optimal for unstructured search — no quantum algorithm can do better
The precise relationship between BQP and NP remains one of the deepest open questions in theoretical computer science.
Decoherence: The Engineering Challenge
Real qubits interact with their environment, causing decoherence — the gradual destruction of superposition as quantum information leaks into the surroundings. Maintaining coherence requires extraordinary isolation: superconducting qubits operate at 10-15 millikelvin (colder than interstellar space), shielded from electromagnetic noise.
Quantum error correction can protect logical qubits using redundancy across many physical qubits. The surface code, for example, encodes one logical qubit in roughly 1000 physical qubits and can correct errors as long as the physical error rate is below a threshold of about 1%. Current leading systems achieve error rates of 0.1-1%, making fault-tolerant quantum computing a near-term engineering challenge rather than a theoretical impossibility.
The path from today's noisy intermediate-scale quantum (NISQ) devices to fault-tolerant machines capable of running Shor's algorithm on cryptographically relevant inputs is likely a decade or more of intensive engineering. But the theoretical foundations are solid — and the implications, when it arrives, are profound.