Quantum Supremacy
- Jeremy Rutman
The promise of quantum computers is that, for certain classes of problems, they could deliver dramatic (in some cases exponential) speedups over the best known classical algorithms.
What “Quantum Supremacy” Means (Technically)
Quantum supremacy refers to the point at which a quantum computer performs a specific computational task that is practically impossible for a classical computer to perform in a reasonable time.
Two clarifications matter:
It does not mean quantum computers are better at everything.
It usually refers to one very specific task, often designed to be hard for classical machines but easy for quantum hardware.
The term was popularized around experiments where quantum devices sampled from random quantum circuits faster than classical supercomputers could simulate them.
Today, many researchers prefer the term “quantum advantage”, which implies usefulness rather than a one-time milestone.
What Quantum Supremacy Is Not
It is not:
Breaking encryption instantly
Running general software faster
Replacing classical computers
It is closer to demonstrating that quantum hardware can access computational states classical machines cannot efficiently represent.
Demonstrating supremacy shows that quantum mechanics can be harnessed to do computations that classical physics cannot efficiently emulate.
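To make the classical side of this concrete, here is a minimal sketch (not the actual supremacy experiment) of what a brute-force classical simulation has to do: hold every amplitude of the quantum state in memory and update all of them on every gate. The circuit below is a toy random circuit with assumed gate choices (random single-qubit unitaries plus CZ gates), but the 2^n scaling it illustrates is the heart of the argument.

```python
# Minimal sketch: brute-force statevector simulation of a toy random circuit.
# Illustrates why classical cost grows as 2^n; NOT the actual supremacy experiment.
import numpy as np

def random_single_qubit_gate(rng):
    """Random 2x2 unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_single(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [qubit]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z gate between qubits q1 and q2."""
    state = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    state[tuple(idx)] *= -1
    return state.reshape(-1)

n, depth = 12, 10                       # 12 qubits -> 4096 amplitudes
rng = np.random.default_rng(0)
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                          # start in |00...0>

for _ in range(depth):
    for q in range(n):
        state = apply_single(state, random_single_qubit_gate(rng), q, n)
    for q in range(0, n - 1, 2):        # entangle neighbouring pairs
        state = apply_cz(state, q, q + 1, n)

print(f"{n} qubits -> {2**n:,} complex amplitudes "
      f"({state.nbytes / 1e6:.1f} MB); 53 qubits would need ~{2**53 * 16 / 1e15:.0f} PB")
```

Every extra qubit doubles the memory and work, which is why a device with a few dozen good qubits can already outrun a brute-force classical simulation.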
Quantum Computing Hardware Is Not One Race: A Clear-Eyed Look at the Major Approaches
Quantum computing is often described as a race. In practice, it looks more like a set of parallel engineering programs solving different optimization problems. Each hardware platform makes a different tradeoff between speed, stability, scalability, and manufacturability. No current approach dominates across all dimensions, and understanding the differences matters if you care about real computational usefulness rather than headline qubit counts.
This post walks through the major hardware approaches in use today and explains, in practical engineering terms, where each one is strong and where it struggles.
What Actually Determines Whether a Quantum Computer Is Useful
Before comparing platforms, it helps to clarify what actually drives performance. Raw qubit count alone is not predictive. The systems that perform best in real algorithm benchmarks tend to balance several factors at once:
Two-qubit gate fidelity (often the dominant error source)
Coherence time (how long quantum information survives)
Connectivity (how many qubits can interact directly)
Gate speed (how many operations fit inside coherence time)
Calibration stability (how often the system must be retuned)
Control system quality (software + electronics + compiler stack)
Metrics such as Quantum Volume were designed to capture this combined behavior rather than any single hardware property.
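As a rough illustration of why two-qubit fidelity and connectivity dominate, here is a back-of-envelope sketch. It is not the real Quantum Volume protocol (which runs randomized square circuits and checks heavy-output probabilities); it just asks how large a "square" circuit can get before its expected fidelity drops below a success threshold. The fidelity and routing numbers are assumptions, not measurements of any real device.

```python
# Back-of-envelope sketch, NOT the real Quantum Volume protocol (which runs
# randomized square circuits and checks heavy-output probability). It simply
# asks: how large can an n-qubit, depth-n circuit get before its expected
# fidelity falls below a rough success threshold?
def largest_square_circuit(two_qubit_fidelity, routing_overhead=1.0,
                           threshold=2 / 3):
    """Largest n such that an n x n circuit keeps expected fidelity > threshold.

    Each layer is modeled as ~n/2 two-qubit gates; routing_overhead > 1
    stands in for the extra SWAPs needed on sparsely connected chips.
    """
    n = 1
    while True:
        gates = (n * n / 2) * routing_overhead   # ~n/2 gates per layer, n layers
        if two_qubit_fidelity ** gates < threshold:
            return n - 1
        n += 1

# Illustrative (assumed) parameters, not measured values for any real device:
print("99.5% fidelity, all-to-all connectivity :", largest_square_circuit(0.995))
print("99.9% fidelity, all-to-all connectivity :", largest_square_circuit(0.999))
print("99.9% fidelity, 3x SWAP routing overhead:", largest_square_circuit(0.999, routing_overhead=3.0))
```

Even in this crude model, a small fidelity improvement or better connectivity changes the usable circuit size far more than adding raw qubits does.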
The Leading Methods
Superconducting Qubits: Fast, Manufacturable, Calibration Heavy
Superconducting qubits, used by companies such as IBM, Google, and Rigetti, are fabricated using lithographic processes similar to semiconductor manufacturing. The qubits are electrical circuits built around Josephson junctions and operated at millikelvin temperatures inside dilution refrigerators.
The biggest advantage of superconducting systems is speed. Single-qubit gates typically run in tens of nanoseconds, and two-qubit gates are often completed in a few hundred nanoseconds. This allows many operations to occur before decoherence destroys the quantum state.
Manufacturing is another strength. Because these devices use chip-fabrication processes, there is a credible path to producing large numbers of qubits with improving yield over time.
The main drawback is coherence time. Superconducting qubits typically maintain coherence for tens to a few hundred microseconds. That is long enough for many NISQ-era circuits, but short enough that error correction overhead becomes expensive. In addition, these systems require complex cryogenic infrastructure and large numbers of microwave control lines, which become difficult to scale mechanically and thermally.
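A quick bit of arithmetic with the ballpark figures quoted above (assumed round numbers, not specs for any particular chip) shows how gate speed partially compensates for modest coherence:

```python
# Quick arithmetic with the ballpark figures quoted above (assumed round
# numbers, not specs for any particular chip).
t2_coherence = 100e-6        # ~100 microseconds of coherence
single_qubit_gate = 30e-9    # ~tens of nanoseconds
two_qubit_gate = 250e-9      # ~a few hundred nanoseconds

print(f"single-qubit gates per coherence window: ~{t2_coherence / single_qubit_gate:,.0f}")
print(f"two-qubit gates per coherence window:    ~{t2_coherence / two_qubit_gate:,.0f}")
```

A few hundred two-qubit gates per coherence window is enough for many NISQ-era circuits, but it leaves little headroom once error-correction overhead is added.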
Connectivity is usually limited to nearest-neighbor layouts, which increases circuit overhead because algorithms must insert SWAP operations to move quantum states across the chip.
In practice, superconducting systems currently offer the best combination of speed and moderate scale, which explains their dominance in many near-term demonstrations.
Trapped Ion Qubits: Exceptionally Clean Physics, Slower Operation
Trapped ion systems, developed by groups such as Quantinuum and IonQ, store qubits in the internal states of individual ions suspended in electromagnetic traps. Quantum gates are performed using precisely controlled laser or microwave interactions.
Ions (charged atoms like Yb⁺ or Ca⁺) are confined using electric fields inside a linear ion trap (a type of Paul trap).
Inside that trap:
Ions repel each other (Coulomb repulsion)
The trap confines them overall
The equilibrium configuration is a line of roughly evenly spaced ions
That line is called the ion chain.
Typical spacing:
~2–10 microns between ions
You can literally image this with a microscope camera - it looks like a row of glowing dots.
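For the curious, the chain shape falls out of a very small calculation: minimize the trap potential plus the Coulomb repulsion. The sketch below does this numerically in dimensionless units (the length scale that converts the result to real microns is left out):

```python
# Minimal sketch (dimensionless units): equilibrium positions of an ion chain,
# found by minimizing harmonic-trap energy plus Coulomb repulsion.
import numpy as np
from scipy.optimize import minimize

def chain_energy(x):
    """Total potential: sum(x_i^2 / 2) + sum over pairs of 1 / |x_i - x_j|."""
    trap = 0.5 * np.sum(x**2)
    diffs = np.abs(x[:, None] - x[None, :])
    coulomb = np.sum(1.0 / diffs[np.triu_indices(len(x), k=1)])
    return trap + coulomb

n_ions = 5
x0 = np.linspace(-1, 1, n_ions)            # rough initial guess along the trap axis
result = minimize(chain_energy, x0, method="Nelder-Mead")
positions = np.sort(result.x)

print("equilibrium positions:", np.round(positions, 3))
print("nearest-neighbour gaps:", np.round(np.diff(positions), 3))
# Gaps are similar but slightly smaller near the centre; multiply by a
# length scale of a few microns to get the real-world spacing quoted above.
```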
The main advantage of trapped ions is fidelity. Single-qubit and two-qubit gate fidelities are among the highest in the field, and coherence times can reach seconds or longer. In addition, ions in the same chain can interact with each other directly, providing effectively all-to-all connectivity. This dramatically reduces circuit routing overhead.
The cost of this precision is speed. Two-qubit gates are often thousands of times slower than in superconducting systems. As a result, even though ions maintain coherence longer, the number of operations that can be performed per second is lower.
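A toy comparison makes the tradeoff concrete. The fidelities and gate times below are illustrative assumptions in the right ballpark, not vendor specifications:

```python
# Toy comparison with illustrative (assumed) numbers, not vendor specs:
# circuit fidelity falls as fidelity**depth, while wall-clock time grows as
# depth * gate_time, so the "better" platform depends on what you need.
platforms = {
    "superconducting (assumed)": {"two_qubit_fidelity": 0.995, "gate_time_s": 300e-9},
    "trapped ion (assumed)":     {"two_qubit_fidelity": 0.999, "gate_time_s": 200e-6},
}

depth = 500   # number of sequential two-qubit gates in the circuit
for name, p in platforms.items():
    circuit_fidelity = p["two_qubit_fidelity"] ** depth
    wall_clock_ms = depth * p["gate_time_s"] * 1e3
    print(f"{name:27s} depth={depth}: "
          f"fidelity ~{circuit_fidelity:.2f}, runtime ~{wall_clock_ms:.2f} ms")
```

For a deep circuit the slower, higher-fidelity machine returns a far more trustworthy answer, while the faster machine wins on how many circuits it can run per second.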
Another challenge is engineering complexity. Laser systems must remain stable and precisely aligned, and scaling beyond a single ion chain requires shuttling architectures or photonic interconnects between modules.
Today, trapped ion systems tend to excel in metrics like Quantum Volume because fidelity and connectivity strongly influence those measurements.
Neutral Atom Arrays: Massive Scale Potential, Fidelity Still Improving
Neutral atom systems trap individual atoms in optical tweezers and use highly excited Rydberg states to generate interactions. Companies such as QuEra and Pasqal, along with multiple academic labs, are pushing this approach.
The main attraction is scale. Neutral atom arrays can already reach thousands of qubits in experimental settings. The geometry is also flexible because optical tweezers can reposition atoms dynamically.
Compared to superconducting and ion systems, gate fidelities are still improving. Two-qubit gates are slower than superconducting but generally faster than trapped ions. Atom loss and uniformity across large arrays also remain engineering challenges.
Neutral atom systems are particularly promising for analog quantum simulation and large-scale digital systems if fidelity continues improving.
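A small toy model of the atom-loss point above (the survival probability is an assumed, illustrative number) shows why uniformity gets harder as arrays grow:

```python
# Toy model of the atom-loss challenge mentioned above (probability is an
# assumed, illustrative value): chance that a large tweezer array remains
# defect-free after one run if each atom survives independently with
# probability p.
per_atom_survival = 0.999          # assumed per-atom survival over one run

for n_atoms in (100, 1_000, 6_000):
    p_defect_free = per_atom_survival ** n_atoms
    expected_losses = n_atoms * (1 - per_atom_survival)
    print(f"{n_atoms:>5} atoms: P(no loss) ~ {p_defect_free:.3f}, "
          f"expected losses ~ {expected_losses:.1f}")
```

Even a one-in-a-thousand loss rate guarantees defects in a several-thousand-atom array, which is why rearrangement and reloading machinery is a core part of these systems.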
Photonic Quantum Computing: Natural for Networking, Hard for Deterministic Gates
Photonic quantum computing uses photons as qubits, encoded in properties such as polarization, path, or phase. Some implementations use discrete photons, while others use continuous-variable squeezed states.
The biggest advantage is environmental robustness. Photons do not strongly interact with their environment, allowing room-temperature operation and natural compatibility with long-distance quantum communication.
The main limitation is deterministic interaction. Photons do not easily interact with each other, so many photonic entangling operations are probabilistic. This can dramatically increase resource overhead.
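Here is a toy model of that overhead, assuming an illustrative per-gate success probability rather than the numbers for any specific photonic gate:

```python
# Toy model (assumed success probability, not any specific photonic gate):
# if an entangling operation succeeds only probabilistically, the chance that
# every gate in a circuit fires on the first try collapses exponentially,
# which is why multiplexing and repeat-until-success schemes are needed.
p_success = 0.5                     # assumed per-gate success probability

for n_gates in (5, 10, 20):
    p_all_first_try = p_success ** n_gates
    expected_attempts = 1 / p_all_first_try   # if the whole circuit is retried
    print(f"{n_gates:2d} gates: P(all succeed) = {p_all_first_try:.6f}, "
          f"expected whole-circuit retries ~ {expected_attempts:,.0f}")
```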
Scaling photonic systems will likely depend on large-scale optical integration, high-efficiency photon sources, and low-loss detectors, all of which are active engineering areas.
Silicon Spin Qubits: The Manufacturing Bet
Silicon spin qubits attempt to encode quantum information in electron or nuclear spins inside semiconductor devices. The long-term appeal is compatibility with existing semiconductor manufacturing infrastructure.
These systems offer small physical qubit size and potentially long coherence times, especially for nuclear spins. However, two-qubit gate fidelity and large-scale fabrication uniformity are still developing.
If manufacturing challenges are solved, silicon spin systems could eventually achieve very high density, but this is still a medium- to long-term prospect.
The Key Tradeoff: Speed vs Fidelity vs Scale
Different platforms optimize different parts of the same equation.
Superconducting systems maximize operation speed and benefit from manufacturing maturity.
Trapped ions maximize fidelity and connectivity but sacrifice speed.
Neutral atoms maximize raw qubit count and geometry flexibility.
Photonics maximizes environmental robustness and networking compatibility.
Silicon spin systems maximize long-term manufacturability potential.
No system currently maximizes all of these simultaneously.
The Often Overlooked Reality: Systems Engineering Now Dominates Physics
Across all platforms, the biggest bottlenecks are increasingly similar:
Control electronics scaling
Calibration automation
Crosstalk management
Software stack maturity
Error correction overhead
The field is transitioning from pure physics challenges toward large-scale system integration challenges.
Where Each Platform Likely Fits in the Near Term
Superconducting systems are likely to remain strong for fast, moderate-scale algorithm experiments.
Trapped ions are strong candidates for early high-quality logical qubit demonstrations.
Neutral atoms may dominate extremely large qubit arrays if fidelity continues improving.
Photonic systems may become central to distributed quantum networks.
Silicon spin qubits remain a strategic long-term bet tied to semiconductor manufacturing.
The Most Important Takeaway
Quantum computing is not converging toward a single hardware winner yet. Instead, different architectures are specializing around different definitions of “useful computation.” The eventual dominant platform, if one emerges, will likely be determined by which approach can implement fault-tolerant logical qubits at reasonable physical overhead and operational cost.
Right now, each platform is still optimizing a different part of that equation.
Below we show a graph of a 'Moore's Law for Quantum Computers', namely the roughly exponential growth in the number of qubits over time. Quantum volume would be a better quantity to plot here, but the information needed to determine it is far less readily available than raw qubit counts.

Data for this graph is shown in the following table, with some more info and links to the relevant groups or research papers:
Date | Qubits | Quantum volume | Method | Group | Model / link
--- | --- | --- | --- | --- | ---
1998.3 | 2 | | NMR | Jones & Mosca |
2000.4 | 5 | | NMR | TU Munich |
2000.4 | 7 | | NMR | Los Alamos |
2001.4 | 7 | | NMR | Vandersypen et al. |
2005.9 | 8 | | trapped ions | Univ. Innsbruck |
2006.3 | 12 | | | Perimeter Institute |
2010.7 | 14 | | trapped ions | Univ. Innsbruck |
2016.4 | 9 | | superconducting | |
2017.4 | 17 | | Josephson junction | IBM |
2017.8 | 50 | | Josephson junction | IBM |
2018.2 | 72 | | Josephson junction | |
2018.4 | 49 | | silicon spin | Intel | Tangle Lake
2018.8 | 11 | | Josephson junction | Alibaba |
2018.9 | 79 | | trapped ions | IonQ |
2019.8 | 53 | | superconducting transmon | Google | Sycamore, Nature 574, 505–510 (2019)
2021.4 | 66 | | photonic squeezed states | University of Science and Technology of China | Jiuzhang 2
2021.4 | | 4096 | trapped ions | Quantinuum | H1-2
2021.8 | 113 | | photonic squeezed states | University of Science and Technology of China |
2021.8 | 127 | | Josephson junction | IBM | Eagle
2022.4 | 32 | | Josephson junction | Rigetti | Aspen
2022.8 | 433 | | Josephson junction | IBM | Osprey
| 128 | | trapped ions | |
2023.4 | 176 | | photonic squeezed states | China | Zuchongzhi-2
2023.9 | 133 | | | IBM |
2025.2 | 105 | | superconducting transmon | |
2026.2 | 56 | 8388608 | trapped ions | |
2026.2 | 6100 | | neutral atoms (optical tweezers) | Caltech |
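As a rough check on the 'Moore's-law-like' claim, the sketch below fits a straight line in log space to a handful of (year, qubit count) points taken from the table above. The doubling time depends heavily on which rows and platforms you include, so treat it as illustrative:

```python
# Rough log-linear fit to a few (year, qubit count) points from the table
# above. The doubling time depends heavily on which rows and platforms you
# include, so the result is illustrative rather than definitive.
import numpy as np

years  = np.array([1998.3, 2006.3, 2017.8, 2021.8, 2022.8, 2026.2])
qubits = np.array([2,      12,     50,     127,    433,    6100])

slope, _intercept = np.polyfit(years, np.log2(qubits), 1)  # log2(qubits) ~ slope*year + b
print(f"fitted growth: x{2 ** slope:.2f} per year, doubling time ~{1 / slope:.1f} years")
```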


