Quantum computing continues to climb from promise to practicality, from NISQ to FASQ.
An Inside Look
According to recent research by physicists Jens Eisert and John Preskill, quantum computing is developing quickly, but building fault-tolerant, application-scale systems will take much longer and will require resolving significant conceptual and practical problems. Although researchers have demonstrated controlled multi-qubit operations, the authors caution that existing devices are still constrained by noise and scalability issues, even as hardware and algorithms continue to advance. The first true quantum advantages will probably appear in scientific simulation before spreading to commercial use, they say, and no single hardware platform currently shows a clear route to dominance.
There is no denying that quantum computing is quickly overcoming challenges long believed to be insurmountable, as the steady stream of announcements of scientific advances from universities and quantum startups around the world attests. But according to a recent study by physicists John Preskill of the California Institute of Technology and Jens Eisert of Freie Universität Berlin, the path from noisy lab prototypes to machines that can perform practical tasks will be longer, more difficult, and less predictable than many in the field expect.
This article outlines the conceptual and technical gap between today’s noisy intermediate-scale quantum (NISQ) computers and fault-tolerant application-scale quantum (FASQ) systems, which are what reliable, long, error-free computations require. Along this difficult path, the researchers identify four linked transitions:
- From error mitigation to active error detection and correction.
- From basic error correction to scalable fault tolerance.
- From primitive heuristics to sophisticated, provable algorithms.
- From exploratory simulators to a credible quantum-simulation advantage.
Advancement With Boundaries
The present generation of quantum hardware is an impressive engineering achievement. Superconducting-qubit, trapped-ion, and optical-tweezer neutral-atom experiments have exceeded 100 qubits, and some neutral-atom platforms now hold hundreds of qubits. Hardware quality is judged by metrics such as gate error rates: in some superconducting, trapped-ion, and neutral-atom devices, two-qubit gate error rates are close to 0.1%, while single-qubit gate error rates are at least an order of magnitude lower.
Despite this apparent progress, quantum computations that are both economically viable and practically useful have not yet been achieved. Because today’s NISQ devices are not error corrected, their comparatively high gate error rates severely limit their computing power. Large circuits, for example, must be sampled many times at today’s two-qubit gate error rates, and the number of repetitions required grows exponentially with the circuit volume (the total number of gates).
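As a rough illustration of why this caps useful circuit sizes (a simplified model, not a calculation from the study), assume each gate fails independently with probability p; then a circuit of G gates runs cleanly with probability (1 - p)^G, and the expected number of repetitions needed to see one clean run blows up exponentially in G:

```python
# Simplified model (not from the study): independent gate failures with
# probability p, so an error-free run of G gates has probability (1 - p)**G.

def shots_for_one_clean_run(p: float, gates: int) -> float:
    """Expected repetitions until a single error-free execution."""
    return (1.0 - p) ** (-gates)

if __name__ == "__main__":
    p = 1e-3  # assumed two-qubit gate error rate (~0.1%)
    for gates in (1_000, 10_000, 100_000):
        print(f"{gates:>7} gates -> ~{shots_for_one_clean_run(p, gates):,.0f} shots")
```

At a 0.1% error rate, a 1,000-gate circuit needs only a handful of repetitions, but a 100,000-gate circuit would need an astronomically large number, which is the practical wall the authors describe.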
Error Mitigation as a Stopgap
Until full error correction is feasible, researchers rely on quantum error mitigation (QEM) techniques, which use classical statistical post-processing to extract a meaningful result from noisy circuits. Methods such as zero-noise extrapolation (ZNE) and probabilistic error cancellation (PEC) can greatly extend the accessible circuit volume, potentially enabling circuits with 10,000 gates or more.
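The core idea behind ZNE can be sketched in a few lines: run the same circuit at deliberately amplified noise levels and extrapolate the measured expectation value back to the zero-noise limit. The sketch below uses synthetic "measurements" generated from an assumed exponential decay; real implementations amplify noise on hardware (for example by gate folding) rather than simulating it.

```python
import numpy as np

# Toy zero-noise extrapolation (ZNE). The "measurements" are synthetic,
# drawn from an assumed exponential decay of the observable with noise.
true_value = 1.0              # ideal, noiseless expectation value (assumed)
decay_per_unit_noise = 0.3    # assumed noise sensitivity

scales = np.array([1.0, 1.5, 2.0, 3.0])    # noise amplification factors
measured = true_value * np.exp(-decay_per_unit_noise * scales)
measured += np.random.default_rng(0).normal(0, 0.005, scales.shape)  # shot noise

# Fit a low-order polynomial in the noise scale and evaluate it at scale = 0.
coeffs = np.polyfit(scales, measured, deg=2)
zne_estimate = np.polyval(coeffs, 0.0)

print(f"raw (scale = 1): {measured[0]:.3f}")
print(f"ZNE estimate:    {zne_estimate:.3f}  (ideal = {true_value})")
```

The extrapolated value lands much closer to the ideal answer than the raw noisy result, which is exactly the kind of post-processing gain QEM delivers, at the cost of extra circuit repetitions.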
QEM is not a scalable solution, though; it is merely a bridge. For very deep quantum circuits, mitigation becomes infeasible because the required sampling overhead grows exponentially with circuit size.
Nevertheless, error mitigation has already enabled useful benchmarking experiments. Although the accomplishment is “not of great practical interest,” Google Quantum AI used QEM-assisted random circuit sampling with 103 qubits and 40 layers of two-qubit gates to show that today’s NISQ processors can perform some tasks beyond the capabilities of classical supercomputers.
This is where the megaquop regime, about 10^6 (one million) quantum operations, begins to take shape: early fault-tolerant machines may be able to accomplish certain tasks that are beyond the capabilities of classical, NISQ, or analogue quantum devices.
The Fault Tolerance Gap
The transition from NISQ to FASQ is described as especially difficult. To run a wide range of practical applications, FASQ systems must exploit quantum error-correcting codes. According to the theory of fault-tolerant quantum computation, if physical error probabilities fall below a fixed accuracy threshold (about 10^-2 for the surface code), logical error rates can be made arbitrarily small.
In practice, error correction carries enormous overhead costs, which the paper illustrates with the surface code. Targeting a logical error rate of 10^-11 with physical error rates of 10^-3 requires approximately 361 physical qubits per logical qubit, so a 1,000 logical-qubit processor ends up needing around one million physical qubits in total. The largest chips available today contain only hundreds of qubits.
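A back-of-envelope version of that overhead estimate can be reproduced with the commonly used surface-code scaling heuristic p_logical ≈ A · (p/p_th)^((d+1)/2). The prefactor A = 0.1, the threshold p_th = 10^-2, and the simple count of d^2 data qubits per logical qubit are modeling assumptions used here for illustration, not exact figures from the paper.

```python
# Back-of-envelope surface-code overhead, using the common heuristic
#   p_logical ≈ A * (p_phys / p_th) ** ((d + 1) / 2)
# A = 0.1, p_th = 1e-2, and "d**2 data qubits per logical qubit" are
# simplifying assumptions for illustration.

A, p_th = 0.1, 1e-2
p_phys = 1e-3       # assumed physical gate error rate
p_target = 1e-11    # target logical error rate
n_logical = 1_000   # logical qubits in the hypothetical processor

d = 3
# Increase the (odd) code distance until the heuristic logical error rate
# drops below the target; the 1.000001 factor guards against FP rounding.
while A * (p_phys / p_th) ** ((d + 1) / 2) > 1.000001 * p_target:
    d += 2

per_logical = d ** 2
print(f"code distance d = {d}")                                  # 19
print(f"~{per_logical} data qubits per logical qubit")           # 361
print(f"~{n_logical * per_logical:,} data qubits in total")      # 361,000
# Counting the extra syndrome-measurement qubits roughly doubles the
# per-logical-qubit cost, which is how the total climbs toward a million.
```

This reproduces the article's figure of roughly 361 physical (data) qubits per logical qubit at distance 19; including the ancilla qubits needed for syndrome measurement pushes the full-machine count toward the quoted million.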
This scaling problem involves time overhead as well as hardware overhead (many physical qubits per logical qubit). Each logical operation must be verified through repeated rounds of noisy syndrome measurements. And to decode error syndromes and preserve stability, a fault-tolerant quantum machine will necessarily be a hybrid system backed by a significant amount of fast classical computing power.
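To make “decoding error syndromes with classical computation” concrete, here is a deliberately minimal classical decoder for a three-qubit bit-flip repetition code, a far simpler object than the surface code. It is a pedagogical sketch with invented parameters, not anything from the paper.

```python
import random

# Toy classical decoding for a 3-qubit bit-flip repetition code. Syndromes
# are parities of neighbouring qubits; the decoder maps each syndrome to
# the most likely single-qubit flip. A stand-in for the (much harder)
# surface-code decoding problem.

SYNDROME_TO_CORRECTION = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip on qubit 0
    (1, 1): 1,     # flip on qubit 1
    (0, 1): 2,     # flip on qubit 2
}

def decode_round(p_flip: float, rng: random.Random) -> bool:
    """One error + syndrome-measurement + correction round.
    Returns True if the encoded logical 0 survives."""
    bits = [0, 0, 0]
    for i in range(3):
        if rng.random() < p_flip:
            bits[i] ^= 1
    syndrome = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    fix = SYNDROME_TO_CORRECTION[syndrome]
    if fix is not None:
        bits[fix] ^= 1
    return sum(bits) < 2  # majority vote still reads logical 0?

rng = random.Random(1)
p, trials = 0.05, 100_000
failures = sum(not decode_round(p, rng) for _ in range(trials))
print(f"physical flip rate {p}, logical failure rate ~{failures / trials:.4f}")
# Expect roughly 3 * p**2: single flips are corrected, so only the rarer
# double (or triple) flips cause a logical error.
```

Even in this toy, every round of error correction requires a fast classical lookup; a surface-code machine must do the analogous, far heavier decoding continuously and in real time, which is why the authors describe it as a quantum-classical hybrid.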
The researchers point out that no single hardware platform holds a clear advantage when it comes to scaling to large, fault-tolerant systems. Atomic devices (trapped ions, Rydberg tweezer arrays) offer longer coherence times and richer connectivity, whereas superconducting circuits offer faster gate speeds.
The overhead could be reduced by alternative qubit designs that substantially lower physical gate error rates, such as fluxonium qubits, cat qubits (which suppress bit-flip errors), or topologically protected qubits. In addition, alternative error-correcting schemes such as quantum low-density parity-check (qLDPC) codes offer higher encoding rates than the surface code, although they often rely on geometrically non-local physical operations.
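To get a rough sense of what “higher encoding rate” means, the comparison below contrasts a rotated surface code, which stores one logical qubit in d^2 data qubits, with one published qLDPC example, the [[144, 12, 12]] bivariate bicycle code. The parameters are standard code descriptions used here purely for illustration.

```python
# Encoding-rate comparison, purely illustrative. Parameters [[n, k, d]]:
# n physical (data) qubits, k logical qubits, distance d. Both code
# families also need ancilla qubits for syndrome measurement (ignored here).

codes = {
    "rotated surface code, d=13": (13 ** 2, 1),
    "bivariate bicycle [[144,12,12]]": (144, 12),
}

for name, (n, k) in codes.items():
    print(f"{name:32s} rate k/n = {k}/{n} = {k / n:.3f}")
```

The qLDPC example packs roughly an order of magnitude more logical information into the same number of data qubits, which is the appeal; the catch, as noted above, is the non-local connectivity its check operators demand.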
Algorithms and Scientific Research
On the software side, the community needs to move from primitive heuristics to sophisticated, provable algorithms. Efforts to achieve quantum utility in the NISQ era have focused on variational quantum algorithms (VQAs), hybrid approaches in which quantum processors run circuits with tunable parameters that a classical computer optimizes.
Despite years of research, clear evidence of a quantum advantage from VQAs has proven elusive. Training can be hindered by the so-called “barren plateau” phenomenon, in which cost-function gradients vanish. Using “warm starts,” which initialize the parameters with approximations from classical algorithms, may improve VQA performance.
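A stripped-down version of the hybrid loop looks like the sketch below: a classical optimizer tunes circuit parameters to minimize an energy estimate. Here the “quantum processor” is replaced by exact statevector math on a single qubit, and the Hamiltonian, ansatz, and starting parameter are invented for illustration, not taken from the paper.

```python
import numpy as np

# Toy variational quantum eigensolver (VQE) loop. The "quantum" part is a
# single-qubit statevector simulated exactly; Hamiltonian and ansatz are
# invented for illustration.

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * Z + 0.8 * X          # assumed single-qubit Hamiltonian

def ansatz_state(theta: float) -> np.ndarray:
    """Ry(theta) applied to |0>: the parameterized 'circuit'."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(theta: float) -> float:
    """Expectation value <psi(theta)|H|psi(theta)> (the 'measurement')."""
    psi = ansatz_state(theta)
    return float(np.real(np.conj(psi) @ H @ psi))

# Classical outer loop: finite-difference gradient descent on theta.
# A "warm start" would set the initial theta from a cheap classical
# approximation; here we simply pick a plausible starting guess.
theta, lr, eps = 0.3, 0.4, 1e-4
for _ in range(200):
    grad = (energy(theta + eps) - energy(theta - eps)) / (2 * eps)
    theta -= lr * grad

exact = np.linalg.eigvalsh(H)[0]
print(f"VQE energy: {energy(theta):.5f}   exact ground energy: {exact:.5f}")
```

On this trivially small problem the loop converges to the exact ground energy; the open question the authors highlight is whether such loops can be trained, and can beat classical methods, on problems large enough to matter.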
Rather than aiming for immediate end-to-end utility, the study recommends concentrating on “proof pockets”: small, well-defined subproblems where quantum approaches can rigorously establish an advantage.
Unlike NISQ devices, fully error-corrected FASQ machines could also run formal algorithms such as Grover’s search and Shor’s factoring algorithm, which offer mathematically proven speedups but demand enormous resources.
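Grover’s quadratic speedup can be checked directly on a tiny statevector simulation: roughly (π/4)·√N repetitions of the oracle-plus-diffusion step concentrate the amplitude on the marked item, versus about N/2 classical lookups on average. This is a generic textbook simulation with arbitrary parameters, not a resource-realistic example from the paper.

```python
import math
import numpy as np

# Tiny statevector simulation of Grover search over N = 2**n items.
n = 10                      # qubits
N = 2 ** n                  # search-space size
marked = 389                # index of the marked item (arbitrary choice)

state = np.full(N, 1 / math.sqrt(N))        # uniform superposition
iterations = round(math.pi / 4 * math.sqrt(N))

for _ in range(iterations):
    state[marked] *= -1                     # oracle: phase-flip the marked item
    state = 2 * state.mean() - state        # diffusion: inversion about the mean

print(f"N = {N}, Grover iterations = {iterations}")
print(f"probability of measuring the marked item: {state[marked] ** 2:.4f}")
```

With N = 1,024 the marked item is found with near-certainty after only about 25 iterations; the catch, as the authors stress, is that running such algorithms at useful problem sizes requires the full fault-tolerant overhead described above.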
According to Eisert and Preskill, the first truly practical applications will appear not in finance or encryption but in scientific modelling, specifically in physics, chemistry, and materials research. Quantum simulation targets the “strongly correlated” regime where traditional methods fall short.
Analogue quantum simulators, such as Rydberg atom arrays and ultracold-atom platforms, are already effective tools for scientific research, especially for probing quantum dynamics far from equilibrium. Although digital (circuit-based) quantum simulators will eventually offer more flexibility and error correction, the high overhead cost of fault tolerance means that analogue platforms will continue to offer significant discovery potential well into the future.
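Digital quantum simulation of such dynamics typically works by Trotterizing the time evolution into layers of short gates. The sketch below does this classically for a tiny transverse-field Ising chain so the Trotterized result can be checked against exact evolution; the chain length and couplings are illustrative assumptions, and a real device would apply the layers as gates rather than matrices.

```python
import numpy as np

# Classically simulable sketch of digital quantum simulation: Trotterized
# quench dynamics of a small transverse-field Ising chain,
#   H = -J * sum_i Z_i Z_{i+1} - h * sum_i X_i,
# compared against exact evolution.

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def embed(local_ops, start, n):
    """Tensor a short operator string into an n-qubit operator."""
    ops = [I2] * n
    for k, o in enumerate(local_ops):
        ops[start + k] = o
    full = ops[0]
    for o in ops[1:]:
        full = np.kron(full, o)
    return full

def evolve(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition."""
    evals, vecs = np.linalg.eigh(H)
    return (vecs * np.exp(-1j * evals * t)) @ vecs.conj().T

n, J, h, t, steps = 6, 1.0, 0.7, 2.0, 50
H_zz = -J * sum(embed([Z, Z], i, n) for i in range(n - 1))
H_x = -h * sum(embed([X], i, n) for i in range(n))

psi0 = np.zeros(2 ** n, dtype=complex)
psi0[0] = 1.0                                  # all-spins-up "quench" state

exact = evolve(H_zz + H_x, t) @ psi0           # exact time evolution
layer = evolve(H_zz, t / steps) @ evolve(H_x, t / steps)  # one Trotter layer
trotter = psi0.copy()
for _ in range(steps):
    trotter = layer @ trotter

fidelity = abs(np.vdot(exact, trotter)) ** 2
print(f"{n} spins, t = {t}, {steps} Trotter steps -> fidelity {fidelity:.5f}")
```

Six spins are easy to handle classically; the promise of quantum simulation, analogue or digital, is to push the same kind of dynamics to system sizes where the state vector no longer fits on any classical machine.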