Fault-Tolerant Quantum Computing (FTQC)
FTQC is a quantum computing system’s ability to keep functioning correctly despite defects and faults. It combines techniques and architectures that allow quantum computations to run reliably even on faulty or error-prone hardware, a capability that large-scale, practical quantum computers are expected to require.
Qubits are vulnerable to noise and decoherence. Unlike “utility-scale” or Noisy Intermediate-Scale Quantum (NISQ) computers, whose susceptibility to errors limits circuit size and complexity, fault tolerance involves detecting and repairing faults in real time.
FTQC is essential to realising quantum computers that can execute the deeper, larger circuits needed for problems too complex for classical computing alone. It represents the capacity to carry out computations with arbitrarily low logical error rates.
How Does Fault-Tolerant Quantum Computing Work?
Quantum Error Correction (QEC) is the foundation of FTQC. Error correction concentrates on finding and fixing faults; fault tolerance is the broader idea that corrections can be applied reliably without introducing new errors.
The basic concept is modelled on classical error correction, such as Richard Hamming’s 1947 Hamming code, which used redundancy (added parity bits) to guard against errors. Classical error-correcting codes are essential to digital technology and enable dependable computing provided the error rate is low enough.
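To make the classical idea concrete, here is a minimal sketch of the Hamming(7,4) code: four data bits are protected by three parity bits, and any single bit-flip can be located and undone. The bit layout (parity bits at positions 1, 2, and 4) is the standard textbook convention, not something specified in this article.

```python
# Hamming(7,4) sketch: 4 data bits + 3 parity bits, corrects one flip.
# Positions are 1-indexed; parity bits sit at positions 1, 2 and 4.

def hamming_encode(data):
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4                    # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                    # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                    # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]  # positions 1..7

def hamming_decode(code):
    c = list(code)
    # Recompute each parity check; the failed checks spell out the
    # (1-indexed) position of the flipped bit in binary.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                          # nonzero: flip that bit back
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]      # recover the 4 data bits
```

Note how the decoder learns *where* the error is purely from parity checks; QEC syndromes, described below, play the same role without ever reading out the protected data.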
In QEC, entanglement is used to encode logical qubits (which represent the data to be protected) into a larger number of physical qubits. Spreading the logical qubit’s state across many physical qubits makes it more resistant to noise.
Three steps are typically involved in the QEC process:
- Syndrome extraction: Auxiliary qubits that interact with the physical qubits are measured to obtain a “syndrome”, a bit-string that provides information about potential faults without disclosing the quantum state.
- Decoding: Analysing the syndrome classically to identify the necessary corrective operations.
- Correction: Restoring the original encoded state by performing operations on the physical qubits.
In a race against noise, this QEC cycle must be repeated continuously. However, every operation in a quantum computer is itself noisy, including syndrome extraction and correction, so protocols must be designed carefully to avoid amplifying error rates rather than suppressing them.
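The three-step cycle above can be illustrated with a classical toy model (an assumption of this sketch, not a real quantum simulation): the 3-qubit bit-flip repetition code, where the logical bit is copied across three physical bits and two parity checks act as the syndrome measurements.

```python
# Toy QEC cycle on the 3-bit repetition code, simulated classically.

def extract_syndrome(qubits):
    # Parities of neighbouring pairs: reveal where the bits disagree
    # without revealing the encoded logical value itself.
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

def decode(syndrome):
    # Classical lookup from syndrome to the position needing correction.
    return {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]

def correct(qubits, position):
    if position is not None:
        qubits[position] ^= 1  # undo the bit-flip
    return qubits

# One full cycle: encode 1 as [1, 1, 1], inject an error, recover.
encoded = [1, 1, 1]
encoded[1] ^= 1  # a single bit-flip error on the middle qubit
recovered = correct(encoded, decode(extract_syndrome(encoded)))
```

Real QEC codes must also handle phase errors and noisy syndrome measurements, which this classical sketch deliberately omits.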
Quantum Error Correction Codes (QECCs) are the algorithmic schemes used to encode data, detect faults, and fix them. Like classical error correction, QECCs only work when the error rate is low enough, and different codes tolerate different error rates. Examples include:
- Shor’s 9-qubit code (early and impractical, with very low tolerance).
- Kitaev’s toric code (1997), with improved tolerance.
- Surface codes, practical variants of the toric code that encode a single logical qubit into a grid of physical qubits; they tolerate comparatively high error rates but require many physical qubits per logical qubit.
- Gross codes, which offer noise tolerance similar to surface codes but are more efficient, encoding more logical qubits with fewer physical qubits.
- Steane’s code, a CSS code that encodes one logical qubit in seven physical qubits and can detect up to two errors and correct one; it still requires further study of control systems, syndrome measurement, scalability, and fault-tolerant implementation.
- A 15-qubit Hamming code attempt using 19 qubits, described as a step toward fault tolerance, though not fault tolerance per se.
A QECC’s effectiveness is characterised by its “distance”: a code of distance d can detect up to d - 1 errors and correct up to (d - 1)/2 errors, rounded down.
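The distance rule can be restated directly as code:

```python
# Distance rule: a distance-d code detects up to d - 1 errors
# and corrects up to floor((d - 1) / 2) errors.

def detectable_errors(d):
    return d - 1

def correctable_errors(d):
    return (d - 1) // 2
</n```

For example, the distance-3 Steane code detects two errors but corrects only one, which is why larger-distance surface codes are needed as circuits grow deeper.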
Beyond error correction, FTQC requires carefully designed fault-tolerant quantum gates. These gates must operate on logically encoded data in a way that stops errors from spreading and does not worsen existing faults. A universal set of quantum gates must also be implemented on logical qubits, and some gate operations are harder than others. Certain encoded operations can be carried out fault-tolerantly using magic states, special quantum states that are prepared and verified separately, frequently via gate teleportation.
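A hedged sketch of why one class of fault-tolerant gates, transversal gates, prevents error spreading, again using the 3-bit repetition code as a classical stand-in: the logical X is applied as one independent flip per physical bit, so an operation touching qubit i can never propagate an error onto qubit j.

```python
# Transversal logical X on the 3-bit repetition code (toy model):
# strictly one independent single-bit operation per physical bit,
# so no gate couples two bits of the same code block.

def logical_x(physical_bits):
    return [b ^ 1 for b in physical_bits]
```

Gates that cannot be made transversal in a given code are the ones that typically require magic states and gate teleportation, as noted above.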
Achieving fault tolerance requires physical qubit error rates that are low enough, qubit connectivity that supports fault-tolerant procedures and fast, dependable syndrome extraction, and very low decoding latency.
The Importance of FTQC
For quantum technology to advance, fault tolerance is essential for a number of reasons:
Scalability:
Fault tolerance is essential to building large-scale, useful quantum computers. Increasing qubit counts alone is insufficient, because unchecked errors multiply as qubit numbers grow. The goal is fault tolerance in large-scale devices that can execute circuits with hundreds of millions of logical operations on hundreds or thousands of qubits, where one logical qubit may comprise roughly 1,000 physical qubits.
Reliability:
It ensures accurate results for complex quantum algorithms.
Extended Quantum Computation:
It allows quantum computations to continue over extended periods.
Quantum Advantage/Supremacy:
It brings quantum supremacy and quantum advantage over classical computers within reach. NISQ computers are not a route to quantum advantage or utility; they can yield pure noise after just a few gates.
Commercial viability:
Some of the most valuable applications and use cases require it.
Overcoming NISQ Limitations:
NISQ devices lack the coherence to support algorithms offering exponential computational gains, and their error-mitigation strategies, which frequently rely on software, have scalability problems. Fault tolerance is the longer-term, more complete answer.
Fault-tolerant computing ensures that quantum information is sufficiently protected from the environment while limiting the local propagation of errors, permitting arbitrarily low logical error rates.
Current Status and Research
As of 2024 (per a blog post dated May 30, 2025), fully fault-tolerant quantum computers are not yet commercially available, and claims of fault-tolerant quantum computing have so far been limited to very small scales. Demonstrations of fundamental fault-tolerant procedures in research settings, however, are making great strides, and the field is developing quickly.
Cutting-edge research includes enhancing QEC code performance, creating hardware-efficient codes customised to particular hardware, and raising error thresholds.
A group from Harvard, MIT, and QuEra Computing achieved 99.5% fidelity for two-qubit entangling gates with 60 neutral-atom qubits, exceeding the >99% fidelity level required for error correction.
Fault tolerance was demonstrated in research employing 16 physical qubits, encoding two logical qubits of seven physical qubits each plus flag qubits; this approach could reduce the number of auxiliary qubits needed for logical gates.
An npj Quantum Information publication described a 15-qubit Hamming code experiment using 19 total qubits, noting that it was not true fault tolerance.
Vendors are approaching FTQC in different ways. Quandela is leveraging the special qualities of its photonic qubits and photon generators. QuEra Computing, pursuing FTQC with neutral atoms, emphasises characteristics such as transversal gates, long coherence times, and qubit shuttling for mid-circuit measurements that enable error correction. IBM is also actively pursuing fault-tolerant quantum computing.
Challenges
Achieving fault tolerance is extremely difficult. Challenges include:
- The brittleness of quantum information and qubits’ vulnerability to decoherence and noise.
- The many sources of error, including crosstalk, decoherence in deep circuits, hardware flaws (fabrication defects), and environmental noise (from local sources to cosmic rays).
- Software-based problems, such as compilation and transpilation errors, that lead to inaccurate pulse scheduling.
- The no-cloning theorem, which prevents classical error-correction techniques based on copying data from being applied directly.
- The fact that QEC codes only function below specific error thresholds.
- The significant overhead in extra qubits and processing power: roughly 1,000 physical qubits may be needed to encode a single logical qubit.
- The complexity of designing and implementing fault-tolerant systems.
- The difficulty of connecting logical qubits, which are abstractions over many physical qubits.
FTQC Applications
FTQC may make it possible to solve classically intractable problems: practical problems that would demand extensive classical resources and for which exact answers are frequently more useful than approximations. Possible uses include:
- Molecular simulation for material development and drug discovery.
- Processing large amounts of qubit-mapped classical data to improve AI.
- Addressing challenging combinatorial optimisation problems.
- Bringing near-real-time insights to the banking sector and upending it.
- Enhancing unpredictability to increase the security of cryptographic keys.
- Potential sustainability gains, since it may use less energy than HPC for comparable tasks.
- Supporting applications for sensing and quantum communications.
Just as early digital computer engineers could not have predicted today’s applications, the most important uses of FTQC are likely to be ones that have not yet been thought of.