While the quantum computing community waits for fault-tolerant hardware, a family of methods has emerged that takes advantage of the noise present in near-term quantum devices. These techniques, known as Noise-Adaptive Quantum Algorithms (NAQAs), are designed to exploit quantum noise rather than suppress it. This article examines the nature of NAQAs, their history, the major works that have shaped the field, and possible future directions.
NAQA Meaning
Noise-Adaptive Quantum Algorithms (NAQAs) are a class of methods designed to exploit the intrinsic noise of near-term quantum devices rather than reduce it. While the quantum computing community still awaits fault-tolerant hardware, NAQAs have emerged as a way of putting the noise inherent in existing QPUs to use.
The following is a detailed description of NAQAs:
Fundamental Idea:
Real-world QPUs operate in noisy environments, in contrast to the ideal setting in which a noise-free quantum system would produce a single, optimal low-energy solution. This noise may yield multiple low-energy solutions, and when constraints are applied, individual bitstring samples may not represent valid solutions. Rather than discarding these imperfect samples, NAQAs aggregate information from many noisy outputs. Before committing to a single sample, this aggregate is used to modify the original optimization problem by exploiting quantum correlations, helping the quantum system find better solutions. The deliberate reuse of noisy data is an essential component of NAQAs.
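As a toy illustration, the aggregation step can be sketched as computing per-variable consensus across a batch of noisy bitstring samples. The function name, the ±1 spin encoding, and the batch values below are illustrative assumptions, not taken from any specific NAQA implementation:

```python
import numpy as np

def aggregate_samples(samples):
    """Mean spin value of each variable across noisy samples.

    samples: array of shape (n_samples, n_vars) with entries in {-1, +1}.
    Values near +/-1 signal strong consensus across the noisy outputs
    (candidates for fixing); values near 0 signal disagreement.
    """
    return np.asarray(samples).mean(axis=0)

# Toy batch: three noisy samples over four spin variables.
batch = [[+1, -1, +1, +1],
         [+1, -1, -1, +1],
         [+1, -1, +1, -1]]
consensus = aggregate_samples(batch)
# Variables 0 and 1 show full consensus; variables 2 and 3 do not.
```

Statistics like these are what a NAQA would feed back into the problem, for example by fixing the fully agreed-upon variables.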
Origins and Comparison
Conceptually, NAQAs resemble the classical Cross-Entropy Method (CEM). CEM uses candidate sampling and iterative refinement to adapt a sampling distribution without requiring physical noise. Both CEM and NAQAs aim to direct the search process by adapting to noisy outputs.
Among the main distinctions between CEM and NAQAs are:
- Whereas NAQAs measure quantum bitstrings and calculate their cost, CEM uses a noisy cost function to evaluate sampled candidates.
- CEM updates the sampling distribution to favour top performers, while NAQAs find new attractor states and adjust the cost Hamiltonian accordingly.
- Whereas NAQAs seek to minimize cost under a modified Hamiltonian, CEM aims to maximize performance over a probability distribution.
- Importantly, CEM averages over noise, while NAQAs use noise to find attractor states.
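For contrast, a minimal classical CEM loop can be sketched as follows. The toy objective and all parameter choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def cem(objective, n_bits, n_samples=100, elite_frac=0.2, iters=30):
    """Cross-Entropy Method over bitstrings with a product-Bernoulli model.

    Repeatedly samples candidates, scores them with the (possibly noisy)
    objective, and refits the sampling distribution to the top performers,
    averaging over noise rather than exploiting it, unlike NAQAs.
    """
    p = np.full(n_bits, 0.5)                  # Bernoulli parameter per bit
    n_elite = int(n_samples * elite_frac)
    for _ in range(iters):
        samples = (rng.random((n_samples, n_bits)) < p).astype(int)
        scores = np.array([objective(s) for s in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]  # best candidates
        p = elite.mean(axis=0)                # refit distribution to elites
    return p

# Toy objective: number of ones (optimum is the all-ones bitstring).
p_final = cem(lambda s: int(s.sum()), n_bits=8)
```

Note that the update acts only on the classical sampling distribution `p`; nothing about the cost function itself is remapped, which is the key structural difference from NAQAs.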
The conceptual foundations of NAQAs can be traced to the “Quantum-Assisted Greedy Algorithms” presented in the reference, which fixed variables based on consensus across several sampled bitstrings and were tested on D-Wave’s QPU. The term “Noise-Directed Adaptive Remapping” (NDAR), which underpins subsequent research, was first introduced in the groundbreaking study, reference.
Difference with ADAPT Algorithms:
It is important to note that NAQAs are fundamentally different from the ADAPT family of algorithms, which includes ADAPT-VQE and ADAPT-QAOA. ADAPT algorithms adapt to the structure of the problem by changing how the search space is explored (for example, by choosing a different “mixing” Hamiltonian) as the algorithm progresses. These ADAPT methods were not originally designed to exploit QPU noise and have mainly demonstrated success in noise-free simulations. In contrast, NAQAs are explicitly evaluated on real, noisy quantum hardware.
NAQA Framework:
A general pseudocode for NAQAs follows an iterative procedure:
- Sample Generation: Generate a batch of samples using a quantum program. The sampling step is modular and could even use “stochastic optimisation” instead of a quantum system.
- Problem Adaptation: Modify the optimisation problem in light of the sampleset’s findings. Two popular methods are fixing the values of specific variables by examining correlations between samples, and determining the attractor state and applying a bit-flip gauge transformation.
- Information Aggregation: Collect and aggregate information from several noisy samples so as to guide the algorithm towards promising answers without unduly limiting the solution space. This is the most nuanced and difficult component.
- Re-optimization: Solve the modified optimisation problem again.
- Repeat: Continue until the solution quality is sufficient or stops improving.
This paradigm applies to both gate-based and annealing-based quantum computers. Although the iterative process adds computational overhead, NAQAs remain useful for near-term devices and, in noisy conditions, frequently produce far better results than conventional techniques like vanilla QAOA, albeit with a longer runtime.
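The loop above can be sketched end to end. Here a random classical sampler stands in for the (modular) quantum sampling step, so the sketch illustrates only the mechanics of the bit-flip gauge remapping, not the noise-driven advantage of a real QPU; all names and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(J, s):
    """Ising cost s^T J s (no local fields) of a +/-1 spin vector."""
    return float(s @ J @ s)

def sample_batch(n_samples, n_vars):
    # Stand-in for a noisy quantum sampler: random spin configurations.
    return rng.choice([-1, 1], size=(n_samples, n_vars))

def naqa_loop(J, n_iters=5, n_samples=200):
    """Iterate: sample, take the lowest-energy sample as the attractor,
    then gauge-transform J so that attractor maps to the all-(+1) state."""
    n = J.shape[0]
    gauge = np.ones(n, dtype=int)                # accumulated bit-flip gauge
    for _ in range(n_iters):
        batch = sample_batch(n_samples, n)
        energies = np.einsum('bi,ij,bj->b', batch, J, batch)
        attractor = batch[energies.argmin()]
        J = J * np.outer(attractor, attractor)   # remap the problem
        gauge = gauge * attractor                # track the frame change
    return gauge    # candidate solution expressed in the original frame

# Small symmetric random instance with zero diagonal.
n = 8
A = rng.normal(size=(n, n))
J0 = np.triu(A, 1)
J0 = J0 + J0.T
solution = naqa_loop(J0.copy())
```

The gauge transform `J' = J * outer(s, s)` leaves the energy landscape unchanged up to a relabeling of spins: the energy of a state `t` under `J'` equals the energy of `s * t` under `J`, so accumulating the gauge recovers a candidate solution in the original frame.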
Benefits:
- Modularity and Simplicity: The NAQA architecture is theoretically simple and adaptable.
- Enhanced Solution Quality: Research shows that NAQAs perform better than baseline methods like vanilla QAOA, especially in noisy settings.
Drawbacks:
- Computational Overhead: These techniques can be computationally expensive, and many key studies omit runtime information, raising potential performance concerns.
- Adaptation Cost: Step 2, modifying the optimization problem, can be particularly taxing when eigenvalue decompositions or similar operations are needed, since these scale cubically with the number of samples (O(n³)).
Limitations/Unknowns:
- Transferability: Although NAQAs perform well on Sherrington-Kirkpatrick (SK) Ising models, it is unclear how well they transfer to real-world scenarios, which frequently yield Ising models with power-law degree distributions.
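For reference, the SK instances used in such benchmarks are fully connected with Gaussian couplings; a minimal generator (the naming and normalization choices here are illustrative) might look like:

```python
import numpy as np

rng = np.random.default_rng(42)

def sk_couplings(n):
    """Sherrington-Kirkpatrick couplings: a symmetric, fully connected
    coupling matrix with zero diagonal and entries drawn i.i.d. from a
    standard normal. (Real-world problems, by contrast, tend to give
    sparse Ising graphs with power-law degree distributions.)"""
    J = np.triu(rng.normal(size=(n, n)), k=1)  # upper triangle only
    return J + J.T                             # symmetrize

J = sk_couplings(6)
```

Every pair of spins is coupled in an SK instance, which is precisely the structural property that may not carry over to sparse, heavy-tailed real-world graphs.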
- Comparative Benchmarks: There are few comparisons with other noise-aware algorithms. Comparing NAQAs to other methods, such as the hardware-independent MaxCut solver from Q-CTRL, could yield insightful results.
Future Directions:
Because NAQAs are modular, they can be improved by combining them with other advances in optimization. For example, future work might layer post-processing techniques such as shimming and calibration refinement on top of NAQAs, or incorporate techniques like ADAPT-QAOA in the sample-generation step (Step 1). As noise-aware techniques continue to advance, further gains are anticipated, and the modularity of NAQAs offers a rich avenue for future study.