By successfully demonstrating the Krylov Quantum Diagonalization (KQD) algorithm on an IBM Heron quantum processor, researchers from IBM and the University of Tokyo have reached a major milestone in quantum computing. In this pioneering experiment, they simulated one of the largest many-body systems ever run on a quantum processor: the Heisenberg model on a 2D heavy-hex lattice of up to 56 qubits. The results, published in Nature Communications, highlight how KQD can help close the crucial gap between today's small-scale quantum demonstrations and future fault-tolerant quantum computing.
The Krylov Quantum Diagonalization Approach
KQD is a hybrid quantum-classical algorithm: a quantum adaptation of the popular classical Krylov subspace methods. Its main objective is estimating the low-lying energies of quantum many-body systems, especially ground-state energies.
KQD presents a strong alternative to conventional Variational Quantum Algorithms (VQAs), which suffer from unreliable convergence guarantees and iterative optimisation that becomes infeasible at larger system sizes. And although Quantum Phase Estimation (QPE) offers theoretical precision, the circuit depths it requires for problems of practical interest demand substantial quantum error correction, which KQD avoids.
The core of KQD consists of two primary steps:
- Quantum Subroutine: The quantum processor is used to create a subspace of the many-body Hilbert space. A Krylov basis is produced by applying powers of the time-evolution operator ($U := e^{-iH\,dt}$) to an initial reference state ($|\psi_0\rangle$), implemented via Trotterized unitary evolutions.
- Classical Post-processing: The matrix elements of the projected Hamiltonian and of the overlap matrix within this subspace are measured on the quantum computer; a classical computer then solves the resulting generalised eigenvalue problem to estimate the approximate low-lying energy eigenstates, as in the sketch below.
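To make the two steps concrete, here is a minimal, noiseless sketch of the KQD pipeline in Python, with exact linear algebra (NumPy/SciPy) standing in for the quantum processor. The toy Hamiltonian, time step, and regularisation threshold are illustrative choices, not the paper's values:

```python
import numpy as np
from scipy.linalg import expm, eigh

# Toy Hermitian matrix standing in for the Heisenberg Hamiltonian.
rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2

dt, D = 0.1, 10                    # time step; Krylov dimension D=10 as in the paper
U = expm(-1j * H * dt)             # time-evolution operator U = e^{-iH dt}

psi0 = np.zeros(n, dtype=complex)  # reference state |psi_0>
psi0[0] = 1.0

# Step 1 (on hardware: Trotterized evolutions): Krylov basis {U^k |psi_0>}.
V = np.column_stack([np.linalg.matrix_power(U, k) @ psi0 for k in range(D)])

# Step 2 (on hardware these matrix elements are estimated from circuits):
# projected Hamiltonian and overlap matrix, then a generalised eigenproblem.
Hk = V.conj().T @ H @ V
S = V.conj().T @ V

# Regularise by discarding near-null directions of S, then diagonalise.
s, W = np.linalg.eigh(S)
P = W[:, s > 1e-10] / np.sqrt(s[s > 1e-10])
E0 = np.linalg.eigvalsh(P.conj().T @ Hk @ P)[0]

print("KQD estimate:", E0, "| exact:", eigh(H, eigvals_only=True)[0])
```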
A major benefit of KQD is its exponential convergence to the ground-state energy. Even with significant noise, the error from projecting into this unitary Krylov space diminishes exponentially with increasing Krylov dimension. This convergence guarantee, together with the ability to approximate time evolutions using very shallow circuits, makes KQD especially well-suited to the pre-fault-tolerant quantum devices already in use.
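Continuing the toy sketch above (same assumptions), sweeping the Krylov dimension makes this convergence visible: the error of the lowest generalised eigenvalue shrinks rapidly as $D$ grows.

```python
# Error of the KQD estimate versus Krylov dimension D (reuses H, U, psi0).
exact = eigh(H, eigvals_only=True)[0]
for D in range(2, 11):
    V = np.column_stack([np.linalg.matrix_power(U, k) @ psi0 for k in range(D)])
    Hk, S = V.conj().T @ H @ V, V.conj().T @ V
    s, W = np.linalg.eigh(S)
    P = W[:, s > 1e-10] / np.sqrt(s[s > 1e-10])
    print(D, np.linalg.eigvalsh(P.conj().T @ Hk @ P)[0] - exact)
```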
Experimental Implementation on IBM Hardware
The experiments were run on ibm_montecarlo, an IBM Heron r1 processor. Compared with earlier generations, this 133-qubit device has reduced crosstalk and faster two-qubit gates, thanks to fixed-frequency transmon qubits coupled via tunable couplers.
The study focused on a key system in condensed matter physics: the spin-1/2 antiferromagnetic Heisenberg model on heavy-hexagonal lattices. The team exploited the model's U(1) symmetry, which corresponds to conservation of Hamming weight (or particle number), to streamline the quantum circuits and improve viability on noisy hardware. This reduced circuit depth by allowing controlled initialisations of the reference state in place of intricate controlled time evolutions.
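As a small illustration of this symmetry (a toy chain, not the paper's 56-qubit lattice), one can build the Heisenberg Hamiltonian with Qiskit's `SparsePauliOp` and check that it commutes with the total-$Z$ operator, which is exactly the statement that Hamming weight is conserved:

```python
from qiskit.quantum_info import SparsePauliOp

# Heisenberg model sum over edges of (X_i X_j + Y_i Y_j + Z_i Z_j) on a toy
# 4-qubit chain (placeholder for the heavy-hexagonal coupling map).
edges = [(0, 1), (1, 2), (2, 3)]
terms = [(p + p, [i, j], 1.0) for (i, j) in edges for p in "XYZ"]
H = SparsePauliOp.from_sparse_list(terms, num_qubits=4)

# U(1) symmetry: H commutes with total Z, so Hamming weight is conserved.
Ztot = SparsePauliOp.from_sparse_list(
    [("Z", [q], 1.0) for q in range(4)], num_qubits=4
)
commutator = (H.dot(Ztot) - Ztot.dot(H)).simplify()
print(commutator)  # all coefficients vanish: the zero operator
```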
The experiments targeted the particle-number sectors $k=1$, $k=3$, and $k=5$, using computational basis states with the corresponding number of particles as the initial states.
- The $k=1$ experiment employed 57 qubits of the processor.
- The $k=3$ experiment used a subset of 45 qubits.
- The $k=5$ experiment used 43 qubits.
Notably, the largest subspace dimension, reached in the 5-particle experiment, was 850,668: close to the full Hilbert space of 20 qubits (dimension $2^{20} = 1,048,576$) and exceeding that of 19 qubits ($2^{19} = 524,288$). This shows that even when calculations are restricted to symmetry sectors, KQD can reach enormous effective Hilbert spaces. To ensure that all experiments completed within the device's 24-hour recalibration window, the Krylov space dimension was fixed at $D=10$ throughout.
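These dimensions follow from binomial counting. A quick check (assuming, as an interpretation of the qubit counts above, that one qubit in each experiment serves as an auxiliary qubit for measuring matrix elements, leaving 56, 44, and 42 system qubits):

```python
from math import comb

# Particle-number sector dimensions C(n_system, k); the split into system
# qubits plus one auxiliary qubit is an assumption that matches the numbers.
print(comb(56, 1))   # k=1 -> 56
print(comb(44, 3))   # k=3 -> 13,244
print(comb(42, 5))   # k=5 -> 850,668, the dimension quoted above
print(2**19, 2**20)  # 524,288 and 1,048,576 for comparison
```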
Circuit simplification was essential. Using a three-colouring of the lattice's edges, the heavy-hexagonal structure made it possible to implement both the controlled preparation and the Trotterized time evolutions with just three distinct two-qubit gate layers, so fewer noise models were needed for error mitigation. To balance accuracy against circuit depth, two second-order Trotter steps were employed for the time evolutions at $k=3, 5$ (resulting in nine two-qubit layers), as sketched below.
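A hedged sketch of how such an edge-coloured, second-order Trotter step can be written in Qiskit. The six-qubit ring and its colouring are placeholders, not the heavy-hex lattice, and the half-layers that the experiment merges between steps (yielding nine layers for two steps) are left explicit here for clarity:

```python
from qiskit import QuantumCircuit
from qiskit.circuit.library import RXXGate, RYYGate, RZZGate

def heis_layer(qc, edges, theta):
    # One colour class = one layer of parallel two-qubit Heisenberg gates;
    # Qiskit convention: RXX(theta) = exp(-i * theta * XX / 2), etc.
    for i, j in edges:
        for gate in (RXXGate, RYYGate, RZZGate):
            qc.append(gate(theta), [i, j])

def second_order_trotter(n_qubits, colours, J, dt, steps=2):
    # Symmetric (second-order) splitting per step: A/2, B/2, C, B/2, A/2.
    # Merging adjacent A half-layers between steps gives the nine
    # two-qubit layers mentioned above for steps=2.
    A, B, C = colours
    theta = 2 * J * dt  # exp(-i J dt XX) = RXX(2 J dt)
    qc = QuantumCircuit(n_qubits)
    for _ in range(steps):
        heis_layer(qc, A, theta / 2)
        heis_layer(qc, B, theta / 2)
        heis_layer(qc, C, theta)
        heis_layer(qc, B, theta / 2)
        heis_layer(qc, A, theta / 2)
    return qc

# Toy 6-qubit ring with a hypothetical three-colouring of its edges.
colours = ([(0, 1), (3, 4)], [(1, 2), (4, 5)], [(2, 3), (5, 0)])
qc = second_order_trotter(6, colours, J=1.0, dt=0.1)
```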
Advanced Error Mitigation Strategies
In the noisy intermediate-scale quantum (NISQ) era, effective error management is critical to success. The team deployed a sophisticated stack of error mitigation and suppression techniques:
- Probabilistic Error Amplification (PEA): This method learns the system's noise, amplifies it at varying strengths, and extrapolates the data back to the ideal, zero-noise limit.
- Twirled Readout Error Extinction (TREX): This technique converts noise into a quantifiable multiplicative factor by using random Pauli bit flips prior to measurement, thereby mitigating state preparation and measurement (SPAM) errors.
- Pauli Twirling: Converts the noise channel into a simpler Pauli noise channel, making error characterisation and mitigation easier.
- Dynamical Decoupling: Pulse sequences inserted during qubit idle periods to suppress unwanted interactions and crosstalk.
For the single-particle experiment, 300 twirled instances with 500 shots each were employed at noise-amplification factors of 1, 1.5, and 3. To keep the total runtime under control given the larger circuit sizes, the multi-particle experiments ($k=3, 5$) used 100 twirled instances with 500 shots each at factors of 1, 1.3, and 1.6.
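For readers who want to assemble a similar stack, the sketch below shows how these techniques map onto Qiskit Runtime's `EstimatorV2` options. This is not the authors' code: option names reflect recent `qiskit-ibm-runtime` releases, the `"pea"` amplifier is version-dependent, and access to the named backend is assumed.

```python
from qiskit_ibm_runtime import QiskitRuntimeService, EstimatorV2

service = QiskitRuntimeService()
backend = service.backend("ibm_montecarlo")       # Heron device named above
estimator = EstimatorV2(mode=backend)

opts = estimator.options
opts.resilience.measure_mitigation = True         # TREX readout mitigation
opts.resilience.zne_mitigation = True             # zero-noise extrapolation
opts.resilience.zne.amplifier = "pea"             # probabilistic error amplification
opts.resilience.zne.noise_factors = (1, 1.5, 3)   # the k=1 experiment's factors
opts.twirling.enable_gates = True                 # Pauli twirling
opts.twirling.num_randomizations = 300            # twirled instances (k=1)
opts.twirling.shots_per_randomization = 500
opts.dynamical_decoupling.enable = True           # DD during idle periods
```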
Results and Implications
Even in the presence of noise, the experimental results validated theoretical predictions, confirming the expected exponential convergence of the ground-state energy with increasing Krylov dimension. Within experimental error bars, the estimated energies for the 3-particle and 5-particle sectors agreed with noiseless classical simulations.
The $k=1$ experiment, however, deviated somewhat from the true lowest energy. This suggests that noise probably induced effective leakage out of the $k=1$ symmetry subspace, underscoring a potential pitfall of relying on symmetry conservation on noisy hardware. The authors speculate that the greater consistency of the later $k=3$ and $k=5$ experiments may be due to better device calibration.
This study represents a significant advance in quantum simulation, surpassing prior end-to-end quantum algorithm demonstrations for ground-state problems by more than two orders of magnitude in effective Hilbert space dimension and by more than a factor of two in qubit count. It shows that meaningful convergence can be achieved despite hardware noise, offering vital guidance for designing algorithms suited to the era before fault tolerance. The heavy-hexagonal architecture's good fit with KQD's circuit structure further underlines the value of algorithm-hardware co-design.
The KQD algorithm provides a solid foundation for quantum simulation, positioning quantum diagonalisation algorithms to complement traditional approaches in the computational quantum sciences and potentially enabling valuable quantum simulations before full fault tolerance is achieved.