Gottesman–Kitaev–Preskill States
Xanadu is a leader in photonic quantum computing, with a particular emphasis on demonstrating a scalable building block for fault-tolerant quantum computers. The achievement centres on the on-chip generation of Gottesman–Kitaev–Preskill (GKP) states and was first reported in a January 2025 Nature publication before being summarised in a June 2025 news piece. It has been described as a “first-of-its-kind achievement” and a “key step towards scalable fault-tolerant quantum computing.”
Understanding GKP States and Their Importance
GKP states are error-resistant photonic qubits: intricate quantum states composed of several photons arranged in particular configurations. This special structure allows known quantum error correction techniques to detect and repair small errors such as phase shifts or photon loss. GKP states are “the optimal photonic qubit” because they enable quantum logic operations and error correction “at room temperature and using relatively straightforward, deterministic operations,” according to Zachary Vernon, CTO at Xanadu, who emphasises their importance. Producing GKP states of sufficiently high quality on an integrated platform has long been difficult; by removing that obstacle, this breakthrough advances architectures for continuous-variable quantum computing.
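To make the grid structure concrete, here is a minimal numerical sketch (an illustration, not Xanadu's implementation) of a finite-energy GKP |0⟩ wavefunction: a comb of Gaussian peaks centred at even multiples of √π in position, under a broad Gaussian envelope. The parameters `delta` (peak width) and `kappa` (envelope width) are assumed values chosen for illustration.

```python
import numpy as np

def gkp_zero_wavefunction(q, delta=0.3, kappa=0.3):
    """Approximate finite-energy GKP |0> amplitude on the position grid q."""
    psi = np.zeros_like(q)
    for n in range(-10, 11):                 # truncate the infinite comb
        centre = 2 * n * np.sqrt(np.pi)      # logical-0 peak positions
        envelope = np.exp(-0.5 * (kappa * centre) ** 2)
        psi += envelope * np.exp(-((q - centre) ** 2) / (2 * delta ** 2))
    dq = q[1] - q[0]
    return psi / np.sqrt(np.sum(psi ** 2) * dq)   # normalise numerically

q = np.linspace(-10, 10, 4001)
psi = gkp_zero_wavefunction(q)

# The lattice is what makes small errors correctable: any displacement
# smaller than sqrt(pi)/2 is unambiguously rounded back onto the grid.
shift = 0.25
correction = np.sqrt(np.pi) * np.round(shift / np.sqrt(np.pi))
print(correction)   # rounds to 0.0: no logical error inferred
```

The rounding step at the end is the essence of GKP error correction: measured displacements are snapped to the nearest lattice point, so small phase-space errors leave the logical information untouched.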
In contrast to probabilistic entanglement techniques that require repeated attempts and intricate feed-forward control, GKP states allow operations to be performed with linear optics and measurements, making them essential for fault-tolerant computing. They also fit naturally into hybrid systems, serving as fundamental building blocks for quantum networks that link chips or modules, or for creating larger cluster states for measurement-based computing.
Their intrinsic compatibility with optical fibre makes scaling easier, allowing quantum states to be distributed across system components or even throughout data centres. This demonstration represents a turning point in the development of photonic quantum computing, offering an alternative to superconducting and trapped-ion platforms and pushing these systems closer to the error thresholds needed for utility-scale quantum machines.
The Aurora System: A Photonic Quantum Computing Architecture
Xanadu's work is exemplified by “Aurora,” a “sub-performant scale model of a quantum computer.” The system integrates all of the required basic components as separate, scalable, rack-deployed modules connected by fibre optics. Utilising 35 photonic devices, 84 squeezers, and 36 photon-number-resolving (PNR) detectors, Aurora provides 12 physical qubit modes every clock cycle. With the exception of the cryogenic PNR detection array, the entire system fits into four standard server racks and is controlled by a single server computer.
The following are important technological elements and their functions in Aurora:
Silicon Nitride Waveguides: The technology uses silicon nitride waveguides with extremely low optical losses, fabricated on 300 mm wafers common in commercial semiconductor production. Newer chip designs, based on Ligentec SA's 200-mm silicon-nitride waveguide platform, show promise for improved squeezing performance and even lower chip-to-fibre coupling losses.
Photon-Number-Resolving (PNR) Detectors: The detectors achieve efficiencies above 99%. Their foundation is 36 transition-edge-sensor (TES) arrays held at 12 mK in dilution refrigerators. These TES detectors resolve photon counts of up to seven with low miscategorisation error and run at a repetition rate of 1 MHz. Even at this level, detection efficiencies of more than 99% are required to satisfy the architecture's strict loss limits for the P1 path.
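A toy loss model (an assumption for illustration, not Xanadu's calibration pipeline) shows why such high efficiencies matter: if each incident photon is registered independently with probability eta, an n-photon event is counted correctly only when every photon is seen.

```python
# Toy PNR model: each photon is detected independently with probability
# eta, so the true count n is reported only if all n photons survive.

def p_correct_count(n_photons, eta):
    """Probability the detector reports the true photon number."""
    return eta ** n_photons

for eta in (0.95, 0.99):
    print(eta, round(p_correct_count(7, eta), 3))
# Small efficiency gaps compound with photon number: a 7-photon event
# is miscounted ~30% of the time at eta = 0.95, but only ~7% at 0.99.
```

This compounding with photon number is why loss budgets for heralding paths are so unforgiving.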
Optical Packaging: Considerable attention was given to loss-optimised optical packaging, including precise alignment, specialised chip mounting, and efficient fibre connections. This ensures the fragile quantum information is not degraded during routing and measurement.
Refinery Array: This subsystem comprises six photonic integrated circuits (PICs) built on a thin-film lithium-niobate substrate. In each refinery, two binary trees of electro-optic Mach-Zehnder modulator switches dynamically select the best output state based on feedforward instructions from the PNR detection system. While the current chips use probability-boosting multiplexing and Bell-pair synthesis, future generations of Aurora refinery chips are expected to incorporate homodyne detectors to carry out the full adaptive breeding strategy.
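The routing logic of such a switch tree can be sketched in a few lines (assumed logic for illustration, not the refinery firmware): to steer input number k of N multiplexed candidates to the single output, each level of the binary tree is simply set to the corresponding bit of k.

```python
# Binary-tree multiplexing sketch: a depth-d tree of 2x2 switches routes
# one of 2**d inputs to the output; the per-level switch settings are
# the binary digits of the chosen input's index, most significant first.

def switch_settings(chosen_index, depth):
    """Per-level settings (0/1) steering input `chosen_index` through
    a depth-level binary switch tree."""
    return [(chosen_index >> level) & 1 for level in range(depth - 1, -1, -1)]

# Feedforward: PNR outcomes herald which of 8 multiplexed sources
# succeeded this clock cycle; the tree routes that source onward.
print(switch_settings(5, depth=3))   # input 5 -> settings [1, 0, 1]
```

The point of the sketch is that the feedforward decision is classically trivial; the engineering challenge lies in executing it within a 1 MHz clock cycle at low optical loss.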
Interconnects: Phase- and polarisation-stabilised fibre-optical delay lines link the refinery modules to one another and to the QPU modules. These delays serve as buffers while heralding information is processed and enable temporal entanglement in the cluster state.
Experimental Demonstrations and Results
The Aurora system’s primary features were benchmarked through two major experiments.
- Gaussian Cluster State Synthesis: To create a 12 × N-mode Gaussian cluster state, the system was configured to send only squeezed states to the QPU array. Running continuously for two hours at a 1 MHz clock rate, this benchmark synthesised and measured a macronode cluster state with 86.4 billion modes. Despite large optical losses (about 14 dB), the nullifier variances stayed consistently below the vacuum noise threshold, providing compelling evidence of squeezing and confirming entanglement within the cluster state.
- Repetition Code Error Detection: Using low-quality GKP states, this experiment demonstrated the system's feedforward and non-Gaussian-state synthesis capabilities. The QPU decoder analysed the results of the system's two (foliated) repetition-code checks in real time, calculating bit values and phase-error probabilities in order to make adaptive choices about the measurement basis for the next time step.
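The squeezing criterion in the first benchmark can be sketched with a textbook noise model (a simplification, not the paper's full analysis): a nullifier built from an r-squeezed mode has variance exp(−2r) in shot-noise units (vacuum = 1), and pure loss with transmission eta mixes vacuum noise back in, giving V_out = eta·exp(−2r) + (1 − eta). Squeezing, and the entanglement it certifies, survives while V_out < 1.

```python
import math

def nullifier_variance(r, loss_db):
    """Nullifier variance (shot-noise units) of an r-squeezed mode
    after loss_db of attenuation; vacuum corresponds to 1."""
    eta = 10 ** (-loss_db / 10)          # power transmission after loss
    return eta * math.exp(-2 * r) + (1 - eta)

v = nullifier_variance(r=1.0, loss_db=14)   # ~14 dB loss, as in Aurora
print(round(v, 3))   # just below the vacuum level of 1
```

This makes the benchmark's claim quantitative: even at 14 dB loss, any nonzero input squeezing keeps the variance strictly below vacuum, though only barely, which is exactly why the measurement demands high precision.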
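The second benchmark's error-detection principle can be illustrated with a minimal repetition-code sketch (a classical toy, not Aurora's real-time decoder): parity checks between neighbouring copies flag where a flip occurred, and a majority vote recovers the encoded bit.

```python
# Repetition-code toy: a logical bit is copied across several qubits;
# adjacent-pair parities locate flips, and majority vote decodes.

def checks(bits):
    """Parity of each adjacent pair; a 1 flags a flip between them."""
    return [bits[i] ^ bits[i + 1] for i in range(len(bits) - 1)]

def decode(bits):
    """Majority vote over the repeated copies of the logical bit."""
    return int(sum(bits) > len(bits) / 2)

noisy = [1, 0, 1]                    # logical 1 encoded 3x, middle qubit flipped
print(checks(noisy), decode(noisy))  # [1, 1] brackets the flip; decodes to 1
```

In the foliated, continuous-variable setting the checks are built from GKP syndrome measurements rather than bit parities, but the decoder's job is the same: turn check outcomes into error probabilities and feed the decision forward in time.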
Limitations and Future Outlook
Even with these striking demonstrations, a sizeable “component performance gap” remains between existing capabilities and the demanding requirements of fault tolerance. Optical loss is the main limiting factor, compromising the purity and coherence of quantum states. Whereas optimised designs for fault-tolerant operation require loss budgets of roughly 1%, the Aurora system suffered losses of approximately 56% for heralding pathways (P1) and over 95% for heralded optical paths (P1 and P2).
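A quick conversion helper (a generic formula, added here for illustration) puts these figures in the decibel units used elsewhere in the article, showing how large the remaining gap is.

```python
import math

def loss_to_db(loss_fraction):
    """Fractional loss (e.g. 0.56 = 56%) -> attenuation in dB."""
    return -10 * math.log10(1 - loss_fraction)

for loss in (0.01, 0.56, 0.95):
    print(f"{loss:.0%} loss = {loss_to_db(loss):.2f} dB")
# A ~1% loss budget is only ~0.04 dB per path, while 56% and 95% losses
# correspond to roughly 3.6 dB and 13 dB of attenuation respectively.
```

Seen in decibels, closing the gap means reducing per-path attenuation by two orders of magnitude, which is why the roadmap below targets every component's insertion loss at once.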
Future initiatives by Xanadu will focus on:
- Hardware Improvements: Improving chip-fabrication processes, optimising waveguide geometry, and refining packaging to lower optical loss and increase fidelity. This includes improving the insertion loss of each photonic component by a factor of 20-30 (on a decibel scale) relative to its current performance.
- Architectural Refinements: Investigating cutting-edge hardware-level improvements to boost photon generation and detection rates, as well as sophisticated error mitigation strategies to offset known sources of loss and imperfection.
- Integration and Scaling: Integrating the newly developed GKP generation methods with the networking capabilities, error-correction protocols, and logic gates previously demonstrated in the Aurora system. The company remains committed to a modular approach to quantum computing, in which error-correcting components can be mass-produced, manipulated, and monitored on scalable, semiconductor-compatible platforms.
Although current quantum hardware, across all platforms, remains in the noisy intermediate-scale quantum (NISQ) era, Xanadu's work lays out a clear path to crossing the fault-tolerance threshold and scaling photonic quantum computers to address practical applications. With advances in fibre-optical networking, classical control electronics, and photonic-chip fabrication, a realistic photonic architecture can be scaled and modularised. Continued optimisation of optical GKP-based architectures is needed to identify the most hardware-efficient and imperfection-tolerant designs.