Quantum Feature Maps
Iterative quantum feature maps (IQFMs), a notable advance in quantum machine learning (QML), provide a hybrid classical-quantum architecture that addresses some of the main drawbacks of implementing quantum models on existing hardware. Researchers from Fujitsu Research, including Nasa Matsumoto, Quoc Hoan Tran, Koki Chinzei, Yasuhiro Endo, and Hirotaka Oshima, created this framework. The main advantages and conclusions of their research are outlined in the news story “Quantum-enhanced Machine Learning Boosts Performance with Iterative Feature Maps”, released by Quantum News on June 25, 2025.
Fundamentally, IQFMs build on the idea of Quantum Feature Maps (QFMs): quantum circuits that convert classical data into quantum states, exploiting the exponentially large Hilbert space available to quantum computers. Inspired by classical machine learning (ML) techniques that map input data into new feature spaces for enhanced separability, this transformation enables QML models to function as universal approximators of continuous functions and potentially achieve exponential speedups for certain classification problems.
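To make the encoding idea concrete, the NumPy sketch below simulates a minimal angle-encoding feature map, one common QFM construction (the paper's exact circuits are not specified here, so this is an illustrative assumption): each classical feature sets the rotation angle of one qubit, and the result is a quantum state in a space whose dimension grows exponentially with the number of features.

```python
import numpy as np

def angle_encoding_feature_map(x):
    """Map a classical vector x into an n-qubit quantum state by applying
    an RY(x_i) rotation to qubit i of the |0...0> state (angle encoding).
    Returns the 2^n-dimensional state vector. Illustrative sketch only."""
    def ry(theta):
        # Single-qubit RY rotation matrix
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]])
    # Each qubit starts in |0>; RY(x_i)|0> = cos(x_i/2)|0> + sin(x_i/2)|1>
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, ry(xi) @ np.array([1.0, 0.0]))
    return state

# A 2-feature input becomes a 4-dimensional (2-qubit) quantum state
psi = angle_encoding_feature_map([0.3, 1.2])
print(psi.shape)        # (4,)
print(np.sum(psi**2))   # 1.0 up to float error (normalised state)
```

Note how two classical numbers already yield a four-dimensional state; n features yield a 2^n-dimensional one, which is the feature-space blow-up QFMs exploit.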
However, there are many obstacles to using deep QFMs in practice. Existing quantum hardware has intrinsic limitations and is vulnerable to circuit noise. Moreover, the conventional variational quantum algorithms (VQAs) employed to train these models frequently hit computational bottlenecks, particularly in obtaining precise gradient estimates, which requires substantial quantum resources and can lead to problems such as becoming trapped in local minima or encountering “barren plateaus” in the optimisation landscape.
To address these issues, IQFMs iteratively stack shallow quantum feature maps (QFMs) with classically computed augmentation weights to build deep learning systems. This hybrid design deliberately reduces the quantum resources needed for learning.
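The alternating structure can be sketched as follows. Here the quantum step is replaced by a classical stand-in nonlinearity (`qfm_measure` is a hypothetical placeholder, not the paper's circuit) purely to show the data flow: fixed-parameter feature maps alternate with trainable classical augmentation weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def qfm_measure(x, theta):
    """Stand-in for a shallow QFM: encode x under fixed parameters theta and
    return a vector of measurement expectation values. Simulated here with a
    bounded nonlinearity; a real QFM would run on quantum hardware."""
    return np.tanh(theta @ x)

def iqfm_forward(x, thetas, weights):
    """Iterative forward pass: each layer's measurement outputs are mixed by
    classical augmentation weights W_l and fed into the next shallow QFM."""
    h = x
    for theta, W in zip(thetas, weights):
        g = qfm_measure(h, theta)   # quantum feature extraction (theta fixed)
        h = W @ g                   # trainable classical augmentation
    return h

dim, n_layers = 4, 3
thetas = [rng.normal(size=(dim, dim)) for _ in range(n_layers)]   # fixed, random
weights = [rng.normal(size=(dim, dim)) for _ in range(n_layers)]  # trainable
out = iqfm_forward(rng.normal(size=dim), thetas, weights)
print(out.shape)  # (4,)
```

The key point of the structure is that depth comes from repetition of shallow blocks plus classical mixing, not from one deep quantum circuit.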
- Hybrid Architecture: Rather than relying on a single deep quantum circuit, IQFMs connect each shallow QFM to the next via measurement outputs, which are subsequently processed by classical augmentation. This structure is more practical for near-term quantum computers while improving expressiveness.
- Classical Augmentation Weights: An important innovation of IQFMs is that they only optimise the weights of the classical augmentation connecting the QFMs, not the quantum circuits’ variational parameters. This solution circumvents a major drawback of conventional QML algorithms and significantly reduces quantum computational runtime by shifting the computationally taxing operation of gradient estimation to classical processors. While the classical augmentation parameters (𝑾_l) can be trained, the quantum circuit parameters (𝜽_l) are fixed, usually to random values.
- Contrastive Learning: As a crucial representation learning method, contrastive learning is integrated into IQFMs. When given comparable inputs, the model is trained to produce similar representations; when given unrelated data, it produces divergent representations. Contrastive learning improves resistance to noise in IQFMs by concentrating on key data similarities and differences, stabilising feature extraction even across noisy quantum circuits, and reducing variability brought on by hardware flaws or quantum measurements.
- An “anchor” feature vector is created for a given input using a supervised contrastive-learning methodology. A “positive” sample (one with the same label) is processed to produce a representation that is encouraged to be closer to the anchor, whereas a “negative” sample (one with a different label) produces a representation that is pushed farther away. This is accomplished by minimising a contrastive loss function.
- Layer-wise Training: IQFMs supplement contrastive learning with a layer-by-layer training methodology. Rather than simultaneously optimising every parameter, which would require a significant amount of quantum resources, this method trains the classical augmentation weights for each QFM layer in turn. This greatly lowers computational complexity and avoids the “barren plateaus” phenomenon common in VQAs, in which gradients vanish in deep quantum circuits.
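The anchor/positive/negative mechanics above can be sketched with a generic supervised contrastive loss (SupCon-style). The exact loss used in the paper is not reproduced here, and the feature vectors are made-up toy data; the sketch only shows how same-label pairs are pulled together and different-label pairs pushed apart.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.5):
    """Generic supervised contrastive loss: for each anchor, raise the
    similarity of same-label ("positive") representations relative to all
    others ("negatives"). Sketch only, not the paper's exact objective."""
    # Normalise rows so the dot product becomes cosine similarity
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    n = len(labels)
    loss = 0.0
    for i in range(n):
        mask = (labels == labels[i])
        mask[i] = False                      # exclude the anchor itself
        if not mask.any():
            continue
        others = np.array([j for j in range(n) if j != i])
        log_denom = np.log(np.exp(sim[i, others]).sum())
        # Negative mean log-probability of the positives vs. all non-anchors
        loss += -np.mean(sim[i, mask] - log_denom)
    return loss / n

# Two well-separated classes: the loss is small; scrambling labels raises it
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
print(supervised_contrastive_loss(feats, labels))
```

Minimising this quantity over the classical augmentation weights of one layer, then freezing them and moving to the next layer, is the layer-wise scheme described above.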
Quantum Feature Extraction Process: Features are extracted from each QFM block via quantum measurements in several bases. This entails mapping classical features into a quantum state with an embedding circuit (𝒰_𝚿), entangling and mixing the data with a preprocessing circuit (P_l), and then altering the measurement basis with a parameterised circuit (Ω_l). The procedure produces a feature vector (𝒈_l) built from the expectation values of the measurement operators. To enrich the feature set, measurements are performed in bases beyond the computational (Pauli-Z) basis and the resulting feature vectors are concatenated. Besides improving classification performance, this multi-basis method can prevent some quantum correlations from being simulated classically.
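The multi-basis extraction step can be illustrated on a small simulated state. The sketch below (an assumption about the general mechanism, not the paper's specific operators) computes the expectation values ⟨Z_q⟩ and ⟨X_q⟩ for every qubit and concatenates them into a feature vector, showing how adding a second basis doubles the number of features.

```python
import numpy as np

# Single-qubit Pauli operators
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def single_qubit_op(op, qubit, n):
    """Embed a single-qubit operator at position `qubit` in an n-qubit system."""
    full = np.array([[1.0 + 0j]])
    for q in range(n):
        full = np.kron(full, op if q == qubit else I2)
    return full

def multi_basis_features(state):
    """Feature vector from expectation values <Z_q> and <X_q> of every qubit:
    measuring beyond the computational (Z) basis enlarges the feature set,
    as in the IQFM extraction step. Classical simulation sketch only."""
    n = int(np.log2(len(state)))
    feats = []
    for basis in (Z, X):
        for q in range(n):
            op = single_qubit_op(basis, q, n)
            feats.append(np.real(state.conj() @ op @ state))
    return np.array(feats)

# |+> ⊗ |0>: qubit 0 has <Z>=0, <X>=1; qubit 1 has <Z>=1, <X>=0
state = np.kron(np.array([1, 1]) / np.sqrt(2), np.array([1, 0])).astype(complex)
print(multi_basis_features(state))  # ≈ [0, 1, 1, 0]
```

Measuring only in the Z basis would have returned just [0, 1] here; the X-basis values carry information the computational basis alone misses, which is the point of the multi-basis scheme.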
Versatility and Performance: IQFMs show versatility by supporting both classical and quantum data categorisation tasks.
- Quantum Data Classification: In trials on quantum phase recognition tasks (Task A and Task B, which classify ground states of Hamiltonians into distinct quantum phases), IQFMs consistently outperformed Quantum Convolutional Neural Networks (QCNNs) in test accuracy. This indicates that random measurement bases combined with classical post-processing are sufficiently powerful, even without optimising the QFM circuits themselves.
- Robustness to Noise: IQFMs outperformed QCNNs in the presence of both statistical errors from a small number of measurement shots and physical RX noise, i.e. random rotations applied to the data, with the accuracy advantage growing at higher noise levels. Visualisations showed that contrastive learning yields stronger discriminative representations by forming more cohesive clusters in the feature space.
- Classical Data Classification: On the Fashion-MNIST dataset, IQFMs performed comparably to classical neural networks with similar architectures. To handle comparatively large datasets efficiently, a modular IQFMs design was used for classical data: the input is divided and processed in parallel by several QFMs. Because only the individual sub-circuits need to be implemented on quantum devices, this modular design makes large-scale tasks feasible on near-term quantum hardware with restricted qubit counts.
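The modular split can be sketched as follows. The per-module function `small_qfm` is again a hypothetical classical stand-in for a few-qubit sub-circuit; the module count and output size are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def small_qfm(chunk, theta):
    """Stand-in for one few-qubit QFM sub-circuit; classically simulated."""
    return np.tanh(theta @ chunk)

def modular_iqfm_layer(x, thetas, n_modules):
    """Modular layer: split a large classical input into chunks, process each
    chunk with its own small QFM, and concatenate the measurement outputs.
    Only the individual sub-circuits would need to fit on a quantum device."""
    chunks = np.array_split(x, n_modules)
    outputs = [small_qfm(c, t) for c, t in zip(chunks, thetas)]
    return np.concatenate(outputs)

# A 784-dimensional Fashion-MNIST-sized input handled by 16 modules of 49
# features each, every module emitting 8 output features
x = rng.normal(size=784)
n_modules = 16
thetas = [rng.normal(size=(8, len(c))) for c in np.array_split(x, n_modules)]
features = modular_iqfm_layer(x, thetas, n_modules)
print(features.shape)  # (128,)
```

Each module sees only a 49-dimensional slice, so a device with a handful of qubits suffices per sub-circuit even though the overall input is large, which is the practical benefit the article describes.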
Future Outlook: The creation of IQFMs marks a significant step towards achieving the full potential of quantum-enhanced machine learning, making it a desirable choice for practical applications with limited computational resources and inconsistent data quality. Additionally, the researchers point out that the architecture of IQFMs circumvents the incompatibility of back-propagation with quantum circuits, which typically lack accessible intermediate states for gradient computation, and that alternative training techniques, such as Direct Feedback Alignment (DFA), might be investigated.