Coherence of a hole-spin flopping-mode qubit in a circuit quantum electrodynamics environment
That author's affiliation: Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Científicas, Madrid, Spain Institution (first & last author): University of Grenoble Alpes, CEA, Grenoble INP, IRIG-Pheliqs, Grenoble, France
Coupling semiconductor qubit devices to microwave resonators provides a way to transfer quantum information over long distances. A flopping-mode qubit that combines strong coupling to photons with good coherence properties has now been demonstrated.
Squeezing, trisqueezing and quadsqueezing in a hybrid oscillator–spin system
Higher-order interactions in quantum harmonic oscillator systems can result in useful effects, but they are hard to engineer. An experiment on a single trapped ion now demonstrates how spin can mediate higher-order nonlinear bosonic interactions.
A Comprehensive Analysis of Accuracy and Robustness in Quantum Neural Networks
That author's affiliation: FPT University First author institution: FPT University Last author institution: Texas Tech University
Quantum Machine Learning (QML) has recently emerged as a highly promising research frontier. Within this domain, Quantum Neural Networks (QNNs), characterized by Variational Quantum Circuits (VQCs) at their core and featuring layers of quantum gates optimized by classical algorithms, have garnered significant attention. However, a rigorous and exhaustive evaluation of their practical performance remains largely incomplete. In this study, we conduct a comprehensive comparative analysis of three prominent hybrid classical-quantum architectures: Quantum Convolutional Neural Networks (QCNN), Quantum Recurrent Neural Networks (QRNN), and Quantum Vision Transformers (QViT), focusing on the critical dimensions of generalization, accuracy, and robustness. Our findings provide novel insights that address previous evaluative gaps. Notably, while these models exhibit exceptional performance on low-feature datasets such as MNIST, their learning efficacy degrades significantly when applied to high-feature datasets. Furthermore, convolutional-based models like QCNN appear less effective on high-dimensional data than other machine learning architectures. Additionally, while all models are susceptible to adversarial noise, traditional architectures, such as recurrent and convolutional networks, demonstrate superior resilience. Conversely, in the presence of quantum noise, the transformer-based architecture proves its strength by maintaining high robustness against measurement noise, channel noise, and finite-shot effects, whereas other architectures suffer marked performance declines. These results provide a granular perspective on the current state of the field and underscore the critical importance of tailoring model selection to the constraints of contemporary Noisy Intermediate-Scale Quantum (NISQ) environments.
Hardware-Efficient Quantum Optimization for Transportation Networks via Compressed Adiabatic Evolution
That author's affiliation: Rensselaer Polytechnic Institute Institution (first & last author): Rensselaer Polytechnic Institute
Transportation systems such as urban logistics, vehicle routing, and infrastructure planning require solving large-scale combinatorial optimization problems under complex constraints. Problems such as the vehicle routing problem (VRP), traveling salesman problem (TSP), and facility location problem (FLP) involve large discrete search spaces and the need to generate multiple feasible solutions in real time. In this work, we develop a hardware-grounded hybrid quantum optimization framework that uses Approximate Quantum Compilation (AQC) to compress early segments of digitized adiabatic evolution into shallow circuits. The compressed prefix is combined with variational layers, enabling a systematic study of how initialization, circuit depth, and expressivity interact on near-term quantum hardware. All experiments are performed on an IBM gate-based quantum computer, and circuits are evaluated as stochastic generators of candidate transportation plans. Results show that moderate prefix compression reduces two-qubit gate depth while maintaining or improving feasible solution discovery, particularly for routing problems. These benefits depend on compatibility between the compressed prefix and the variational ansatz: while standard QAOA effectively leverages AQC initialization, linear-chain QAOA shows limited improvement. Overall, this work demonstrates that hybrid AQC-QAOA methods provide a practical pathway for hardware-efficient quantum optimization, positioning quantum algorithms as candidate generators within transportation decision-making workflows.
qSHIFT: An Adaptive Sampling Protocol for Higher-Order Quantum Simulation
That author's affiliation: Korea University Institution (first & last author): Korea University
Quantum simulation is a cornerstone application for quantum computing, yet standard methods face a trade-off between circuit depth and accuracy: Trotterization depth scales with the number of Hamiltonian terms $L$, while sampling-based qDRIFT is restricted to $O(t^2)$ error scaling. Here, we introduce qSHIFT, an adaptive sampling protocol that overcomes these limitations. By adaptively updating sampling distributions, qSHIFT maintains $L$-independent gate complexity while achieving an improved error scaling of $O(t^{1+r})$ for an adjustable parameter $r$. This performance is enabled by a classical subroutine solving $L^r$ linear equations per sampling round. Numerical demonstrations confirm the $O(t^{1+r})$ scaling, showcasing qSHIFT as a resource-efficient framework for high-precision quantum simulation. Furthermore, the protocol's reduced circuit depth enhances its compatibility with physical error mitigation, making it a promising candidate for implementation on near-term quantum devices. In addition to its role as a standalone algorithm, qSHIFT can provide a high-precision foundation for modular quantum frameworks such as qSWIFT or Krylov quantum diagonalization.
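For orientation, the sketch below reproduces only the qDRIFT baseline that qSHIFT refines: Hamiltonian terms are sampled with probability proportional to their coefficient magnitudes, which is what makes the gate count independent of $L$. The coefficients, evolution time, and sample count are illustrative assumptions; qSHIFT's adaptive update of the sampling distribution is not reproduced here.

```python
# Minimal sketch of the qDRIFT baseline that qSHIFT refines: Hamiltonian terms are
# sampled with probability proportional to their coefficient magnitudes, so the
# per-step gate count is independent of the number of terms L.  The adaptive
# redistribution used by qSHIFT is NOT reproduced; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

coeffs = np.array([0.9, 0.5, 0.3, 0.1])      # |h_j| for terms H_j (illustrative)
lam = coeffs.sum()                            # lambda = sum_j |h_j|
t, n_samples = 1.0, 200                       # total time, number of sampled gates

probs = coeffs / lam                          # qDRIFT sampling distribution
tau = lam * t / n_samples                     # rotation angle per sampled exponential

sequence = rng.choice(len(coeffs), size=n_samples, p=probs)
print(f"sampled term sequence (first 10): {sequence[:10]}")
print(f"each gate implements exp(-i * tau * H_j) with tau = {tau:.4f}")
```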
System-Level Design of Scalable Fluxonium Quantum Processors with Double-Transmon Couplers
That author's affiliation: City University of Hong Kong Institution (first & last author): The Quantum Science Center of Guangdong-Hong Kong-Macau Greater Bay Area
Fluxonium qubits combine long coherence times with strong anharmonicity, making them a promising platform for scalable superconducting quantum processors. Recent experiments have demonstrated high-fidelity operations in multi-qubit processors while suppressing stray qubit interactions using fluxonium-transmon-fluxonium (FTF) architectures. However, scaling such systems to larger arrays is constrained by a trade-off between achievable coupling strength, crosstalk suppression and qubit-qubit spacing required for wiring in a two-dimensional architecture. Multimode couplers, such as the double-transmon coupler (DTC), provide a promising pathway to overcome this limitation by enabling stronger interactions without compromising qubit spacing and isolation. Here, we develop a quantitative design framework for fluxonium-based quantum processors employing DTCs. Central to this work is a frequency-partitioned architecture that places qubit transitions, tunable-coupler excitations, and resonator modes in well-separated spectral regions. This structured allocation reduces parameter interdependence and enables the concurrent optimization of gate operations, readout, and qubit reset. By formulating device design as a multi-objective optimization problem under realistic experimental constraints and fabrication-induced disorder, we develop a tractable sequential workflow and determine a feasible parameter regime that simultaneously supports high-fidelity single- and two-qubit gates, fast qubit reset, and robust dispersive readout. These results establish a system-level architectural methodology that links circuit parameters to processor-level performance, and provide an experimentally actionable pathway toward scalable fluxonium quantum processors.
Large-Scale Quantum Circuit Simulation on an Exascale System for QPU Benchmarking
That author's affiliation: Forschungszentrum Jülich Institution (first & last author): Forschungszentrum Jülich
Recent advances in quantum computing have enabled the development of quantum processors with hundreds of qubits. However, noise continues to limit the amount of useful information that can be extracted from these systems, making it essential to identify the regime in which experimental outputs remain reliable. In this work, we benchmark Quantinuum Helios-1, a 98-qubit trapped-ion quantum processing unit, using the linear ramp quantum approximate optimization algorithm (LR-QAOA). To this end, we perform large-scale noiseless simulations on JUPITER, Europe's first exascale supercomputer, for circuits of up to 48 qubits and 3,384 two-qubit gates. These simulations, executed on 4,096 nodes equipped with 16,384 GH200 superchips and high-bandwidth CPU-GPU interconnects, provide a reference for validating experimental results at the edge of classical tractability. We find that, up to 48 qubits, Helios-1 remains in a noise-tolerant region, i.e., its samples cannot be clearly distinguished from those coming from a noiseless simulation. We then extend the analysis to larger system sizes using experimental data only, and apply a mean-of-means resampling procedure with a 3$\sigma$ threshold to determine whether the QPU output is statistically distinguishable from random sampling. This analysis identifies a regime of coherent performance up to 93 qubits (12,834 two-qubit gates), beyond which, at 95 qubits, the outputs become statistically indistinguishable from random sampling. These results demonstrate how exascale classical simulation can be used to validate quantum processors, and provide a quantitative boundary between noise-tolerant and random regimes in quantum processors.
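As a rough illustration of the statistical test described above, the sketch below applies a mean-of-means resampling with a 3$\sigma$ threshold to synthetic cost samples; the batch count and cost distributions are assumptions standing in for the LR-QAOA and random-sampling data, not the experimental values.

```python
# Hedged sketch of a mean-of-means test: the QPU output is declared distinguishable
# from random sampling if its batched mean cost lies more than 3 sigma below the
# batched mean of uniformly random bitstring costs.  All numbers here are synthetic.
import numpy as np

rng = np.random.default_rng(1)

def mean_of_means(samples, n_batches=50):
    batches = np.array_split(rng.permutation(samples), n_batches)
    means = np.array([b.mean() for b in batches])
    return means.mean(), means.std(ddof=1) / np.sqrt(n_batches)

# Synthetic cost values (lower is better), standing in for QPU and random samples.
qpu_costs    = rng.normal(loc=-10.0, scale=3.0, size=5000)
random_costs = rng.normal(loc=0.0,   scale=3.0, size=5000)

mu_q, se_q = mean_of_means(qpu_costs)
mu_r, se_r = mean_of_means(random_costs)

z = (mu_r - mu_q) / np.sqrt(se_q**2 + se_r**2)
print(f"z-score = {z:.1f} -> {'distinguishable' if z > 3 else 'indistinguishable'} at 3 sigma")
```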
Multi-Objective Optimization by Quantum-Annealing-Inspired Algorithms
That author's affiliation: Southern University of Science and Technology First author institution: Southern University of Science and Technology Last author institution: Harvard University
Combinatorial optimization is widely regarded as a primary application for near-term quantum processors, although a definitive demonstration of practical quantum advantage remains elusive. Recent studies have reported that both gate-based quantum circuits and quantum annealers can outperform state-of-the-art classical heuristics on multi-objective optimization (MO-MaxCut) problems. However, these studies did not fully account for the substantial pre- and post-processing overheads intrinsic to quantum solvers, leading to incomplete comparisons between quantum and classical approaches. In this work, we re-examine the same benchmark suite using GPU-based quantum-annealing-inspired algorithms (QAIAs), which, analogously to quantum processors, generate probabilistic samples and thus serve as formidable classical contenders. Our results show that QAIAs can sample candidate solutions approximately two orders of magnitude faster than previously studied quantum processors. In terms of end-to-end runtime, QAIAs also surpass industry-leading classical solvers, thereby establishing themselves as the superior performers among the quantum and classical solvers evaluated thus far for the MO-MaxCut instances.
Parameterized Quantum Circuits as Feature Maps: Representation Quality and Readout Effects in Multispectral Land-Cover Classification
That author's affiliation: National and Kapodistrian University of Athens Institution (first & last author): National and Kapodistrian University of Athens
We investigate variational quantum classifiers (VQCs) for land-cover classification from multispectral satellite imagery, adopting a feature-map perspective in which the quantum circuit defines a nonlinear data embedding while the readout determines how this representation is exploited. Using the EuroSAT-MS dataset, we perform a systematic one-vs-one evaluation across all class pairs under a controlled experimental protocol, comparing classical baselines (logistic regression, SVMs, neural networks) with VQCs employing both linear readout and quantum-kernel SVM strategies. Our results show that, while VQCs with linear readout do not outperform strong classical baselines such as RBF-SVM, the same trained quantum feature map can significantly improve performance when reused within a kernel-based decision framework. A qubit-count sweep further reveals saturation effects consistent with the mismatch between exponential Hilbert space dimension and linear parameter scaling. Overall, our findings highlight that the effectiveness of quantum models depends critically on the interplay between representation and readout, and that meaningful gains may arise from combining learned quantum feature maps with classical decision mechanisms rather than seeking direct replacement of classical models.
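The kernel-reuse mechanism described above can be sketched with a tiny, fixed angle-encoding feature map simulated as statevectors; the feature map, toy data, and use of scikit-learn's precomputed-kernel SVM are assumptions for illustration only, not the trained quantum feature map studied in the paper.

```python
# Sketch of reusing a quantum feature map as a kernel: k(x, x') = |<phi(x)|phi(x')>|^2.
# The feature map below is a fixed single-layer angle encoding on 2 qubits (an
# assumption for illustration); the paper's trained feature map is not reproduced.
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Return the 2-qubit statevector of RY(x0) (x) RY(x1) applied to |00>."""
    def ry_on_zero(theta):
        return np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.kron(ry_on_zero(x[0]), ry_on_zero(x[1]))

def kernel_matrix(A, B):
    SA = np.array([feature_state(a) for a in A])
    SB = np.array([feature_state(b) for b in B])
    return np.abs(SA @ SB.T) ** 2            # fidelity kernel between all pairs

rng = np.random.default_rng(2)
X = rng.uniform(0, np.pi, size=(40, 2))
y = (X[:, 0] + X[:, 1] > np.pi).astype(int)   # toy labels

X_tr, X_te, y_tr, y_te = X[:30], X[30:], y[:30], y[30:]
clf = SVC(kernel="precomputed").fit(kernel_matrix(X_tr, X_tr), y_tr)
print("test accuracy:", clf.score(kernel_matrix(X_te, X_tr), y_te))
```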
Towards Quantum Optimised Malware Containment
That author's affiliation: University of North Carolina - Chapel Hill First author institution: University of North Carolina - Chapel Hill Last author institution: University of Oxford
The containment of malware in computing networks may be naturally formulated as a network influence minimisation problem, in which one seeks to limit the expected spread of an infection while balancing the operational cost of disabling network connections. Classical approaches often rely on Monte Carlo simulation of stochastic diffusion processes and greedy optimisation over candidate edge removals, resulting in significant computational overhead due to repeated influence evaluations. In this work, we propose a hybrid quantum approach which combines Quantum Amplitude Estimation (QAE) and Grover Minimum Finding (GMF) to provide quadratic improvements in both the estimation and optimisation components of the problem. Specifically, QAE replaces classical Monte Carlo simulation, reducing the sampling complexity of influence estimation from $O(1/\varepsilon^2)$ to $O(1/\varepsilon)$ for a target additive error $\varepsilon \ll 1$, while GMF reduces the number of candidate evaluations required to identify optimal edge removals from $O(|E_C|)$ to $O(\sqrt{|E_C|})$. We present a formal problem definition, describe the construction of the corresponding quantum oracles, and analyse the resulting complexity improvements under standard oracle assumptions. Preliminary experiments, including classical simulation of QAE and small-scale execution of Grover search on real quantum hardware, support the expected theoretical scaling. While practical implementation at scale requires fault-tolerant quantum devices, our results demonstrate that quantum algorithms offer a promising long-term direction for accelerating stochastic network optimisation problems such as malware containment.
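The quoted separations are easy to make concrete: for additive error $\varepsilon$, classical Monte Carlo needs on the order of $1/\varepsilon^2$ samples versus roughly $1/\varepsilon$ QAE oracle calls, and Grover minimum finding needs about $\sqrt{|E_C|}$ evaluations instead of $|E_C|$. The sketch below only tabulates these scalings; constant factors are ignored and the numbers are purely illustrative.

```python
# Back-of-the-envelope comparison of the quoted scalings: classical Monte Carlo needs
# O(1/eps^2) samples for additive error eps, QAE needs O(1/eps) oracle calls, and
# Grover minimum finding needs O(sqrt(|E_C|)) evaluations instead of O(|E_C|).
import math

for eps in (1e-1, 1e-2, 1e-3):
    print(f"eps={eps:.0e}:  MC samples ~ {1/eps**2:,.0f}   QAE calls ~ {1/eps:,.0f}")

for n_edges in (100, 10_000, 1_000_000):
    print(f"|E_C|={n_edges:>9,}:  greedy evals ~ {n_edges:,}   GMF evals ~ {math.isqrt(n_edges):,}")
```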
Observation of Non-Markovian Evolution of Tripartite Quantum Steering
That author's affiliation: Technion-Israel Institute of Technology First author institution: South China Sea Institute of Oceanology, CAS Last author institution: Hangzhou Normal University
The memory effects in open quantum systems can induce information backflow and revive quantum correlations, thereby providing a powerful way to protect and recover useful quantum resources in realistic noisy environments. However, such dynamics remains experimentally unexplored in multipartite quantum steering. Here we observe different non-Markovian evolution of tripartite quantum steering using Greenberger-Horne-Zeilinger-type mixed states, covering both death and revival processes. In particular, we experimentally demonstrate the more intricate asymmetric steering structure of tripartite quantum steering through different bipartitions, a feature that does not arise in bipartite systems. Our results provide foundational insights into the hierarchical and directional structures in multipartite quantum steering, and highlight its potential as a useful resource for asymmetric quantum information processing.
Classical simulation of free-fermionic dynamics and quantum chemistry with magic input
That author's affiliation: University of Gdansk First author institution: Center for Theoretical Physics PAS, Warsaw Last author institution: Center for Theoretical Physics, Polish Academy of Sciences
Establishing the precise computational boundary between classically tractable fermionic systems and those capable of genuine quantum advantage is a central challenge in quantum simulation. While injecting non-Gaussian ``magic'' inputs into free-fermion circuits is widely expected to generate intractable complexity, we identify a physically motivated intermediate regime. Supported by rigorous bounds and numerical evidence, we show that for a class of paired non-Gaussian fermionic states, essential quantum simulation primitives -- transition amplitudes, overlaps, and arbitrary-weight number correlators -- can be efficiently approximated to additive error under free-fermionic dynamics. This tractability stems from an algebraic reduction that compresses exponentially large multiparticle interference into a single coefficient of a multivariate Pfaffian polynomial. Because these classical estimators match the intrinsic $O(1/\sqrt{K})$ statistical uncertainty of quantum hardware utilizing $K$ measurement shots, they constitute a practical benchmark. Building on this foundation, we construct an additive-error estimator for high-weight Wilson observables in the noninteracting quench of recent trapped-ion experiments, providing a rigorous classical benchmark. Extending this to quantum chemistry, we demonstrate that core overlap-based subroutines for antisymmetrized products of strongly orthogonal geminals admit exact Pfaffian reductions. Ultimately, these results sharpen the boundary of quantum advantage, establishing that the paired-electron scaffold is effectively dequantized and clarifying exactly where quantum resources are indispensable.
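A minimal numerical reminder of the identity underlying such Pfaffian reductions: for a real skew-symmetric matrix $A$ of even dimension, $\mathrm{Pf}(A)^2 = \det(A)$, so Pfaffian magnitudes can be evaluated with standard linear algebra. The matrix below is random and purely illustrative.

```python
# Reminder of the identity behind Pfaffian-based estimators: for a real skew-symmetric
# matrix A of even dimension, Pf(A)^2 = det(A), so |Pf(A)| = sqrt(det(A)).
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(6, 6))
A = M - M.T                                   # skew-symmetric by construction

pf_abs = np.sqrt(np.linalg.det(A))            # det(A) >= 0 for skew-symmetric A
print(f"|Pf(A)| = {pf_abs:.6f}")

# Sanity check on a 2x2 block: Pf([[0, a], [-a, 0]]) = a.
a = 1.7
print(np.isclose(np.sqrt(np.linalg.det(np.array([[0, a], [-a, 0]]))), a))
```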
Simulating dynamics of RLC circuits with a quantum differential-algebraic equations solver
That author's affiliation: IBM Quantum First author institution: IBM Quantum Last author institution: MIT-IBM Watson AI Lab
We introduce a quantum algorithm for simulating the dynamics of electrical circuits consisting of resistors, inductors and capacitors (aka RLC circuits) along with power sources. Given oracle access to the connectivity of the circuit and values of the electrical elements, our algorithm prepares a quantum state that encodes voltages and current values either at a specified time or the history of their evolution over a time-interval. For an RLC circuit with $N$ components, our algorithm runs in time $\textsf{polylog}(N)$ under mild assumptions on the connectivity of the circuit and values of its components. This provides an exponential speed-up over classical algorithms that take $\textsf{poly}(N)$ time in the worst-case. Our algorithm can be used to estimate energy across a set of components or dissipated power in $\textsf{polylog}(N)$ time, a problem that we prove is BQP-hard and therefore unlikely to be efficiently solved by classical algorithms. The main challenge in simulating the dynamics of RLC circuits is that they are governed by differential-algebraic equations (DAEs), a coupled system of differential equations with hidden algebraic constraints. Consequently, existing quantum algorithms for ordinary differential equations cannot be directly utilized. We therefore develop a quantum DAE solver for simulating the time-evolution of linear DAEs. For RLC circuits, we employ modified nodal analysis to create a system of DAEs compatible with our quantum algorithm. We establish BQP-hardness by demonstrating that any network of classical harmonic oscillators, for which an energy-estimation problem is known to be BQP-hard, is a special case of an LC circuit. Our work gives theoretical evidence of quantum advantage in simulating RLC circuits and we expect that our quantum DAE solver will find broader use in the simulation of dynamical systems.
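As a purely classical reference for the DAE structure discussed above, the sketch below applies modified nodal analysis to a toy source-resistor-capacitor circuit, producing $C\,\dot{x} + G\,x = b(t)$ with a singular capacitance matrix (the hidden algebraic constraint), and integrates it with backward Euler. Component values are assumptions; this is not the quantum solver.

```python
# Classical reference sketch: modified nodal analysis (MNA) of a voltage source, a
# resistor and a capacitor yields a DAE  C_mat * dx/dt + G * x = b(t)  with a
# SINGULAR C_mat (the hidden algebraic constraint).  Backward Euler integrates it.
# Unknowns x = [v1, v2, i_src]; component values are illustrative.
import numpy as np

R, C = 1e3, 1e-6                              # 1 kOhm, 1 uF
Vs = lambda t: 1.0                            # 1 V step source

G = np.array([[ 1/R, -1/R, 1.0],              # KCL at node 1
              [-1/R,  1/R, 0.0],              # KCL at node 2
              [ 1.0,  0.0, 0.0]])             # branch equation: v1 = Vs(t)
C_mat = np.array([[0.0, 0.0, 0.0],
                  [0.0, C,   0.0],
                  [0.0, 0.0, 0.0]])           # singular -> genuine DAE
b = lambda t: np.array([0.0, 0.0, Vs(t)])

dt, x = 1e-5, np.zeros(3)
lhs = C_mat / dt + G
for step in range(1, 501):
    t = step * dt
    x = np.linalg.solve(lhs, C_mat @ x / dt + b(t))

print(f"capacitor voltage at t={t*1e3:.1f} ms: {x[1]:.3f} V (RC = {R*C*1e3:.1f} ms)")
```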
Identifying vulnerable nodes and detecting malicious entanglement patterns to handle st-connectivity attacks in quantum networks
That author's affiliation: Institut Polytechnique de Paris Institution (first & last author): Institut Polytechnique de Paris
Problems in distributed system security often map naturally to graphs. Node centrality, a concept used across a wide range of applications, assesses the importance of nodes in a graph. Cooperative game theory has also been used to create nuanced and flexible notions of node centrality; however, such approaches are often computationally expensive to implement classically. We describe a quantum approach to approximating the importance of quantum nodes that maintain a target connection in a quantum network. We detail a method for quickly identifying high-importance nodes that can be targeted by adversaries. The approximation method relies on quantum subroutines for st-connectivity, approximating Shapley values, and finding the maximum of a list. We consider a malicious actor targeting a subset of nodes to perturb the system functionality. Our method identifies the nodes that are most important in keeping nodes s and t connected. Once we have identified high-importance nodes, we require methods to identify when those nodes are compromised. We describe how Quantum Support Vector Machine (QSVM) classifiers can be used to detect malicious behavior in quantum networks. In particular, we describe the detection of entanglement attacks in quantum repeaters. We show that our initial assessment approach can be complemented by QSVM classifiers to identify and report anomalous situations related to malicious manipulation of entanglement swapping. Finally, we explore the potential complexity benefits of our quantum approach compared with classical and probabilistic methods. We also release all the simulation code in a companion GitHub repository.
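For context, the classical baseline that the quantum subroutines are meant to accelerate can be sketched as a permutation-sampled Shapley estimate of each node's marginal contribution to s-t connectivity; the toy graph, sample count, and use of networkx are assumptions for illustration only.

```python
# Classical baseline sketch of the quantity the paper approximates with quantum
# subroutines: a permutation-sampled Shapley value measuring each node's marginal
# contribution to s-t connectivity.  Graph and sample count are toy assumptions.
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
G = nx.Graph([("s", "a"), ("a", "b"), ("b", "t"), ("s", "c"), ("c", "t"), ("a", "c")])
s, t = "s", "t"
players = [v for v in G.nodes if v not in (s, t)]

def connected(subset):
    H = G.subgraph(set(subset) | {s, t})
    return 1.0 if nx.has_path(H, s, t) else 0.0

shapley = {v: 0.0 for v in players}
n_perm = 2000
for _ in range(n_perm):
    order = list(rng.permutation(players))
    coalition = []
    for v in order:
        before = connected(coalition)
        coalition.append(v)
        shapley[v] += (connected(coalition) - before) / n_perm

print({v: round(val, 3) for v, val in sorted(shapley.items())})
```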
Iceberg Beyond the Tip: Co-Compilation of a Quantum Error Detection Code and a Quantum Algorithm
That author's affiliation: JPMorganChase First author institution: Rutgers University Last author institution: JPMorganChase
The rapid progress in quantum hardware is expected to make them viable tools for the study of quantum algorithms in the near term. The timeline to useful algorithmic experimentation can be accelerated by techniques that use many noisy shots to produce an accurate estimate of the observable of interest. One such technique is to encode the quantum circuit using an error detection code and discard the samples for which an error has been detected. An underexplored property of error-detecting codes is the flexibility in the circuit encoding and fault-tolerant gadgets, which enables their co-optimization with the algorithmic circuit. However, standard circuit optimization tools cannot be used to exploit this flexibility as optimization must preserve the fault-tolerance of the gadget. In this work, we focus on the $[[k+2, k, 2]]$ Iceberg quantum error detection code, which is tailored to trapped-ion quantum processors. We design new flexible fault-tolerant gadgets for the Iceberg code, which we then co-optimize with the algorithmic circuit for the quantum approximate optimization algorithm (QAOA) using tree search. By co-optimizing the QAOA circuit and the Iceberg gadgets, we achieve an improvement in QAOA success probability from $44\%$ to $65\%$ and an increase in post-selection rate from $4\%$ to $33\%$ at 22 algorithmic qubits, utilizing 330 algorithmic two-qubit gates and 744 physical two-qubit gates on the Quantinuum H2-1 quantum computer, compared to the previous state-of-the-art hardware demonstration. Furthermore, we demonstrate better-than-unencoded performance for up to 34 algorithmic qubits, employing 510 algorithmic two-qubit gates and 1140 physical two-qubit gates.
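For readers unfamiliar with the code, the following is the standard presentation of the $[[k+2, k, 2]]$ Iceberg code, up to labeling conventions that may differ from the paper: two extra qubits $t$ and $b$ join the $k$ data qubits, and the two weight-$(k{+}2)$ stabilizers detect every weight-one Pauli error.

```latex
% Standard form of the [[k+2, k, 2]] Iceberg code (labeling conventions may differ
% from the paper): k data qubits 1..k plus two extra qubits t and b.
\begin{align*}
  S_X &= X_t X_1 X_2 \cdots X_k X_b, &
  S_Z &= Z_t Z_1 Z_2 \cdots Z_k Z_b, \\
  \bar{X}_i &= X_t X_i, &
  \bar{Z}_i &= Z_i Z_b, \qquad i = 1, \dots, k.
\end{align*}
% Every weight-one X (Z) error anticommutes with S_Z (S_X), so all single-qubit
% errors flip a stabilizer measurement and the corresponding shots are discarded.
```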
Hybrid quantum-classical framework for Betti number estimation with applications to topological data analysis
That author's affiliation: Stony Brook University Institution (first & last author): Stony Brook University
Topological data analysis (TDA) is a rapidly growing area that applies techniques from algebraic topology to extract robust features from large-scale data. A key task in TDA is the estimation of (normalized) Betti numbers, which capture essential topological invariants. While recent work has led to quantum algorithms for this problem, we explore an alternative direction: combining classical and quantum resources to estimate the Betti numbers of a simplicial complex more efficiently. Assuming the classical description of a simplicial complex, that is, its set of vertices and edges, we propose a hybrid quantum-classical algorithm. The classical component enumerates all simplices, and this combinatorial structure is subsequently processed by a quantum algorithm to estimate the Betti numbers. We analyze the performance of our approach and identify regimes where it potentially achieves polynomial to exponential speedups over existing quantum methods, at the cost of using more ancilla qubits. We further demonstrate the utility of normalized Betti numbers in concrete applications, highlighting the broader potential of hybrid quantum algorithms in topological data analysis.
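As a classical reference for the quantity being estimated, the sketch below computes the Betti numbers of a toy complex (a hollow triangle) directly from boundary-matrix ranks via $\beta_k = n_k - \mathrm{rank}\,\partial_k - \mathrm{rank}\,\partial_{k+1}$; the quantum estimation step is not reproduced.

```python
# Classical reference for what the hybrid algorithm estimates: Betti numbers from
# boundary-matrix ranks, beta_k = (#k-simplices) - rank(d_k) - rank(d_{k+1}).
# Toy complex: a hollow triangle (3 vertices, 3 edges, no filled face).
import numpy as np

# d_1 maps edges to vertices; columns are the edges (0,1), (0,2), (1,2).
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])

rank_d1 = np.linalg.matrix_rank(d1)
rank_d2 = 0                                   # no 2-simplices in this complex

beta0 = 3 - 0 - rank_d1                       # rank(d_0) = 0 by convention
beta1 = 3 - rank_d1 - rank_d2
print(f"beta_0 = {beta0}, beta_1 = {beta1}")  # expected: 1 component, 1 loop
```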
Efficient Quantum Fully Homomorphic Encryption
That author's affiliation: Dalian University of Technology First author institution: Dalian University of Technology Last author institution: Beihang University
Quantum fully homomorphic encryption (QFHE) promises secure delegated quantum computation but has been impeded by the prohibitive quantum resource demands of existing constructions. This paper introduces a unified framework that achieves an \textbf{exponential improvement} in efficiency by synergistically integrating three theoretical tools: \textbf{modular arithmetic programs (MAP)}, the \textbf{garden-hose model}, and \textbf{measurement-based quantum computation (MBQC)}. Our central innovation is a novel MAP tailored to the algebraic structure of Learning-with-Errors (LWE) decryption. Unlike generic approaches that incur exponential overhead, our MAP computes the inner product $\langle \boldsymbol{sk}, \boldsymbol{c} \rangle \bmod q$ by tracking a partial sum modulo $q$, requiring only $O(\log q)$ bits of state width. This yields branching programs of width $O(\log \lambda)$ and length $O(\lambda \log \lambda)$, thereby reducing the size of the essential quantum gadget from $O(\lambda^{2.58})$ to $O(\lambda \log^2 \lambda)$ EPR pairs -- a concrete improvement factor of $2^{15}$ to $2^{18}$ for standard security parameters. Critically, we demonstrate that LWE decryption is not a \textbf{symmetric function}, necessitating our specialized MAP design beyond prior symmetric-function optimizations. The framework provides a direct mapping from the MAP to an efficient gadget via the garden-hose model, with MBQC furnishing the deterministic control flow for homomorphic evaluation. The resulting QFHE scheme supports \textbf{fully classical clients}, relies solely on the \textbf{classical LWE assumption} (avoiding circular security or quantum hardness assumptions), and maintains compactness. This work dramatically lowers the quantum resource barrier for practical QFHE, paving the way for realistic privacy-preserving quantum cloud computing.
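The state-width argument behind the proposed MAP can be illustrated with a plain scan that evaluates $\langle \boldsymbol{sk}, \boldsymbol{c} \rangle \bmod q$ while only ever storing a running value modulo $q$, i.e. $O(\log q)$ bits of state regardless of the vector length; the modulus and vectors below are toy values, not a real LWE instance.

```python
# Sketch of the state-width argument: the inner product <sk, c> mod q can be computed
# by a left-to-right scan that only keeps a running value modulo q, i.e. O(log q)
# bits of state, independent of the vector length.  Values are toy examples.
q = 257
sk = [3, 141, 59, 26, 5]                      # toy secret key
c  = [35, 89, 79, 3, 238]                     # toy ciphertext vector

acc = 0
for s_i, c_i in zip(sk, c):
    acc = (acc + s_i * c_i) % q               # state never exceeds log2(q) bits

print(acc, sum(s * t for s, t in zip(sk, c)) % q)   # same result either way
```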
A Fully Quantum Algorithm for Image Edge Detection
That author's affiliation: University of Waterloo Institution (first & last author): University of Waterloo
This work introduces a novel quantum algorithm for gradient-based edge detection that operates entirely within the quantum circuit model. Grayscale images are encoded using the Novel Enhanced Quantum Representation (NEQR), allowing exact arithmetic on pixel intensities. Directional gradients are computed by generating superpositions of neighboring pixels via cyclic shift operations and performing subtraction with an exact quantum arithmetic circuit. To refine accuracy, we introduce a direction-aware shifting mechanism that aligns edges with the darker side of intensity transitions. Our novel Quantum Partitioning Algorithm enables efficient in-place thresholding of edge candidates. This work exhibits polynomial-time improvements and optimizes the ancilla count compared to previous NEQR-based quantum edge detection algorithms. These results demonstrate a resource-efficient and fully quantum approach to edge detection, highlighting a practical quantum advantage in image processing.
Do Quantum Transformers Help? A Systematic VQC Architecture Comparison on Tabular Benchmarks
Variational quantum circuits (VQCs) are a leading approach to quantum machine learning on near-term devices, yet it remains unclear which circuit architecture yields the best accuracy-parameter trade-off on classical tabular data. We present a systematic empirical comparison of four VQC families -- multi-layer fully-connected (FC-VQC), residual (ResNet-VQC), hybrid quantum-classical transformer (QT), and fully quantum transformer (FQT) -- across five regression and classification benchmarks. Our key findings are: \textbf{(i)}~FC-VQCs achieve 90-96\% of the $R^2$ of attention-based VQCs while using 40-50\% fewer parameters, and consistently outperform equal-capacity MLPs (mean $R^2{=}0.829$ vs.\ MLP$_{720}$'s $0.753$ on Boston Housing, 3-seed average); \textbf{(ii)}~FC-VQC's Type~4 inter-block connectivity provides partial cross-token mixing that approximates the role of attention -- explicit quantum self-attention yields only marginal gains on most datasets while significantly increasing parameter count; \textbf{(iii)}~expressibility saturates at circuit depth~${\approx}\,3$, explaining why shallow VQCs already cover the Hilbert space effectively; \textbf{(iv)}~LayerNorm on the fully quantum transformer improves classification accuracy, suggesting normalization is important when all operations are quantum; \textbf{(v)}~in our noise study on Boston Housing, FQT degrades gracefully under depolarizing noise while QT collapses. All results are validated across three random seeds. These findings provide practical architectural guidance for deploying VQCs on near-term quantum hardware.
Quantum Prediction of Transport Dynamics in Discretized State Spaces
That author's affiliation: Fraunhofer FKIE Institution (first & last author): Fraunhofer FKIE
We propose a gate-based quantum algorithm for the prediction step of Bayesian state estimation based on the Fokker-Planck equation on a discretized position-velocity state space. The probability density is encoded in the amplitudes of a quantum state, enabling a compact representation of high-dimensional distributions. Exploiting the circulant structure of finite-difference operators, we realize the evolution in the spectral domain using quantum Fourier transforms and phase rotations. A key result is that the drift component can be implemented exactly in amplitude space, leading to an accurate reproduction of the classical transport dynamics. In contrast, the diffusion term does not admit a linear representation in amplitude space due to the nonlinear relation between probability density and wave function. To enable a quantum implementation, we introduce a unitary surrogate based on a Wick rotation, transforming diffusion into a dispersive phase evolution. This yields a fully unitary propagation that can be implemented efficiently on a gate-based quantum computer. The proposed method is evaluated numerically for different scenarios and shows strong agreement with the exact solution of the Fokker-Planck equation. The approach demonstrates the potential of quantum computing for Bayesian state estimation, as the representable state space grows exponentially with the number of qubits. This allows the efficient representation and propagation of probability densities that would otherwise require complex tensor decompositions on classical hardware, making the method a promising candidate for high-dimensional filtering problems.
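The drift step has a direct classical analogue: advection at velocity $v$ is a pure phase rotation in the Fourier domain, with the FFT playing the role of the quantum Fourier transform. The sketch below shows that step only; the Wick-rotated diffusion surrogate is not reproduced, and the grid and parameters are illustrative.

```python
# Classical analogue of the drift step described above: advection at velocity v is a
# pure phase rotation in the Fourier domain (the circuit version replaces the FFT by
# a QFT).  The diffusion surrogate via Wick rotation is not reproduced here.
import numpy as np

N, L, v, dt = 256, 10.0, 1.5, 0.2
x = np.linspace(0.0, L, N, endpoint=False)
rho = np.exp(-((x - 3.0) ** 2) / 0.5)         # initial density bump (illustrative)

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)  # angular wavenumbers
rho_shifted = np.fft.ifft(np.fft.fft(rho) * np.exp(-1j * k * v * dt)).real

print("peak moved from x =", round(x[np.argmax(rho)], 3),
      "to x =", round(x[np.argmax(rho_shifted)], 3))   # shift by v*dt = 0.3
```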
Catalytic Enhancement of Coherence Fraction in Noisy Quantum Channels and Characterization of Strictly Incoherent Operations
That author's affiliation: University of Calcutta Institution (first & last author): University of Calcutta
In realistic quantum information processing tasks, quantum states are inevitably affected by environmental noise, leading to decoherence and degradation of useful quantum resources. The coherence fraction, which serves as an important figure of merit for several quantum protocols, may decrease significantly after the action of a noisy channel. Such degradation can result in unsatisfactory performance in real-world applications. In this work, we investigate whether catalysis can be used to pre-process the input state to enhance the coherence fraction of an output state from a quantum channel. Specifically, we study whether using a processed state $\rho_s'$ as the input to a quantum channel $\Lambda$, instead of the original state $\rho_s$, can yield an output state $\Lambda(\rho_s')$ whose coherence fraction exceeds that of $\Lambda(\rho_s)$. We analyze the conditions under which such an improvement is possible. We also provide a practical application of our setup for the phase discrimination task. Furthermore, we establish a necessary and sufficient condition for an incoherent state preserving CPTP (Completely Positive Trace Preserving) map $\mathcal{E}$ to be a particular type of Strictly Incoherent Operation (SIO). This characterization provides a new structural understanding of SIO and clarifies its role in coherence manipulation. Our results offer practical insights into coherence preservation and enhancement in noisy quantum processes and may be useful for optimizing quantum information protocols under realistic conditions. We also provide numerical examples to support our claims.
Optimization Using Locally-Quantum Decoders
That author's affiliation: Google First author institution: Google Last author institution: MIT
It was pointed out in [JSW+25] that widely-studied optimization problems such as D-regular max-k-XORSAT can be reduced to decoding of LDPC codes, using quantum algorithms related to Regev's reduction. LDPC codes have very good decoders, such as Belief Propagation (BP), and this therefore makes D-regular max-k-XORSAT an enticing target for this class of quantum algorithms. However, BP was found insufficient to achieve quantum advantage. Here, we develop an intrinsically quantum decoding technique, which decodes classical LDPC codes subject to coherent superpositions of bit flip errors. For average-case instances of D-regular max-k-XORSAT drawn from Gallager's ensemble, this quantum decoder strongly outperforms classical belief propagation at many values of k and D. For some (k,D) the approximate optima achievable using this decoder surpass both Prange's algorithm and simulated annealing. However, we stop short of achieving quantum advantage because we identify an enhancement to Prange's algorithm that recovers a precise tie, much as one was observed between the standard version of Prange's algorithm and a more limited version of locally-quantum decoding in [CT24].
Experimental high-dimensional multi-qubit Bell non-locality on a superconducting quantum processor
Combining recent advances in superconducting quantum hardware, we explore quantum correlations in a previously inaccessible regime by observing \emph{simultaneously} high-dimensional and many-body Bell non-locality. We report a high-confidence Bell violation in the correlations between two $d=64$-dimensional systems encoded in twelve qubits. For system sizes up to $d=32$, the strength of the observed nonlocal correlations exceeds the quantum upper bound for $d=2$ systems, providing direct evidence of high-dimensional nonlocality. Furthermore, we demonstrate that the observed violation is genuinely collective: all qubits contribute to the nonlocal correlations, while most pairwise correlations across the bipartition remain Bell-local. Our work illustrates how present-day quantum processors enable the exploration of fundamental predictions of quantum mechanics in previously inaccessible regimes and, in turn, how fundamental quantum effects can be used to benchmark their performance.
Dynamical preparation of U(1) quantum spin liquids in an analogue quantum simulator
That author's affiliation: Ludwig-Maximilians-Universität München Institution (first & last author): Ludwig-Maximilians-Universität München
Locally constrained gauge theories underpin our understanding of fundamental interactions in particle physics and the emergent behaviour of quantum materials. In strongly correlated systems, they can give rise to quantum spin liquids that lack conventional order and are defined by coherent superpositions of an extensive number of many-body configurations. Realising and probing such exotic states experimentally is an outstanding challenge both in solid-state and synthetic quantum systems, not least due to the difficulty of detecting the fragile coherences between many-body states. Here, we report a large-scale (>3,000 sites) realisation of a two-dimensional U(1) lattice gauge theory with ultracold atoms in a square optical superlattice and demonstrate non-equilibrium preparation of extended regions of U(1) quantum spin liquids. We demonstrate Gauss's law validity in a quench experiment, enabled by a new microscopy technique for detecting doubly occupied sites. We observe characteristic real-space correlations and momentum-space pinch points, hallmarks of the emergent U(1) gauge structure. Using round-trip interferometric protocols, we directly observe large-scale coherence between many-body configurations, providing strong evidence for quantum spin liquid regions extending over ~100 lattice sites. Our results establish non-equilibrium quantum simulation protocols as a powerful route for accessing and probing exotic, highly-entangled states beyond those hosted by the engineered Hamiltonian in thermal equilibrium.
Completeness of qufinite ZXW calculus, a graphical language for finite-dimensional quantum theory
That author's affiliation: University of Oxford First author institution: University of Oxford Last author institution: China Agricultural University
Finite-dimensional quantum theory serves as the theoretical foundation for quantum information and computation. Mathematically, it is formalized in the category FHilb, comprising all finite-dimensional Hilbert spaces and linear maps between them. However, there has not been a graphical language for FHilb which is both universal and complete and thus incorporates a set of rules rich enough to derive any equality of the underlying formalism solely by rewriting. In this paper, we introduce the qufinite ZXW calculus - a graphical language for reasoning about finite-dimensional quantum theory. We set up a unique normal form to represent an arbitrary tensor and prove the completeness of this calculus by demonstrating that any qufinite ZXW diagram can be rewritten into its normal form. This result implies the equivalence of the qufinite ZXW calculus and the category FHilb, leading to a purely diagrammatic framework for finite-dimensional quantum theory with the same reasoning power. In addition, we identify several domains where the application of the qufinite ZXW calculus holds promise. These domains include spin networks, interacting mixed-dimensional systems in quantum chemistry, quantum programming, high-level description of quantum algorithms, and mixed-dimensional quantum computing. Our work paves the way for a comprehensive diagrammatic description of quantum physics, opening the doors of this area to the wider public.
New aspects of quantum topological data analysis: Betti number estimation, and testing and tracking of homology and cohomology classes
That author's affiliation: University of Massachusetts Boston Institution (first & last author): State University of New York at Stony Brook
We introduce several new quantum algorithms for estimating homological invariants, specifically Betti numbers and persistent Betti numbers, of a simplicial complex given via a structured classical input. At the core of our algorithm lies the ability to efficiently construct the block-encoding of Laplacians (and persistent Laplacians) based on the classical description of the given complex. From such block-encodings, Betti numbers (and persistent Betti numbers) can be estimated. The complexity of our method is polylogarithmic in the number of simplices in both simplex-sparse and simplex-dense regimes, thus offering an advantage over existing works. Moreover, prior quantum algorithms based on spectral methods incur significant overhead due to their reliance on estimating the kernel of combinatorial Laplacians, particularly when the Betti number is small. We introduce a new approach for estimating Betti numbers based on homology tracking and homology property testing, which enables exponential quantum speedups over both classical and prior quantum approaches under sparsity and structure assumptions. We further initiate the study of homology triviality and equivalence testing as natural property testing problems in topological data analysis, and provide efficient quantum algorithms with time complexity nearly linear in the number of simplices when the rank of the boundary operator is large. In addition, we develop a cohomological approach based on block-encoded projections onto cocycle spaces, enabling rank-independent testing of homology equivalence. This yields the first quantum algorithms for constructing and manipulating r-cocycles in time polylogarithmic in the size of the complex. Together, these results establish a new direction in quantum topological data analysis and demonstrate that computing topological invariants can serve as a fertile ground for provable quantum advantage.
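The spectral fact that such Laplacian-based approaches rely on is that $\beta_k = \dim \ker \Delta_k$ with the combinatorial Laplacian $\Delta_k = \partial_k^{T}\partial_k + \partial_{k+1}\partial_{k+1}^{T}$. The sketch below checks this on a hollow-triangle toy complex; the block-encoding machinery of the paper is not reproduced.

```python
# Spectral fact behind Laplacian-based Betti estimation: beta_k equals the kernel
# dimension of the combinatorial Laplacian  Delta_k = d_k^T d_k + d_{k+1} d_{k+1}^T.
# Checked here on a hollow triangle; the block-encoding machinery is not reproduced.
import numpy as np

d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])                 # edges (0,1), (0,2), (1,2)

delta0 = d1 @ d1.T                            # vertex Laplacian (no d_0 term)
delta1 = d1.T @ d1                            # edge Laplacian (no 2-simplices)

def kernel_dim(M, tol=1e-9):
    return int(np.sum(np.linalg.eigvalsh(M) < tol))

print("beta_0 =", kernel_dim(delta0), " beta_1 =", kernel_dim(delta1))   # 1 and 1
```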
Quantum speed limit for observables from quantum asymmetry
Quantum asymmetry and coherence are genuinely quantum resources that are essential to realize quantum advantage in information technologies. However, all quantum processes are fundamentally constrained by quantum speed limits, which raises the question of the corresponding bounds on the rate of consumption of asymmetry and coherence. In the present work, we derive a formulation of the quantum speed limit for observables in terms of the trace-norm asymmetry of the time-dependent quantum state relative to the observable. This quantum speed limit can be directly observed in experiment through weak value measurement and provides a lower bound to the quantum Fisher information about the parameter conjugate to the observable. It can be further related to quantum coherence relative to the eigenbasis of the observable. We obtain a complementary relation for the speed of three mutually unbiased observables for a single qubit. As an application, we derive a notion of a quantum thermodynamic speed limit.
Thermodynamic Recycling of Algorithmic Failure Branches: Quantum-Computer Demonstration with Quantum Error Correction
Thermodynamic trade-off relations dictate fundamental limits on the performance of thermodynamic tasks through costs such as heat dissipation. Here, we propose a framework called thermodynamic recycling to circumvent these limits in quantum processors by exploiting failure branches of quantum algorithms, which are usually discarded. The key component is an athermal bath naturally generated during the resetting of a failure branch. By coupling this bath to a target system prior to relaxation, thermodynamic tasks can be performed beyond conventional thermodynamic limits. We apply this framework to information erasure and derive the reduction in heat dissipation analytically. As a demonstration, we implement our framework on IBM's superconducting quantum processor by combining the Harrow--Hassidim--Lloyd algorithm with three-qubit quantum error correction, thereby reducing the heat dissipated in erasing syndrome information. Despite substantial noise and errors in current hardware, our method achieves erasure with heat dissipation below the Landauer limit. This work establishes an operational connection between quantum computing and quantum thermodynamics for resource-efficient quantum computation.
Symplectic perspective to quantum computing for Hamiltonian systems
That author's affiliation: Massachusetts Institute of Technology First author institution: National Technical University of Athens Last author institution: Massachusetts Institute of Technology
This work develops a symplectic framework for quantum computing to be applied to classical Hamiltonian systems, exploiting the intrinsic geometric compatibility between unitary quantum evolution and symplectic phase-space dynamics in a two-fold way. The first part is devoted to establishing an exact correspondence between quantum evolution and classical Hamiltonian flow on a Kähler manifold. This correspondence enables a geometric quantization scheme that identifies a family of classical Hamiltonian systems admitting exponentially compressed quantum representations, appropriate for quantum simulation. In the second part, we demonstrate that Liouville-integrable Hamiltonian dynamics induce finite-dimensional unitary evolution through action-angle variables and Koopman-von Neumann encoding. This allows efficient quantum representation and parallel evolution of large phase-space ensembles, where entangled encodings provide exponential compression in ensemble size and enable quantum speed-ups in observable estimation via amplitude estimation techniques. For non-integrable systems, Lie canonical perturbation theory is incorporated to construct near-symplectic transformations that map dynamics to approximately integrable forms, preserving unitary evolution up to a controlled error. We derive the resulting quantum computational complexity of the proposed quantum-symplectic scheme, revealing both an exponential compression in memory requirements and a potential polynomial speed-up with respect to the system size. Finally, the transport evolution equation governing the quantum phase-space observables is obtained.
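The second part builds on a standard Koopman-von Neumann fact, summarized below as background (a sketch of the textbook statement, not the paper's full construction):

```latex
% Koopman--von Neumann background (standard material, not the paper's full scheme):
% in action-angle variables (I, \theta) of a Liouville-integrable system,
\begin{align*}
  \dot{I} = 0, \qquad
  \dot{\theta} = \omega(I) = \frac{\partial H}{\partial I}, \qquad
  \hat{L} = -\, i\, \omega(I)\, \frac{\partial}{\partial \theta}.
\end{align*}
% The generator \hat{L} is self-adjoint on phase-space amplitudes, so the classical
% transport e^{-i \hat{L} t} is unitary and can be encoded on a register that
% discretizes (I, \theta).
```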
Complete characterization of perfect quantum strategies in quantum magic rectangle games
That author's affiliation: University of Tennessee Institution (first & last author): University of Tennessee
We provide a complete structural characterization of perfect quantum strategies for arbitrary quantum magic rectangle games. We derive necessary and sufficient conditions that jointly constrain the shared state and measurement operators, establishing a unified analytical framework for perfect nonlocal strategies in this setting. Our results show that all perfect quantum solution states (PQSS) must exhibit a specific algebraic--combinatorial structure, ruling out a priori assumptions about particular entangled resources and clarifying the full class of states compatible with perfect correlations. We further show that perfect quantum strategies do not exist for $2 \times n$ quantum magic rectangle games with odd $n$, and introduce a corresponding quantum magic rectangle inequality to characterize optimal non-perfect strategies. While our results are structural, they may provide a foundation for future developments in quantum information and quantum cryptography based on perfect nonlocal correlations.
High-fidelity collisional quantum gates with fermionic atoms
That author's affiliation: Max Planck Society Institution (first & last author): Max Planck Institute of Quantum Optics
Quantum simulations of electronic structure and strongly correlated quantum phases are widely regarded as among the most promising applications of quantum computing. These computations naturally benefit from native fermionic encodings, which intrinsically restrict the Hilbert space to physical states consistent with fermionic statistics and conservation laws such as particle number and magnetization, independent of gate errors. While ultracold atoms in optical lattices are established as powerful analog simulators of strongly correlated fermionic matter, neutral-atom platforms have concurrently emerged as versatile, scalable architectures for spin-based digital quantum computation. Unifying these capabilities requires high-fidelity gates that preserve motional degrees of freedom of fermionic atoms, paving the way for a new generation of programmable fermionic quantum processors. Here we demonstrate collisional entangling gates with fidelities up to 99.75(6)% and Bell state lifetimes exceeding $10\,s$, realized via controlled interactions of fermionic atoms in an optical superlattice. Using quantum gas microscopy, we microscopically characterize spin-exchange and pair-tunneling gates, and realize a robust, composite pair-exchange gate, a fundamental primitive for quantum chemistry simulations. Our results establish controlled collisions in optical lattices as a competitive and complementary approach to high entangling gate fidelities in neutral-atom quantum computers. When embedded within a fermionic architecture, this capability enables the preparation of complex quantum states and advanced readout protocols for a new class of scalable analog-digital hybrid quantum simulators. Combined with local addressing, these gates mark a crucial step towards a fully digital fermionic quantum computer based on the controlled motion and entanglement of fermionic neutral atoms.
A Course on the Introduction to Quantum Software Engineering: Experience Report
That author's affiliation: Toronto Metropolitan University Institution (first & last author): Toronto Metropolitan University
Quantum computing is increasingly practiced through programming, yet most educational offerings emphasize algorithmic or framework-level use rather than software engineering concerns such as testing, abstraction, tooling, and lifecycle management. This paper reports on the design and first offering of a cross-listed undergraduate--graduate course that frames quantum computing through a software engineering lens, focusing on early-stage competence relevant to software engineering practice. The course integrates foundational quantum concepts with software engineering perspectives, emphasizing executable artifacts, empirical reasoning, and trade-offs arising from probabilistic behaviour, noise, and evolving toolchains. Evidence is drawn from instructor observations, supplemented by anonymous student feedback, a background survey, and inspection of student work. Despite minimal prior exposure to quantum computing, students were able to engage productively with quantum software engineering topics once a foundational understanding of quantum information and quantum algorithms, expressed through executable artifacts, was established. This experience report contributes a modular course design, a scalable assessment model for mixed academic levels, and transferable lessons for software engineering educators developing quantum computing curricula.
Quantum-HPC Software Stacks and the openQSE Reference Architecture: A Survey
That author's affiliation: Oak Ridge National Laboratory Institution (first & last author): Oak Ridge National Laboratory
Quantum resources are increasingly integrated into high-performance computing (HPC) and cloud environments, but quantum high-performance computing (QHPC) software stacks remain isolated, often proprietary, full-stack solutions lacking common interfaces across runtime, resource management, orchestration, and execution layers. This paper analyzes nine production QHPC stacks and identifies common design patterns and emerging requirements, covering deployment models, application interaction patterns, SDK support, and readiness for fault-tolerant operation. The survey exposes consistent needs in runtime abstraction, resource management, interconnect semantics, and observability. Based on these findings, we propose the open quantum-HPC software ecosystem (openQSE) reference architecture as a first step toward unifying the state-of-the-practice. openQSE defines a set of layer boundaries that allow different implementations to interoperate while preserving deployment flexibility, and is structured to support both current noisy intermediate-scale quantum (NISQ) workloads and future fault-tolerant quantum computing (FTQC) systems without changes to upper-layer application interfaces.
Ghost Degrees of Freedom Without Quantum Runaway: Exact Moment Bounds from an Operator Conservation Law
That author's affiliation: University of California, Santa Cruz Institution (first & last author): University of California, Santa Cruz
We prove an exact quantum conservation law for a harmonic oscillator coupled to a ghost degree of freedom: a second classical conserved quantity lifts to a quantum operator that commutes with the Hamiltonian with no $\hbar$ corrections, yielding a rigorous, state-independent upper bound on the mean squared phase-space radius for all time and every quantum state with finite initial second moments. The proof uses only canonical commutation relations and the Leibniz rule; it requires no confining potential, no spectral assumptions, and no perturbative expansion. The interaction studied here is bounded and vanishes at large separations, the generic situation in effective field theory, yet this suffices to guarantee quantum stability in the sense of bounded second moments. Three independent numerical frameworks (Heisenberg picture, Schrödinger picture, and Fock-space diagonalization) confirm wavepacket confinement below the analytic bound, a real energy spectrum, and Poisson level statistics numerically consistent with an integrable structure. The absence of a confining potential means the proof is silent on spectral discreteness and the existence of a ground state; those questions, addressed for polynomial confining interactions in concurrent work, remain open for the interaction class studied here and represent the sharpest targets for future work. Ghost quantum instability is therefore not an inevitable consequence of a wrong-sign kinetic term but depends critically on the interaction structure.
Bayesian Phase Stabilization at the Shot-Noise Limit for Scalable Quantum Networks
That author's affiliation: University of Science and Technology of China Institution (first & last author): University of Science and Technology of China
High-precision optical phase stabilization in quantum networks is fundamentally constrained by the strict photon-flux and duty-cycle limits required to avoid disturbing fragile quantum states. This challenge becomes especially critical when coordinating multiple independent light sources for multi-step quantum protocols. Here, we develop an integrated phase-stabilization framework that incorporates a Bayesian phase estimator to optimally extract information from sparse single-photon detection events. This approach outperforms conventional maximum-likelihood estimation and achieves the shot-noise limit under minimal photon flux. The framework enables real-time correction of combined phase noise from both nodal lasers and transmission fibers, facilitating a two-step excitation protocol for heralded entanglement generation between separate trapped-ion nodes via single-photon interference. Operating with a detected photon rate of approximately 1 MHz and a duty cycle less than or equal to 6.5%, the system maintains interferometric visibility greater than 97% over fiber links of 10 km and 100 km. This phase control yields deterministic ion-ion entanglement with parity contrast exceeding 85% at both distances, enabling device-independent quantum key distribution. Moreover, the resulting memory-memory entanglement at 10 km survives beyond the average time required to establish it -- a fundamental requirement for quantum repeaters. This work establishes a robust and scalable foundation for practical long-distance quantum networks.
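The Bayesian estimation step can be illustrated with a minimal grid-based posterior update from sparse single-photon clicks, assuming the interference model $p(+\,|\,\varphi) = \tfrac{1}{2}\bigl(1 + V\cos(\varphi - \theta)\bigr)$; the true phase, visibility, and randomized settings below are toy assumptions, and the authors' estimator and feedback loop are not reproduced.

```python
# Minimal grid-based Bayesian phase estimator from sparse single-photon clicks,
# assuming p(click in port + | phi) = (1 + V cos(phi - theta)) / 2.  The true phase,
# visibility and measurement settings are toy values; the paper's estimator and
# feedback loop are not reproduced.
import numpy as np

rng = np.random.default_rng(5)
phi_true, V = 1.1, 0.97
grid = np.linspace(-np.pi, np.pi, 1000)
posterior = np.ones_like(grid) / grid.size    # flat prior

for _ in range(200):                          # 200 detected photons
    theta = rng.uniform(-np.pi, np.pi)        # randomized measurement setting
    p_plus = 0.5 * (1 + V * np.cos(phi_true - theta))
    click_plus = rng.random() < p_plus
    lik = 0.5 * (1 + V * np.cos(grid - theta))
    posterior *= lik if click_plus else (1 - lik)
    posterior /= posterior.sum()

est = grid[np.argmax(posterior)]
print(f"estimate {est:.3f} rad vs true {phi_true:.3f} rad")
```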
Suppressing the Erasure Error of Fusion Operation in Photonic Quantum Computing
That author's affiliation: CUHK Institution (first & last author): CUHK
Photonic quantum computing (PQC) provides a promising route toward quantum computation by naturally supporting the measurement-based quantum computation (MBQC) model. In MBQC, programs are executed through measurements on a pre-generated graph state, whose construction largely depends on probabilistic fusion operations. However, fusion operations in PQC are vulnerable to two major error sources: fusion failure and fusion erasure. As a result, MBQC compilation must account for both error mechanisms to generate reliable and efficient photonic executions. Prior state-of-the-art MBQC compilation, represented by OneAdapt, is designed for all-photonic architectures and mainly focuses on handling fusion failures. Nevertheless, it does not explicitly model fusion erasures induced by photon loss, which can be substantially more damaging than fusion failures. To mitigate fusion erasure errors, we introduce a new MBQC compilation scheme built upon the spin qubit quantum memory. We propose tree-encoded fusion, an encoding strategy that suppresses erasure errors during graph-state generation. We further incorporate this scheme into a compiler framework with algorithms that reduce the execution overhead of quantum programs. We evaluate the proposed framework using a realistic PQC simulator on six representative quantum algorithm benchmarks across multiple program scales. The results show that tree-encoded fusion achieves better robustness than alternative fusion-encoding strategies, and that our compiler provides exponential improvement over OneAdapt. In addition, we validate the feasibility of our approach through a proof-of-concept demonstration on real PQC hardware.
Quantum jump correlations in long-range dissipative spin systems
We characterize nonequilibrium phases in long-range dissipative spin systems through the statistical properties of quantum jump trajectories. While the average dynamics governed by the Lindblad master equation provides access to steady-state expectation values of order parameters, the quantum trajectory framework reveals features encoded in the spatial and temporal correlations of detection events. Focusing on a model exhibiting a paramagnetic-to-ferromagnetic phase transition, we investigate the full counting statistics of quantum jumps using a tilted Lindbladian approach. We combine this with cluster mean-field and cumulant expansion techniques, which allow us to capture, respectively, the short- and long-range structure of jump correlations. In addition, we study the waiting-time distributions of detection events. We show that quantum jump correlations display clear signatures of the underlying phases and reveal distinct dynamical features across the transition. Our results highlight the potential of trajectory-resolved observables as probes of collective behavior in open quantum many-body systems and provide new insights into the role of long-range interactions in shaping nonequilibrium dynamics.
Speed-oriented quantum circuit backend
That author's affiliation: Leibniz Universität Hannover Institution (first & last author): Leibniz Universität Hannover
We present a new software package for efficient quantum circuit generation, designed to achieve optimal runtime performance. Despite being in an early stage of development, our implementation demonstrates significant advantages over existing tools. Using the quantum Fourier transform (QFT) as a benchmark, we show that our backend can generate circuits for systems with up to 2000 qubits faster than widely used frameworks such as Qiskit and Q#. This improvement is particularly relevant for applications where classical preprocessing time, including circuit generation, must be minimized so that it does not diminish any potential quantum advantage, for example in combinatorial optimization tasks. Additionally, our software provides high-level primitives for bit- and integer-level manipulations, offering a simplified interface for integration with high-level quantum programming languages.
Entanglement of two optical emitters mediated by a terahertz channel
That author's affiliation: Universidad Autónoma de Madrid First author institution: Universidad Autónoma de Madrid Last author institution: Institute of Fundamental Physics IFF-CSIC
Quantum technologies in the terahertz (THz) domain require a coherent interface between addressable qubits and THz quantum channels -- a capability that, so far, remains largely underdeveloped. Here, we propose and demonstrate the generation of steady-state entanglement between polar quantum emitters, mediated by THz photons. We exploit strong visible-light driving of the emitters to create Rabi-split dressed eigenstates whose energy separation can be optically tuned into the THz regime. The polar nature of the emitters activates THz transitions within these eigenstates, allowing them to couple to a THz photonic mode that induces collective dissipative dynamics. Coherent driving and control of these effective THz emitters are achieved using a sideband optical drive with detuning close to the THz transition frequency. The resulting interplay of collective dissipation and driving activates a mechanism to generate steady-state entanglement with high values of the concurrence ($C>0.9$), attainable under experimentally feasible parameters. Crucially, both coherent manipulation and quantum state tomography are implemented entirely through optical means, avoiding direct THz control and detection. This establishes a hybrid visible-THz quantum interface in which a THz channel mediates qubit-qubit entanglement (a key operational requirement for THz quantum technologies) while remaining optically accessible.
Replay-buffer engineering for noise-robust quantum circuit optimization
That author's affiliation: University of Helsinki Institution (first & last author): University of Helsinki
Deep reinforcement learning (RL) for quantum circuit optimization faces three fundamental bottlenecks: replay buffers that ignore the reliability of temporal-difference (TD) targets, curriculum-based architecture search that triggers a full quantum-classical evaluation at every environment step, and the routine discard of noiseless trajectories when retraining under hardware noise. We address all three by treating the replay buffer as a primary algorithmic lever for quantum optimization. We introduce ReaPER$+$, an annealed replay rule that transitions from TD error-driven prioritization early in training to reliability-aware sampling as value estimates mature, achieving $4-32\times$ gains in sample efficiency over fixed PER, ReaPER, and uniform replay while consistently discovering more compact circuits across quantum compilation and QAS benchmarks; validation on LunarLander-v3 confirms the principle is domain-agnostic. Furthermore, we eliminate the quantum-classical evaluation bottleneck in curriculum RL by introducing OptCRLQAS, which amortizes expensive evaluations over multiple architectural edits, cutting wall-clock time per episode by up to $67.5\%$ on a 12-qubit optimization problem without degrading solution quality. Finally, we introduce a lightweight replay-buffer transfer scheme that warm-starts noisy-setting learning by reusing noiseless trajectories, without network-weight transfer or $\epsilon$-greedy pretraining. This reduces steps to chemical accuracy by up to $85-90\%$ and final energy error by up to $90\%$ over from-scratch baselines on 6-, 8-, and 12-qubit molecular tasks. Together, these results establish that experience storage, sampling, and transfer are decisive levers for scalable, noise-robust quantum circuit optimization.
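One way such an annealed replay rule could look is sketched below: sampling probabilities shift from TD-error magnitude toward a per-transition reliability score as training progresses. The mixing schedule, the reliability score, and all hyperparameter names are assumptions made for illustration, not the paper's exact ReaPER$+$ rule.

import numpy as np

def replay_probs(td_errors, reliabilities, step, anneal_steps=50_000,
                 alpha=0.6, eps=1e-6):
    # Hypothetical annealed rule: early in training sample by TD-error magnitude,
    # later shift weight toward a reliability score attached to each TD target.
    beta = min(1.0, step / anneal_steps)          # 0 -> TD-error, 1 -> reliability
    td_term = (np.abs(td_errors) + eps) ** alpha
    rel_term = (np.asarray(reliabilities) + eps) ** alpha
    score = (1 - beta) * td_term / td_term.sum() + beta * rel_term / rel_term.sum()
    return score / score.sum()

# Example: sample a minibatch of transition indices from a toy buffer.
rng = np.random.default_rng(1)
probs = replay_probs(td_errors=rng.normal(size=1000),
                     reliabilities=rng.random(1000), step=10_000)
batch_idx = rng.choice(1000, size=64, replace=False, p=probs)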
A Universal Quantum Information Preserving Photonic Switch for Scalable Quantum Networks
That author's affiliation: Cisco Quantum Lab Institution (first & last author): Cisco Quantum Lab
Quantum networks are a keystone of the quantum internet. However, existing implementations remain largely confined to static point-to-point links due to the absence of a switching paradigm capable of dynamically routing fragile quantum entanglement without introducing decoherence. Here, we propose the Universal Quantum Switch, a foundational building block allowing on-demand, non-blocking, and encoding-agnostic routing of quantum information, as well as seamless modality conversion between disparate quantum platforms. We develop a prototype in thin-film lithium niobate and experimentally demonstrate robust switching with $\le 4\%$ decoherence via thermo-optic modulation and high-speed electro-optic switching of arbitrary entangled states at 1 MHz. Moreover, we show that our platform can support reconfiguration speeds up to 1 GHz. To our knowledge, this work represents the first demonstration of multi-node dynamic entanglement distribution at these speeds. Complementing these experimental results, we project the architecture's scalability, showing dimension-independent decoherence, and provide a scalable, interoperable building block for heterogeneous quantum network fabrics.
Efficient Classical Simulation of Heuristic Peaked Quantum Circuits
That author's affiliation: IBM Quantum Institution (first & last author): IBM Quantum
Peaked quantum circuits, whose output distribution is sharply concentrated on a single bitstring, have emerged as a promising candidate for verifiable quantum advantage, as the correctness of the quantum output can be checked by simply comparing against the known peak. Recent work by Gharibyan et al. arXiv:2510.25838 claimed heuristic quantum advantage using peaked circuits executed on Quantinuum's 56-qubit H2 processor. These peaked circuits concentrate their output on a single hidden bitstring by variationally training a shallow simulable circuit and inserting an obfuscated permutation to increase the depth to a level that makes classical simulation intractable, with estimated runtimes of years for the largest instances. We show that these circuits can be efficiently simulated classically. We describe a method that efficiently performs a full tensor network contraction, allowing near-exact sampling and extraction of the peaked bitstring. The method exploits the mirrored structure of the circuit, iteratively cancelling both halves into a Matrix Product Operator (MPO), and avoids the obfuscated permutation by greedily reducing the MPO bond dimension through a process we call unswapping. The method can fully contract and extract the peak of the largest circuit in approximately one hour on a single GPU, around half the time it took to run on the quantum hardware.
Dual-use quantum hardware for quantum resource generation and energy storage
That author's affiliation: National Sun Yat-Sen University Institution (first & last author): National Sun Yat-Sen University
Quantum resources such as entanglement form the backbone of quantum technologies, and their efficient generation is a central objective of modern quantum platforms. Independently, quantum batteries have emerged as nanoscale devices that utilize collective quantum effects to store energy with a charging advantage over classical strategies. Here, we show that these two pursuits can coexist: protocols for fast generation of resourceful quantum states can simultaneously charge a quantum battery with a collective advantage, and conversely, a quantum battery protocol with a charging advantage can produce resource-rich states. Using this connection, we propose an integrated hardware protocol on superconducting circuits in which each experimental run can interchangeably accomplish either quantum battery charging or quantum sensing through the generation of metrologically useful states. Our results establish that quantum resources and stored energy are distinct yet co-producible quantities, opening the door to modular quantum architectures that dynamically switch between sensing and energy-storage functions, thereby providing additional functionality without extra hardware cost.
Asymmetry Control in a Parametric Oscillator for the Quantum Simulation of Chemical Activation
That author's affiliation: University of California, Santa Barbara First author institution: University College of London Last author institution: Yale University
Dissipative tunneling remains a cornerstone effect in quantum mechanics. In chemistry, it plays a crucial role in governing the rates of chemical reactions, often modeled as motion along the reaction coordinate from one potential well to another. The relative positions of energy levels in these wells strongly influence the reaction dynamics. Chemical research would benefit from a fully adjustable asymmetric double well equipped with precise measurement of the tunneling rates. In this paper, we demonstrate a quantum simulator consisting of a continuously driven Kerr parametric oscillator with a third-order nonlinearity that can be operated in the quantum regime to create a fully tunable asymmetric double well. Our experiment leverages a low-noise, all-microwave control system with high-efficiency readout of the which-well information based on a tunnel Josephson junction circuit. We explore the reaction rates across the landscape of tunneling resonances in parameter space. We uncover two new and counterintuitive effects: (i) a weak asymmetry can significantly decrease the activation rates, even though the well in which the system is initialized is made shallower, and (ii) the width of the tunneling resonances alternates between narrow and broad lines as a function of the well depth and asymmetry. Numerical simulations predict that both effects will also manifest in ordinary chemical double-well systems in the quantum regime. Our work is a first step toward the development of analog molecular simulators of proton-transfer reactions based on quantum parametric processes.
Architecting Distributed Quantum Computers: Design Insights from Resource Estimation
That author's affiliation: University of Cambridge Institution (first & last author): University of Cambridge
In the emerging field of Fault Tolerant Quantum Computation (FTQC), resource estimation is an important tool for quantitatively comparing prospective architectures, identifying hardware bottlenecks and informing which research paths are most valuable. Despite a recent increase in attention on FTQC, there is currently a lack of resource estimation research for architectures that can realistically offer quantum advantage. In particular, current modelling efforts focus on monolithic quantum computers where all qubits reside on a single device. Constraints on fabrication yield, wiring density, and cooling power make monolithic devices unlikely to scale to fault-tolerant sizes in the foreseeable future. Distributed quantum supercomputers offer a path to overcome these limitations. We propose a prospective distributed quantum computing architecture based on lattice surgery with support for modular and distributed operations, with a focus on superconducting qubits. We develop a resource-estimation framework and software tool tailored to distributed FTQC, enabling end-to-end analysis of practical quantum algorithms on our proposed architecture with various hardware configurations, spanning different node sizes, inter-node entanglement generation rates and distillation protocols. Our extensive benchmarking across eight applications and thousands of hardware configurations shows that resource-estimation-driven architecture design is crucial for scalability. We provide concrete design configurations with feasible resource requirements, along with recommendations for hardware design and system organization. More broadly, our work provides a rigorous methodology for architectural pathfinding, capable of informing system designs and guiding future research priorities.
From Membership-Privacy Leakage to Quantum Machine Unlearning
That author's affiliation: Shanghai Jiao Tong University, University of Oxford Institution (first & last author): Beijing University of Posts and Telecommunications
Quantum machine learning (QML) has the potential to achieve quantum advantage for specific tasks by combining quantum computation with classical machine learning (ML). In classical ML, a significant challenge is membership-privacy leakage, whereby an attacker can infer from model outputs whether specific data were used in training. When specific data are required to be withdrawn, removing their influence from the trained model becomes necessary. Machine unlearning (MU) addresses this issue by enabling the model to forget the withdrawn data, thereby preventing membership-privacy leakage. However, this leakage remains underexplored in QML. This raises two research questions: do QML models leak membership privacy about their training data, and can MU methods efficiently mitigate such leakage in QML models? We investigate these questions using two quantum neural network (QNN) architectures, a basic QNN and a hybrid QNN, evaluated in noiseless simulations and cloud quantum device demonstrations. To answer the first question, we analyze how quantum constraints shape membership-privacy leakage in QML and then formalize a realistic gray-box threat model accordingly. Based on this, we design a membership inference attack (MIA) tailored to QNN outputs, and our results provide clear evidence of membership leakage in both QNNs. To answer the second question, we propose a quantum machine unlearning (QMU) framework, comprising three MU mechanisms. Evaluations on two QNN architectures show that QMU removes the influence of the withdrawn data while preserving accuracy for retained data. A comparative analysis further characterizes the three MU mechanisms with respect to data dependence, computational cost, and robustness.
Coined Quantum Walks on Complex Networks for Quantum Computers
That author's affiliation: University of Tsukuba Institution (first & last author): Unknown
We propose a quantum circuit design for implementing coined quantum walks on complex networks. In complex networks, the coin and shift operators depend on the varying degrees of the nodes, which makes circuit construction more challenging than for regular networks. To address this issue, we use a dual-register encoding to enable a simplified shift operator and reduce the resource overhead. We implement the circuit using Qmod, a high-level quantum programming language, and evaluate the performance through numerical simulations on Erd\H{o}s-R\'enyi, Watts-Strogatz, and Barab\'asi-Albert models. The results show that the circuit depth scales as approximately $N^{1.9}$ regardless of the network topology. Furthermore, we execute the proposed circuits on the ibm\_torino superconducting quantum processor for Watts-Strogatz models with $N=4$ and $N=8$. The experiments show that hardware-aware optimization slightly improves the variation distance and Hellinger fidelity for the larger network, whereas connectivity constraints impose overhead for the smaller one. These results indicate that while current NISQ devices are limited to small-scale validations, the polynomial scaling of our framework makes it suitable for larger-scale implementations in the fault-tolerant quantum computing era.
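For orientation, the following classical reference simulation implements a standard coined (flip-flop) quantum walk on an arbitrary graph with a degree-dependent Grover coin, which is the kind of dynamics such circuits aim to reproduce. The Grover coin, the connected Watts-Strogatz test graph, and the initial condition are illustrative choices, not the paper's dual-register construction.

import numpy as np
import networkx as nx

# State space: directed arcs (u -> v). Grover coin per vertex, flip-flop shift.
G = nx.connected_watts_strogatz_graph(n=8, k=4, p=0.3, seed=7)
arcs = [(u, v) for u in G for v in G[u]]
index = {a: i for i, a in enumerate(arcs)}
dim = len(arcs)

coin = np.zeros((dim, dim))
for u in G:                                        # Grover coin on each vertex's arcs
    out = [index[(u, v)] for v in G[u]]
    d = len(out)
    coin[np.ix_(out, out)] = 2.0 / d * np.ones((d, d)) - np.eye(d)

shift = np.zeros((dim, dim))                       # flip-flop shift: (u->v) -> (v->u)
for (u, v), i in index.items():
    shift[index[(v, u)], i] = 1.0

U = shift @ coin
state = np.zeros(dim); state[0] = 1.0              # walker starts on a single arc
for _ in range(20):
    state = U @ state

node_prob = {u: sum(abs(state[index[(u, v)]])**2 for v in G[u]) for u in G}
print(node_prob)                                   # occupation probability per node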
Simplified circuit-level decoding using Knill error correction
That author's affiliation: Inria Paris First author institution: University of Waterloo Last author institution: Inria Paris
Quantum error correction will likely be essential for building a large-scale quantum computer, but it comes with significant requirements at the level of classical control software. In particular, a quantum error-correcting code must be supplemented with a fast and accurate classical decoding algorithm. Standard techniques for measuring the parity-check operators of a quantum error-correcting code involve repeated measurements, which both increases the amount of data that needs to be processed by the decoder, and changes the nature of the decoding problem. Knill error correction is a technique that replaces repeated syndrome measurements with a single round of measurements, but requires an auxiliary logical Bell state. Here, we provide a theoretical and numerical investigation into Knill error correction from the perspective of decoding. We give a self-contained description of the protocol, prove its fault tolerance under locally decaying (circuit-level) noise, and numerically benchmark its performance for quantum low-density parity-check codes. We show analytically and numerically that the time-constrained decoding problem for Knill error correction can be solved using the same decoder used for the simpler code-capacity noise model, illustrating that Knill error correction may alleviate the stringent requirements on classical control required for building a large-scale quantum computer.
Benchmarking Quantum Computers via Protocols, Comparing Superconducting and Ion-Trap Quantum Technology
That author's affiliation: Technion - Israel Institute of Technology Institution (first & last author): Technion - Israel Institute of Technology
Superconducting and ion-trap devices are two leading quantum computing architectures, each with distinct characteristics and operational constraints. Understanding and measuring the underlying quantumness of these devices is essential for assessing their readiness for practical applications and for guiding future progress and research. Building on earlier work (Meirom, Mor, and Weinstein, arXiv:2505.12441), we apply a benchmarking strategy for comparing these two architectures by measuring "quantumness" directly on optimal sub-chips. Distinct from existing metrics, our approach employs rigorous binary fidelity thresholds derived from the classical limits of state transfer. This enables us to definitively establish the quantum advantage of a designated sub-region. Here we apply this quality-assurance methodology to platforms from both technologies. This comparison provides a protocol-based evaluation of quantumness advantage, revealing not only the strengths and weaknesses of each tested chip and its sub-chips but also offering a common language for their assessment. By abstracting away technical differences in the final result, we demonstrate a benchmarking strategy that bridges the gap between disparate quantum-circuit technologies, enabling fair performance comparisons and establishing a critical foundation for evaluating future claims of quantum advantage. This work was made possible by the policies of two companies that enable independent and objective assessment of their quantum computers and sub-chips. In the name of science, we encourage other companies to emulate the independent qubit availability and the fair pricing that allow researchers to perform such assessments.
Lund Plane to Bloch (LP2B) Encoding for Object and Polarization Tagging with Quantum Jet Substructure
That author's affiliation: University of Perugia First author institution: University of Perugia Last author institution: Università degli Studi di Perugia
The application of quantum algorithms to jet substructure analysis is of growing interest as NISQ hardware continues to mature in qubit count and gate depth. Jet substructure remains essential for addressing demanding and complementary challenges at the LHC and beyond, notably object classification and polarization tagging. However, existing quantum machine learning approaches typically rely on data representations that suffer from infrared and collinear unsafety, sensitivity to non-perturbative effects, or poor scalability. In this work, we introduce the Lund Plane to Bloch (LP2B) encoding, designed to map a theoretically clean and robust representation of jet kinematics directly into qubit states. Leveraging this encoding, we implement a Quantum Tree-Topology Network (QTTN) that natively embeds the hierarchical structure of the Lund tree. We evaluate the QTTN across multiple benchmarks and observe that it matches the performance of large classical deep learning architectures, such as LundNet, on polarization tagging, while maintaining competitive accuracy for W boson and top quark tagging. The architecture demonstrates enhanced sensitivity compared to standard 1P1Q encodings on both polarization and W tagging, and pushes the Pareto front when compared against MLPs of similar size and against BDTs. Remarkably, the QTTN requires three orders of magnitude fewer parameters than LundNet, showing promise for low-latency FPGA implementations in trigger systems. Furthermore, the QTTN outperforms classical methods in the low-data regime, making it suitable for low-yield, data-driven analyses. We also find that the quantum model is less susceptible to overfitting generator-specific parton shower and hadronization models than classical deep learning approaches, pointing toward potentially smaller systematic uncertainties. We validate the QTTN on real quantum hardware using a 3-qubit SpinQ device.
Harmoniq: Efficient Data Augmentation on a Quantum Computer Inspired by Harmonic Analysis
That author's affiliation: Johannes Kepler University Linz Institution (first & last author): Johannes Kepler University Linz
Quantum machine learning has attracted significant interest in recent years. Most existing approaches, however, are variational in nature and require extensive parameter optimization subroutines. Here, we propose a conceptually distinct quantum machine learning approach that goes beyond the variational paradigm. Harmoniq takes a novel data augmentation technique from quantum harmonic analysis and approximates it as a stochastic mixture of n-qubit circuits with (at most) quadratic depth each. A key strength of Harmoniq is its modularity: viewed as a quantum process acting on density matrices, it can readily be combined with other quantum data processing and learning subroutines. A subsequent case study demonstrates this modularity by combining Harmoniq with stochastic amplitude encoding for the input density matrix and quantum PCA on the output density matrix. This results in a promising signal denoising pipeline that works particularly well in the small sample size regime.
Learning error suppression strategies for dynamic quantum circuits
That author's affiliation: IBM Corporation First author institution: Massachusetts Institute of Technology Last author institution: IBM Thomas J. Watson Research Center
Dynamic quantum circuits integrate unitary evolution with mid-circuit measurement and feedforward, enabling conditional operations essential for efficient quantum algorithms and foundational for fault-tolerant quantum computation. However, such operations introduce measurement-induced errors and control constraints that are not addressed by conventional error-suppression techniques. Here, we introduce an empirical learning framework that optimizes dynamical decoupling (DD) sequences for dynamic circuits at the level of circuit subintervals and qubit subregisters. Applying empirically learned DD sequences, we achieve a three-fold reduction in average dynamic circuit error rates as measured via randomized benchmarking. We apply the learned strategies to the dynamic circuit implementation of the quantum Fourier transform with measurement (QFT+M), demonstrating nontrivial process fidelity on connected chains of up to 20 qubits. Applying the resulting enhancement, we perform a high signal-to-noise QFT immediately following the preparation of a 10-qubit entangled state. Our results demonstrate that empirically optimized DD systematically outperforms theoretically derived sequences for dynamic circuits, establishing it as an efficient approach for error suppression in dynamic quantum circuits, with direct relevance to applications requiring measurement and feedback such as quantum error correction.
Benchmarking Quantum Kernel Support Vector Machines Against Classical Baselines on Tabular Data: A Rigorous Empirical Study with Hardware Validation
That author's affiliation: Helmholtz Zentrum München First author institution: Fraunhofer Institute for Production Technology IPT Last author institution: Schaeffler Technologies AG & Co. KG
Quantum kernel methods have been proposed as a promising approach for leveraging near-term quantum computers for supervised learning, yet rigorous benchmarks against strong classical baselines remain scarce. We present a comprehensive empirical study of quantum kernel support vector machines (QSVMs) across nine binary classification datasets, four quantum feature maps, three classical kernels, and multiple noise models, totalling 970 experiments with strict nested cross-validation. Our analysis spans four phases: (i) statistical significance testing, revealing that none of 29 pairwise quantum-classical comparisons reach significance at $\alpha = 0.05$; (ii) learning curve analysis over six training fractions, showing steeper quantum slopes on six of eight datasets that nonetheless fail to close the gap to the best classical baseline; (iii) hardware validation on IBM ibm_fez (Heron r2), demonstrating kernel fidelity $r \geq 0.976$ across six experiments; and (iv) seed sensitivity analysis confirming reproducibility (mean CV 1.4%). A Kruskal-Wallis factorial analysis reveals that dataset choice dominates performance variance ($\varepsilon^2 = 0.73$), while kernel type accounts for only 9%. Spectral analysis offers a mechanistic explanation: current quantum feature maps produce eigenspectra that are either too flat or too concentrated, missing the intermediate profile of the best classical kernel, the radial basis function (RBF). Quantum kernel training (QKT) via kernel-target alignment yields the single competitive result -- balanced accuracy 0.968 on breast cancer -- but with ~2,000x computational overhead. Our findings provide actionable guidelines for quantum kernel research. The complete benchmark suite is publicly available to facilitate reproduction and extension.
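A minimal sketch of the kind of fidelity-based quantum kernel pipeline benchmarked above is given below, using a toy single-qubit-per-feature angle encoding and an SVM with a precomputed kernel. The feature map, the synthetic dataset, and the scaling step are illustrative assumptions and do not reproduce the study's feature maps, datasets, or nested cross-validation.

import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

def statevector(x):
    # Product state: each feature x_j -> RY(x_j)|0> on its own qubit.
    psi = np.array([1.0 + 0j])
    for xj in x:
        psi = np.kron(psi, np.array([np.cos(xj / 2), np.sin(xj / 2)], dtype=complex))
    return psi

def fidelity_kernel(A, B):
    # K(x, x') = |<psi(x)|psi(x')>|^2, the usual fidelity ("quantum") kernel.
    SA = np.array([statevector(a) for a in A])
    SB = np.array([statevector(b) for b in B])
    return np.abs(SA.conj() @ SB.T) ** 2

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X = np.pi * (X - X.min(0)) / (X.max(0) - X.min(0))   # toy scaling of features to [0, pi]
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(fidelity_kernel(Xtr, Xtr), ytr)               # train on the Gram matrix
print("test accuracy:", clf.score(fidelity_kernel(Xte, Xtr), yte))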
Wave--particle transition and quantum Zeno effect in which-way experiments with a superconducting quantum processor
That author's affiliation: RIKEN / University of Michigan First author institution: University of Georgia Last author institution: RIKEN / The University of Tokyo
Wave--particle duality demonstrates the peculiar nature of quantum mechanics. In which-way experiments, depending on the measurement scheme, a particle exhibits either wave-like or particle-like properties, as summarized by Bohr's principle of complementarity. In this work, we implement Mach-Zehnder (MZ) interferometry on a two-dimensional (2D) superconducting quantum processor. With precise control of the which-way measurement strength, we demonstrate the transition of a photon from wave-like to particle-like behavior. Furthermore, by performing quantum state tomography on two qubits located in the two paths, we demonstrate that which-way measurements break the entanglement and coherence between the two paths and cause information leakage from the quantum system to the environment. To capture this behavior quantitatively, we derive complementarity relations between the entropy and the fringe visibility. By applying a continuous which-way measurement during the evolution, we also observe the quantum Zeno effect that partially obstructs the interferometer path, giving rise to nonmonotonic behavior of purity and von Neumann entropy. Our experiments provide a detailed characterization of the full interferometer dynamics, reveal the relation between wave--particle duality and quantum information, and demonstrate the potential of superconducting quantum processors for testing quantum foundations under high precision and controllability.
Quantum Homomorphic Encryption: Towards Practical and Private Computation on Untrusted Quantum Hardware
That author's affiliation: University of the Basque Country (UPV/EHU) First author institution: TECNALIA, Basque Research and Technology Alliance (BRTA) Last author institution: University of the Basque Country (UPV/EHU)
As quantum computing matures into a practical paradigm, the need for secure and private quantum computation on untrusted hardware becomes increasingly urgent. While classical fully homomorphic encryption has enabled computation over encrypted data in untrusted environments, a fully homomorphic and practically implementable quantum counterpart remains elusive. In this work, we propose a universal quantum homomorphic encryption (QHE) framework developed from the Quantum One-Time Pad (QOTP) scheme. Our approach (QOTPH) maintains information-theoretic security and supports a broad class of quantum operations on encrypted quantum states through a systematic set of homomorphic gate decompositions and key update rules. By leveraging the symmetric structure of QOTP and exploiting the transformation properties of quantum gates under Pauli encryption, we enable non-interactive homomorphic evaluation of arbitrary circuits expressible in the Clifford+T gate set, as well as controlled and parameterized operations relevant to variational quantum algorithms and delegated computation. We provide a formal specification of the proposed encryption model, detail its implementation procedure, and report the results obtained from both simulated environments and real quantum processors. Experimental validation demonstrates the correctness of the homomorphic operations and the preservation of key secrecy under circuit-level noise and real-device constraints. This work takes a step toward bridging the gap between theoretical quantum homomorphic encryption and practical realization on near-term quantum hardware, offering a scalable and symmetric cryptographic primitive for privacy-preserving quantum computation.
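To make the key-update idea concrete, the snippet below tracks the textbook Quantum One-Time Pad key updates under a few Clifford gates: each qubit is encrypted as $X^{a}Z^{b}|\psi\rangle$ and the classical key bits must be updated whenever the server applies a gate to the ciphertext. These are the standard relations for H, S, and CNOT, shown only to illustrate the general mechanism; they are not the paper's full QOTPH specification, which also covers T gates and parameterized operations.

def apply_h(a, b, i):
    # H X^a Z^b = X^b Z^a H (up to a global phase): swap the key bits of qubit i.
    a[i], b[i] = b[i], a[i]

def apply_s(a, b, i):
    # S X^a Z^b = X^a Z^(a XOR b) S (up to phase).
    b[i] ^= a[i]

def apply_cnot(a, b, control, target):
    # CNOT propagates X from control to target and Z from target to control.
    a[target] ^= a[control]
    b[control] ^= b[target]

# Example: track the key through a small Clifford circuit on two qubits.
a, b = [1, 0], [0, 1]          # initial one-time-pad key bits (kept by the client)
apply_h(a, b, 0)
apply_cnot(a, b, 0, 1)
apply_s(a, b, 1)
print("updated key:", a, b)    # the server never learns these bits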
An Oracle-Free Quantum Algorithm for Nonadiabatic Quantum Molecular Dynamics
That author's affiliation: University of Georgia Institution (first & last author): University of Georgia
Quantum computation is an attractive frontier for many problems that are intractable for computers today. One such problem is nonadiabatic quantum molecular dynamics, where quantized internal states coupling to parameterized modes result in a Hamiltonian resistant to oracle-based models and spectral decomposition. This dissertation applies diabatic Hamiltonian operators directly to the computational basis as first-quantized split-operator propagators, validated with dynamic observables including absorption and recurrence spectra, scattering cross-sections, population dynamics, and quantum scars. Circuits are derived and specified, with focused circuit optimization in multi-mode and multi-channel extensions, including multivariate potential energy terms and graph-theoretic optimization from molecular symmetry. Resource estimation shows a circuit-depth advantage against QROM-loading architectures at fault-tolerant scale, and a quantitative comparison against quantum signal processing variants confirms that a Trotter-based architecture retains a scalable T-gate advantage. Expanding beyond electronic states demonstrates that the duality between finite basis and discrete variable representations permits congruent structural decompositions into quantum circuits, extending the use of multi-channel dynamics far beyond chemistry.
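For readers unfamiliar with the propagator structure named above, the following classical 1D reference shows a symmetric split-operator step: exp(-iV dt/2) in position space, exp(-iT dt) in momentum space, then exp(-iV dt/2) again. The harmonic potential, grid, and units (hbar = m = 1) are illustrative; the dissertation's circuits implement multi-channel, multi-mode versions of this structure.

import numpy as np

N, L, dt = 256, 20.0, 0.01
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2                              # assumed potential energy surface
T = 0.5 * k**2                              # kinetic energy in momentum space

psi = np.exp(-(x - 1.0) ** 2)               # displaced Gaussian wave packet
psi /= np.linalg.norm(psi)

expV = np.exp(-1j * V * dt / 2)
expT = np.exp(-1j * T * dt)
for _ in range(1000):                       # symmetric (second-order) Trotter steps
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

print("norm preserved:", np.round(np.linalg.norm(psi), 6))
print("<x>(t) =", np.real(np.sum(np.conj(psi) * x * psi)))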
Single-shot quantum neural networks with amplitude estimation
Quantum neural networks (QNNs) suffer from a fundamental sampling bottleneck since quantum measurements are probabilistic, requiring many circuit executions to estimate outputs with sufficient accuracy. Conventional Monte-Carlo (MC) inference exhibits an $\mathcal{O}(1/\sqrt{N})$ sampling error, rendering QNN inference and training costly on near-term quantum hardware, especially where each shot requires expensive qubit generation. This work introduces a "single-shot" QNN framework by integrating quantum amplitude estimation (AE) into the readout stage. By embedding a trained QNN as a state-preparation oracle within AE, outputs are estimated through coherent interference rather than repeated sampling. We demonstrate that AE-based QNN inference achieves an $\mathcal{O}(1/N)$ error even with a single shot. We further analyze noise robustness and training feasibility, showing that AE can be a powerful primitive for overcoming the sampling overhead of QNNs. This highlights that when the model itself is quantum, quantum algorithms can enhance the computation efficiency.
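For context, the scaling gap stated above can be made explicit with standard results; the amplitude-estimation bound below is the general theorem of Brassard, Høyer, Mosca, and Tapp (2002), quoted here only to illustrate the claim and not taken from the specific QNN construction.

$$\hat a_{\mathrm{MC}} = \frac{1}{N}\sum_{i=1}^{N} s_i,\qquad \operatorname{std}\bigl(\hat a_{\mathrm{MC}}\bigr)=\sqrt{\frac{a(1-a)}{N}}=\mathcal{O}\!\left(\frac{1}{\sqrt{N}}\right),$$

$$\bigl|\hat a_{\mathrm{AE}}-a\bigr|\;\le\;\frac{2\pi\sqrt{a(1-a)}}{M}+\frac{\pi^{2}}{M^{2}}=\mathcal{O}\!\left(\frac{1}{M}\right)\quad\text{with probability at least } 8/\pi^{2},$$

where $a=|\langle 1|\psi_{\mathrm{out}}\rangle|^{2}$ is the output amplitude being read out, the $s_i\in\{0,1\}$ are projective shot outcomes, and $M$ counts applications of the Grover iterate built from the state-preparation oracle.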
Fault-Tolerant Quantum Computing with Trapped Ions: The Walking Cat Architecture
We propose a fault-tolerant quantum computer architecture for trapped-ion devices, which we call the walking cat architecture. Our blueprint includes a compiler, a detailed description of all the quantum error-correction protocols, a micro-architecture, a sufficiently fast decoder, and thorough simulations. The backbone of the architecture is a cat factory, producing cat states distributed throughout the machine, which are consumed to perform logical operations. The walking cat architecture is based entirely on a modern quantum error-correction approach called low-density parity-check (LDPC) codes. We identify promising instances of the walking cat architecture, such as (1) a simple architecture based on a single LDPC code, (2) a fast architecture based on fast logical gates relying on a [[70, 6, 9]] code, equipped with Clifford-frame tracking for any 6-qubit Clifford gate, and (3) a dense architecture based on a [[102, 22, 9]] code encoding 22 logical qubits per memory block. Our dense architecture provides a design with 110 logical qubits executing about one million T gates per day using only 2,514 physical qubits. We estimate that the quantum Hamiltonian simulation of a Heisenberg model on 100 sites can be executed within one month with 10,000 physical qubits, including all shots required to achieve chemical accuracy, suggesting that such a device could enter the regime of classically intractable physics simulations. Our design relies on hardware components that have been experimentally demonstrated on small devices. We emphasize simplicity over hypothetical performance to facilitate the practical realization of this machine. Based on this approach, we believe that a fault-tolerant quantum computer with hundreds of logical qubits capable of running millions of logical gates can be built in the near term, providing a platform to explore a broad range of applications.
Quantum mechanics over real numbers fully reproduces standard quantum theory
Standard quantum mechanics employs complex Hilbert spaces, but whether complex numbers are fundamental or merely convenient has long been debated. For decades, real-valued equivalents were considered mathematically possible but cumbersome. However, a landmark 2021 result claimed that any quantum theory based on real numbers is experimentally falsifiable via network Bell experiments. Yet, it remains an open question whether this falsification applies to all real-valued theories. Here we show that this conclusion rests on an incomplete real formulation, and we present a rigorous real-valued framework that perfectly reproduces all predictions of standard quantum mechanics. We demonstrate that the standard real tensor product ($\otimes_{\mathbb{R}}$) used in previous no-go theorems is algebraically incompatible with the rich structure of standard quantum mechanics. We present a real framework based on \ka space and prove that it is exactly isomorphic to standard quantum mechanics via an explicit bijection $\gamma$. The isomorphism extends to composite systems through a symplectic composition rule $\otimes^{\ks}$ that replaces the Kronecker product. Consequently, our formulation achieves the maximal $\mathrm{CHSH}_{3}$ violation of $6\sqrt{2}$ using purely real variables, directly contradicting previous falsification claims. These results demonstrate that complex numbers are not fundamentally required by nature; rather, they encode a deeper real geometric structure that governs quantum interference and entanglement, settling this long-standing debate.
Architecting Early Fault Tolerant Neutral Atoms Systems with Quantum Advantage
Recent advancements in neutral atom platforms have enabled exploration of early fault-tolerant (FT) architectures for applications with quantum advantage, such as quantum dynamics simulations. An efficient fault-tolerant architecture has both spatially efficient quantum error correction codes (low qubit overhead) and efficient methodologies (transversal-based gates, extractor-based gates, etc.) for logical computation, to minimize overall execution time. Achieving the right balance between space and time can be critical for enabling early FT demonstrations of quantum advantage. In this work, we identify bottlenecks in existing spatially efficient schemes, which tend to be very serial and do not take advantage of unutilized space. We introduce a teleportation-based scheme that leverages the reconfigurable connectivity of neutral atoms to parallelize logical operations. Our approach achieves up to \textbf{$\mathbf{\sim 3 \times}$ speedup} over extractor architectures at no extra space cost and achieves the best spacetime performance among other viable architectures before accounting for external \textit{resource-states}. To rigorously evaluate performance, we construct explicit quantum advantage benchmarks and \textit{simulate} compilation to a fault-tolerant instruction set, including low-level gate scheduling and shuttling patterns, and resource-state nondeterminism. We find that our speedups still apply and report exact space-time cost along with success probabilities, identifying architectures capable of achieving quantum advantage \textbf{with as few as $\mathbf{11,495}$ atoms and a runtime of $\mathbf{\sim 15}$ hours}.
Perspective: Quantum Computing on Magnetic Racetrack
Magnetic domain walls have long been pursued as carriers of classical information for storage and processing. With the ability to create, control, and probe domain walls at the nanoscale, they have recently been recognized as an ideal platform for studying macroscopic quantum effects and provide a natural blueprint for building scalable quantum computing architectures. In particular, the experimentally demonstrated high mobility of domain walls makes them suitable not only as stationary qubits but also as flying qubits, which may offer advantages over currently explored quantum computing platforms. In this Perspective, we outline our current understanding of the essential ingredients and key requirements for realizing universal quantum computation based on magnetic domain walls. We highlight promising concrete material platforms and identify the experiments that are still needed to advance this concept. We also discuss the potential challenges and point to new opportunities in this emerging research direction at the interface between magnetism and quantum information science.
Digital quantum magnetism on a trapped-ion quantum computer
Digital quantum matter -- realized when discrete quantum gates approximate continuous time evolution -- is susceptible to heating into chaotic, structureless states. If digitization errors are adequately suppressed, a long-lived transient regime of approximately energy-conserving dynamics can be observed on gate-based quantum computers. Conservation of energy, in turn, enables the exploration of a wide variety of complex behaviors observed in equilibrium systems, ranging from the nontrivial microscopic origins of thermalization itself to the stabilization of effective models hosting exotic emergent properties. Here, we use Quantinuum's system model H2 quantum computer to simulate digitized dynamics of the quantum Ising model, suppressing digitization errors well enough to observe thermalization on timescales that severely challenge classical simulation methods. Relaxation of an inhomogeneous state reveals an emergent hydrodynamics due to approximate energy conservation, and we compute the associated diffusion constant. By reprogramming our simulations to take place on a triangular lattice with periodic boundary conditions, we observe thermalization consistent with emergent gauge and topological constraints resulting from lattice frustration. Our results were enabled by continued advances in two-qubit gate quality (native partial entangler fidelities of $99.94(1)\%$), and establish digital quantum computers as powerful tools for studying (effectively) continuous-time dynamics.
A quantum turbuloscope: unlocking end-to-end quantum simulation of turbulence
Multiscale organization is a hallmark of complex natural systems, spanning climate dynamics, biological morphogenesis, and fluid turbulence. While quantum computing promises exponential speedups for solving the evolution equations governing these fields, this potential is fundamentally hindered by the quantum state preparation bottleneck: the prohibitive cost of loading complex classical data into quantum states. Here, we overcome this barrier by introducing a physics-informed, three-stage geometric encoding method, the "turbuloscope", which efficiently generates complex turbulent fields. Rather than brute-force data loading, our approach acts as a kaleidoscope, leveraging the intrinsic structures of turbulence. We capture scale-invariant self-similarity via a hyperplane approximation in high-dimensional feature space, and utilize the Hopf fibration to map quantum observables directly onto vortex tubes, the fundamental building blocks of fluid turbulence. Remarkably, the algorithm requires no ancillary qubits, utilizes a linear-depth quantum circuit, and scales logarithmically with the Reynolds number, an exponential speedup compared to classical methods. We demonstrate the power of this method by generating an instantaneous turbulent field at a high Reynolds number of 35,000 across over one billion grid points using only 30 qubits, reproducing Kolmogorov's 5/3 energy spectrum, tangled vortex structures, and strong intermittency. This asymptotically optimal approach not only signals a near-term pathway to practical quantum advantage in fluid dynamics, but also establishes a scalable foundation for the quantum simulation of broad multiscale systems.
Quantum spatial best-arm identification via quantum walks
Quantum reinforcement learning has emerged as a framework combining quantum computation with sequential decision-making, and applications to the multi-armed bandit (MAB) problem have been reported. The graph bandit problem extends the MAB setting by introducing spatial constraints, where the accessibility of arms is restricted by graph connectivity, yet quantum approaches to this setting remain limited. In this paper, we formulate best-arm identification in graph bandits and propose a quantum algorithmic framework, termed Quantum Spatial Best-Arm Identification (QSBAI), which is applicable to general graph structures. This framework uses quantum walks to encode superpositions over graph-constrained actions, thereby extending amplitude amplification and generalizing the quantum BAI algorithm via Szegedy's walk framework. We focus our theoretical analysis on complete and bipartite graphs, deriving the maximal success probability of identifying the best arm and the time step at which it is achieved. Our results clarify how quantum-walk-based search can be adapted to structurally constrained decision problems and provide a foundation for quantum best-arm identification in graph-structured environments.
Cyber Risk Scoring with QUBO: A Quantum and Hybrid Benchmark Study
Assessing cyber risk in complex IT infrastructures poses significant challenges due to the dynamic, interconnected nature of digital systems. Traditional methods often fall short, relying on static and largely qualitative models that do not scale with system complexity and fail to capture systemic interdependencies. In this work, we introduce a novel quantitative approach to cyber risk assessment based on Quadratic Unconstrained Binary Optimization (QUBO), a formulation compatible with both classical computing and quantum annealing. We demonstrate the capabilities of our approach using a realistic 255-node layered infrastructure, showing how risk spreads in non-trivial patterns that are difficult to identify through visual inspection alone. To assess scalability, we further conduct extensive experiments on networks of up to 1000 nodes, comparing classical, quantum, and hybrid classical-quantum workflows. Our results reveal that although quantum annealing produces solutions comparable to classical heuristics, its potential advantages are significantly hindered by the embedding overhead required to map the densely connected cyber-risk QUBO onto the limited connectivity of current quantum hardware. By contrast, hybrid quantum-classical solvers avoid this bottleneck and therefore emerge as a promising option, combining competitive scaling with an improved ability to explore the solution space and identify more stable risk configurations. Overall, this work delivers two main advances. First, we present a rigorous, tunable, and generalizable mathematical model for cyber risk that can be adapted to diverse infrastructures and domains through flexible parameterization. Second, we provide the first comparative study of classical, quantum, and hybrid approaches for cyber risk scoring at scale, highlighting the emerging potential of hybrid quantum-classical methods for large-scale infrastructures.
Properties of multi-qubit variational quantum states representing weighted graphs and their computing with quantum programming
We study multi-qubit variational quantum states that can be regarded as vertex- and edge-weighted graphs. These states are constructed as single-layer variational circuits with $RX$ rotations and $RZZ$ entangling gates, corresponding to graphs of arbitrary structure. In the general case of quantum graph states of arbitrary structure, we derive the geometric measure of entanglement and evaluate quantum correlators. It is shown that these quantities are related to the edge-weight structure around the corresponding vertices in the graph (i.e., edge weights incident to the vertices and vertex weights associated with their closed neighborhoods). In the special case of quantum states representing unweighted graphs, these quantities are related to the degrees of the corresponding vertices in the graph. As an example, we analyze the state associated with the star graph $K_{1,4}$ using noisy quantum computing on the AerSimulator. The results are in good agreement with theoretical predictions. These findings demonstrate a connection between graph structure and quantum properties, enabling the study of properties of classical graphs via quantum computing.
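A hedged sketch of this single-layer construction for the star graph $K_{1,4}$ is shown below: vertex weights set the $RX$ angles and edge weights set the $RZZ$ angles, sampled on the AerSimulator. The specific weight values are arbitrary illustrative numbers, not those used in the paper.

from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Weighted star graph K_{1,4}: qubit 0 is the hub; the weights below are
# placeholder values chosen only to illustrate the circuit structure.
vertex_weights = {0: 0.3, 1: 0.7, 2: 1.1, 3: 0.5, 4: 0.9}
edge_weights = {(0, 1): 0.4, (0, 2): 0.8, (0, 3): 1.2, (0, 4): 0.6}

qc = QuantumCircuit(5)
for q, w in vertex_weights.items():
    qc.rx(w, q)                       # vertex-weighted single-qubit rotations
for (i, j), w in edge_weights.items():
    qc.rzz(w, i, j)                   # edge-weighted entangling gates
qc.measure_all()

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=4096).result().get_counts()
print(counts)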
Q-SINDy: Quantum-Kernel Sparse Identification of Nonlinear Dynamics with Provable Coefficient Debiasing
Quantum feature maps offer expressive embeddings for classical learning tasks, and augmenting sparse identification of nonlinear dynamics (SINDy) with such features is a natural but unexplored direction. We introduce \textbf{Q-SINDy}, a quantum-kernel-augmented SINDy framework, and identify a specific failure mode that arises: \emph{coefficient cannibalization}, in which quantum features absorb coefficient mass that rightfully belongs to the polynomial basis, corrupting equation recovery. We derive the exact cannibalization-bias formula $\Delta\xi_P = (P^\top P)^{-1}P^\top Q\,\hat\xi_Q$ and prove that orthogonalizing quantum features against the polynomial column space at fit time eliminates this bias exactly. The claim is verified numerically to machine precision ($<10^{-12}$) on multiple systems. Empirically, across six canonical dynamical systems (Duffing, Van der Pol, Lorenz, Lotka-Volterra, cubic oscillator, R\"ossler) and three quantum feature map architectures (ZZ-angle encoding, IQP, data re-uploading), orthogonalized Q-SINDy consistently matches vanilla SINDy's structural recovery while uncorrected augmentation degrades true-positive rates by up to 100\%. A refined dynamics-aware diagnostic, $R^2_Q$ for $\dot X$, predicts cannibalization severity with statistical significance (Pearson $r=0.70$, $p=0.023$). An RBF classical-kernel control across 20 hyperparameter configurations fails more severely than any quantum variant, ruling out feature count as the cause. Orthogonalization remains robust under depolarizing hardware noise up to 2\% per gate, and the framework extends without modification to Burgers' equation.
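The bias formula quoted above and its removal under orthogonalization can be checked numerically in a few lines; the library matrices and coefficients below are random placeholders, not a real SINDy library or quantum feature map.

import numpy as np

# Check of Delta_xi_P = (P^T P)^{-1} P^T Q xi_Q and its exact removal once the
# quantum features Q are orthogonalized against the polynomial column space.
rng = np.random.default_rng(0)
m, p, q = 500, 6, 4
P = rng.normal(size=(m, p))                                  # polynomial library columns
Q = rng.normal(size=(m, q)) + P @ rng.normal(size=(p, q))    # correlated quantum features
xi_Q = rng.normal(size=q)                                    # coefficient mass absorbed by Q

proj = np.linalg.solve(P.T @ P, P.T)   # (P^T P)^{-1} P^T
bias_raw = proj @ Q @ xi_Q             # nonzero: Q leaks into the P coefficients

Q_perp = Q - P @ (proj @ Q)            # orthogonalize Q against span(P) at fit time
bias_fixed = proj @ Q_perp @ xi_Q      # identically zero up to round-off

print("raw bias norm  :", np.linalg.norm(bias_raw))
print("fixed bias norm:", np.linalg.norm(bias_fixed))   # ~machine precision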
The Rise of Quantum Computing -- Take a BITE for Built Environment and Urban Microclimate Research
Quantum computing is a new approach to computation that utilizes superposition, entanglement, interference, and tunneling to solve problems too complex for classical computers. This paper discusses the basic concepts and development of quantum computing, exploring its potential applications in the built environment and urban microclimate research. In buildings, quantum computing may help optimize energy management, control HVAC systems, and plan electric vehicle charging networks more efficiently. For urban microclimates, it could accelerate renewable energy planning and support multi-objective design, making it easier to balance urban building performance with climate conditions. Since current quantum hardware is still in the Noisy Intermediate-Scale Quantum (NISQ) stage, we propose the "BITE" principle to guide researchers in choosing suitable problems for quantum acceleration: B (Big search), I (Input-light), T (Tiny computation), and E (Evaluation polish). Although quantum computing still faces challenges such as noise and hardware limits, it offers great potential for developing more climate-resilient, sustainable, and energy-efficient cities of the future.
Classical and Quantum Machine Learning for Population-Level Prediction of Heat-Related Physiological Events
That author's affiliation: La Salle Campus Barcelona - Universitat Ramon Llull First author institution: Universidad de Valladolid Last author institution: La Salle – U. Ramon Llull
Predicting heat-related physiological events at the population level is challenging due to the complex interactions among climatic, demographic, and socioeconomic factors, as well as the strong sparsity and seasonality of observational data. In this work, we propose a unified predictive framework that integrates heterogeneous environmental and public-health datasets and evaluates two learning paradigms within a common pipeline: classical machine learning and quantum machine learning. The methodology combines data harmonization, temporal aggregation, feature engineering, and dimensionality reduction to construct a weekly county-level population dataset. On this unified representation, we train both a classical regression baseline and a variational quantum model based on parameterized quantum circuits with angle embedding and data re-uploading. Experimental evaluation on datasets from the United States and Catalonia shows that classical models currently achieve higher predictive accuracy, particularly under conditions of strong class imbalance and sparse targets. Nevertheless, the quantum models demonstrate non-trivial learning capability and capture meaningful predictive structure in several scenarios. These results provide an empirical comparison between classical and quantum learning approaches for population-level physiological prediction and establish a methodological foundation for future hybrid health modeling as quantum hardware continues to evolve.
Efficient $n$-qubit entangling operations via a superconducting quantum router
That author's affiliation: University of Chicago Institution (first & last author): University of Chicago
Quantum algorithms on near-term quantum processors are typically executed using shallow quantum circuits composed of one- and two-qubit gates. However, as circuit depth and gate number increase, gate imperfections and qubit decoherence begin to dominate, limiting algorithmic complexity. An alternative approach is to explore gates involving more than two qubits. In previous work (X. Wu et al., Physical Review X 14, 041030 (2024)), we demonstrated a new superconducting qubit architecture with user-selectable two-qubit interactions via a reconfigurable router used to connect pairs of qubits. Here, we leverage this novel architecture to realize programmable and efficient multi-qubit operations involving more than two qubits, resulting in faster preparation of multi-qubit entangled states with good fidelities. We also successfully apply model-free reinforcement learning to perform multi-qubit gates, including training a two-qubit controlled-Z gate as well as three-qubit controlled-SWAP and controlled-controlled-phase (Fredkin and Toffoli) gates. Higher-order $n$-qubit gates may also be feasible using our high-connectivity router design. This could provide a more efficient and higher-fidelity implementation of complex quantum algorithms and a more practical approach to quantum computation.
Quantum computation at the edge of chaos
That author's affiliation: University of Hamburg Institution (first & last author): University of Hamburg
A key challenge in classical machine learning is to mitigate overparameterization by selecting sparse solutions. We translate this concept to the quantum domain, introducing quantum sparsity as a principle based on minimizing quantum information shared across multiple parties. This allows us to address fundamental issues in quantum data processing and convergence issues such as the barren plateau problem in Variational Quantum Algorithms (VQAs). We propose a practical implementation of this principle using the topological Entanglement Entropy (TEE) as a cost function regularizer. A non-negative TEE is associated with states with a sparse structure in a suitable basis, while a negative TEE signals untrainable chaos. The regularizer, therefore, guides the optimization along the critical edge of chaos that separates these regimes. We link the TEE to structural complexity by analyzing quantum states encoding functions of tunable smoothness, deriving a quantum Nyquist-Shannon sampling theorem that bounds the resource requirements and error propagation in VQAs. Numerically, our TEE regularizer demonstrates significantly improved convergence and precision for complex data encoding and ground-state search tasks. This work establishes quantum sparsity as a design principle for robust and efficient VQAs.
Explainable quantum regression algorithm with encoded data structure
That author's affiliation: University of California San Diego First author institution: University of California San Diego Last author institution: University of Helsinki
Hybrid variational quantum algorithms are promising for solving practical problems, such as combinatorial optimization, quantum chemistry simulation, quantum machine learning, and quantum error correction on noisy quantum computers. However, variational quantum algorithms, whether derived from randomized hardware-efficient ansätze or adaptive ansätze, behave as black boxes: they are difficult to trust for model interpretation, let alone for deployment in applications that inform critical decisions. In this paper, we construct the first interpretable quantum regression algorithm, in which the quantum state exactly encodes the classical data table and the variational parameters correspond directly to the regression coefficients, which are real numbers by construction; this provides a high degree of model interpretability and a minimal optimization cost because the ansatz has just the right expressiveness. We also exploit the encoded data structure to reduce the gate complexity of computing the regression map. To reduce circuit depth in nonlinear regression, our algorithm can be extended by directly constructing nonlinear features via classical preprocessing, such as independent encoded column vectors. By design, the model performance is determined by the measured cost function $\mathcal{C}$, which tracks the mean squared error (MSE) of the regression model. We derive the read-out errors induced by one-hot encoding and compact encoding; the required physical qubit resources are exponentially compressed for the compact encoding, making it favorable for noisy quantum devices. We also derive the cost-function-dependent sample complexity $\mathcal{O}\left(\sigma^{2}(\mathcal{C}) \ln (1/\alpha)/\epsilon^{2}\right)$ under the error budget $\epsilon$ and confidence tolerance $\alpha$.
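For orientation, the quoted sample-complexity scaling translates into a shot estimate as in the following sketch (the prefactor c and the toy numbers are illustrative assumptions, since the abstract only states the asymptotic bound):

    import math

    def shots_needed(var_C, eps, alpha, c=1.0):
        """Illustrative shot estimate from the quoted scaling
        O(sigma^2(C) * ln(1/alpha) / eps^2); the prefactor c is a placeholder."""
        return math.ceil(c * var_C * math.log(1.0 / alpha) / eps**2)

    print(shots_needed(var_C=0.25, eps=0.01, alpha=0.05))  # ~7.5e3 shots for these toy numbers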
Digital Predistortion for Flux Control of Tunable Superconducting Qubits
That author's affiliation: Keysight Technologies First author institution: Jaya Institute of Technology Last author institution: Keysight Technologies
Flux-tunable superconducting qubits rely on fast flux control pulses to implement two-qubit entangling quantum gates, a key building block for quantum algorithms. However, distortion effects introduced by non-ideal control electronics, parasitic components, and the cryogenic quantum chip response can all degrade the gate fidelity. We present a digital predistortion (DPD) framework for characterizing and then compensating for these distortions using a combination of infinite impulse response (IIR) and finite impulse response (FIR) filters. Experiments on a flux-tunable quantum processing unit (QPU) demonstrate a successful correction of step-response distortions on the flux-control line, with a compensated control signal showing only sub-percent deviations from the ideal target linear behavior. The demonstrated method enables automated rapid calibration of flux control channels for superconducting QPUs.
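As a toy illustration of the predistortion idea (not the paper's calibrated IIR/FIR filters), the following sketch models the flux line as a single-pole low-pass filter and pre-applies its exact first-order FIR inverse, so the distorted output of the predistorted pulse reproduces the ideal step; the pole value a and the pulse shape are arbitrary assumptions.

    import numpy as np

    # Toy digital predistortion: model the flux line as a one-pole low-pass filter
    # y[n] = (1 - a) x[n] + a y[n - 1] and pre-apply its exact FIR inverse.
    # Illustrative only; the paper fits IIR + FIR corrections to measured step responses.

    a = 0.2                                                  # hypothetical pole of the distortion
    target = np.concatenate([np.zeros(5), np.ones(20)])      # ideal flux step

    def distort(x, a):
        y = np.zeros_like(x)
        for n in range(len(x)):
            y[n] = (1 - a) * x[n] + a * (y[n - 1] if n else 0.0)
        return y

    def predistort(x, a):
        # inverse of H(z) = (1-a)/(1 - a z^-1) is the FIR filter (1 - a z^-1)/(1 - a)
        xp = np.zeros_like(x)
        for n in range(len(x)):
            xp[n] = (x[n] - a * (x[n - 1] if n else 0.0)) / (1 - a)
        return xp

    corrected = distort(predistort(target, a), a)
    print(np.max(np.abs(corrected - target)))   # ~1e-16: distortion cancelled in this toy model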
A Modular Cryogenic Link for Microwave Quantum Communication Over Distances of Tens of Meters
That author's affiliation: ETH Zurich Institution (first & last author): ETH Zurich
Quantum technologies promise a radically new way to solve classically intractable computing problems. Superconducting circuits as a platform are at the forefront of this field. However, the cryogenic operating temperatures of superconducting circuits pose challenges for scaling up to many connected quantum information processing units in a local-area or global network. In this work, we present a hardware solution for connecting quantum devices operating at microwave frequencies into local area networks, which enable the exchange of quantum information between spatially separated parties. Specifically, we demonstrate a modular system spanning distances of 5, 10 and 30 meters operated at cryogenic temperatures and connecting two superconducting circuit systems, located in individual dilution refrigerators, through a quantum communication channel. We develop a thermal model to evaluate the heat transfer processes in the setup, optimize the design and select appropriate materials for its construction. The assembled 30-meter-long system reaches operating temperatures below 50 mK after a cooldown time of about six and a half days. This link enables the execution of distributed quantum computing and communication algorithms. It also adds the resource of non-locality, certified by a loophole-free Bell test, to the field of quantum science and technology with superconducting circuits.
A unified framework for efficient quantum simulation of nonlinear spectroscopy
That author's affiliation: Peking University Institution (first & last author): Peking University
Nonlinear spectroscopy is a cornerstone of quantum science, providing unique access to multi-point correlations, quantum coherence, and couplings that are invisible to linear methods. However, classical simulation of these phenomena is fundamentally limited by the exponential growth of the Hilbert space, and practical quantum algorithms for the nonlinear regime have remained largely unexplored. Here, we present a unified quantum algorithmic framework for computing $n$-th order nonlinear spectroscopies. By reformulating multi-time responses as a weighted sum of expectation values at finite pump amplitudes via a generalized parameter shift rule, our approach bypasses the costly evaluation of high-order commutators and time-dependent operator expansions. This reformulation enables efficient execution via real-time evolution on current quantum hardware, ensuring inherent noise resilience. We validate the framework on IBM's superconducting quantum processors, successfully obtaining higher-order response functions of a 12-qubit XXZ spin chain. Furthermore, the versatility of our method is demonstrated by resolving quasi-particle excitation spectra in spin liquids and identifying interaction-induced cross-peaks in atomic systems. Our results establish a practical and scalable pathway for probing complex quantum dynamics on near-term quantum devices, extending the reach of quantum simulation into the nonlinear domain.
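For context, the ordinary first-order parameter-shift rule that the generalized rule above extends evaluates a derivative of an expectation value from two shifted circuit runs; the single-qubit check below (plain NumPy, with $R_y$ and a Pauli-Z observable as illustrative assumptions) verifies $\partial_\theta\langle Z\rangle = \tfrac{1}{2}\left[\langle Z\rangle(\theta+\pi/2) - \langle Z\rangle(\theta-\pi/2)\right]$.

    import numpy as np

    # First-order parameter-shift rule checked on a single qubit:
    # <Z>(theta) = cos(theta) for |psi> = Ry(theta)|0>, so d<Z>/dtheta = -sin(theta).

    def expval_z(theta):
        state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        return state[0]**2 - state[1]**2           # <Z> = cos(theta)

    theta = 0.7
    shift_rule = 0.5 * (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2))
    print(shift_rule, -np.sin(theta))               # both ~ -0.6442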
Coherence dynamics in Simon's quantum algorithm
That author's affiliation: Capital Normal University First author institution: Nanchang University Last author institution: Capital Normal University
Quantum coherence plays a pivotal role in quantum algorithms. We study the coherence dynamics of the evolved states in Simon's quantum algorithm based on the Tsallis relative $\alpha$ entropy and the $l_{1,p}$ norm. We prove that the coherences of the first register and the second register both depend on the dimension $N$ of the state space of the $n$-qubit systems, and increase as $N$ increases. We show that the oracle operator $O$ does not change the coherence. Moreover, we study the coherence dynamics in Simon's quantum algorithm and prove that, overall, coherence is produced when $N>4$ and depleted when $N<4$.
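For intuition about the dimension dependence, a quick check with the simpler $l_1$ norm of coherence (a special case related to the $l_{1,p}$ norm used above): the uniform superposition prepared on the first register has all off-diagonal density-matrix entries equal to $1/N$, so $C_{l_1}\left(\frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}|x\rangle\right)=\sum_{i\neq j}\frac{1}{N}=N-1$, which grows monotonically with $N=2^n$, consistent with the stated dependence on the register dimension.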
Towards Ultra-High-Rate Quantum Error Correction with Reconfigurable Atom Arrays
That author's affiliation: Massachusetts Institute of Technology First author institution: Unknown Last author institution: Massachusetts Institute of Technology
Quantum error correction is widely believed to be essential for large-scale quantum computation, but the required qubit overhead remains a central challenge. Quantum low-density parity-check codes can substantially reduce this overhead through high-rate encodings, yet finite-size instances with practical logical error rates often achieve encoding rates only around or below $1/10$. Here, building on a recent ultra-high-rate construction by Kasai, we identify new structural conditions on the underlying affine permutation matrices that make encoding rates exceeding $1/2$ compatible with efficient implementation on reconfigurable neutral atom arrays. These conditions define a co-designed family of ultra-high-rate quantum codes that supports efficient syndrome extraction and atom rearrangement under realistic parallel control constraints. Using a hierarchical decoder with high accuracy and good throughput, we study the performance under a circuit-level noise model with $p=0.1\%$, achieving per-logical-per-round error rates of $1.3_{-0.9}^{+3.0} \times 10^{-13}$ with a $[[2304,1156,\leq 14]]$ code and $2.9_{-1.5}^{+3.1} \times 10^{-11}$ with a $[[1152,580,\leq 12]]$ code. These results approach the teraquop regime, highlighting the promise of this code family for practical ultra-high-rate quantum error correction.
A digitally controlled silicon quantum processing unit
That author's affiliation: HRL Laboratories (United States) Institution (first & last author): HRL Laboratories (United States)
Commercially relevant quantum computers will require large numbers of high-performing qubits that can be manufactured, integrated, and controlled at scale. Silicon exchange-only (EO) qubits are a strong candidate modality due to their control-signal simplicity and compatibility with advanced semiconductor manufacturing, but questions remain around the achievability of sufficiently low noise and a scalable control and wiring solution. Here we introduce a quantum processing unit composed of a custom-designed cryogenic CMOS controller, a novel high-density superconducting ribbon cable, and a low-noise EO qubit device. The quantum chip features a three-rail array of 54 exchange-coupled quantum dots, configurable to host up to 18 EO qubits. We integrate and use these components to demonstrate qubit performance for both single-qubit and entangling operations that advances the EO state of the art by an order of magnitude. We further validate this system by implementing a distance-5 repetition code and a quantum error-detecting code, and then make detailed comparisons with simulations. Our approach facilitates a utility-scale quantum computer with manageable operational and capital requirements.
Yttrium ion as a platform for quantum information processing
That author's affiliation: University of Delaware First author institution: University of Washington Last author institution: University of California, Los Angeles
Engineering large-scale quantum computers which simultaneously provide high-fidelity quantum operations, low memory errors, low crosstalk, and reasonable resource usage remains an outstanding challenge across quantum computing platforms. In trapped ions, progress has largely focused on alkaline-earth and ytterbium ions, whose simple electronic structures facilitate control over their internal state. Here we investigate singly-ionized yttrium ($^{89}\mathrm{Y}^+$), a two-valence-electron ion whose ground-state manifold hosts a nuclear-spin qubit and which also features a variety of low-lying metastable manifolds, for applications in quantum information processing. Because experimental data are limited, we perform high-resolution laser-induced fluorescence spectroscopy to measure the hyperfine structure of several low-lying levels, and carry out comprehensive electronic structure calculations to determine lifetimes, transition matrix elements, and hyperfine coefficients for manifolds addressable with visible, near-visible, or infrared wavelengths. Using these results, we analyze schemes for qubit storage, initialization, readout, leakage mitigation, and single- and two-qubit gates. These results position $^{89}\mathrm{Y}^+$ as a uniquely capable next-generation trapped-ion qubit, combining field-insensitive nuclear-spin or clock-qubit storage with spectrally isolated transitions for operations.
Fast, High-Fidelity Erasure Detection of Dual-Rail Qubits with Symmetrically Coupled Readout
Erasure qubits are a promising platform for implementing hardware-efficient quantum error correction. Realizing the error-correction advantages of this encoding requires frequent mid-circuit erasure checks that are fast, high-fidelity, and scalable. Here, we realize erasure detection with a hardware-efficient circuit consisting of a single readout resonator dispersively and symmetrically coupled to both transmons of a dual-rail qubit. We use this circuit to demonstrate single-shot erasure detection in 384 ns with minimal impact on the dual-rail logical manifold, achieving a residual error per check of $6.0(2) \times 10^{-4}$, with only $8(3) \times 10^{-5}$ induced dephasing per check, and an erasure error per check of $2.54(1)\times 10^{-2}$. The high degree of matched dispersive readout coupling ($\chi$-matching) within the dual-rail qubit code space also allows us to realize a new modality: time-continuous erasure detection performed in parallel with single-qubit gates. Here we achieve a median $7.2 \times 10^{-5}$ error per gate with $< 1 \times 10^{-5}$ error induced by erasure detection. This demonstrates a reduction in erasure detection overhead as well as a crucial ingredient for soft information quantum error correction. Together, these results establish symmetrically coupled dispersive readout as a fast, hardware-efficient, and scalable component for erasure-based quantum error correction using transmon dual-rail qubits.
Federated Learning with Quantum Enhanced LSTM for Applications in High Energy Physics
Learning with large-scale datasets and information-critical applications, such as in High Energy Physics (HEP), demands highly complex, large-scale models that are both robust and accurate. To tackle this issue and cater to the learning requirements, we envision using a federated learning framework with a quantum-enhanced model. Specifically, we design a hybrid quantum-classical long short-term memory model (QLSTM) for local training at distributed nodes. It combines the representative power of quantum models in understanding complex relationships within the feature space with an LSTM-based model that learns the necessary correlations across data points. Given the computing limitations and unprecedented cost of current stand-alone noisy intermediate-scale quantum (NISQ) devices, we propose to use a federated learning setup, where the learning load can be distributed to local servers as per design and data availability. We demonstrate the benefits of such a design on a classification task for the Supersymmetry (SUSY) dataset, which has 5M rows. Our experiments indicate that the performance of this design is not only better than some of the existing work using variational quantum circuit (VQC) based quantum machine learning (QML) techniques, but is also comparable ($\Delta \sim \pm 1\%$) to that of classical deep-learning benchmarks. An important observation from this study is that the designed framework has $<$300 parameters and only needs 20K data points to give a comparable performance, which amounts to a 100$\times$ improvement over the compared baseline models. This shows an improved learning capability of the proposed framework with minimal data and resource requirements, due to the joint model with an LSTM-based architecture and a quantum-enhanced VQC.
Quantum communication networks with defects in silicon carbide
Quantum communication promises unprecedented capabilities enabled by the transmission of quantum states of light. However, current implementations face severe distance limitations due to photon loss. Silicon carbide (SiC) defects have emerged as a promising quantum device platform, offering strong optical transitions, long spin coherence lifetimes and the opportunity for integration with semiconductor devices. Some defects with optical transitions in the telecom range have been identified, allowing them to interface with fiber networks without the need for wavelength conversion. These unique properties make SiC an attractive platform for the implementation of quantum nodes for quantum communication networks. We provide an overview of the most prominent defects in SiC and their implementation in spin-photon interfaces. Furthermore, we model an exemplary, memory-enhanced quantum communication protocol in order to extract the parameters required to surpass the performance of a direct point-to-point link. Based on these insights, we summarize the key steps required towards the deployment of SiC devices in large-scale quantum communication networks.
Resource-efficient equivariant quantum convolutional neural networks
Equivariant quantum neural networks (QNNs) are promising variational models that exploit symmetries to improve machine learning capabilities. Despite theoretical developments in equivariant QNNs, their implementation on near-term quantum devices remains challenging due to limited computational resources. This study proposes a resource-efficient model of equivariant quantum convolutional neural networks (QCNNs) called equivariant split-parallelizing QCNN (sp-QCNN). Using a group-theoretical approach, we encode general symmetries into our model beyond the translational symmetry addressed by previous sp-QCNNs. We achieve this by splitting the circuit at the pooling layer while preserving symmetry. This splitting structure effectively parallelizes QCNNs to improve measurement efficiency in estimating the expectation value of an observable and its gradient by order of the number of qubits. Our model also exhibits high trainability and generalization performance, including the absence of barren plateaus. Numerical experiments demonstrate that the equivariant sp-QCNN can be trained and generalized with fewer measurement resources than a conventional equivariant QCNN in a noisy quantum data classification task. Our results contribute to the advancement of practical quantum machine learning algorithms.
Generalized quantum singular value transformation with application in quantum conjugate gradient least squares algorithm
Quantum signal processing (QSP) and generalized quantum signal processing (GQSP) are essential tools for implementing the block encoding of matrix functions. The polynomials achievable with QSP are restricted in parity; GQSP removes these restrictions but only constructs functions of unitary matrices. In this paper, we further investigate GQSP and extend it to general matrices. Compared with the quantum singular value transformation (QSVT), our proposed method relaxes the requirements on the parity of polynomials. We refer to this extension as the generalized quantum singular value transformation (GQSVT). Subsequently, by utilizing the relationship between generalized matrix functions and standard matrix functions, we propose a hybrid classical-quantum conjugate gradient least squares (CGLS) algorithm using GQSVT.
AI-Enabled Decoding of Qubit Loss for Quantum Error-Correcting Codes
Qubit loss is a major source of error in quantum computation, as it invalidates the algebraic structure of the standard stabilizer formalism for quantum error-correcting codes. On the one hand, it complicates decoding; on the other hand, it introduces stochastic flicker patterns in stabilizers as a hallmark of qubit loss. Here, we develop an artificial-intelligence-enabled decoder based on a spatiotemporal Graph Neural Network (STGNN) architecture to extract spatial and temporal correlations from syndrome histories. Our decoder performs a dual-head task, simultaneously correcting standard Pauli errors and identifying the locations of qubit loss. Our decoder achieves significantly higher logical accuracy than both the traditional minimum-weight perfect matching (MWPM) algorithm and even delayed-erasure MWPM decoders that use qubit loss information from the final round as input. Our decoder can also identify more than 90% of loss locations after accumulating stabilizer measurements over the subsequent ten rounds, thereby facilitating qubit reinitialization, for instance, via the continuous loading technique on the atom array platform. For both tasks, our STGNN performs nearly identically to a modified version of AlphaQubit, but it employs a parallel input structure, giving it an advantage in inference time over modified AlphaQubit's recurrent input structure. This work provides a robust and scalable framework for correcting qubit loss errors, paving the way for more efficient fault-tolerant quantum computation.
Entanglement and circuit complexity in finite-depth random linear optical networks
We study the growth of entanglement and circuit complexity in random passive linear optical networks as a function of the circuit depth. For entanglement dynamics, we start with an initial Gaussian state with all $n$ modes squeezed. For random brickwall circuits, we show that entanglement, as measured by the R\'enyi-2 entropy, grows at most diffusively as a function of the depth. In the other direction, for arbitrary circuit geometries we prove bounds on depths which ensure the average subsystem entanglement reaches within a constant factor of the maximum value in all subsystems, and bounds which ensure closeness of the random linear optical unitary to a Haar random unitary in $L^2$ Wasserstein distance. We also consider robust circuit complexity for random one-dimensional brickwall circuits, as measured by the minimum number of gates required in any circuit that approximately implements the linear optical unitary. Viewing this as a function of the number of modes and the circuit depth, we show the robust circuit complexity for random one-dimensional brickwall circuits scales at most diffusively in the depth with high probability. The corresponding Gaussian unitary $\tilde{\mathcal U}$ for the approximate implementation retains high output fidelity $|\langle\psi|\mathcal U^\dagger \tilde{\mathcal U}|\psi\rangle|^2$ for pure states $|\psi\rangle$ with constrained expected photon-number.
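As a small numerical cross-check of the entanglement measure used above (not the brickwall-circuit results themselves), the Rényi-2 entropy of a Gaussian state reduces to $\frac{1}{2}\ln\det\sigma$ under the convention that the vacuum covariance matrix is the identity; the sketch below evaluates it for one arm of a two-mode squeezed vacuum, where the known answer is $\ln\cosh(2r)$.

    import numpy as np

    # Renyi-2 entropy of a Gaussian state: S_2 = (1/2) ln det(sigma), with the
    # convention that the vacuum covariance matrix is the identity.
    # Toy check on one mode of a two-mode squeezed vacuum (not the paper's circuits).

    def renyi2(sigma):
        return 0.5 * np.log(np.linalg.det(sigma))

    r = 0.8                                          # squeezing parameter
    sigma_single_mode = np.cosh(2 * r) * np.eye(2)   # reduced covariance of one TMSV arm
    print(renyi2(sigma_single_mode), np.log(np.cosh(2 * r)))   # both equal ln cosh(2r)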
Scalable quantum error correction tailored for a heavy-hex qubit array
To produce an operable quantum computer that is made with imperfect hardware, we must design and test scalable quantum error correcting codes that are suited for the devices we can build and, in unison, develop decoding strategies that accommodate device-specific noise characteristics. Here, we introduce the \emph{dynamic compass code}, a subsystem code with a novel syndrome extraction cycle, that has a competitive threshold while making efficient use of qubits arranged on a heavy-hex lattice. We use a superconducting qubit array to implement a distance-5 instance of this code, and demonstrate how detailed noise characterisation can boost decoder performance to yield significant improvements in logical error rates. We perform averaged circuit eigenvalue sampling (ACES) to acquire detailed context-dependent error information on all elements of the syndrome extraction process. Furthermore, we leverage soft information produced from measurement devices to augment the decoder with measurement error information and detect leakage errors for exclusion through post-selection. Our noise-informed approach yields up to 38.3\% improvement in the logical error rate of a distance-5 implementation of the dynamic compass code in experiment.
Low-valency scalable quantum error correction with a dynamic compass code
The ongoing development of hardware that is capable of reliably executing general quantum algorithms requires quantum error-correcting codes that are both practical for realisation and rapidly reduce logical error rates as they are scaled up. Here we introduce the dynamic compass code, a code that can be implemented with a modest footprint on the heavy-hex lattice while also demonstrating a threshold. The dynamic code is obtained by choosing a novel measurement schedule for the syndrome extraction circuit of the heavy-hex subsystem code. We numerically evaluate its performance and observe that different choices of schedule can provide a trade-off in protection against logical errors in the $X$ vs $Z$ basis. We also demonstrate that this new measurement schedule provides the code with a threshold for stability experiments. We finally show how the dynamic compass code could be used for fault-tolerant logic by illustrating lattice surgery between code patches.
Distributed Variational Quantum Linear Solver
The Variational Quantum Linear Solver (VQLS), a hybrid quantum-classical algorithm for solving linear systems, faces a practical scalability bottleneck: the Linear Combination of Unitaries (LCU) decomposition requires O(L^2) circuit evaluations per optimizer iteration, where $L$ can grow as 4^n for n-qubit systems in the worst case. We address this computational bottleneck through two complementary strategies. First, we present a distributed VQLS (D-VQLS) framework, built on NVIDIA CUDA-Q, that enables asynchronous, scalable distribution of the O(L^2) cost-function evaluations. Second, a fast Walsh--Hadamard transform (FWHT)-based Pauli decomposition with 1% coefficient thresholding curbs the exponential growth of LCU terms, reducing L from O(2^n) to O(1) for n > 6 qubits and compressing the per-iteration circuit complexity from O(n * 4^n) to O(n) for sparse, structured matrices. For a 10-qubit tridiagonal Toeplitz system, this yields a 256x reduction, from 23 million to 90,112 circuits per iteration, while preserving over 99.99% solution fidelity. Additionally, to inform feasibility on early fault-tolerant QPUs, the paper provides resource estimates -- gate counts, qubit requirements, and circuit evaluations per iteration -- for VQLS applied to arbitrary matrices. The D-VQLS framework is validated on the NERSC Perlmutter supercomputer using multi-node, multi-GPU ideal state-vector simulations, achieving over 99.99% fidelity against classical solutions on tridiagonal Toeplitz and Hele--Shaw flow benchmarks, with near-ideal strong scaling up to 24 GPUs and 95.3% weak scaling efficiency at 96 GPUs processing 360,448 circuits per iteration for a 10-qubit system. Systematic profiling identifies the optimal resource allocation for distributed quantum circuit workloads, yielding a 2.52x speedup for the configurations studied.
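To illustrate what the LCU term count $L$ and coefficient thresholding refer to, the following is a brute-force Pauli decomposition by trace inner products (a naive O(4^n) reference, not the FWHT-based routine of the paper; the matrix and threshold are arbitrary assumptions):

    import numpy as np
    from itertools import product

    # Brute-force Pauli decomposition A = sum_P c_P P with c_P = Tr(P A) / 2^n,
    # followed by coefficient thresholding. Illustrates the LCU term count that the
    # paper's FWHT-based routine obtains far more efficiently.

    paulis = {"I": np.eye(2), "X": np.array([[0, 1], [1, 0]]),
              "Y": np.array([[0, -1j], [1j, 0]]), "Z": np.diag([1, -1])}

    def pauli_terms(A, threshold=1e-2):
        n = int(np.log2(A.shape[0]))
        terms = {}
        for labels in product("IXYZ", repeat=n):
            P = np.array([[1.0 + 0j]])
            for l in labels:
                P = np.kron(P, paulis[l])
            c = np.trace(P @ A) / 2**n
            if abs(c) > threshold:
                terms["".join(labels)] = c
        return terms

    A = np.diag([2.0, -1.0, -1.0, 0.0])      # toy diagonal 2-qubit matrix
    print(pauli_terms(A))                     # only a handful of Pauli strings survive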
Photonic state engineering via energy-level crossing by giant atoms in topological waveguide QED setup
Photonic state engineering in waveguide QED is typically based on local light-matter interactions. This limits its control over the spatial structure of bound photonic states. Here, we demonstrate a distinct mechanism arising from the interplay between nonlocal giant-atom coupling and topological band structure. Specifically, we consider giant atoms coupled to a Su-Schrieffer-Heeger waveguide and show that this configuration enables a controllable energy-level crossing protected by the topological gap. Adiabatically sweeping the atomic detuning across the crossing leads to a controlled exchange between distinct photonic bound states. In a two-giant-atom configuration, this mechanism achieves high-fidelity conversion of a spatially splitting state into a combining state. Extending this scheme to three giant atoms, we further realize robust, shape-preserving photon transfer mediated by sequential in-gap crossings. Our results demonstrate how topology and nonlocal light-matter coupling can be combined to achieve programmable control of bound photonic states in waveguide QED platforms.
Coherence dynamics in quantum algorithm for linear systems of equations
Quantum coherence is a fundamental issue in quantum mechanics and quantum information processing. We explore the coherence dynamics of the evolved states in the HHL quantum algorithm for solving linear systems of equations $A\overrightarrow{x}=\overrightarrow{b}$. By using the Tsallis relative $\alpha$ entropy of coherence and the $l_{1,p}$ norm of coherence, we show that the operator coherence of the phase estimation $P$ relies on the coefficients $\beta_{i}$ obtained by decomposing $|b\rangle$ in the eigenbasis of $A$. We prove that the operator coherence of the inverse phase estimation $\widetilde{P}$ relies on the coefficients $\beta_{i}$, the eigenvalues of $A$ and the success probability $P_{s}$, and that it decreases as the success probability increases when $\alpha\in(1,2]$. Moreover, the variation of coherence diminishes as the success probability increases and depends on the eigenvalues of $A$ as well as on the success probability.
Learning to Concatenate Quantum Codes
Concatenating quantum error correction codes scales error correction capability by driving logical error rates down double-exponentially across levels. However, the noise structure shifts under concatenation, making it hard to choose an optimal code sequence. We automate this choice by estimating the effective noise channel after each level and selecting the next code accordingly. In particular, we use learning-based methods to tailor small, non-additive encoders when the noise exhibits sufficient structure, then switch to standard codes once the noise is nearly uniform. In simulations, this level-wise adaptation achieves a target logical error rate with far fewer qubits than concatenating stabilizer codes alone--reducing qubit counts by up to two orders of magnitude for strongly structured noise. Therefore, this hybrid, learning-based strategy offers a promising tool for early fault-tolerant quantum computing.
Runtime-efficient zero-noise extrapolation from mixed physical and logical data
Partial quantum error correction and quantum error mitigation are expected to coexist in the pre-fault-tolerant regime, yet the resource advantage of combining them remains insufficiently quantified. We study zero-noise extrapolation constructed from mixed datasets that contain a small number of error-corrected data points together with data obtained without error correction. The low-noise logical points anchor the extrapolation, while the higher-noise physical points enlarge the noise baseline at a much smaller runtime cost. Under a simple model in which error correction suppresses the effective gate error rate from $p$ to $\gamma p$, we derive the variance of the zero-noise estimator and compare the physical runtime required to reach a target precision. For Richardson extrapolation, the mixed-data strategy reduces variance amplification and can lower the required physical runtime by several orders of magnitude when $\gamma \leq 0.1$. As a proof of principle, we apply the method to digital quantum simulation of a six-spin transverse-field Ising model and find that mixed physical/logical datasets yield lower-variance zero-noise estimates and outperform extrapolation based only on error-corrected data in the parameter regime studied here. These results identify hybrid error correction and error mitigation as a practical route to resource-efficient quantum computation before full fault tolerance.
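A minimal sketch of the mixed-data idea, assuming linear (two-point Richardson) extrapolation with one error-corrected point at effective error rate $\gamma p$ and one uncorrected point at $p$; the observable values are made up for illustration and are not the paper's Ising-model data:

    import numpy as np

    # Two-point Richardson (linear) zero-noise extrapolation from one "logical"
    # point at effective error gamma*p and one "physical" point at error p.

    def richardson_zero_noise(x1, y1, x2, y2):
        """Linear extrapolation of the observable y to noise strength x = 0."""
        return y1 + (y1 - y2) * x1 / (x2 - x1)

    p, gamma = 1e-2, 0.1
    x_logical, x_physical = gamma * p, p
    y_logical, y_physical = 0.98, 0.80        # noisy estimates of some observable
    print(richardson_zero_noise(x_logical, y_logical, x_physical, y_physical))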
SyQMA: A memory-efficient, symbolic and exact universal simulator for quantum error correction
The classical simulation of universal quantum circuits is crucial both fundamentally and practically for quantum computation. We propose SyQMA, a simulator with several convenient features, particularly suited for quantum error correction (QEC). SyQMA simulates universal quantum circuits with incoherent Pauli noise and computes exact expectation values and measurement probabilities as symbolic functions of circuit parameters: rotation angles, measurement outcomes, and noise rates. This simulator can sample measurement outcomes, enabling the simulation of dynamic quantum programs where circuit composition depends on prior measurement outputs. For QEC, it performs circuit-level maximum-likelihood decoding, provides exact symbolic expressions for logical error rates, and verifies the fault distance of fault-tolerant (FT) stabiliser and magic state preparation protocols. These features are enabled by an intuitive extension of stabiliser simulators, where each non-Clifford Pauli rotation and incoherent Pauli channel is compactly represented via auxiliary qubits and a modified trace. Representing the state requires only polynomial memory and time, while computing expectation values and measurement probabilities takes exponential time in the number of non-Clifford rotations and deterministic measurements, but only polynomial memory. The FT preparation of stabiliser and magic states, including the first stage of magic state cultivation, is analysed without approximations. We also exactly convert the disjoint error probabilities of a general multi-qubit Pauli channel to independent ones, a key step for creating and sampling from detector error models. The code is publicly available and open-source.
Low-rank geometry of two-qubit gates
We present a framework based on the determinantal geometry of two-qubit gates. Combining the Weyl chamber representation with operator Schmidt theory, we interpret gate synthesis as a distance problem to determinantal varieties. This gives an operational geometry to the Weyl chamber, quantifying nonlocal complexity. We show that the square root iSWAP gate is the closest perfect entangler to the variety of local operations, and that no perfect entangler can be approximated by a local gate with average gate fidelity above 79.8%. The three different determinantal costs form a synthesis-adapted coordinate system that encodes nonlocal complexity and generally reconstructs the Weyl chamber.
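The operator Schmidt coefficients that enter this analysis can be computed by a standard realignment-plus-SVD procedure, sketched below for CNOT (whose nonzero coefficients are $\sqrt{2},\sqrt{2}$); this is the textbook ingredient, not the paper's determinantal-variety distances themselves.

    import numpy as np

    # Operator Schmidt coefficients of a two-qubit gate U = sum_i s_i A_i (x) B_i,
    # obtained from the SVD of the realigned 4x4 matrix. For CNOT the nonzero
    # coefficients are sqrt(2), sqrt(2) (operator Schmidt rank 2).

    def operator_schmidt_coeffs(U):
        R = U.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
        return np.linalg.svd(R, compute_uv=False)

    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    print(operator_schmidt_coeffs(cnot))      # ~ [1.414, 1.414, 0, 0]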
Super-Constant Weight Dicke States in Constant Depth Without Fanout
An $n$-qubit Dicke state of weight $k$ is the uniform superposition over all $n$-bit strings of Hamming weight $k$. Dicke states are an entanglement resource with important practical applications in the NISQ era and, for instance, play a central role in Decoded Quantum Interferometry (DQI). Furthermore, any symmetric state can be expressed as a superposition of Dicke states. First, we give explicit constant-depth circuits that prepare $n$-qubit Dicke states for all $k \leq \text{polylog}(n)$, using only multi-qubit Toffoli gates and single-qubit unitaries. This gives the first $\text{QAC}^0$ construction of super-constant weight Dicke states. Previous constant-depth constructions for any super-constant $k$ required the FANOUT$_n$ gate, while $\text{QAC}^0$ is only known to implement FANOUT$_k$ for $k$ up to $\text{polylog}(n)$. Moreover, we show that any weight-$k$ Dicke state can be constructed with access to FANOUT$_{\min(k,n-k)}$, rather than FANOUT$_n$. Combined with recent hardness results, this yields a tight characterization: for $k \leq n/2$, weight-$k$ Dicke states can be prepared in $\text{QAC}^0$ if and only if FANOUT$_k \in \text{QAC}^0$. We further extend our techniques to show that, in fact, \emph{any} superposition of $n$-qubit Dicke states of weight at most $k$ can be prepared in $\text{QAC}^0$ with access to FANOUT$_k$. Taking $k = n$, we obtain the first $O(1)$-depth unitary construction for arbitrary symmetric states. In particular, any symmetric state can be prepared in constant depth on quantum hardware architectures that support FANOUT$_n$, such as trapped ions with native global entangling operations.
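For reference, the state being prepared is simply the normalized sum over all weight-$k$ bit strings; the sketch below builds the vector directly in NumPy (the definition only, with an arbitrary bit-ordering choice, and unrelated to the constant-depth circuit constructions that are the paper's contribution).

    import numpy as np
    from itertools import combinations
    from math import comb

    # Reference construction of the n-qubit weight-k Dicke state as a state vector:
    # the uniform superposition over all bit strings of Hamming weight k.

    def dicke_state(n, k):
        psi = np.zeros(2**n)
        for ones in combinations(range(n), k):
            index = sum(1 << (n - 1 - q) for q in ones)   # big-endian bit ordering
            psi[index] = 1.0
        return psi / np.sqrt(comb(n, k))

    psi = dicke_state(4, 2)
    print(np.nonzero(psi)[0], psi[np.nonzero(psi)])       # 6 equal amplitudes of 1/sqrt(6)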
Generating photons from vacuum with counter rotating wave interaction
We propose a bang-bang control scheme to enhance photon generation from the vacuum via the counter-rotating wave (CRW) interaction, and develop a pruning greedy algorithm (PGA) to identify the optimal control sequence. Our numerical results demonstrate that the maximum number of photons generated within a given evolution time is increased by several orders of magnitude compared with that achieved by continuous activation of the CRW interaction.
Holonomic quantum computation: a scalable adiabatic architecture
Holonomic quantum computation exploits the geometric evolution of eigenspaces of a degenerate Hamiltonian to implement unitary evolution of computational states. In this work we introduce a framework for performing scalable quantum computation in atom experiments through a universal set of fully holonomic adiabatic gates. Through a detailed differential geometric analysis, we elucidate the geometric nature of these gates and their inherent robustness against classical control errors and other noise sources. The concepts that we introduce here are expected to be widely applicable to the understanding and design of error robustness in generic holonomic protocols. To underscore the practical feasibility of our approach, we contextualize our gate design within recent advancements in Rydberg-based quantum computing and simulation.
Zero-Error List Decoding for Classical-Quantum Channels
The aim of this work is to study the zero-error capacity of pure-state classical-quantum channels in the setting of list decoding. We provide an achievability bound for list-size two and a converse bound holding for every fixed list size. The two bounds coincide for channels whose pairwise absolute state overlaps form a positive semi-definite matrix. Finally, we discuss a remarkable peculiarity of the classical-quantum case: differently from the fully classical setting, the rate at which the sphere-packing bound diverges might not be achievable by zero-error list codes, even when we take the limit of fixed but arbitrarily large list size.
Measuring quasiparticle dynamics for particle impact reconstruction in a superconducting qubit chip
Quasiparticle poisoning following particle impacts poses a significant challenge to the development of fault-tolerant superconducting quantum computers, as a sudden excess of quasiparticles can simultaneously degrade the coherence of multiple qubits across large device arrays. In this work, we present a statistical analysis that models the time evolution of radiation-induced qubit energy relaxation through quasiparticle density dynamics. This study provides insight into quasiparticle loss processes by distinguishing between recombination and trapping decay channels and assessing their respective impact on qubit performance. We precisely measure quasiparticle recombination in multiple transmon qubits and uncover an unexpected dependence of qubit relaxation dynamics on deposited energy. By linking correlated relaxation events across qubits to ballistic phonon propagation, we introduce a statistical localization approach to extract the energy deposited in the substrate, which is in good agreement with Monte Carlo simulation. This work establishes the quantitative framework for using an arbitrary subset of superconducting transmon qubits in a QPU as energy-resolving witness particle detectors.
Fault-Tolerant Error Detection Above Break-Even for Multi-Qubit Gates
A fully fault-tolerant implementation of the quantum error-detecting Iceberg $[[2m, 2m-2, 2]]$ code applied to a Toffoli circuit achieved beyond-break-even error detection on a leading trapped-ion quantum computer, where encoding the circuit with a quantum error-detection code yields higher fidelity than the unencoded circuit. The code was also applied to Bell state preparation circuits, where a lean, non-fault-tolerant implementation of the Iceberg code enables a fidelity gain as well. This highlights the important point that, at least for small-scale circuits with a substantial portion of error-free runs, it can be effective simply to use error detection to filter out the runs with errors. Furthermore, the experiments performed in this work highlight the necessity of judicious circuit compilation, not only for a given hardware platform but also within a quantum error detection code.
Scalable Fluxonium Quantum Processors via Tunable-Coupler Architecture
Superconducting quantum processors have largely converged on transmon-based architectures, while alternative qubit modalities with intrinsic error protection have lacked a demonstrated path to scalable system integration. In particular, although tunable-coupler-mediated interactions have been validated for small fluxonium systems, it remains unclear whether such designs can be scaled to a multi-qubit lattice. Here, we establish a scalable fluxonium processor architecture based on a modular qubit-coupler unit cell engineered to suppress residual interactions and spectator errors in a many-qubit lattice. The system enables parallel single-qubit gate fidelities approaching 99.99% and two-qubit CZ gate fidelities around 99%. With an optimized gate duration of 32 ns, the best CZ gate fidelity reaches 99.9%. We further validate this architecture in a 22-qubit processor based on the same configuration, where parallel operations enable the deterministic generation of Greenberger-Horne-Zeilinger states involving up to 10 qubits. Together, these results demonstrate that the fluxonium-tunable-coupler unit cell composes without emergent interaction pathologies and establish fluxonium as a scalable superconducting qubit platform.
Excited-State Quantum Chemistry on Qumode-Based Processors via Variational Quantum Deflation
Variational quantum algorithms on bosonic quantum processors are an emerging paradigm for quantum chemistry calculations, exploiting the natural alignment between molecular structure and harmonic oscillator-based hardware. We introduce the qumode-based variational quantum deflation framework (QumVQD) for finding both electronic and vibrational excited state energies on qumode-based architectures. For electronic structure, we incorporated particle number conservation constraints via Fock basis Hamming weight filtering. This symmetry enforcement achieves a significant reduction in computational overhead, scaling the Hilbert space dimension as $\mathcal{O}\big(\binom{M}{n_e}\big)$ rather than $\mathcal{O}(2^M)$ for $M$ spin orbitals and $n_e$ electrons. We validate the approach through electronic structure calculations on H$_{\text{2}}$, achieving agreement with full configuration interaction (FCI) using the STO-3G basis within chemical accuracy across potential energy surfaces. Extending to vibrational structure, we combine QumVQD with Hamiltonian fragmentation based on Bogoliubov transforms, computing CO$_{\text{2}}$ and H$_{\text{2}}$S vibrational eigenstates to spectroscopic accuracy with entangling gate counts 1-2 orders of magnitude lower than analogous qubit-based algorithms. We performed noise characterization using amplitude-damping models and gate-fidelity analysis, which demonstrates enhanced error resilience due to reduced circuit depth compared to qubit-based algorithms. Together, these results highlight the potential of bosonic quantum devices for advancing computational chemistry, particularly in areas where qubit-based devices struggle.
A $\boldsymbol{2d \times d \times d}$ Spacetime Volume Implementation of a Logical S Gate in the Surface Code
The logical S gate implemented via twist defect braiding in the surface code is one of the major sources of overhead in fault-tolerant quantum computing, since an S-gate correction is required in every logical T-gate teleportation. Existing logical S-gate implementations require spacetime volumes of \(2d \times 2d \times d\) or \(2d \times 1.5d \times d\), where $d$ is the code distance of the surface code. To the best of our knowledge, their circuit-level implementations have not yet been shown, hindering quantitative comparisons of fault distances and logical error rates. In this work, we provide these missing circuit-level implementations. Additionally, we propose a novel twist defect braiding protocol that reduces the spacetime volume to \(2d \times d \times d\). First, we construct an implementation of the proposed method using constant-length non-local gates, and then refine it to utilize only nearest-neighbor two-qubit gates on a square grid, without requiring additional two-qubit gate depth beyond that of standard syndrome extraction circuits. Through numerical simulations, we evaluate the fault distances and logical error rates for both existing and proposed methods. Our results show that, although the proposed method reduces the fault distance by one or three, its logical error rates remain comparable to those of existing methods at large code distances (\(d \ge 5\)) and at physical error rates near \(p = 10^{-3}\). This demonstrates that the proposed method is promising for near-term fault-tolerant quantum computing.
Decoupling of the STIRAP and Microwave-Dressing paths in Trapped Rydberg Ion Gates
The strong dipole-dipole interaction of trapped Rydberg ions offers the possibility of sub-microsecond entanglement gates. For example, a two-qubit controlled-phase gate in $^{88}$Sr$^+$ ions can be realized by exciting both ions to Rydberg states via stimulated Raman adiabatic passage (STIRAP) while a microwave field simultaneously induces the dipole-dipole interaction. We show that this excitation protocol distorts the dark state of the STIRAP stage and is prone to decay from the intermediate state. Here, we propose a novel pulse ordering, in which the STIRAP and the microwave dressing of the Rydberg states occur in separate stages, preventing mutual interference effects that are detrimental to the gate fidelity. We show that, for experimentally feasible parameters, the proposed excitation scheme can achieve a fidelity of 99.93%, surpassing the experimentally demonstrated gate. In addition, we demonstrate a non-adiabatic speed-up to 400 ns by employing asymmetric pulse shapes in the STIRAP stage. The entangling phase is then controlled solely through the interaction strength by nonresonant asymmetric chirping of the microwave field.
Optimally Controlled Storage of a Qubit in an Inhomogeneous Spin Ensemble
The storage of quantum information in spin ensembles is limited by practically unavoidable inhomogeneous broadening, and the macroscopic number of spins in such an ensemble makes the design of control solutions to increase the coherence time a challenging task. Together with a concurrently developed Krylov theory that allows us to treat the control problem efficiently, we design optimal cavity modulation for such spin ensembles that achieves an order-of-magnitude enhancement in qubit lifetime compared to the losses due to inhomogeneity and cavity decay.
dqc_simulator: an easy-to-use distributed quantum computing simulator
Distributed quantum computing (DQC) is a promising proposal for overcoming the scalability challenges of quantum computing. However, the evaluation of DQC hardware and software is difficult due to the relative dearth of classical simulation tools available for DQC devices. In this work, we introduce dqc_simulator, a novel simulation toolkit, written in Python, which automates many of the most challenging aspects of the DQC simulation workflow. dqc_simulator supports straightforward simulation of both hardware and software, making it easy to create realistic and robust tests and benchmarks for the full DQC stack.
Tsallis relative $\alpha$ entropy of coherence dynamics in Grover's search algorithm
Quantum coherence plays a central role in Grover's search algorithm. We study the Tsallis relative $\alpha$ entropy of coherence dynamics of the evolved state in Grover's search algorithm. We prove that the Tsallis relative $\alpha$ entropy of coherence decreases with the increase of the success probability, and derive the complementarity relations between the coherence and the success probability. We show that the operator coherence of the first $H^{\otimes n}$ relies on the size of the database $N$, the success probability and the target states. Moreover, we illustrate the relationships between coherence and entanglement of the superposition state of targets, as well as the production and deletion of coherence in Grover iterations.
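The complementarity between coherence and success probability can be seen in a few lines of simulation using the simpler $l_1$ norm of coherence as a proxy for the Tsallis relative $\alpha$ entropy (the toy parameters, $n=5$ and a single marked item, are assumptions made for illustration):

    import numpy as np

    # Track success probability and l1-norm coherence C_l1 = sum_{i != j} |rho_ij|
    # through Grover iterations on N = 2^n items with a single marked target.

    n, target = 5, 7
    N = 2**n
    psi = np.full(N, 1 / np.sqrt(N))

    def l1_coherence(psi):
        rho = np.outer(psi, psi.conj())
        return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

    for k in range(int(np.round(np.pi / 4 * np.sqrt(N)))):
        psi[target] *= -1                      # oracle: phase flip on the target
        psi = 2 * psi.mean() - psi             # diffusion: inversion about the mean
        print(k + 1, psi[target]**2, l1_coherence(psi))

The printed rows show the success probability rising toward one while the $l_1$ coherence falls from its initial value $N-1$ toward zero, consistent with the complementarity relations described above.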
From coupled $\mathbb{Z}_3$ Rabi models to the $\mathbb{Z}_3$ Potts model
We study the $\mathbb{Z}_3$-symmetric Rabi model, which describes a three-level system coupled to two bosonic modes. We derive a mapping of the two-mode $\mathbb{Z}_3$ Rabi model onto a qubit-boson ring. This mapping allows us to formulate a realistic implementation of the $\mathbb{Z}_3$ Rabi model based on superconducting qubits. It also provides context for the previously proposed optomechanical implementation of the $\mathbb{Z}_3$ Rabi model. In addition, we propose a physical implementation of the $\mathbb{Z}_3$ Potts model via a coupled chain of $\mathbb{Z}_3$ Rabi models.
State preparation with parallel-sequential circuits
We introduce parallel-sequential (PS) circuits, a family of quantum circuit layouts that interpolate between brickwall and sequential circuits, which introduces control parameters governing a trade-off between the amount of entanglement and the maximum correlation range they can express. We provide numerical evidence that PS circuits can efficiently prepare many-body ground states in one dimension. On noisy devices, characterized through both idling errors and two-qubit gate errors, we show that in a wide parameter regime, PS circuits outperform brickwall, sequential, and the log-depth circuits from [Malz, Styliaris, Wei, Cirac, PRL 132, 040404 (2024)]. Additionally, we demonstrate that properly chosen noisy random PS circuits suppress error proliferation and, when employed as a variational ansatz, exhibit superior trainability.
SAQ: Stabilizer-Aware Quantum Error Correction Decoder
Quantum Error Correction (QEC) decoding faces a fundamental accuracy-efficiency tradeoff. Classical methods like Minimum Weight Perfect Matching (MWPM) exhibit variable performance across noise models and suffer from polynomial complexity, while tensor network decoders achieve high accuracy but at prohibitively high computational cost. Recent neural decoders reduce complexity but lack the accuracy needed to compete with computationally expensive classical methods. We introduce SAQ-Decoder, a unified framework combining transformer-based learning with constraint aware post-processing that achieves both near Maximum Likelihood (ML) accuracy and linear computational scalability with respect to the syndrome size. Our approach combines a dual-stream transformer architecture that processes syndromes and logical information with asymmetric attention patterns, and a novel differentiable logical loss that directly optimizes Logical Error Rates (LER) through smooth approximations over finite fields. SAQ-Decoder achieves near-optimal performance, with error thresholds of 10.99% (independent noise) and 18.6% (depolarizing noise) on toric codes that approach the ML bounds of 11.0% and 18.9% while outperforming existing neural and classical baselines in accuracy, complexity, and parameter efficiency. Our findings establish that learned decoders can simultaneously achieve competitive decoding accuracy and computational efficiency, addressing key requirements for practical fault-tolerant quantum computing systems.
An asymmetric and fast Rydberg gate protocol for entanglement outside of the blockade regime
We analyze a new Rydberg gate design based on the original $\pi-2\pi-\pi$ protocol [Jaksch et al., Phys. Rev. Lett. {\bf 85}, 2208 (2000)] that is modified to enable high-fidelity operation without requiring a strong Rydberg interaction. The gate retains the $\pi-2\pi-\pi$ structure with an additional detuning added to the $2\pi$ pulse on the target qubit. The protocol reaches within a factor of 2.39 (1.68) of the fundamental fidelity limit set by Rydberg lifetime for equal (asymmetric) Rabi frequencies on the control and target qubits. We generalize the gate protocol to arbitrary controlled phases. We design optimal target-qubit phase waveforms to generalize the gate across a range of interaction strengths, and we find that, within this family of gates, the constant-phase protocol is time-optimal for a fixed laser Rabi frequency and tunable interaction strength. Quantum control techniques are used to design gates that are robust against variations in Rydberg Rabi frequency or interaction strength.
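For orientation, in the perfect-blockade limit the original $\pi-2\pi-\pi$ sequence implements the phase map $|00\rangle \to |00\rangle$, $|01\rangle \to -|01\rangle$, $|10\rangle \to -|10\rangle$, $|11\rangle \to -|11\rangle$: only $|00\rangle$, with neither atom coupled to the Rydberg state, acquires no $\pi$ phase, so the map is a controlled-Z gate up to single-qubit $Z$ rotations. The protocol above is designed to reproduce such an entangling phase without relying on that strong-blockade condition.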
Geometry-induced correlated noise in qLDPC syndrome extraction
Routed geometry is a device-level choice in a fixed syndrome-extraction circuit. Two embeddings of the same code can set different physical separations between gate blocks active in the same time step, and these separations control the residual coupling between those blocks. We derive how this choice shapes the leading correlated-fault structure of the effective data channel, and we test the consequences at circuit level. Starting from a geometry-conditioned interaction Hamiltonian on disjoint blocks within one tick, we obtain a retained data channel of single and pair faults for bivariate-bicycle codes, with a truncation error controlled by the per-tick coupling strength. Two geometry metrics emerge. In the combinatorial limit, a matching argument on the logical support reduces the effective fault weight on that support. For strictly positive kernels, once every support pair contributes somewhere in the schedule, the induced support graph becomes complete. At that point the matching-number reduction is exhausted, and the embedding-dependent quantity is the total retained pair weight on the support, which we call the weighted exposure. Circuit-level Monte Carlo on the $[\![72,12,6]\!]$ and $[\![144,12,12]\!]$ benchmarks shows that a biplanar layout, with the schedule split across two routing planes, suppresses the geometry penalty incurred by the monomial layout in a single plane. On the BB72 baseline set of $101$ operating points, the reference-support weighted exposure is strongly correlated with the observed logical error rate (Spearman $\rho_\mathrm{S}=0.893$) in the tested window. A logical-aware two-swap local search over single-layer embeddings on BB72 reduces the worst-case family exposure by $26.11\%$ and lowers the logical error rate across the tested power-law window.
A compact QUBO encoding of computational logic formulae demonstrated on cryptography constructions
We aim to advance the state-of-the-art in Quadratic Unconstrained Binary Optimization (QUBO) formulation with a focus on cryptography algorithms. Since the minimal QUBO encoding of the linear constraints of an optimization problem emerges as the solution of an integer linear programming (ILP) problem, solving special Boolean logic formulas (such as ANF and DNF) for their integer coefficients makes it straightforward to handle any normal form, or any substitution for multi-input AND, OR or XOR operations, in QUBO form. To showcase the efficiency of the proposed approach we considered the most widespread cryptography algorithms, including AES-128/192/256, MD5, SHA1 and SHA256. For each of these, we achieved QUBO instances reduced by thousands of logical variables compared to previously published results, while keeping the QUBO matrix sparse and the magnitude of the coefficients low. In the particular case of the AES-256 cryptography function we obtained more than an 8x reduction in variable count compared to previous results. The demonstrated reduction in QUBO sizes notably increases the vulnerability of cryptography algorithms against future quantum annealers capable of embedding around $30$ thousand logical variables.
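As a reminder of the kind of gadget such encodings compress, the standard three-variable QUBO penalty for $z = x \wedge y$ is $3z + xy - 2xz - 2yz$, which vanishes exactly on consistent assignments and is at least 1 otherwise; the snippet below verifies this by enumeration (the textbook single-gate gadget only, not the paper's ILP-derived multi-input encodings):

    from itertools import product

    # Standard QUBO penalty enforcing z = x AND y:
    # P(x, y, z) = 3 z + x y - 2 x z - 2 y z, which is 0 iff z = x*y and >= 1 otherwise.

    def and_penalty(x, y, z):
        return 3 * z + x * y - 2 * x * z - 2 * y * z

    for x, y, z in product((0, 1), repeat=3):
        consistent = (z == x * y)
        print(x, y, z, and_penalty(x, y, z), "ok" if (and_penalty(x, y, z) == 0) == consistent else "violated")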
Quantum algorithms for Young measures: applications to nonlinear partial differential equations
Many nonlinear PDEs have singular or oscillatory solutions or may exhibit physical instabilities or uncertainties. This requires a suitable concept of physically relevant generalized solutions. Dissipative measure-valued solutions have been an effective analytical tool to characterize PDE behavior in such singular regimes. They have also been used to characterize limits of standard numerical schemes on classical computers. The measure-valued formulation of a nonlinear PDE yields an optimization problem with a linear cost functional and linear constraints, which can be formulated as a linear programming problem. However, this linear programming problem can suffer from the curse of dimensionality. In this article, we propose solving it using quantum linear programming (QLP) algorithms and discuss whether this approach can reduce costs compared to classical algorithms. We show that some QLP algorithms, such as the quantum central path algorithm, have up to polynomial advantage over the classical interior point method. For problems where one is interested in the dissipative weak solution, namely the expected values of the Young measure, we show that the QLP algorithms offer no advantage over direct classical solvers. Moreover, for random PDEs, there can be up to polynomial advantage in obtaining the Young measure over direct classical PDE solvers. This is a significant advantage over standard PDE solvers, since the Young measure provides a more detailed description of the solution. We also propose some open questions for future development in this direction.
Blind Catalytic Quantum Error Correction: Target-State Estimation and Fidelity Recovery Without \textit{A Priori} Knowledge
Catalytic quantum error correction (CQEC) recovers quantum states via catalytic covariant transformations but requires full knowledge of the target state. We introduce \emph{blind CQEC}, which estimates the target from the noisy output alone before catalytic recovery. Five estimation strategies are benchmarked across three noise models (dephasing, depolarizing, amplitude damping), four quantum algorithms ($d = 4$--$64$), Haar-random states up to $d = 256$, and mixed-state targets with variable purity. Key results: (i)~coherence maximization achieves $F_{\mathrm{rec}} > 0.95$ for $d \leq 16$ without noise-model knowledge, matching the oracle to within $4\%$; (ii)~channel inversion is required at $d = 64$ ($F_{\mathrm{rec}} = 0.905$); (iii)~estimation and recovery fidelities are linearly correlated ($r > 0.99$), identifying target estimation as the sole bottleneck; (iv)~an analytical crossover dimension $d^* \approx 25$--$40$ separates noise-model-free and noise-informed regimes, bridged by a hybrid interpolation strategy; (v)~copy scaling follows $1 - F(n) \sim n^{-\alpha}$ with $\alpha \in [0.4, 2.2]$, spanning the statistical averaging and denoising synergy limits. Standard linear inversion tomography fails as a CQEC target estimator, validating the need for decoherence-aware strategies. An end-to-end VQE demonstration for H$_2$ shows $3.4\times$ energy-error reduction with channel-inversion blind CQEC.
The Rotation Gap Is Not An Error: Ternary Structure in IBM Quantum Hardware
Quantum error correction assumes that all syndrome activations represent errors requiring correction. We present evidence from 756 QEC runs across three IBM Eagle r3 processors that this assumption is wrong. The hardware exhibits sub-Poissonian syndrome statistics (Fano factor F = 0.856, t = -131 against Poisson, zero dependence on code distance), indicating that a fraction of syndrome events are not random noise but structured cooperative transitions. We introduce a regime classifier decoder that distinguishes binary errors (which should be corrected) from ternary transitions (which should not). On a mixed binary/ternary error model calibrated to IBM hardware statistics, the classifier reduces logical error rates by 7-19% at static detection depth (tau = 1) across all cell sizes, with statistical significance p < 0.05 in 7 of 8 test conditions (p < 0.0001 in all four tau = 1 conditions). The improvement mechanism is selective abstention: the classifier correctly identifies 75-98% of ternary transitions and leaves them uncorrected (75-81% at tau = 1, 88-98% at tau = 5), whereas a standard decoder miscorrects them, introducing errors that would not otherwise exist. A cross-platform control on Google's 105-qubit Willow processor (420 experiments, d = 3, 5, 7) shows the opposite: super-Poissonian statistics (F = 2.42), super-linear burst scaling, and positive spatial correlation -- confirming that the sub-Poissonian signal is absent from standard surface-code circuits that lack the P-gate asymmetry. The result demonstrates that standard QEC actively destroys quantum information by correcting valid ternary states, and that less correction produces better performance when the hardware has cooperative error structure.
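For readers unfamiliar with the statistic at the center of this claim, a minimal sketch of the Fano factor computation is given below; the per-shot counts are synthetic, and sub- versus super-Poissonian behavior is read off from F < 1 versus F > 1.

```python
# Hedged sketch: Fano factor of per-shot syndrome-detection counts.
# F < 1 indicates sub-Poissonian statistics, F > 1 super-Poissonian.
# The data below are synthetic, not the paper's measurements.
import numpy as np

rng = np.random.default_rng(0)
counts = rng.binomial(n=24, p=0.2, size=10_000)  # toy per-shot detection counts

fano = counts.var(ddof=1) / counts.mean()
print(f"Fano factor F = {fano:.3f}")  # binomial counts give F = 1 - p < 1
```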
Fault-tolerant simulation of the electronic structure using Projector Augmented-Waves and Bloch orbitals
Strongly correlated materials are a natural target for fault-tolerant quantum computers, but they require tools beyond those developed for molecules. Electronic wavefunctions vary rapidly near nuclei yet remain delocalized across many unit cells, and bulk properties must be converged systematically with respect to finite-size errors. To resolve such issues, we present the Bloch--UPAW framework that combines Bloch-orbital $k$-space structure with unitary projector-augmented-wave (UPAW) augmentation. The UPAW Hamiltonian, expressed directly in the Bloch basis, retains explicit control of Brillouin-zone sampling, and incorporates near-nuclear physics through strictly local on-site corrections. The construction is independent of the underlying one-particle representation, so it applies to both plane-wave and localized bases, and it handles supercells for symmetry-breaking phenomena more efficiently. We derive a linear-combination-of-unitaries decomposition and a block-encoding circuit suitable for qubitization; UPAW augmentation adds one ancilla qubit and no Toffoli gates at leading order relative to a Bloch-only block encoding. Asymptotically, the Toffoli cost scales as $\mathcal{O}(N_k^3)$ when refining the $k$-mesh and as $\mathcal{O}(N_a^{3.5})$ when enlarging the supercell, enabling convergence to be steered by the most favorable route for a given material. Resource estimates for bulk diamond show approximately an order-of-magnitude reduction in Toffoli count relative to prior work on periodic solids.
Design automation and space-time reduction for surface-code logical operations using a SAT-based EDA kernel compatible with general encodings
Fault-tolerant quantum computers (FTQCs) based on surface codes and lattice surgery have been widely studied, and there is strong demand for a framework that can identify logical operations with low space-time cost, verify their functionality and fault tolerance, and demonstrate their optimality within a given search space, much like electronic design automation (EDA) in classical circuit design. In this paper, we propose KOVAL-Q, an EDA kernel that verifies and optimizes surface-code logical operations by formulating them as a satisfiability (SAT) problem. Compared with existing SAT-based frameworks such as LaSsynth, our method can handle logical qubits with more flexible surface-code encodings, both as target configurations and as intermediate states. This extension enables the optimization of advanced layouts, such as fast blocks, and broadens the search space for logical operations. We demonstrate that KOVAL-Q can determine the minimum execution time of fundamental logical operations in given spatial layouts, such as $d$-cycle logical CNOTs and $2d$-cycle patch rotations. Their use reduces the execution time of widely studied FTQC applications by about 10% under a simplified scheduling model. KOVAL-Q consists of three subkernels corresponding to different types of constraints, which facilitates its integration as a submodule into scalable heuristic frameworks. Thus, our proposal provides an essential framework for optimizing and validating core FTQC subroutines.
The Impact of Qubit Connectivity on Quantum Advantage in Noisy IQP Circuits
Instantaneous Quantum Polynomial-time (IQP) circuits are a candidate for demonstrating near-term quantum advantage, as their sampling task is believed to be classically hard in the ideal theoretical setting under standard complexity-theoretic assumptions. In noisy implementations, however, this hardness can disappear once circuit depth exceeds a noise-dependent critical threshold. We show that qubit connectivity is a key parameter in this transition, since sparse architectures require additional routing to implement long-range interactions, thereby increasing compiled circuit depth. To make this explicit, we present a connectivity-aware analysis of compiled IQP circuits. For a fixed abstract IQP instance, different hardware connectivity graphs yield different compiled depths and thus different effective positions relative to the noisy-IQP simulatability boundary. We quantify this architecture-dependent shift using the compiled depth overhead and the corresponding simulatability margin. We combine analytic depth estimates for sparse geometries, including the two-dimensional grid, with native-gateset-aware compilation experiments across seven hardware-grounded experimental device models derived from publicly available topologies. To compare these device models under a unified empirical framework, we approximate the effective noise level primarily through reported two-qubit gate error rates. This lets us compare how much effective noise sparse and fully connected architectures can tolerate for the same position relative to the noisy-IQP simulatability boundary. Our results show that sparse connectivity requires a lower effective noise level to sustain the same margin relative to the noisy-IQP simulatability boundary, and they provide a quantitative framework for determining when compiled IQP experiments are likely to remain outside, or instead enter, the classically simulatable regime.
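The connectivity effect described above can be probed with off-the-shelf tooling. The sketch below, assuming Qiskit is available, compiles the same toy IQP-style circuit to a fully connected and a 2D-grid coupling map and compares the resulting depths; the circuit, gate set, and qubit count are illustrative and not the paper's instances.

```python
# Hedged sketch: compiled-depth overhead of a toy IQP-style circuit
# under full connectivity versus a 2D-grid coupling map (assumes Qiskit).
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

rng = np.random.default_rng(1)
n = 16
iqp = QuantumCircuit(n)
iqp.h(range(n))                       # initial Hadamard layer
for _ in range(3 * n):                # random diagonal two-qubit phases
    i, j = rng.choice(n, size=2, replace=False)
    iqp.cp(rng.uniform(0, 2 * np.pi), int(i), int(j))
iqp.h(range(n))                       # final Hadamard layer

for name, cmap in [("full", CouplingMap.from_full(n)),
                   ("4x4 grid", CouplingMap.from_grid(4, 4))]:
    compiled = transpile(iqp, coupling_map=cmap,
                         basis_gates=["cz", "rz", "sx", "x"], optimization_level=1)
    print(f"{name:9s} compiled depth = {compiled.depth()}")
```

The grid-mapped circuit acquires the routing overhead that, in the paper's framing, shifts the instance toward the classically simulatable regime for a fixed effective noise level.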
Quasi-Orthogonal Stabilizer Design for Efficient Quantum Error Suppression
Orthogonal geometric constructions are the basis of many quantum error-correcting (QEC) codes, but strict orthogonality constraints limit design flexibility and resource efficiency. We introduce a quasi-orthogonal geometric framework for stabilizer codes that relaxes these constraints while preserving the symplectic commutation structure on the binary symplectic space $\mathbb{F}_{2}^{2n}$. The approach permits controlled overlap between X- and Z-check supports, leading to quasi-orthogonal Pauli operators and a generalized notion of effective distance defined via induced anti-commutation with logical operators. This relaxation expands the stabilizer design space, enabling codes that approach the Gilbert-Varshamov regime with improved logical rates at moderate distances. Finite-length constructions, including quasi-orthogonal variants of the $[[8,3,\approx 3]]$, $[[10,4,\approx 3]]$, $[[13,1,5]]$, and $[[29,1,11]]$ codes, demonstrate consistent improvements over strictly orthogonal counterparts. Under depolarizing noise with error rates up to $p=0.30$, logical error rates, fidelities, and trace distances improve by up to two orders of magnitude. These improvements reflect the increased connectivity of the underlying stabilizer geometry while remaining compatible with standard decoding schemes. The proposed framework offers a principled extension of stabilizer code design through quasi-orthogonal geometric structures.
Distinguishability of locally diagonal orthogonally invariant quantum states
We study the distinguishability of quantum states under local operations with classical communication (LOCC), separable, and positive-partial-transpose (PPT) measurements, focusing on locally diagonal orthogonally invariant (LDOI) states -- those invariant under local diagonal orthogonal twirling. This class includes many important families such as Werner states, isotropic states, X-states, and Dicke states. We show that optimal PPT and separable measurements for distinguishing LDOI states can always be taken to be LDOI, and the LOCC supremum can be approached by LDOI LOCC POVMs, enabling a dimensional reduction from $n^4$ to $O(n^2)$ in the associated optimization problems. We establish efficiently computable bounds on the distinguishability of orthonormal LDOI bases and prove that for a broad class of such bases -- including all two-qubit cases -- the LOCC supremum equals the PPT and separable optima. More generally, we show the gap between PPT and LOCC distinguishability is at most $(n-2)/(2n^2)$ for local dimension $n$.
Fast and accurate AI-based pre-decoders for surface codes
Fast, scalable decoding architectures that operate in a block-wise parallel fashion across space and time are essential for real-time fault-tolerant quantum computing. We introduce a scalable AI-based pre-decoder for the surface code that performs local, parallel error correction with low decoding runtimes, removing the majority of physical errors before passing residual syndromes to a downstream global decoder. This modular architecture is backend-agnostic and composes with arbitrary global decoding algorithms designed for surface codes, and our implementation is completely open source. Integrated with uncorrelated PyMatching, the pipeline achieves end-to-end decoding runtimes of order $\mathcal{O}(1 \mu\text{s})$ per round at large code distances on NVIDIA GB300 GPUs while reducing logical error rates (LERs) relative to global decoding alone. In a block-wise parallel decoding scheme with access to multiple GPUs, the decoding runtime can be reduced to well below $\mathcal{O}(1 \mu\text{s})$ per round. We observe further LER improvements by training a larger model, outperforming correlated PyMatching up to distance-13. We additionally introduce a noise-learning architecture that infers decoding weights directly from experimentally accessible syndrome statistics without requiring an explicit circuit-level noise model. We show that purely data-driven graph weight estimation can nearly match uncorrelated PyMatching and exceed correlated PyMatching in certain regimes, enabling highly-optimized decoding when hardware noise models are unknown or time-varying, as well as training pre-decoders with realistic noise models. Together, these results establish a practical, modular, and high-throughput decoding framework suitable for large-distance surface-code implementations.
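A minimal sketch of the pre-decoder-plus-global-decoder pipeline is shown below for a repetition code, assuming the pymatching package; the local stage here is a hand-written rule rather than the paper's AI model, and serves only to illustrate how a partial correction and a residual syndrome are combined.

```python
# Hedged sketch: a toy local pre-decoder that clears adjacent defect pairs,
# followed by PyMatching on the residual syndrome (assumes pymatching, numpy).
import numpy as np
import pymatching

d = 7
H = np.zeros((d - 1, d), dtype=np.uint8)          # repetition-code parity checks
for i in range(d - 1):
    H[i, i] = H[i, i + 1] = 1

def pre_decode(syndrome):
    """Flip the data qubit shared by two adjacent fired checks; return (partial correction, residual)."""
    correction = np.zeros(d, dtype=np.uint8)
    residual = syndrome.copy()
    i = 0
    while i < d - 2:
        if residual[i] and residual[i + 1]:
            correction[i + 1] ^= 1                # data qubit i+1 touches checks i and i+1
            residual[i] = residual[i + 1] = 0
            i += 2
        else:
            i += 1
    return correction, residual

matching = pymatching.Matching(H)
error = np.zeros(d, dtype=np.uint8)
error[[1, 2, 5]] = 1                              # toy error pattern
syndrome = (H @ error) % 2

pre_corr, residual = pre_decode(syndrome)
global_corr = matching.decode(residual)           # global decoder sees only the residual
total = (pre_corr + global_corr) % 2
combined = (error + total) % 2
print("residual syndrome cleared:", not np.any((H @ combined) % 2))
```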
Diversity Methods for Improving Convergence and Accuracy of Quantum Error Correction Decoders Through Hardware Emulation
As quantum computing moves toward fault-tolerant architectures, quantum error correction (QEC) decoder performance is increasingly critical for scalability. Understanding the impact of transitioning from floating-point software to finite-precision hardware is essential, as hardware decoder performance affects code distance, qubit requirements, and connectivity between quantum and classical control units. This paper introduces a hardware emulator to evaluate QEC decoders using real hardware instead of software models. The emulator can explore $10^{13}$ different error patterns in 20 days with a single FPGA device running at 150 MHz, guaranteeing the decoder's performance at logical error rates of $10^{-12}$, the requirement for most quantum algorithms. In contrast, an optimized C++ implementation on an Intel Core i9 with 128 GB RAM would take over a year to achieve similar results. The emulator also enables the storage of uncorrectable error patterns that generate logical errors, allowing for offline analysis and the design of new decoders. Using results from the emulator, we propose a method that combines several belief propagation (BP) decoders with different quantization levels, which we define as a diversity-based decoder. Individually, these decoders may show subpar error correction, but together they outperform the floating-point version of BP for quantum low-density parity-check (QLDPC) codes such as hypergraph product or lifted product codes. Preliminary results with circuit-level noise and bivariate bicycle codes suggest that hardware insights can also improve software. Our diversity-based proposal achieves a logical error rate similar to that of the well-known belief propagation with ordered statistics decoding (BP+OSD) approach, with average speed improvements ranging from 30% to 80% (10% to 120% in worst-case scenarios), while reducing post-processing algorithm activation by 47% to 96.93% and maintaining the same accuracy.
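The diversity idea can be summarized as an ensemble wrapper, sketched below under the assumption that each member decoder is a callable returning a candidate correction; the member decoders and the fallback post-processor are placeholders, not the paper's BP implementations.

```python
# Hedged sketch: a diversity ensemble of decoders. Each member (e.g., BP with a
# different quantization level) proposes a correction; the first one that reproduces
# the observed syndrome is accepted, otherwise a post-processor (e.g., BP+OSD) runs.
import numpy as np

def diversity_decode(H, syndrome, decoders, fallback):
    for decode in decoders:
        correction = decode(H, syndrome)
        if np.array_equal((H @ correction) % 2, syndrome):
            return correction              # a member converged; post-processing skipped
    return fallback(H, syndrome)           # activated only when all members fail
```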
Fast Quantum Gates for Neutral Atoms Separated by a Few Tens of Micrometers
We present a theoretical scheme for a family of fast and high-fidelity two-qubit iSWAP gates between neutral atoms separated by more than 20 $\mu$m, enabled by resonant dipole-dipole spin-exchange interactions between Rydberg states. The protocol harnesses coherent excitation-exchange-deexcitation dynamics between the qubit and the Rydberg states within a single and smooth laser pulse, in the presence of strong dipole-dipole interactions. We utilize optimal control methods to achieve theoretical gate fidelities and durations comparable to blockade-based gates in the presence of relevant noise, while extending the effective interaction range by an order of magnitude. This enables entanglement well beyond the blockade radius, offering a route toward fast, high-connectivity quantum processors.
Heterogeneous architectures enable a 138x reduction in physical qubit requirements for fault-tolerant quantum computing under detailed accounting
Quantum computing hardware is predicted to scale to hundreds of thousands of qubits coming online in the next decade. Despite significant theoretical and experimental QEC progress, quantum computer architecture has suffered from a significant gap, with bottom-up physical-device-driven challenges largely disconnected from top-down QEC-code-driven considerations. In this work, we unify these two views, presenting a complete heterogeneous quantum computing architecture incorporating task-specific hardware selection and QEC encoding, and agnostic to code selection or physical qubit parameters. Our approach further enables special-purpose processing modules, and includes a full microarchitecture for fault-tolerant implementation of interfaces between quantum processing units and quantum memories. Using this architecture and a new fully featured compiler functioning across subsystems at the scale of $1,000$ logical qubits, we schedule and orchestrate a variety of algorithms down to hardware-specific instructions; a detailed accounting of all operations reveals up to 551x reduction in algorithmic logical error and up to 138x reduction in physical-qubit overhead compared to a monolithic baseline architecture. We then consider the factorization of 2048-bit RSA-integers; using an experimentally demonstrated grid-coupling topology, factoring RSA-2048 requires 381k physical qubits and 9.2 days, which can be reduced to 4.9 days via addition of an algorithm-specific accelerator for the Adder subroutine (requiring 439k qubits). Finally, assuming hypothetical long-range coupling, implementing quantum memory using qLDPC codes reduces the resources required for factoring to just 190k qubits and under 10 days. These results and the tooling we have built indicate that heterogeneous quantum-computer architectures can deliver significant, verifiable benefits on realistic hardware.
Optimising Quantum Error Correction Using Morphing Circuits
Quantum error correction (QEC) codes are traditionally defined and searched for without specifying the manner in which their syndrome extraction circuits are executed using elementary gates and measurements. We show how morphing circuits introduced in Refs. [1-3] provide a way of optimising syndrome extraction circuits and codes directly in terms of connectivity, choice of two-qubit gate (ISWAP versus CNOT) and number of physical qubits. We discuss morphing circuits in code optimisation among Abelian two-block group algebra (2BGA) codes, handling boundaries for 2D codes, codes with single-shot properties, and improving performance in stability experiments against measurement and reset errors. We show that alternating syndrome extraction circuits -- executed with alternating time-reversed rounds -- can be viewed as a two-round morphing circuit whose fault-tolerant properties are computationally much easier to examine than non-alternating syndrome extraction circuits. Our methods find new codes and syndrome extraction circuits of practical interest, including Abelian 2BGA morphing circuits with better code parameters and connectivity than existing circuits. [1] Matt McEwen, Dave Bacon, and Craig Gidney. Relaxing hardware requirements for surface code circuits using time-dynamics. Quantum, 7:1172, 2023. [2] Craig Gidney and Cody Jones. New circuits and an open source decoder for the color code, 2023. [3] Mackenzie H. Shaw and Barbara M. Terhal. Lowering connectivity requirements for bivariate bicycle codes using morphing circuits.
A Polylogarithmic-Depth Quantum Multiplier
We present a quantum algorithm for multiplying two $n$-bit integers with overall circuit depth and $T$-depth both bounded by $O(\log^{2} n)$, while using $O(n^{2})$ gates and ancillary qubits. Our construction generates partial products via indicator-controlled copying and adds them using a binary adder tree, enabling parallel accumulation with logarithmic depth overhead per level. To the best of our knowledge, our design has the lowest $T$-depth among all multiplication algorithms using the Clifford + $T$ model. By optimizing both circuit depth and $T$-depth, our construction advances the practical feasibility of large-scale fault-tolerant quantum algorithms.
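A classical sketch of this structure is given below: partial products are generated by indicator-controlled copying and summed pairwise in a binary tree, so the number of addition levels is ceil(log2 n); quantum gate-level details (carry handling, uncomputation, T-counts) are omitted.

```python
# Hedged classical sketch of the multiplier's arithmetic structure:
# n shifted partial products are generated in parallel and reduced pairwise
# in a binary adder tree, giving ceil(log2 n) addition levels.
from math import ceil, log2

def tree_multiply(a: int, b: int, n: int) -> int:
    # partial products: b shifted by i wherever bit i of a is set (indicator-controlled copy)
    partials = [(b << i) if (a >> i) & 1 else 0 for i in range(n)]
    levels = 0
    while len(partials) > 1:                      # one tree level per iteration
        pairs = zip(partials[0::2], partials[1::2])
        partials = [x + y for x, y in pairs] + ([partials[-1]] if len(partials) % 2 else [])
        levels += 1
    assert levels == ceil(log2(n))
    return partials[0]

print(tree_multiply(13, 11, n=4), 13 * 11)        # both print 143
```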
Logical Compilation for Multi-Qubit Iceberg Patches
Recent advancements in quantum computing have enabled practical use of quantum error detecting and correcting codes. However, current architectures and future proposals of quantum computer design suffer from limited qubit counts, necessitating the use of high-rate codes. Such codes, with their code parameters denoted as $[[n, k, d]]$, have more than $1$ logical qubit per code (i.e., $k > 1$). This leads to reduced error tolerance of the code, since $\lceil (d-1)/2\rceil$ errors on any of the $n$ physical qubits can affect the logical state of all $k$ logical qubits. Therefore, it becomes critical to optimally map the input qubits of a quantum circuit to these codes, in such a way that the circuit fidelity is maximized. However, the problem of mapping program qubits to logical qubits for high-rate codes has not been studied in prior work. A brute-force search to find the optimal mapping is super-exponential (scaling as $O(n!)$, where $n$ is the number of input qubits), making exhaustive search infeasible past a small number of qubits. We propose a framework that addresses this problem on two fronts: (1) for any given mapping, it performs logical-to-physical compilation that translates input gates into efficiently encoded implementations utilizing Hadamard commutation and gate merging; and (2) it quickly searches the space of possible mappings through a merge-optimizing, noise-biased packing heuristic that identifies high-performing qubit assignments without exhaustive enumeration. To the best of our knowledge, our compiler is the first work to explore mapping and compilation for high-rate codes. Across 71 benchmark circuits, we reduce circuit depth by $34\%$, gate counts by up to $31\%$ and $17\%$ for one-qubit and two-qubit gates, and improve total variation distance by $1.75\times$, with logical selection rate improvements averaging $86\%$ relative to naive compilation.
Quantum Error Mitigation Strategies for Variational PDE-Constrained Circuits on Noisy Hardware
Variational quantum circuits (VQCs) solving partial differential equations (PDEs) on near-term quantum hardware face a critical challenge: hardware noise degrades solution fidelity and disrupts convergence. We present a systematic study of three noise channels (depolarizing, amplitude damping, and bit-flip) acting on VQCs constrained by PDE residual loss functions for the heat equation, Burgers' equation, and the Saint-Venant shallow water equations. We benchmark three error mitigation strategies: zero-noise extrapolation (ZNE) via Richardson polynomial fitting, probabilistic error cancellation (PEC), and measurement error mitigation through inverse confusion matrices. Our numerical experiments on 6-qubit, 4-layer circuits demonstrate that ZNE reduces absolute error by 82-96% at low noise (p = 0.001), with effectiveness degrading gracefully at higher noise strengths. We prove analytically and confirm numerically that physics-constrained circuits exhibit inherent noise resilience: at p = 0.01, constrained circuits maintain 25-47% higher fidelity than unconstrained counterparts, with the advantage scaling with PDE complexity. PEC provides near-exact correction at low gate counts but incurs exponential sampling overhead, rendering it impractical beyond ~60 gates at p >= 0.02. Error budget decomposition reveals that systematic errors dominate at all noise levels (43-58%), while the PDE residual component grows from ~10% to ~31% as noise increases, indicating that physics constraints absorb noise through structured gradient information. These results establish practical guidelines for deploying variational PDE solvers on NISQ hardware.
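As a point of reference for the first of these strategies, the sketch below shows zero-noise extrapolation by polynomial (Richardson-style) fitting of expectation values measured at amplified noise levels; the decay model is synthetic rather than taken from the paper's circuits.

```python
# Hedged sketch of zero-noise extrapolation: fit expectation values measured at
# scaled noise levels lambda and evaluate the fit at lambda = 0. The damping
# model below is a toy stand-in for real noisy-circuit data.
import numpy as np

def zne_richardson(scales, values, order=2):
    coeffs = np.polyfit(scales, values, deg=order)
    return np.polyval(coeffs, 0.0)                 # extrapolate to the zero-noise limit

scales = np.array([1.0, 1.5, 2.0, 3.0])            # noise amplification factors
true_value = 0.87
noisy = true_value * np.exp(-0.05 * scales)        # toy exponential damping of the signal
print(f"noisy at lambda=1: {noisy[0]:.4f}, ZNE estimate: {zne_richardson(scales, noisy):.4f}")
```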
An Undergraduate Course in Quantum Computing
This is the text for a one quarter or one semester undergraduate course on quantum computing that has been given at the University of California Santa Cruz. It is intended for students in the physical sciences who have already studied linear algebra (though a review of this topic is given in the course). No prior knowledge of quantum mechanics is required. The most important topics covered are Shor's algorithm and an introduction to quantum error correction. Most of the text is a build-up to these topics.
Adaptive H-EFT-VA: A Provably Safe Trajectory Through the Trainability-Expressibility Landscape of Variational Quantum Algorithms
H-EFT-VA established a physics-informed solution to the Barren Plateau (BP) problem via a hierarchical EFT UV-cutoff, guaranteeing gradient variance in $\Omega(1/\mathrm{poly}(N))$. However, localization restricts the ansatz to a polynomial subspace, creating a reference-state gap for states distant from $|0\rangle^{\otimes N}$. We introduce Adaptive H-EFT-VA (A-H-EFT) to navigate the trainability-expressibility tradeoff by expanding the reachable Hilbert space along a safe trajectory. Gradient variance is maintained in $\Omega(1/\mathrm{poly}(N))$ if $\sigma(t) \leq 0.5/\sqrt{LN}$ (Theorem 1). A Safe Expansion Corollary and Monotone Growth Lemma confirm expansion without discontinuous jumps. Benchmarking across 16 experiments (up to $N=14$) shows A-H-EFT achieves fidelity $F=0.54$, doubling static H-EFT-VA ($F=0.27$) and outperforming HEA ($F \approx 0.01$), with gradient variance $\geq 0.5$ throughout. For Heisenberg XXZ ($\Delta_{\mathrm{ref}}=1$), A-H-EFT identifies the negative ground state while static methods fail. Results are statistically significant ($p < 10^{-37}$). Robustness over three decades of hyperparameters enables deployment without search. This is the first rigorously bounded trajectory through the VQA landscape.
Autonomous Quantum Error Correction of Spin-Oscillator Hybrid Qubits
We propose a novel measurement-free scheme for stabilizing a spin-oscillator hybrid qubit via autonomous quantum error correction. The engineered Lindbladian turns the code space into an attractive steady-state subspace, realized by coupling the storage mode to a rapidly cooled bath through controlled beam-splitter and spin-dependent displacement interactions. This hybrid continuous-variable-discrete-variable approach to autonomous quantum error correction preserves the hardware efficiency of conventional dissipation engineering while simplifying the required system-bath coupling. The construction is compatible with simple logical gates and leverages primitives already demonstrated in experimental platforms, such as trapped-ion systems, suggesting a practical route to hardware-efficient, noise-biased logical qubits without repeated syndrome measurements and feedforward.
Fidelity-informed neural pulse compilation of a continuous family of quantum gates with uncertainty-margin analysis
We develop a fidelity-informed neural pulse-compilation framework for a continuous family of single-qubit gates on a three-qubit liquid-state nuclear magnetic resonance (NMR) processor. Instead of decomposing each target unitary into a sequence of calibrated basis gates, the method learns a direct map from the axis-angle parameters of an arbitrary $U \in SU(2)$ operation to a piecewise-constant radio-frequency control sequence that implements the desired transformation. Training is performed end-to-end through the time-ordered propagator of the driven Hamiltonian using global-phase-insensitive unitary fidelity as the learning signal. We show numerically that a single model generalizes across a continuous range of gate parameters and experimentally validate representative compiled pulses on a benchtop three-qubit NMR device. In addition, we analyze sensitivity to structured perturbations in Hamiltonian and control parameters by introducing a prescribed uncertainty set and performing a comparative risk-aware redesign based on right-tail Conditional Value-at-Risk (RU-CVaR). This stage produces pulse solutions with broader tolerance margins within the chosen uncertainty model. The results demonstrate continuous pulse-level gate synthesis in an experimentally accessible setting and illustrate a hardware-aware compilation strategy that can be extended to other quantum platforms. While the uncertainty model considered here is tailored to NMR, the neural compilation and risk-aware optimization framework are general and may be useful in architectures where calibration overhead, parameter drift, or control constraints make repeated per-gate optimization costly.
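The training signal described above can be illustrated with a minimal single-spin sketch: the code below builds the time-ordered propagator of a piecewise-constant control sequence and scores it with a global-phase-insensitive unitary fidelity. The Hamiltonian, pulse length, and amplitudes are illustrative and not the three-qubit NMR model of the paper.

```python
# Hedged sketch: propagator of a piecewise-constant control sequence and a
# global-phase-insensitive gate fidelity, for a single spin-1/2.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2

def propagator(amps_x, amps_y, dt):
    U = np.eye(2, dtype=complex)
    for ax, ay in zip(amps_x, amps_y):            # time-ordered product of slice propagators
        U = expm(-1j * dt * (ax * sx + ay * sy)) @ U
    return U

def gate_fidelity(U, V):
    return abs(np.trace(U.conj().T @ V)) / 2      # insensitive to a global phase

target = expm(-1j * np.pi * sx)                   # a pi rotation about x
pulse_x = np.full(20, np.pi / (20 * 0.05))        # constant-amplitude x drive
U = propagator(pulse_x, np.zeros(20), dt=0.05)
print(f"fidelity = {gate_fidelity(target, U):.6f}")
```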
Optimal Two-Qubit Gates for Group-IV Color-Centers in Diamond
Color centers associated with group-IV dopants in diamond with long-lived nuclear spins have emerged as major candidates for distributed quantum computing nodes and quantum repeaters. Several proof-of-principle experiments have already been demonstrated. A key operation for long-distance entanglement-distribution protocols is a fast and robust gate between the electron spin and a nuclear spin. Here, we investigate numerically how such gates can be implemented via quantum optimal control for an existing experimental platform: a germanium-vacancy (GeV) center with a strongly coupled ${}^{13}$C spin. In the presence of realistic noise we investigate different parameter regimes and gate operations and obtain robust two-qubit gates with fidelities exceeding $99.9 \%$. The framework provides a scalable strategy for group-IV quantum nodes and can be adapted to related architectures.
When T-Depth Misleads: Predicting Fault-Tolerant Quantum Execution Slowdown under Magic-State Delivery Constraints
The efficient execution of fault-tolerant quantum algorithms is fundamentally limited by the production rate of magic states required for non-Clifford operations. While circuit optimization typically targets T-depth, static T-depth does not reliably predict executable performance under bounded T-state delivery. We introduce a model that captures demand-supply imbalance using two key quantities: slack ratio, a structural indicator of scheduling flexibility, and $\Delta_{\max}$, a measure of cumulative demand surplus. We show that $\Delta_{\max}$ is a strong schedule-level indicator of execution slowdown and yields a provable lower bound on executable makespan for a fixed schedule. Empirical evaluation on constructed directed acyclic graph (DAG) families, with arithmetic circuits and exact quantum Fourier transform (QFT) traces providing additional grounding, shows that slack ratio is a stronger structural predictor than T-depth for stall and inversion risk, while $\Delta_{\max}$ is the strongest predictor of slowdown. Across 4,904 instances, the lower bound shows zero violations, with 88.9% of cases within one cycle. These results highlight the importance of explicitly modeling delivery constraints in fault-tolerant quantum compilation.
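A minimal sketch of the $\Delta_{\max}$ quantity, read as the peak cumulative surplus of T-state demand over delivery for a fixed schedule, is given below; the demand profile and the delivery rate are made-up inputs, not values from the paper.

```python
# Hedged sketch: Delta_max as the maximum cumulative surplus of T-state demand
# over supply across the cycles of a fixed schedule.
import numpy as np

def delta_max(demand_per_cycle, supply_rate):
    demand = np.cumsum(demand_per_cycle)              # cumulative T-state demand
    supply = supply_rate * np.arange(1, len(demand_per_cycle) + 1)
    surplus = demand - supply
    return max(surplus.max(), 0.0)

demand = np.array([0, 3, 1, 4, 4, 0, 2, 1])           # T gates scheduled per cycle (toy)
print("Delta_max =", delta_max(demand, supply_rate=2.0))
```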
First-principles study of dispersive readout in circuit QED
The speed and fidelity of dispersive readout of superconducting qubits should improve by increasing the amplitude of the measurement drive. Experiments show, however, that beyond some drive amplitude there is always a saturation or drop in fidelity, often associated with a decrease in qubit energy relaxation time $T_1$. A simple Lindblad master equation does not capture the latter effect. More involved approaches based on effective master equations rely on strong assumptions about the spectra of the system and the bath and only partially agree with observations. Here, we perform a first-principles simulation of the full unitary dynamics of dispersive readout by considering the circuit QED Hamiltonian coupled to a microscopic model for the measurement transmission line, allowing for its arbitrary spectrum, including filters. Our access to the dynamics of the bath degrees of freedom allows us to investigate the emission spectrum of the system as a function of drive power. We show how the dependence of qubit $T_1$ on readout drive amplitude is sensitive to the details of the bath spectrum. In particular, we find that $T_1$ drops with increasing drive amplitude when a Purcell notch filter is placed at the qubit frequency, and that the Lindblad master equation shows general qualitative defects compared to the first-principles model.
Automated discovery of heralded ballistic graph state generators for fusion-based photonic quantum computation
Designing photonic circuits that prepare graph states with high fidelity and success probability is a central challenge in linear optical quantum computing. Existing approaches rely on hand-crafted designs or fusion-based assemblies. In the absence of multiplexing/boosting, both post-selected ballistic circuits and sequential fusion exhibit exponentially decreasing single-shot yields -- a fundamental limitation that makes optimizing individual resource state generators particularly important, as these serve as building blocks in larger FBQC architectures. We present a general-purpose optimization framework for automated photonic circuit discovery using a novel polynomial-based simulation approach, enabling efficient strong simulation and gradient-based optimization. Our framework employs a two-pass optimization procedure: the first pass identifies a unitary transformation that prepares the desired state with perfect fidelity and maximal success probability, and the second pass implements a novel sparsification algorithm that reduces this solution to a compact photonic circuit with minimal beamsplitter count while preserving performance. This sparsification procedure often reveals underlying mathematical structure, producing highly simplified circuits with rational reflection coefficients. We demonstrate our approach by discovering optimized circuits for $3$-, $4$-, and $5$-qubit graph states across multiple equivalence classes. For 4-qubit states, our circuits achieve success probabilities of $2.053 \times 10^{-3}$ to $7.813 \times 10^{-3}$, outperforming the fusion baseline by up to $4.7 \times$. For 5-qubit states, we achieve $5.926 \times 10^{-5}$ to $1.157 \times 10^{-3}$, demonstrating up to $7.5 \times$ improvement. These results include the first known state preparation circuits for certain 5-qubit graph states.
Benchmarking Optimization Algorithms for Automated Calibration of Quantum Devices
We present the results of a comprehensive study of optimization algorithms for the calibration of quantum devices. As part of our ongoing efforts to automate bring-up, tune-up, and system identification procedures, we investigate a broad range of optimizers within a simulated environment designed to closely mimic the challenges of real-world experimental conditions. Our benchmark includes widely used algorithms such as Nelder-Mead and the state-of-the-art Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We evaluate performance in both low-dimensional settings, representing simple pulse shapes used in current optimal control protocols with a limited number of parameters, and high-dimensional regimes, which reflect the demands of complex control pulses with many parameters. Based on our findings, we recommend the CMA-ES algorithm and provide empirical evidence for its superior performance across all tested scenarios.
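For orientation, the sketch below compares the two headline optimizers on a synthetic, noisy calibration-style objective; it assumes the scipy and cma Python packages and is not the benchmark environment used in the study.

```python
# Hedged sketch: Nelder-Mead (SciPy) versus CMA-ES (cma package) on a toy
# ill-conditioned, noisy objective mimicking a calibration landscape.
import numpy as np
from scipy.optimize import minimize
import cma

rng = np.random.default_rng(3)

def objective(x):
    x = np.asarray(x)
    scales = np.logspace(0, 2, x.size)            # ill-conditioned quadratic bowl
    return float(np.sum(scales * x**2) + 0.01 * rng.normal())  # plus measurement noise

x0 = np.full(8, 0.5)
nm = minimize(objective, x0, method="Nelder-Mead",
              options={"maxfev": 4000, "xatol": 1e-6, "fatol": 1e-6})
es = cma.CMAEvolutionStrategy(x0, 0.3, {"maxfevals": 4000, "verbose": -9})
es.optimize(objective)

print(f"Nelder-Mead best: {nm.fun:.4e}")
print(f"CMA-ES best:      {es.result.fbest:.4e}")
```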
Entanglement dynamics and performance of two-qubit gates for superconducting qubits under non-Markovian effects
Using a numerically exact simulation technique, we consider the dissipative dynamics of a two-qubit architecture in which each qubit couples to its own noise source (reservoir). The goal is to reveal the role of subtle qubit-reservoir correlations including non-Markovian processes as a prerequisite to guide further improvements of quantum computing devices. This paper addresses the following three topics. First, we examine the validity of the rotating wave approximation imposed previously on the qubit-reservoir coupling with respect to the disentanglement dynamics. Second, the generation as well as the destruction of entanglement is analyzed by monitoring the reduced dynamics during and after the application of a $\sqrt{\mbox{iSWAP}^\dagger}$ gate, also focusing on memory effects caused by the reservoirs. Finally, the performance of a Hadamard + CNOT sequence is analyzed for different gate decomposition schemes. In all three cases, various types of noise sources and qubit parameters are considered.
Accelerating Fault-Tolerant Quantum Computation with Good qLDPC Codes
We propose a fault-tolerant quantum computation scheme that is broadly applicable to quantum low-density parity-check (qLDPC) codes. The scheme achieves constant qubit overhead and a time overhead of $O(d^{a+o(1)})$ for any $[[n,k,d]]$ qLDPC code with constant encoding rate and distance $d = \Omega(n^{1/a})$. For good qLDPC codes, the time overhead is minimized and reaches $O(d^{1+o(1)})$. In contrast, code surgery based on gauging measurement and brute-force branching requires a time overhead of $O(dw^{1+o(1)})$, where $d\leq w\leq n$. Thus, our scheme is asymptotically faster for all codes with $a < 2$. This speedup is achieved by developing techniques that enable parallelized code surgery under constant qubit overhead and leverage classical locally testable codes for efficient resource state preparation. These results establish a new paradigm for accelerating fault-tolerant quantum computation on qLDPC codes, while maintaining low overhead and broad applicability.
Catalytic Quantum Error Correction: Theory, Efficient Catalyst Preparation, and Numerical Benchmarks
We introduce Catalytic Quantum Error Correction (CQEC), a state recovery protocol exploiting catalytic covariant transformations. CQEC recovers a known target state from noisy copies without an error \emph{magnitude} threshold: recovery succeeds whenever the coherent modes satisfy $\mathcal{C}(\rho_0) \subseteq \mathcal{C}(\rho_\mathrm{noisy})$, regardless of noise strength. The main practical bottleneck -- catalyst preparation requiring $n^* \sim d^4 e^{2\gamma}$ copies -- is resolved by a three-stage pipeline combining CPMG dynamical decoupling, Clifford twirling, and the recursive swap test, achieving $F_\mathrm{cat} > 0.96$ with only 8~copies ($10^9$-fold reduction). Numerical validation across four quantum algorithms ($d = 4$--$64$), a cryptographic protocol, and three noise models confirms $F > 0.999$ in the asymptotic limit across 200~configurations.
Generation of magnonic squeezed state and its superposition in a hybrid qubit-magnon system
We propose a protocol for generating magnonic squeezed states (MSS) and their superpositions (SMSS) in a hybrid system comprising a superconducting flux qubit magnetically coupled to the Kittel mode of a yttrium iron garnet (YIG) sphere. The flux qubit provides an intrinsic longitudinal interaction with the magnon mode, which, under resonant microwave driving, gives rise to an effective qubit-state-dependent squeezing Hamiltonian. Numerical simulations incorporating realistic dissipation demonstrate that magnon quadrature noise reduction exceeding $8~\mathrm{dB}$ is achievable with experimentally accessible parameters. By preparing the qubit in a superposition state followed by projective measurement, we further obtain symmetric and antisymmetric superpositions of orthogonally squeezed magnon states exhibiting clear phase-space interference fringes. We discuss how the fourfold rotational symmetry of these states supports a bosonic logical encoding with potential for protecting against dominant error channels in magnonic platforms.
Resist-free shadow deposition using silicon trenches for Josephson junctions in superconducting qubits
Superconducting qubit fabrication innovations continue to be explored to achieve higher performance. Despite improvements to base layer fabrication and processing, resist-based Josephson junction (JJ) schemes have largely remained unchanged. The polymer mask during deposition causes chemical contamination and limits in situ and ex situ surface preparation, junction materials, and scalability. Here, we demonstrate a resist-free approach to junction fabrication based on etched silicon trenches that is CMOS compatible and easily integrated into existing innovations in qubit base layer fabrication and chemical processing. We fabricate Al-AlOx-Al JJs and qubits using this method, measuring median energy relaxation times up to 184 microseconds. We find minimal contamination at the substrate-metal interface and fluctuations of energy relaxation on a 35 hour timescale that are narrow and normally distributed. The method widens the process window for substrate preparation and new materials platforms.
High-Fidelity Transmon Reset with a Multimode Acoustic Resonator
Achieving sufficiently low residual excited-state populations remains a key challenge in superconducting quantum circuits, particularly for protocols operating close to noise limits or requiring repeated qubit initialization. Existing protocols primarily address this challenge through sophisticated control, engineered dissipation, or feedback mechanisms. Here, we demonstrate an alternative approach in which a superconducting qubit is reset using a physically distinct, intrinsically colder phononic bath. Specifically, we interface a transmon with a high-overtone bulk acoustic resonator (HBAR), enabling cooling of the qubit into GHz-frequency modes. Using this approach, we achieve a residual excited-state population of the qubit below $10^{-4}$, representing an improvement of one to two orders of magnitude compared to existing reset schemes. These results highlight the potential of phononic baths as a resource for high-fidelity qubit initialization in superconducting circuits.
Geometry-Induced Long-Range Correlations in Recurrent Neural Network Quantum States
Neural Quantum States based on autoregressive recurrent neural network (RNN) wave functions enable efficient sampling without Markov-chain autocorrelation, but standard RNN architectures are biased toward finite-length correlations and can fail on states with long-range dependencies. A common response is to adopt transformer-style self-attention, but this typically comes with substantially higher computational and memory overhead. Here we introduce dilated RNN wave functions, where recurrent units access distant sites through dilated connections, injecting an explicit long-range inductive bias while retaining a favorable $\mathcal{O}(N \log N)$ forward pass scaling. We show analytically that dilation changes the correlation geometry and can induce power-law correlation scaling in a simplified linearized and perturbative setting. Numerically, for the critical 1D transverse-field Ising model, dilated RNNs reproduce the expected power-law connected two-point correlations in contrast to the exponential decay typical of conventional RNN ans\"atze. We further show that the dilated RNN accurately approximates the one-dimensional Cluster state, a paradigmatic example with long-range conditional correlations that has previously been reported to be challenging for RNN-based wave functions. These results highlight dilation as a simple geometric mechanism for building correlation-aware autoregressive neural quantum states.
Hardware-Efficient Erasure Qubits With Superconducting Transmon Qutrits
Quantum error correction using erasure qubits offers higher fault-tolerant thresholds and improved scaling by converting dominant physical errors into detectable erasures. In superconducting circuits, erasure qubits can be constructed using the dual-rail approach, which, however, requires additional qubit-count overhead and tailored coupling elements. Here, we demonstrate a hardware-efficient scheme that operates transmon qutrits as erasure qubits, which is compatible with standard superconducting circuit-QED hardware. The logical states $\ket{0_\text{L}}$ and $\ket{1_\text{L}}$ are represented by the ground and second excited states, while the dominant relaxation errors can be detected via an ancilla qubit using a microwave-activated two-qutrit SWAP gate. We demonstrate a logical qubit $T_1$ lifetime exceeding $500\,\mu\mathrm{s}$, post-selected with repeated mid-circuit erasure detection, which is ten times longer than the $T_1$ time of the transmon physical qubit. Coherence times beyond $300\,\mu\mathrm{s}$ are achieved using dynamical decoupling. Single-qubit gate operations reach average Clifford gate infidelity on the order of $10^{-4}$. We further demonstrate dual-purposing an ancilla qubit for both erasure detection and parity checking, showing heralded generation of Bell states between erasure qubits. These results suggest that mainstream architectures of transmon qubit arrays may already be capable of implementing erasure-based QEC strategies for hardware-efficient fault-tolerant quantum computing.
The MQT Compiler Collection: A Blueprint for a Future-Proof Quantum-Classical Compilation Framework
As the capabilities of quantum computing hardware continue to rise, algorithms that exploit them are becoming increasingly complex. These developments increase the need for sophisticated compilation frameworks that translate high-level algorithms into executable code. In the past, most solutions were built with a quantum-first approach and handled mostly pure quantum programs without classical elements such as structured control flow. However, developments in quantum algorithms, error correction, and optimization, as well as the integration into high-performance computing (HPC) environments, depend on such classical elements. As quantum-first approaches increasingly struggle to handle these concepts, classical-first approaches are becoming a promising alternative. In this work, we present the MQT Compiler Collection, a blueprint for a future-proof quantum-classical compilation framework built on the Multi-Level Intermediate Representation (MLIR). After years of experience with the quantum-first approach and its shortcomings, we propose a framework that embraces core MLIR concepts to support the full compilation pipeline from high-level algorithms to hardware-specific instructions. The proposed architecture is designed from the ground up to support complex optimizations beyond, e.g., simple gate cancellation. It is publicly available at https://github.com/munich-quantum-toolkit/core.
Arqon: A suite of control applications enabling a reliable quantum network
A quantum network's purpose is to enable users to execute applications on end nodes. This requires the network to provide the service of creating entangled links between those nodes. Users of mature networks, such as the internet or the telephone network, expect accepted service demands to be met reliably. We first define reliability requirements that extend classical computer network concepts to quantum network service delivery. We then introduce Arqon, a suite of control applications designed to deliver reliable service in centrally controlled quantum networks. We demonstrate through both analytic and numerical evaluation that Arqon satisfies all reliability requirements for accepted demands. These evaluations consider static network topologies. We provide a complete Python implementation and perform a complexity analysis showing that admission control scales as $O(k^3)$ in the number of incoming demands $k$ and schedule computation scales as $O(N^3)$ in the number of accepted demands to schedule $N$.
Investigation of coherence of niobium-based resonators enabled by a fast-sealing microwave cavity
Resonators and qubits with a niobium (Nb) base metal layer achieve some of the highest coherence times in superconducting quantum devices. The performance of such devices is often limited by loss associated with two-level systems, which are found primarily at material surfaces and interfaces. The metal-air (MA) interface is a major contributor to device loss. In this work, we develop a fast-sealing microwave cavity that enables devices to be placed under vacuum within five minutes of oxide removal, thereby significantly reducing the MA interface loss compared to common device processing and packaging approaches. Using coplanar stripline resonators, we demonstrate that devices sealed in such a cavity exhibit internal quality factors exceeding one million at single-photon power. After re-exposure to air, the devices show downward resonance frequency shifts and quality factor degradations, quantitatively consistent with a model of Nb oxide regrowth. The fast-sealing microwave cavity provides a practical and consistent method to mitigate MA interface loss and sustain high coherence in Nb devices, and establishes a controlled platform for studying metal oxide regrowth kinetics and dielectric properties, the understanding of which is critical to achieving high coherence in superconducting quantum devices.
Quantum Patches: Enhancing Robustness of Quantum Machine Learning Models
Machine learning models and their applications, such as autonomous driving systems, are becoming increasingly common and are essential components of human daily life. However, due to their sensitivity to perturbed noise, these models are easily susceptible to adversarial attacks. Not only are classical machine learning models affected, but quantum machine learning (QML) models have also been proven to be vulnerable to adversarial attacks, which degrade their performance. To defend against these types of attacks, several classical methods have been proposed. Among these, a prominent approach uses various types of pseudo-noise during training to enhance the model's robustness against real-world attacks. One of the recently emerging solutions is to leverage the unique properties of quantum circuits to create quantum-based pseudo-noise similar to real perturbed noise to counter adversarial attacks. This paper proposes a solution that utilizes random quantum circuits (RQCs) as adversarial data to help QML models overcome these adversarial attacks. The results reported in this paper show that the data generated by RQCs provides an effect similar to that of training with real adversarial data on high-feature datasets. This quantum-based pseudo-noise resulted in a significant reduction in the successful attack rate on the CIFAR-10 dataset, from \textbf{89.8\%} to \textbf{68.45\%}. For the CINIC-10 dataset, the successful attack rate decreased from \textbf{94.23\%} to \textbf{78.68\%}. This research opens up avenues for applying unique quantum properties, such as superposition, entanglement, and even decoherence, to enhance the quality of machine learning models.
Comparison of the standard and dressed-picture master equations for the quantum Rabi model in the ultrastrong coupling regime
The goal of this chapter is to investigate the effects of relaxation and dephasing on the quantum Rabi model in the ultrastrong coupling regime, and to provide explicit formulas to implement and numerically solve the resulting nonunitary dynamics from first principles. The quantum Rabi model constitutes the most fundamental description of light-matter interaction, describing a single two-level system coupled to a single mode of a quantized cavity field. The ultrastrong coupling regime is typically defined by $g \gtrsim 0.1\omega$, where $\omega$ denotes the cavity-mode frequency. In this regime, the standard master equation of quantum optics -- commonly referred to as the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) master equation -- becomes inaccurate. The reason is that strong light-matter interaction hybridizes the bare atom and field states, so that dissipation cannot be consistently described in the uncoupled basis. A consistent treatment must therefore incorporate this hybridization directly into the dissipative terms. One such approach is the dressed-picture Markovian master equation derived by Beaudoin, Gambetta, and Blais, in which the qubit-field interaction is explicitly included in the construction of the system-bath coupling operators. In this chapter, we numerically solve both the GKSL master equation and the dressed master equation (DME) for various initial field states, including coherent, odd Schr\"{o}dinger cat, squeezed vacuum, squeezed coherent, and thermal states. We also examine photon generation from the vacuum induced by external time-dependent modulation of the qubit parameters, as well as multiphoton Rabi oscillations for an initially excited qubit. Two reservoir spectral densities are considered: white and Ohmic noise. The differences between the two approaches are illustrated through numerical results for several physical observables.
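A minimal sketch of the first of these two approaches is given below, assuming QuTiP is available: the quantum Rabi Hamiltonian is evolved under the standard GKSL master equation with bare-basis jump operators. The dressed master equation discussed in the chapter would instead construct the jump operators from the hybridized eigenstates; all parameters here are illustrative.

```python
# Hedged sketch: GKSL master equation for the quantum Rabi model with
# bare-basis collapse operators (assumes QuTiP). The chapter's dressed master
# equation would build jump operators in the hybridized eigenbasis instead.
import numpy as np
from qutip import destroy, qeye, tensor, sigmaz, sigmax, sigmam, basis, mesolve

N = 20                                             # cavity Fock-space truncation
a = tensor(destroy(N), qeye(2))
sz = tensor(qeye(N), sigmaz())
sx = tensor(qeye(N), sigmax())
sm = tensor(qeye(N), sigmam())

w, wq = 1.0, 1.0
g = 0.3 * w                                        # ultrastrong coupling regime
H = w * a.dag() * a + 0.5 * wq * sz + g * sx * (a + a.dag())

kappa, gamma = 0.01, 0.01
c_ops = [np.sqrt(kappa) * a, np.sqrt(gamma) * sm]  # bare-basis jump operators (GKSL)

psi0 = tensor(basis(N, 0), basis(2, 0))            # cavity vacuum, qubit excited
tlist = np.linspace(0, 50, 500)
result = mesolve(H, psi0, tlist, c_ops, e_ops=[a.dag() * a, 0.5 * (sz + 1)])
print("final photon number:", result.expect[0][-1])
print("final qubit excitation:", result.expect[1][-1])
```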
Crosstalk-robust superconducting two-qubit geometric gates using tunable couplers
The design of coupler-based superconducting two-qubit gates simplifies circuit layout and alleviates frequency crowding, thereby enhancing the scalability and flexibility of quantum chips. However, in such architectures, a trade-off often exists between suppressing crosstalk and reducing gate duration, and how to optimize both simultaneously remains an open challenge. To address this challenge, this paper proposes a coupler-assisted superconducting two-qubit geometric gate scheme oriented towards crosstalk robustness. By introducing additional parametric degrees of freedom, the scheme steers the system evolution along desired trajectories, thereby flexibly avoiding crosstalk-sensitive operational regions. Numerical simulations demonstrate that the proposed scheme can effectively suppress crosstalk errors while enabling fast gate operations, and exhibits strong robustness against typical experimental imperfections such as qubit frequency drift. Moreover, even when accounting for unavoidable high-frequency oscillation terms and qubit decoherence in realistic physical systems, our crosstalk-robust two-qubit geometric gates still achieve high fidelity. This work provides a feasible pathway toward robust and efficient two-qubit gate implementation in superconducting quantum computation.
Beating three-parameter precision trade-offs with entangling collective measurements
Quantum-mechanical incompatibility, which precludes the simultaneous precise measurement of non-commuting observables, imposes fundamental limits on the rate at which classical information can be extracted. While the potential to surpass these limits using entangling collective measurements has been explored for two parameters, the regime of three or more parameters remains largely unexplored despite its fundamental and technological importance. Here, we investigate the three-parameter trade-off relations for estimating the Bloch vector components of a qubit, comparing conventional individual measurements with entangling collective measurements. We theoretically derive and experimentally implement optimal collective measurements on two identically prepared qubits using a programmable photonic circuit. Our experimental results demonstrate a clear violation of the entanglement-free trade-off relation -- by an average of 16 standard deviations -- achieving a tomography precision beyond the reach of any individual measurement scheme. This work directly confirms that optimal collective measurements can surpass the fundamental quantum limits of individual schemes in a three-parameter setting -- thereby deepening our understanding of quantum uncertainty relations beyond the two-parameter regime and providing a clear strategy to overcome the precision trade-offs imposed by quantum incompatibility.
Impact of Pump Phase-Noise on Josephson Traveling-Wave Parametric Amplifiers
Superconducting traveling-wave parametric amplifiers (TWPAs) are essential elements for enhancing the signal-to-noise ratio (SNR), and thus the read-out fidelity, of superconducting qubits because of their high gain and near quantum-limited noise. However, the impact of the pump source, e.g., its phase noise, on these amplifiers has not yet been studied. In this work, we show that of the two amplification processes in JTWPAs, the three-wave mixing (3WM) process is more sensitive to pump phase noise than the four-wave mixing (4WM) process. We show that the even-order nonlinearities of fourth order and above in three-wave mixing are responsible for a more than 10 dB increase in phase noise at high frequency offsets within the phase noise mask as the pump power increases. A polynomial model of the amplifier and the cyclo-stationary property of the phase noise corroborate the simulations. The Harmonic Balance (HB) periodic noise analysis tool and the Leeson phase noise model in the Keysight Advanced Design System (ADS) simulator were used in this study.
Loss-Tolerant Quantum Communication via Bosonic-GKP-Parity-Encoding
Quantum repeaters constitute a promising platform for enabling long-distance quantum communication and may ultimately serve as the backbone of a secure quantum internet, a scalable quantum network, or a distributed quantum computer. An efficient approach to encoding qubits within an error-correcting code is provided by bosonic codes, in which even a single oscillator mode can function as a sufficiently large physical system. In this work, we initially focus on the bosonic Gottesman-Kitaev-Preskill (GKP) code as a natural candidate for loss-correction-based quantum repeaters, which can be implemented at room temperature. We demonstrate that transmission loss can be suppressed across three related protocols at the expense of the introduction of logical errors. The third protocol, in which a relay-like teleamplifier is applied, is optimal. This approach enables medium-distance quantum communication without requiring higher-level encoding. We compute the resulting secure key rates while leveraging analog syndrome information. Furthermore, we propose a concatenated Bell state measurement (CBSM) scheme with a modified parity encoding based on GKP qubits, CV measurements, and a clipping method that corrects transmission loss without introducing logical errors. This significantly enhances the possible transmission distance. We find that GKP-based repeaters can achieve performance comparable to approaches relying on photonic qubits, while requiring orders of magnitude fewer qubits.
Tantalum-Encapsulated Niobium Superconducting Resonators: High Internal Quality Factor and Improved Temporal Stability via Surface Passivation
Superconducting coplanar waveguide resonators are essential components in quantum processors, where their internal quality factor (Qi) constrains qubit coherence and readout fidelity. In niobium devices, microwave losses at millikelvin temperatures are strongly influenced by two-level systems (TLS) associated with the complex NbOx surface oxide. To mitigate these losses, we investigate a surface-engineering approach in which Nb films are capped in situ with a thin tantalum layer to suppress Nb2O5 formation and replace the native NbOx interface with a Ta-based oxide. We fabricate Nb/Ta bilayer and reference Nb resonators on high-resistivity silicon using identical DC sputtering and wet etching conditions, and characterize their performance at millikelvin temperatures. Fresh Ta-encapsulated devices exhibit internal quality factors up to 2.4 x 10^6 in the near-single-photon regime, with power dependence consistent with reduced TLS-related loss at the metal-air interface. A control Nb device fabricated under the same process shows a comparatively lower TLS-limited quality factor (Q_TLS), consistent with the beneficial effect of the Ta capping layer. Furthermore, ageing tests performed on Nb/Ta resonators after six months reveal a moderate reduction in Q_TLS relative to their initial values, yet the performance remains superior to newly fabricated Nb-only devices. These results suggest that thin Ta encapsulation enhances interface quality and contributes to improved temporal stability while remaining compatible with Nb-based fabrication workflows.
Optimizing stimulated Raman adiabatic passage for leakage suppression via Pontryagin's maximum principle
The standard stimulated Raman adiabatic passage (STIRAP) protocol enables high-fidelity quantum state transfer in an ideal three-level system via adiabatic following of a dark state evolution. However, in practical systems with more energy levels, control pulses with finite spectral selectivity often couple the three-level subspace to the remaining subspace, introducing leakage that fundamentally limits the transfer performance. Here, we adopt a multilevel chain model for STIRAP that explicitly incorporates this leakage subspace. Using Pontryagin's maximum principle, we formulate a leakage-penalized quantum optimal control problem with the control pulses constrained to experimentally feasible Gaussian pulse families. We derive explicit gradients of the objective functional with respect to the pulse parameters, enabling efficient low-dimensional optimization that suppresses leakage while preserving the counterintuitive STIRAP pulse ordering. Numerical simulations for a superconducting transmon platform demonstrate that the optimized control pulses can significantly enhance the target-state transfer fidelity and provide enhanced robustness to amplitude miscalibration and detuning drifts.
SatQNet: Satellite-assisted Quantum Network Entanglement Routing Using Directed Line Graph Neural Networks
Quantum networks are expected to become a key enabler for interconnecting quantum devices. In contrast to classical communication networks, however, information transfer in quantum networks is usually restricted to short distances due to physical constraints of entanglement distribution. Satellites can extend entanglement distribution over long distances, but routing in such networks is challenging because satellite motion and stochastic link generation create a highly dynamic quantum topology. Existing routing methods often rely on global topology information that quickly becomes outdated due to delays in the classical control plane, while decentralized methods typically act on incomplete local information. We propose SatQNet, a reinforcement learning approach for entanglement routing in satellite-assisted quantum networks that can be decentralized at runtime. Its key innovation is an edge-centric directed line graph neural network that performs local message passing on directed edge embeddings, enabling it to better capture link properties in high-degree and time-varying topologies. By exchanging messages with neighboring repeaters, SatQNet learns a local graph representation at runtime that supports agents in establishing high-fidelity end-to-end entanglements. Trained on random graphs, SatQNet outperforms heuristic and learning-based approaches across diverse settings, including a real-world European backbone topology, and generalizes to unseen topologies without retraining.
Coherent Control of Nanoscale Nuclear Spin Ensembles in the Spin Noise Regime
Spin defects in solids, such as the nitrogen-vacancy (NV) center in diamond, have emerged as a key tool for detecting nuclear spins at the nanoscale. While active nuclear spin control via radio-frequency (RF) irradiation is often unnecessary for standard spin-noise detection, it becomes essential for advanced protocols like multidimensional nanoscale NMR. In this work, we investigate nuclear spin control using correlation spectroscopy techniques. We demonstrate, both theoretically and experimentally, that the resulting nuclear spin dynamics depend critically on the initial RF phase and its orientation relative to the NV crystalline axis. Depending on these parameters, identical nuclear rotations can yield full, partial, or even vanishing contrast in the NV readout. These findings highlight a previously underappreciated aspect of spin manipulation in the spin-noise regime: the link between the phase and direction of the applied RF field and its direct impact on correlation-based experiments. Consequently, imperfect calibration of these parameters can lead to ambiguous signal contrasts and misinterpretation of the underlying nuclear spin dynamics. Our results provide deeper insight into nanoscale spin control and pave the way toward reliable multidimensional spin resonance experiments.
Discrete-time quantum walks in synthetic dimensions
In this work we introduce discrete-time quantum walks in state space, more precisely on Fock-state lattices. Fock-state lattices provide a natural and clean setting for implementing lattice models, particularly in quantum optical systems. Thus, contrary to the common setting where the walker resides in real space or phase space, here the walk takes place in a synthetic space. We present a general formalism based on Lie algebras and their properties. For each Lie algebra one can associate both a phase space and a Fock-state lattice, and by understanding how these spaces are related, together with the action of generalized displacement operators, we construct the discrete unitary operator that generates the walk. In this framework the displacement operators replace the usual nearest-neighbor shifts and lead to state-dependent tunneling on the lattice. By considering several examples we demonstrate ballistic spreading and other characteristic features of discrete-time quantum walks, such as coin-walker entanglement and symmetry-induced interference patterns. We also show that different algebraic structures can give rise to qualitatively different dynamics, including anomalous behavior such as super-ballistic spreading as well as localization effects.
Quantum Uncertainty and Entropy
We review the plethora of uncertainty relations that appear in quantum mechanics and their nuances. We present both foundational applications, e.g. in understanding and defining complementarity, and practical applications, e.g. in quantum metrology and cryptography. Both variance- and entropy-based uncertainties are covered here.
Quantum Randomized Subspace Iteration
Resolving degenerate quantum eigenspaces - including topologically ordered ground states and frustrated magnets - requires preparing high-fidelity states that span every direction of the target manifold. Existing variational and projective algorithms do not naturally cover a multi-dimensional degenerate subspace without sequential orthogonality constraints. We introduce the quantum randomized subspace iteration (QRSI), a fully parallel construction that conjugates the Hamiltonian by independent random unitaries across as many branches as the degeneracy g, then invokes any chosen eigenstate-preparation primitive on each branch. The target subspace is identified from the resulting ensemble via standard subspace estimation, either classically through the coefficient matrix or on hardware through Gram-matrix measurements. We prove that the construction spans the full eigenspace almost surely and preserves the spectral gap exactly on every branch. For practical use, we show that these guarantees hold whenever the random rotations satisfy an anti-concentration condition over the degenerate manifold, substantially weaker than full Haar randomness. We demonstrate QRSI on the toric code, recovering all four topological ground states, and on random Hamiltonians with planted degeneracies.
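As a hedged classical toy illustration of the subspace-recovery idea (not the paper's quantum construction), the sketch below relaxes several randomly initialized branches toward the degenerate ground space of a small diagonal Hamiltonian and then extracts the spanned subspace from the stacked branch states; the Hamiltonian, branch count, and relaxation primitive are all illustrative assumptions standing in for the quantum eigenstate-preparation primitive and Gram-matrix estimation.

```python
import numpy as np

# Toy Hamiltonian with a two-fold degenerate ground space (first two levels at energy 0).
H = np.diag([0.0, 0.0, 1.0, 2.0, 3.0])
dim, g, n_branches = H.shape[0], 2, 4   # degeneracy g and branch count are illustrative

rng = np.random.default_rng(7)
branch_states = []
for _ in range(n_branches):
    # A random start vector plays the role of a randomly rotated branch.
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    v /= np.linalg.norm(v)
    # Simple power-method-style relaxation as a stand-in eigenstate-preparation primitive.
    for _ in range(300):
        v = (np.eye(dim) - 0.2 * H) @ v
        v /= np.linalg.norm(v)
    branch_states.append(v)

# Subspace estimation: the g dominant left singular vectors of the stacked branch states
# span the recovered ground space (the role played by Gram-matrix estimation in the paper).
U, s, _ = np.linalg.svd(np.array(branch_states).T, full_matrices=False)
recovered = U[:, :g]
print("singular values:", np.round(s, 3))
# Weight of each recovered vector inside the true ground space (should be close to 1).
print("ground-space weight:", np.round(np.linalg.norm(recovered[:2, :], axis=0), 3))
```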
Explicit Block Encoding of Difference-of-Gaussian Operators on a Periodic Grid
The Difference-of-Gaussian (DoG) is a widely used operator across applications, including image processing (feature and edge detection), quantum machine learning, and finite-difference methods (approximations of the Laplacian-of-Gaussian). In this paper, we construct an explicit quantum block encoding of the DoG operator on a periodic grid, exploiting its natural probabilistic structure. The central observation is that the DoG admits a natural decomposition into two normalized Gaussian distributions, each preparable by explicit and efficient circuits, with the negation encoded using a single Pauli-$Z$ gate on a branch-indicator qubit. This enables the operator's block encoding to be directly mapped to the Linear Combination of Unitaries framework without requiring signed amplitude loading, quantum random-access memory, or any other black-box oracles. The proposed method achieves a constant subnormalization factor $\lambda = 2$ independent of the grid size $N$, the spatial dimension $D$, and the stencil width. Additionally, we show that the DoG operator is diagonalized by the discrete Fourier basis, which allows us to derive an exact closed-form expression for the block-encoding success probability in terms of the input signal's power spectrum, weighted by the operator's transfer function. Finally, we prove that the expression reduces to $O(h^4)$ scaling with respect to grid spacing $h$ as the periodic grid becomes finer. This implementation provides an explicit construction method for a tunable, wide-stencil bandpass filter whose frequency response is controlled by two Gaussian scale parameters.
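As a schematic illustration of the decomposition and the subnormalization quoted above (written here in continuum notation with one common Gaussian normalization convention, not restated from the paper), the DoG is a difference of two normalized Gaussians, so a two-term linear combination of block-encoded Gaussian terms $B_{G_\sigma}$ with coefficients $+1$ and $-1$ suffices, and the subnormalization is the coefficient 1-norm.

```latex
% Difference of two normalized Gaussians (continuum notation for clarity):
\[
  \mathrm{DoG}_{\sigma_1,\sigma_2}(x) \;=\; G_{\sigma_1}(x) - G_{\sigma_2}(x),
  \qquad
  G_{\sigma}(x) \;=\; \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-x^{2}/(2\sigma^{2})} .
\]
% Two-term LCU form: the sign becomes the coefficient, so the subnormalization is the
% coefficient 1-norm, independent of grid size, spatial dimension, and stencil width:
\[
  \mathrm{DoG} \;=\; 2\left( \tfrac{1}{2}\, B_{G_{\sigma_1}} \;-\; \tfrac{1}{2}\, B_{G_{\sigma_2}} \right),
  \qquad
  \lambda \;=\; |{+1}| + |{-1}| \;=\; 2 .
\]
```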
QuanBench+: A Unified Multi-Framework Benchmark for LLM-Based Quantum Code Generation
Large Language Models (LLMs) are increasingly used for code generation, yet quantum code generation is still evaluated mostly within single frameworks, making it difficult to separate quantum reasoning from framework familiarity. We introduce QuanBench+, a unified benchmark spanning Qiskit, PennyLane, and Cirq, with 42 aligned tasks covering quantum algorithms, gate decomposition, and state preparation. We evaluate models with executable functional tests, report Pass@1 and Pass@5, and use KL-divergence-based acceptance for probabilistic outputs. We additionally study Pass@1 after feedback-based repair, where a model may revise code after a runtime error or wrong answer. Across frameworks, the strongest one-shot scores reach 59.5% in Qiskit, 54.8% in Cirq, and 42.9% in PennyLane; with feedback-based repair, the best scores rise to 83.3%, 76.2%, and 66.7%, respectively. These results show clear progress, but also that reliable multi-framework quantum code generation remains unsolved and still depends strongly on framework-specific knowledge.
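A minimal sketch of the kind of KL-divergence-based acceptance test mentioned above, assuming the candidate circuit's sampled histogram is compared against a reference distribution; the divergence direction and the 0.05 tolerance are illustrative assumptions, not the benchmark's actual rule.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two distributions given as aligned probability vectors."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def accept_probabilistic_output(counts, reference_probs, shots, threshold=0.05):
    """Accept a probabilistic circuit output if KL(reference || measured) is small.

    The divergence direction and the threshold are illustrative choices only.
    """
    measured_probs = np.asarray(counts, dtype=float) / shots
    return kl_divergence(reference_probs, measured_probs) < threshold

# Example: sampled histogram of a Bell state over the outcomes 00, 01, 10, 11.
counts = [498, 3, 2, 497]
reference = [0.5, 0.0, 0.0, 0.5]
print(accept_probabilistic_output(counts, reference, shots=1000))   # True
```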
Covariant quantum error correction in a three-layer quantum brain model: computational analysis of layer-specific coherence dynamics
Quantum brain proposals require coherence on behaviorally relevant timescales, yet the gap between spin coherence times and neural decision windows has remained a quantitative obstacle. We evaluate approximate covariant quantum error correction (CQEC) -- a purification protocol constrained by the Eastin-Knill theorem -- across two radical-pair proteins parameterized by \textit{ab initio} spin Hamiltonians: monoamine oxidase~A (MAO-A) and cryptochrome (CRY, PDB~4I6G). Both share a three-layer architecture (${}^{31}$P nuclear spin memory, electron spin interface, classical electrochemistry) and identical hyperfine coupling ($A = 200$~MHz), but differ 16-fold in nuclear $T_2$: 3.2~ms (MAO-A) versus 52~ms (CRY). We test whether CQEC preserves coherence over the 200~ms Schultze-Kraft veto window by mapping each protein's $T_2$ gap onto a simulation decoherence rate ($\gamma_\mathrm{veto} = T_2^{\mathrm{gap}}/(2T_\mathrm{sim})$): 3.08 for MAO-A, 0.19 for CRY. At $\gamma_\mathrm{veto} = 0.19$, CQEC maintains tunneling coherence of 0.83 (95\% CI [0.76, 0.79]; versus 0.12 without correction, $\times$6.9 improvement). At $\gamma_\mathrm{veto} = 3.08$, coherence collapses to 0.012 even with CQEC. A $T_2$ sensitivity analysis confirms robustness: at $T_2 = 26$~ms (half the CRY estimate), CQEC-protected coherence remains 0.69. A classical Markov baseline produces only monotonic relaxation, confirming that CQEC-maintained oscillatory dynamics are genuinely quantum. However, no single protein optimizes both layers: CRY's shorter $T_2^e$ (0.53~ns versus 1.1~ns) worsens Layer~2 fidelity. This layer-protein tradeoff, together with unresolved challenges in state preparation and entanglement distribution, defines the next targets for quantum brain research.
Decoding coherent errors in toric codes on honeycomb and square lattices: duality to Majorana monitored dynamics and symmetry classes
Topological stabilizer codes, such as the toric and surface codes, are leading candidates for fault-tolerant quantum computation. While their decodability under stochastic noise has been extensively studied, the effects of coherent errors, which involve quantum interference, remain less explored. In this work, we study the decodability of toric codes on honeycomb and square lattices subject to $X$- and $Z$-type coherent errors generated by the $X$- and $Z$-rotations on each qubit. We establish a duality between these decoding problems and 1+1D monitored dynamics of non-interacting Majorana fermions. This duality shows that the Altland-Zirnbauer symmetry class of the dual Majorana dynamics governs the universal structure of the decodability phase diagram. We show that the honeycomb-lattice toric code (hTC) with $X$-type error is dual to class-DIII dynamics, while the hTC with $Z$-type error and the square-lattice toric code (sTC) with both error types are dual to class-D dynamics. The key distinction arises from time-reversal symmetry. In class DIII, the generic transition out of the decodable phase is dual to a measurement-induced transition between dynamical phases with area-law and logarithmic entanglement scaling. In contrast, in class D, the generic decodability transition corresponds to a transition between two topologically distinct area-law phases. To explore these transitions in microscopic models, we consider hTC and sTC with $X$-type errors as representatives and introduce a minimal two-parameter coherent error model with spatially varying rotation angles. Using analytical and numerical methods, we map out the decodability phase diagrams and characterize the universal behavior of the transitions. We find that the decodability of sTC is more vulnerable to spatially varying coherent errors than uniform ones.
An Algorithm for Fast Assembling Large-Scale Defect-Free Atom Arrays
It is widely believed that tens of thousands of physical qubits are needed to build a practically useful quantum computer. Atom arrays formed by optical tweezers are among the most promising platforms for achieving this goal, owing to the excellent scalability and mobility of atomic qubits. However, assembling a defect-free atom array with ~ 10^4 qubits remains algorithmically challenging, alongside other hardware limitations. This is due to the computationally hard path-planning problems and the time-consuming generation of sufficiently smooth trajectories for optical tweezer potentials by spatial light modulators (SLM). Here, we present a unified framework comprising two innovative components to fully address these algorithmic challenges: (1) a path-planning module that employs a supervised learning approach using a graph neural network combined with a modified auction decoder, and (2) a potential-generation module called the phase and profile-aware Weighted Gerchberg-Saxton algorithm. The inference time of the first module is a nearly size-independent constant overhead of ~ 5 ms, and the second module generates a potential frame in about 0.5 ms, a timescale shorter than the refresh time of current commercial SLMs. Altogether, our algorithm enables the assembly of an atom array with 10^4 qubits on a timescale much shorter than the typical vacuum lifetime of the trapped atoms.
Probing Electrostatic Disorder via g-Tensor Geometry
Low-frequency charge noise induced by fluctuating electrostatic disorder is a major limitation for semiconductor hole spin qubits. Here, we analyze the quasistatic response of a hole spin qubit to individual two-level fluctuators (TLFs). We show that, due to the anisotropy of the g-tensor, the qubit response depends on the geometry of the fluctuator-induced dipolar perturbation. We then propose a readout protocol that isolates selected g-tensor components through an accumulated Berry phase and estimate, within our readout model, an order-unity signal-to-noise ratio with a total protocol time in the tens of microseconds. Finally, using microscopic simulations, we compute the quantum Fisher information (QFI) to identify magnetic field directions and confinement regimes in which the qubit is most sensitive to disorder-induced variations of selected g-tensor components.
Learning Encodings by Maximizing State Distinguishability: Variational Quantum Error Correction
Quantum error correction is crucial for protecting quantum information against decoherence. Traditional codes like the surface code require substantial overhead, making them impractical for near-term, early fault-tolerant devices. We propose a novel objective function for tailoring error correction codes to specific noise structures by maximizing the distinguishability between quantum states after a noise channel, ensuring efficient recovery operations. We formalize this concept with the distinguishability loss function, serving as a machine learning objective to discover resource-efficient encoding circuits optimized for given noise characteristics. We implement this methodology using variational techniques, termed variational quantum error correction (VarQEC). Our approach yields codes with desirable theoretical and practical properties and outperforms standard codes in various scenarios. We also provide proof-of-concept demonstrations on IBM and IQM hardware devices, highlighting the practical relevance of our procedure.
Randomized hypergraph states and their entanglement properties
We study the entanglement properties of randomized mixed hypergraph states, extending the concept of randomized mixed graph states to encompass hypergraph-based quantum states. In our model, imperfect generalized multi-qubit gates are applied probabilistically, simulating experimentally realistic noisy gate operations where gate fidelity decreases with increasing hyperedge order. We analyze bipartite and genuine multipartite entanglement of these mixed multi-qubit states. Numerical results for various hypergraph configurations with up to four qubits reveal rich, sometimes nonmonotonic entanglement behavior stemming from the interplay between hyperedge structure and gate imperfections. We derive analytical expressions for entanglement witnesses based on randomization overlap for new hypergraph families. Our findings contribute to understanding entanglement resilience under gate imperfections, providing insight into the experimental implementation of hypergraph states in noisy quantum devices.
Non-Markovian thermal reservoirs for autonomous entanglement distribution
We describe a novel scheme for the generation of stationary entanglement between two separated qubits that are driven by a purely thermal photon source. While in this scenario the qubits remain in a separable state at all times when the source is broadband, i.e. Markovian, the qubits relax into an entangled steady state once the bandwidth of the thermal source is sufficiently reduced. We explain this phenomenon by the appearance of a quasiadiabatic dark state and identify the most relevant nonadiabatic corrections that eventually lead to a breakdown of the entangled state, once the temperature is too high. This effect demonstrates how the non-Markovianity of an otherwise incoherent reservoir can be harnessed for quantum communication applications in optical, microwave, and phononic networks. As two specific examples, we discuss the use of filtered room-temperature noise as a passive resource for entangling distant superconducting qubits in a cryogenic quantum link or solid-state spin qubits in a phononic quantum channel.
Gate Freezing Method for Gradient-Free Variational Quantum Algorithms in Circuit Optimization
Parameterized quantum circuits (PQCs) are pivotal components of variational quantum algorithms (VQAs), which represent a promising pathway to quantum advantage in noisy intermediate-scale quantum (NISQ) devices. PQCs enable flexible encoding of quantum information through tunable quantum gates and have been successfully applied across domains such as quantum chemistry, combinatorial optimization, and quantum machine learning. Despite their potential, PQC performance on NISQ hardware is hindered by noise, decoherence, and the presence of barren plateaus, which can impede gradient-based optimization. To address these limitations, we propose novel methods for improving the gradient-free optimizers Rotosolve, Fraxis, and FQS by incorporating information from previous parameter iterations. Our approach conserves computational resources by reallocating optimization efforts toward poorly optimized gates, leading to improved convergence. The experimental results demonstrate that our techniques consistently improve the performance of various optimizers, contributing to more robust and efficient PQC optimization.
Spectral Gaps with Quantum Counting Queries and Oblivious State Preparation
Approximating the $k$-th spectral gap $\Delta_k=|\lambda_k-\lambda_{k+1}|$ and the corresponding midpoint $\mu_k=\frac{\lambda_k+\lambda_{k+1}}{2}$ of an $N\times N$ Hermitian matrix with eigenvalues $\lambda_1\geq\lambda_2\geq\ldots\geq\lambda_N$, is an important special case of the eigenproblem with numerous applications in science and engineering. In this work, we present a quantum algorithm which approximates these values up to additive error $\epsilon\Delta_k$ using a logarithmic number of qubits. Notably, in the QRAM model, its total complexity (queries and gates) is bounded by $O\left( \frac{N^2}{\epsilon^{2}\Delta_k^2}\mathrm{polylog}\left( N,\frac{1}{\Delta_k},\frac{1}{\epsilon},\frac{1}{\delta}\right)\right)$, where $\epsilon,\delta\in(0,1)$ are the accuracy and the failure probability, respectively. For large gaps $\Delta_k$, this provides a speed-up against the best-known complexities of classical algorithms, namely, $O \left( N^{\omega}\mathrm{polylog} \left( N,\frac{1}{\Delta_k},\frac{1}{\epsilon}\right)\right)$, where $\omega\lesssim 2.371$ is the matrix multiplication exponent. A key technical step in the analysis is the preparation of a suitable random initial state, which ultimately allows us to efficiently count the number of eigenvalues that are smaller than a threshold, while maintaining a quadratic complexity in $N$. In the black-box access model, we also report an $\Omega(N^2)$ query lower bound for deciding the existence of a spectral gap in a binary (albeit non-symmetric) matrix.
Beyond Stellar Rank: Control Parameters for Scalable Optical Non-Gaussian State Generation
Advanced quantum technologies rely on non-Gaussian states of light, essential for universal quantum computation, fault-tolerant error correction, and quantum sensing. Their practical realization, however, faces hurdles: simulating large multi-mode generators is computationally demanding, and benchmarks such as the \emph{stellar rank} do not capture how effectively photon detections yield useful non-Gaussianity. We address these challenges by introducing the \emph{non-Gaussian control parameters} $(s_0,\delta_0)$, a continuous and operational measure that goes beyond stellar rank. Leveraging these parameters, we develop a universal optimization method that reduces photon-number requirements and greatly enhances success probabilities while preserving state quality. Applied to the Gottesman--Kitaev--Preskill (GKP) state generation, for example, our method cuts the required photon detections by a factor of three and raises the preparation probability by nearly $10^8$. Demonstrations across cat states, cubic phase states, GKP states, and even random states confirm broad gains in experimental feasibility. Our results provide a unifying principle for resource-efficient non-Gaussian state generation, charting a practical route toward scalable optical quantum technologies and fault-tolerant quantum computation.
Quantum speed limits based on Jensen-Shannon and Jeffreys divergences for general physical processes
We discuss quantum speed limits (QSLs) for finite-dimensional quantum systems undergoing general physical processes. These QSLs are obtained using two families of entropic measures, namely the square root of the Jensen-Shannon divergence, which in turn defines a faithful distance on quantum states, and the square root of the quantum Jeffreys divergence. The results apply to both closed and open quantum systems, and are evaluated in terms of the Schatten speed of the evolved state, as well as cost functions that depend on the smallest and largest eigenvalues of both the initial and instantaneous states of the quantum system. To illustrate our findings, we focus on the unitary and nonunitary dynamics of mixed single-qubit states. In the first case, we obtain speed limits $\textit{\`{a} la}$ Mandelstam-Tamm that are inversely proportional to the variance of the Hamiltonian driving the evolution. In the second case, we take the nonunitary dynamics to be described by standard noise channels: the depolarizing channel, the phase damping channel, and the generalized amplitude damping channel. We provide analytical results for the two entropic measures, present numerical simulations to support our results on the speed limits, comment on the tightness of the bounds, and provide a comparison with previous QSLs. Our results may find applications in the study of quantum thermodynamics, entropic uncertainty relations, and the complexity of many-body systems.
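For reference, one common convention for the two divergences named above is given below (prefactor conventions vary across the literature, and the paper's normalization is not restated here); $S$ denotes the von Neumann entropy and $D(\rho\|\sigma)=\mathrm{Tr}[\rho(\ln\rho-\ln\sigma)]$ the quantum relative entropy.

```latex
\[
  \mathrm{QJS}(\rho,\sigma) \;=\; S\!\left(\frac{\rho+\sigma}{2}\right)
    \;-\; \frac{1}{2}\,S(\rho) \;-\; \frac{1}{2}\,S(\sigma),
  \qquad
  J(\rho,\sigma) \;=\; \frac{1}{2}\Big[\, D(\rho\|\sigma) + D(\sigma\|\rho) \,\Big],
\]
% with the corresponding distance-like quantities taken as the square roots of these divergences.
```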
Quantum limit cycles and synchronization from a measurement perspective
Limit-cycle oscillators are the basic building blocks for synchronization; yet, the notion of a quantum limit cycle has remained unclear. Here, we study quantum limit cycles and synchronization in the presence of continuous heterodyne measurement. The resulting quantum trajectories, i.e., time evolutions of the quantum state conditioned on the measurement outcome, make the quantum limit cycles apparent. We focus on the paradigmatic model of the quantum van der Pol oscillator and on two-level systems. Our work provides insights into limit cycles in quantum systems, emphasizing their similarity to classical limit cycles subject to noise. Additionally, we connect theoretical measures of quantum synchronization to quantities experimentally accessible via heterodyne detection.
Homological origin of transversal implementability of logical diagonal gates in quantum CSS codes
Transversal Pauli $Z$ rotations provide a natural route to fault-tolerant logical diagonal gates in quantum CSS codes, but their capability is inherently constrained. We develop a homological framework that organizes transversal diagonal gates in terms of their logical action and physical implementation, revealing two layers of structure that govern their behavior. At a fixed level, we establish that their logical action admits a classification in terms of homological data of the underlying chain complex, extending the standard description of logical operators. We then formulate the refinement to finer angles as a lifting problem and derive two Bockstein-type obstruction maps, whose vanishing is a necessary and sufficient condition for the existence of a transversal logical diagonal gate at the next level. Within this framework, known algebraic conditions such as divisibility and triorthogonality are reinterpreted as necessary conditions for the existence of transversal logical diagonal gates with uniform rotation angles. Our results identify homological obstructions governing transversal implementability and provide a conceptual foundation for a formal theory of transversal structures in quantum error correction.
Exponential Scaling Barriers for Variational Quantum Eigensolvers
The Variational Quantum Eigensolver (VQE) is widely regarded as a promising algorithm for calculating ground states of quantum systems that are intractable for classical computers. This promise is typically motivated by the hope of mitigating the exponential growth of Hilbert space with system size. Here we scrutinize how the computational cost of adaptive VQE scales with the size of the target system. We demonstrate that the R\'enyi entropy derived from classical simulations predicts the required number of adaptive iterations of VQE with high accuracy ($R^2 \approx 0.99$). We validate this on a benchmarking set of more than 20 different molecules with active spaces ranging from four to ten orbitals. For these molecules, we find an exponential scaling of the number of adaptive iterations, and in turn, of the circuit depth with the system size. We therefore conclude that it is unlikely that VQE in its current form is able to simulate large molecular systems with high fidelity without exponential resource requirements.
Tensor-Parallel Emulation of Quantum Circuits with Block-Cyclic Distributed Matrix Product States
Tensor networks establish an adaptable framework for the emulation of quantum circuits. Partitioning exponentially large registers and gates into smaller tensors unlocks fast transformations through tensor algebra and grants fine control over memory, runtime, and accuracy. Because of their inherently lower spatial footprint, however, distributed-memory tensor network methods remain underdeveloped. While certain parallel techniques exist, they are usually limited to direct contraction and sampling problems, and a more general approach is needed for tensor representations such as matrix product states (MPS), which efficiently approximate full quantum state evolution. In this study, we expand MPS site tensors beyond local memory by introducing a tensor-parallel distribution scheme, in which individual dense tensors are evenly scattered across a subset of indices. This is further facilitated by leveraging pivoted QR factorisation instead of the slower singular value decomposition (SVD). We demonstrate the capabilities of our approach by approximately emulating Google's classically difficult random circuit sampling (RCS) benchmark. A maximum bond dimension of 16,384 is reached, surpassing the accuracy of state-of-the-art methods by three orders of magnitude on 32 nodes of ARCHER2. We also show how this helps advance experiments involving more practical quantum phase estimation circuits. Our approach has the potential to enhance numerous algorithms based on dense tensor networks, offering a scalable and naturally load-balanced distribution formula. It is also compatible with other types of parallelism, unlocking new opportunities to push the quantum-classical computational phase boundary.
Ten-Second Electron-Spin Coherence in Isotopically Engineered Diamond
Solid-state spin defects are a promising platform for quantum networks. A key requirement is to combine long ground-state spin-coherence times with a coherent optical transition for spin-photon entanglement. Here, we investigate the spin and optical coherence of single nitrogen-vacancy (NV) centres in (111)-grown isotopically engineered diamond. Our diamond-growth process yields a precisely controlled $^{13}\mathrm{C}$ concentration and low-ppb nitrogen concentrations. Combined with the mitigation of 50 Hz noise using a real-time feedforward scheme and tailored decoupling sequences, this enables record defect-electron-spin coherence times of $T_2 = 6.8(1)$ ms for a Hahn echo and of $T_2^{DD} = 11.2(8)$ s under dynamical decoupling. In addition, we observe coherent optical transitions with a near-lifetime-limited homogeneous linewidth of 16.9(4) MHz and characterize the spectral diffusion dynamics. These results provide new avenues to investigate the incorporation of impurities in diamond and new opportunities for improved spin-qubit control for quantum networks and other quantum technologies.
When is randomization advantageous in quantum simulation?
We study the regimes in which Hamiltonian simulation benefits from randomization. We introduce a sparse-QSVT construction based on composite stochastic decompositions, where dominant terms are treated deterministically and smaller contributions are sampled stochastically. Crucially, we analyze how stochastic and approximation errors propagate through block-encoding and QSVT procedures. To benchmark this approach, we construct ensembles of random Hamiltonians with controlled coefficient dispersion, locality, and number of terms, designed to favor randomization, and therefore providing an upper bound on its practical advantage. For Hamiltonians with many terms and highly inhomogeneous coefficient distributions, randomized methods reduce gate counts by up to an order of magnitude. However, this advantage is confined to moderate-precision regimes: as the target error decreases, deterministic methods become more efficient, with a crossover near $\varepsilon \sim 10^{-3}$. Although this regime partially overlaps with quantum chemistry Hamiltonians, realistic systems exhibit additional structure, such as commutation patterns, not captured by our model, which are expected to further favor deterministic approaches.
Quantum Simulation of Collective Neutrino Oscillations using Dicke States
In dense neutrino gases, which exist for instance in supernovae, the flavour states of different neutrinos may become entangled with one another. The theoretical description of such systems may therefore call for simulations on a quantum computer. Existing quantum simulations of simple toy systems are not optimal in the sense that they do not fully exploit the symmetries of the system. Here, we propose a new class of qubit-efficient algorithms based on Dicke states and the $su(2)$ spin algebra. We demonstrate the excellent performance of these algorithms both on classical and on quantum hardware.
Optimal Quantum State Testing Even with Limited Entanglement
In this work, we consider the fundamental task of quantum state certification: given copies of an unknown quantum state $\rho$, test whether it matches some target state $\sigma$ or is $\epsilon$-far from it. For certifying $d$-dimensional states, $\Theta(d/\epsilon^2)$ copies of $\rho$ are known to be necessary and sufficient. However, the algorithm achieving this complexity makes fully entangled measurements over all $O(d/\epsilon^2)$ copies of $\rho$. Often, one is interested in certifying states to a high precision; this makes such joint measurements intractable even for low-dimensional states. Thus, we study whether one can obtain optimal rates for quantum state certification and related testing problems while only performing measurements on $t$ copies at once, for some $1 < t \ll d/\epsilon^2$. While it is well-understood how to use intermediate entanglement to achieve optimal quantum state learning, the only protocol known to achieve optimal testing is the one using fully entangled measurements. Our main result is a smooth copy complexity upper bound for state certification as a function of $t$, which achieves a near-optimal rate at $t = d^2$. In the high-precision regime, i.e., for $\epsilon < \frac{1}{\sqrt{d}}$, this is a strict improvement over the entanglement used by the aforementioned optimal protocol. We also extend our techniques to develop new algorithms for the related tasks of mixedness testing and purity estimation, and show tradeoffs achieving the optimal rates for these problems at $t = d^2$ as well. Our algorithms are based on novel reductions from testing to learning and leverage recent advances in quantum state tomography in a non-black-box fashion. We complement our upper bounds with smooth lower bounds that imply joint measurements on $t \geq d^{\Omega(1)}$ copies are necessary to achieve optimal rates for certification in the high-precision regime.
Exponential quantum advantage in processing massive classical data
Broadly applicable quantum advantage, particularly in classical data processing and machine learning, has been a fundamental open problem. In this work, we prove that a small quantum computer of polylogarithmic size can perform large-scale classification and dimension reduction on massive classical data by processing samples on the fly, whereas any classical machine achieving the same prediction performance requires exponentially larger size. Furthermore, classical machines that are exponentially larger yet below the required size need superpolynomially more samples and time. We validate these quantum advantages in real-world applications, including single-cell RNA sequencing and movie review sentiment analysis, demonstrating four to six orders of magnitude reduction in size with fewer than 60 logical qubits. These quantum advantages are enabled by quantum oracle sketching, an algorithm for accessing the classical world in quantum superposition using only random classical data samples. Combined with classical shadows, our algorithm circumvents the data loading and readout bottleneck to construct succinct classical models from massive classical data, a task provably impossible for any classical machine that is not exponentially larger than the quantum machine. These quantum advantages persist even when classical machines are granted unlimited time or if BPP=BQP, and rely only on the correctness of quantum mechanics. Together, our results establish machine learning on classical data as a broad and natural domain of quantum advantage and a fundamental test of quantum mechanics at the complexity frontier.
Control-centric quantum noise spectroscopy of time-ordered polyspectra
Precise environmental-noise characterisation in open quantum systems is a key step toward high-fidelity quantum control and targeted decoherence suppression in computing and sensing applications. Non-parametric quantum noise spectroscopy (QNS) provides a general-purpose, model-agnostic framework for estimating the spectral properties of an environment. The ability to perform such protocols under realistic constraints is key to their practical applicability. Notably, it is important to account for control constraints and understand how they limit the ability to learn about noise correlations as experiment-agnostic objects. We show how adopting a control-centric point of view allows one to recast the noise spectroscopy problem in such a way that (i) the central objects are now the time-ordered polyspectra, (ii) control filter functions are no longer encumbered by time-ordering. In particular, we show that this approach enables the seamless generalisation of frequency-comb QNS protocols to arbitrary control scenarios without introducing additional control symmetries that effectively remove time-ordering from filter functions, improving estimation in typically pathological scenarios. We demonstrate the targeted reconstruction of the time-ordered polyspectra across classical Gaussian and quantum non-Gaussian environments via simulations.
Trotterization with Many-body Coulomb Interactions: Convergence for General Initial Conditions and State-Dependent Improvements
Efficiently simulating many-body quantum systems with Coulomb interactions is a fundamental question in quantum physics, quantum chemistry, and quantum computing, yet it presents unique challenges: the Hamiltonian is an unbounded operator (both kinetic and potential parts are unbounded); its Hilbert space dimension grows exponentially with particle number; and the Coulomb potential is singular, long-ranged, non-smooth, and unbounded, violating the regularity assumptions of many prior state-of-the-art many-body simulation analyses. In this work, we establish rigorous error bounds for Trotter formulas applied to many-body quantum systems with Coulomb interactions. Our first main result shows that for general initial conditions in the domain of the Hamiltonian, second-order Trotter achieves a sharp $1/4$ convergence rate with explicit polynomial dependence of the error prefactor on the particle number. The polynomial dependence on system size suggests that the algorithm remains quantumly efficient, even without introducing any regularization of the Coulomb singularity. Notably, although the result under general conditions constitutes a worst-case bound, this rate has been observed in prior work for the hydrogen ground state, demonstrating its relevance to physically and practically important initial conditions. Our second main result identifies a set of physically meaningful conditions on the initial state under which the convergence rate improves to first and second order. For hydrogenic systems, these conditions are connected to excited states with sufficiently high angular momentum. Our theoretical findings are consistent with prior numerical observations.
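As general background for the product formulas analyzed above, the second-order Trotter (Strang) formula with $r$ steps is shown below; the splitting into a kinetic part $A$ and a Coulomb-potential part $B$ is the natural reading of the abstract and is stated here only as an assumption.

```latex
\[
  e^{-iHt} \;\approx\;
  \left( e^{-iA\,t/(2r)}\; e^{-iB\,t/r}\; e^{-iA\,t/(2r)} \right)^{r},
  \qquad H = A + B,
\]
% The convergence rates discussed in the abstract concern how fast the error of this
% approximation vanishes (in the number of steps, or the step size) for initial states
% in the domain of H, despite the unboundedness and singularity of the Coulomb term.
```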
Complexity phase transition for continuous-variable cluster state
Continuous-variable (CV) cluster states offer a promising platform for large-scale measurement-based quantum computations (MBQC). However, finite squeezing inevitably introduces Gaussian noise during MBQC. While fault-tolerant MBQC schemes exist in principle, they require the scalable incorporation of non-Gaussian resources, such as GKP states, which remain experimentally challenging. Consequently, a central question at this stage is how finite squeezing fundamentally constrains the intrinsic computational power of CV cluster states themselves. In this work, we address this question by analyzing the classical complexity of measurement-based linear optics (MBLO) implemented with such states, motivated by its near-term feasibility and recent experimental progress. We develop an explicit MBLO framework and examine how the squeezing level governs the complexity of the classical simulation of the resulting output states. Specifically, we identify squeezing-level thresholds that delineate classically tractable and intractable regimes, thereby revealing a squeezing-driven complexity phase transition. These findings advance our understanding of the squeezing resources necessary for meaningful quantum computation in current experimental regimes. Furthermore, they underscore the critical need to either scale the squeezing level or integrate error-correction schemes to achieve reliable, large-scale quantum computation with CV cluster states.
Optimal noisy quantum phase estimation with finite-dimensional states
Phase estimation in quantum interferometry is a major scenario in which the quantum advantage is significantly revealed. Recently, the optimal finite-dimensional probe states (OFPSs) for phase estimation in two-mode quantum interferometry were derived in the absence of noise [J.-F. Qin et al., Phys. Rev. A 112, 052428 (2025)]. However, noise is inevitable in practice, and the previously obtained OFPSs may cease to be optimal. Hence, the forms of the true OFPSs in the presence of various noise processes remain open questions. Here, the noise of particle loss is studied, and the true OFPSs under this noise are obtained with the numerical algorithm of constrained optimization by linear approximation (COBYLA). Furthermore, a two-step measurement strategy is proposed to realize the ultimate precision limit in practice. The validity of this strategy is confirmed by numerical simulation of practical experiments.
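As a hedged, self-contained illustration of the optimization routine named above (not the paper's actual figure of merit or noise model), the sketch below uses SciPy's COBYLA to optimize a finite-dimensional pure probe for a single-mode phase shift, where the quantum Fisher information of a pure state equals four times the variance of the generator; the dimension cap and the number-operator generator are illustrative assumptions, and normalization is enforced inside the objective.

```python
import numpy as np
from scipy.optimize import minimize

dim = 5                      # finite probe dimension (illustrative)
n = np.arange(dim)           # photon-number generator of the phase shift (illustrative)

def neg_qfi(c):
    """Negative QFI 4*Var(n) of the pure state sum_k c_k |k>, normalized in place."""
    c = np.asarray(c, dtype=float)
    c = c / np.linalg.norm(c)
    p = c ** 2
    var = np.sum(p * n ** 2) - np.sum(p * n) ** 2
    return -4.0 * var

x0 = np.ones(dim) / np.sqrt(dim)
res = minimize(neg_qfi, x0, method="COBYLA", options={"maxiter": 2000, "rhobeg": 0.3})
c_opt = res.x / np.linalg.norm(res.x)
print("optimized weights |c_k|^2:", np.round(c_opt ** 2, 3))
print("QFI:", round(-neg_qfi(res.x), 3))   # global optimum is (dim-1)^2 for a NOON-like probe
```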
Analysis of State Teleportation using Noisy Quantum Gates
Noise is a major challenge in quantum computing, affecting the reliability of quantum protocols. In this work, we analytically study the impact of various noise processes, such as depolarization, bit flip, and phase flip, on the quantum state teleportation protocol. Each noise process is modeled as a quantum channel and is applied individually to all qubits after the corresponding unitary operations to simulate realistic conditions. We evaluate the fidelity between the ideal and noisy teleported states to quantify the effect of noise. Our analysis shows that the fidelity decreases polynomially, in general, as the noise strength increases for all noise types, highlighting the sensitivity of state teleportation to different noise mechanisms. However, in the low noise regime, the fidelity decreases only linearly, indicating the robustness of the teleportation protocol. These results provide insight into error characterization and can inform strategies for noise mitigation in practical quantum computing applications.
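A minimal numerical illustration of the low-noise behavior described above, assuming a single-qubit depolarizing channel applied to the output of an otherwise ideal teleportation (so the teleported state equals the input): the fidelity with a pure input then falls off linearly as 1 - p/2.

```python
import numpy as np

def depolarize(rho, p):
    """Single-qubit depolarizing channel: rho -> (1 - p) rho + p I/2."""
    return (1.0 - p) * rho + p * np.eye(2) / 2.0

# Pure input state |psi> = cos(t)|0> + sin(t)|1>; ideal teleportation reproduces it exactly.
t = 0.7
psi = np.array([np.cos(t), np.sin(t)])
rho_in = np.outer(psi, psi.conj())

for p in [0.0, 0.05, 0.1, 0.2]:
    rho_out = depolarize(rho_in, p)
    fidelity = np.real(psi.conj() @ rho_out @ psi)   # <psi|rho_out|psi> for a pure input
    print(f"p = {p:.2f}  fidelity = {fidelity:.3f}  (expected 1 - p/2 = {1 - p/2:.3f})")
```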
Hardware-Aware Quantum Support Vector Machines
Deploying quantum machine learning algorithms on near-term quantum hardware requires circuits that respect device-specific gate sets, connectivity constraints, and noise characteristics. We present a hardware-aware Neural Architecture Search (NAS) approach for designing quantum feature maps that are natively executable on IBM quantum processors without transpilation overhead. Using genetic algorithms to evolve circuit architectures constrained to IBM Torino native gates (ECR, RZ, SX, X), we demonstrate that automated architecture search can discover quantum Support Vector Machine (QSVM) feature maps achieving competitive performance while guaranteeing hardware compatibility. Evaluated on the UCI Breast Cancer Wisconsin dataset, our hardware-aware NAS discovers a 12-gate circuit using exclusively IBM native gates (6 ECR, 3 SX, 3 RZ) that achieves 91.23 % accuracy on 10 qubits, matching unconstrained gate search while requiring zero transpilation. This represents a 27 percentage point improvement over hand-crafted quantum feature maps (64 % accuracy) and approaches the classical RBF SVM baseline (93 %). We show that removing architectural constraints (fixed RZ placement) within hardware-aware search yields 3.5 percentage point gains, and that 100 % native gate usage eliminates decomposition errors that plague universal gate compilations. Our work demonstrates that hardware-aware NAS makes quantum kernel methods practically deployable on current noisy intermediate-scale quantum (NISQ) devices, with circuit architectures ready for immediate execution without modification.
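A library-free sketch of the kind of search-space representation such a hardware-aware NAS might use, assuming circuits are encoded as genomes of gates drawn only from the ECR/RZ/SX/X native set; the genome length, mutation rule, and fitness stub are illustrative assumptions and not the paper's implementation.

```python
import math
import random

NATIVE_GATES = ["ECR", "RZ", "SX", "X"]   # IBM Torino native gate set

def random_gene(n_qubits):
    """One gate instruction restricted to the native gate set."""
    gate = random.choice(NATIVE_GATES)
    if gate == "ECR":                       # the only two-qubit gate in the native set
        q0, q1 = random.sample(range(n_qubits), 2)
        return (gate, (q0, q1), None)
    q = random.randrange(n_qubits)
    theta = random.uniform(0, 2 * math.pi) if gate == "RZ" else None
    return (gate, (q,), theta)

def random_genome(n_qubits, n_gates=12):
    return [random_gene(n_qubits) for _ in range(n_gates)]

def mutate(genome, n_qubits, rate=0.2):
    """Point mutation: resample a fraction of the genes."""
    return [random_gene(n_qubits) if random.random() < rate else g for g in genome]

def fitness(genome):
    """Placeholder: in a full pipeline this would build the feature map,
    train a QSVM, and return the validation accuracy."""
    return 0.0

population = [random_genome(n_qubits=10) for _ in range(20)]
print(population[0][:3])
```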
Fast and Coherent Transfer of Atomic Qubits in Optical Tweezers using Fiber Array Architecture
Programmable neutral-atom arrays offer a promising route toward scalable quantum computing, where coherent qubit transfer enables non-local connectivity and reduces resource overhead. However, transfer speed and motional heating remain key bottlenecks for fast and deep quantum circuits. Here, we employ a fiber array neutral-atom quantum computing architecture with site-resolved control of trap depths to realize smooth amplitude exchange between static and moving traps, thereby enabling fast and coherent qubit transfer with ultralow motional heating. With a 10 $\mu$s in situ transfer between static and moving traps, we obtain a per-cycle heating rate of 0.156(9) $\mu$K, sustain over 500 cycles with negligible atom loss, and achieve a quantum state fidelity of 0.99992(5) per cycle. For inter-site transfer between two separated static traps, the operation takes 120 $\mu$s with 0.783(17) $\mu$K heating per transfer, and remains negligible atom loss for up to 100 repeated cycles with a fidelity of 0.9998(1) per transfer. Furthermore, through experimental studies of parallel transfer, we establish a model that elucidates the relationship between array inhomogeneity and the transfer heating rate. This fast, low-heating coherent transfer capability provides a practical route for improving both speed and fidelity in atom-shuttling based quantum computing.
Hybrid Quantum--Classical k-Means Clustering via Quantum Feature Maps
Clustering is one of the most fundamental tasks in machine learning, and the k-means clustering algorithm is perhaps one of the most widely used clustering algorithms. However, it suffers from several limitations, such as sensitivity to centroid initialization, difficulty capturing non-linear structure, and poor performance in high-dimensional spaces. Recent work has proposed improved initialization strategies and quantum-assisted distance computation, but the similarity metric itself has largely remained classical. In this study, we propose a quantum-enhanced variant of k-means that replaces the Euclidean distance with a quantum kernel derived from the inner product between feature-mapped quantum states. On the Iris dataset, we employ multiple quantum feature maps, including entangled SU2 and ZZ circuits, to embed classical data into a higher-dimensional Hilbert space where cluster structures become more separable. We also evaluate the approach on a second dataset, the breast cancer dataset. Similarity between data points is computed as the inner product between the corresponding quantum states. Our results show that this approach achieves improved clustering stability and competitive accuracy compared to the classical algorithm, with the SU2 feature map yielding an accuracy of 88.6 % on the Iris dataset and 91.0 % on the breast cancer dataset, despite operating on NISQ-feasible shallow circuits. These findings suggest that quantum kernels provide a richer similarity landscape than traditional distance metrics, offering a promising path toward more robust unsupervised learning in the NISQ era.
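A compact structural sketch of the core idea, assuming a toy single-qubit angle-encoding feature map so the fidelity kernel can be evaluated exactly with NumPy; the actual study uses multi-qubit SU2/ZZ feature maps, and the Lloyd-style centroid update in input space below is a simplification of full kernel k-means.

```python
import numpy as np

def feature_state(x):
    """Toy single-qubit angle encoding |phi(x)> = cos(x/2)|0> + sin(x/2)|1> (illustrative map)."""
    return np.array([np.cos(x / 2.0), np.sin(x / 2.0)])

def fidelity_kernel(a, b):
    """k(a, b) = |<phi(a)|phi(b)>|^2 replaces the Euclidean distance as the similarity."""
    return abs(feature_state(a) @ feature_state(b)) ** 2

def assign(xs, centroids):
    """Assign each point to the centroid with the largest kernel similarity."""
    return [int(np.argmax([fidelity_kernel(x, c) for c in centroids])) for x in xs]

xs = np.array([0.1, 0.2, 0.15, 2.9, 3.0, 3.1])
centroids = np.array([0.15, 3.0])
for _ in range(5):                                  # Lloyd-style refinement (simplified)
    labels = assign(xs, centroids)
    centroids = np.array([xs[np.array(labels) == k].mean() for k in range(2)])
print(labels, np.round(centroids, 2))
```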
Non-variational supervised quantum kernel methods: a review
Quantum kernel methods (QKMs) have emerged as a prominent framework for supervised quantum machine learning. Unlike variational quantum algorithms, which rely on gradient-based optimisation and may suffer from issues such as barren plateaus, non-variational QKMs employ fixed quantum feature maps, with model selection performed classically via convex optimisation and cross-validation. This separation of quantum feature embedding from classical training ensures stable optimisation while leveraging quantum circuits to encode data in high-dimensional Hilbert spaces. In this review, we provide a thorough analysis of non-variational supervised QKMs, covering their foundations in classical kernel theory, constructions of fidelity and projected quantum kernels, and methods for their estimation in practice. We examine frameworks for assessing quantum advantage, including generalisation bounds and necessary conditions for separation from classical models, and analyse key challenges such as exponential concentration, dequantisation via tensor-network methods, and the spectral properties of kernel integral operators. We further discuss structured problem classes that may enable advantage, and synthesise insights from comparative and hardware studies. Overall, this review aims to clarify the regimes in which QKMs may offer genuine advantages, and to delineate the conceptual, methodological, and technical obstacles that must be overcome for practical quantum-enhanced learning.
A Review of Variational Quantum Algorithms: Insights into Fault-Tolerant Quantum Computing
Variational quantum algorithms (VQAs) have established themselves as a central computational paradigm in the Noisy Intermediate-Scale Quantum (NISQ) era. By coupling parameterized quantum circuits (PQCs) with classical optimization, they operate effectively under strict hardware limitations. However, as quantum architectures transition toward early fault-tolerant (EFT) and ultimate fault-tolerant (FT) regimes, the foundational principles and long-term viability of VQAs require systematic reassessment. This review offers an insightful analysis of VQAs and their progression toward the fault-tolerant regime. We deconstruct the core algorithmic framework by examining ansatz design and classical optimization strategies, including cost function formulation, gradient computation, and optimizer selection. Concurrently, we evaluate critical training bottlenecks, notably barren plateaus (BPs), alongside established mitigation strategies. The discussion then explores the EFT phase, detailing how the integration of quantum error mitigation and partial error correction can sustain algorithmic performance. Addressing the FT phase, we analyze the inherent challenges confronting current hybrid VQA models. Furthermore, we synthesize recent VQA applications across diverse domains, including many-body physics, quantum chemistry, machine learning, and mathematical optimization. Ultimately, this review outlines a theoretical roadmap for adapting quantum algorithms to future hardware generations, elucidating how variational principles can be systematically refined to maintain their relevance and efficiency within an error-corrected computational environment.
Informational Mpemba Effect for Fast State Purification in Non-Hermitian System
Quantum systems are inherently fragile to environmental fluctuations and decoherence, limiting their advantages in quantum information and quantum computation applications. State purification offers a route to recover the purity of a system under noisy conditions. Here, we demonstrate rapid purification of initially mixed states by harnessing collective reservoir engineering in driven non-Hermitian qubit systems, together with multipartite entanglement generation in larger systems. We show that the onset of efficient purification-assisted entanglement generation is dictated by the degeneracy of collective subradiant modes, rather than by exceptional points. Moreover, the system dynamics manifests an informational Mpemba effect, i.e., a more mixed initial state reaches its steady state with unit purity at a faster rate, resembling the conventional Mpemba effect, where a hotter system cools more rapidly. These results reveal a unique advantage of driven non-Hermitian quantum systems with engineered collective dissipation, enabling enhanced purification efficiency and offering new opportunities for quantum engineering.
Investigation of Automated Design of Quantum Circuits for Imaginary Time Evolution Methods Using Deep Reinforcement Learning
Efficient ground state search is fundamental to advancing combinatorial optimization problems and quantum chemistry. While the Variational Imaginary Time Evolution (VITE) method offers a useful alternative to the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimization Algorithm (QAOA), its implementation on Noisy Intermediate-Scale Quantum (NISQ) devices is severely limited by the gate counts and depth of manually designed ansatz circuits. Here, we present an automated framework for VITE circuit design using Double Deep Q-Networks (DDQN). Our approach treats circuit construction as a multi-objective optimization problem, simultaneously minimizing energy expectation values and optimizing circuit complexity. By introducing adaptive thresholds, we demonstrate significant hardware overhead reductions. In Max-Cut problems, our agent autonomously discovered circuits with approximately 37\% fewer gates and 43\% less depth than a standard hardware-efficient ansatz on average. For molecular hydrogen ($H_2$), the DDQN also achieved the Full-CI limit while maintaining a significantly shallower circuit. These results suggest that deep reinforcement learning can help discover non-intuitive, optimal circuit structures, providing a pathway toward efficient, hardware-aware quantum algorithm design.
Belief Propagation Convergence Prediction for Bivariate Bicycle Quantum Error Correction Codes
Decoding Bivariate Bicycle (BB) quantum error correction codes typically requires Belief Propagation (BP) followed by Ordered Statistics Decoding (OSD) post-processing when BP fails to converge. Whether BP will converge on a given syndrome is currently determined only after running BP to completion. We show that convergence can be predicted in advance by a single modulo operation: if the syndrome defect count is divisible by the code's column weight w, BP converges with high probability (100% at p <= 0.001, degrading to 87% at p = 0.01); otherwise, BP fails with probability >= 90%. The mechanism is structural: each physical data error activates exactly w stabilizers, so a defect count not divisible by w implies the presence of measurement errors outside BP's model space. Validated on five BB codes with column weights w = 2, 3, and 4, mod-w achieves AUC = 0.995 as a convergence classifier at p = 0.001 under phenomenological noise, dominating all other syndrome features (next best: AUC = 0.52). The false positive rate scales empirically as O(p^2.05) (R^2 = 0.98), confirming the analytical bound from Proposition 2. Among BP failures on mod-w = 0 syndromes, 82% contain weight-2 data error clusters, directly confirming the dominant failure mechanism. The prediction is invariant under BP scheduling strategy and decoder variant, including Relay-BP - the strongest known BP enhancement for quantum LDPC codes. These results apply directly to IBM's Gross code [[144, 12, 12]] and Two-Gross code [[288, 12, 18]], targeted for deployment in 2026-2028.
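The predictor described above reduces to a single modulo check on the syndrome defect count; a direct sketch follows, with the high-probability caveats from the abstract left to the surrounding decoder logic.

```python
def bp_likely_to_converge(syndrome, column_weight):
    """Predict BP convergence for a BB code from the syndrome defect count alone.

    syndrome: iterable of 0/1 stabilizer outcomes; column_weight: the code's column weight w.
    A defect count divisible by w is consistent with pure data errors (each data error flips
    exactly w checks), so BP is predicted to converge; otherwise measurement errors outside
    BP's model space are likely present and OSD post-processing can be scheduled early.
    """
    defects = sum(syndrome)
    return defects % column_weight == 0

# Example with w = 3: six defects -> predict convergence; seven -> predict failure.
print(bp_likely_to_converge([1, 1, 0, 1, 1, 0, 1, 1], 3))     # 6 defects -> True
print(bp_likely_to_converge([1, 1, 0, 1, 1, 0, 1, 1, 1], 3))  # 7 defects -> False
```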
Harnessing dark states: coherent control in coupled cavity-Rydberg-atom systems
The dark-state effect, caused by destructive interference, is not only an important fundamental research topic in atomic physics and quantum optics, but also has wide potential applications in quantum physics and quantum information science. Using the arrowhead-matrix method, here we study the dark-state effect in a coupled cavity-Rydberg-atom system, in which $N$ Rydberg atoms with dipole-dipole interactions are coupled to a single-mode cavity field. We obtain the number and form of the dark states in certain excitation-number subspaces for the two-, three-, and four-atom cases, as well as in the single-excitation subspace for the general $N$-atom case. We also suggest characterizing the dark states by inspecting the populations of some specific quantum states, which can be detected in experiments. Furthermore, we analyze the dark-state effect in a realistic case, where both the atomic dipole-dipole interaction strengths and the atom-cavity-field coupling strengths depend on the positions of the atoms. Our findings pave the way for studying dark-state physics and applications in the cavity-Rydberg-atom platform.
Divide et impera: hybrid multinomial classifiers from quantum binary models
We investigate how to combine a collection of quantum binary models into a multinomial classifier. We employ a hybrid approach, adopting strategies such as one-vs-one, one-vs-rest, and a binary decision tree. We benchmark each method, emphasizing its computational overhead and its impact on the quantum advantage. By comparing against a classical binary model (generalized using the same approach), we show that the decision tree represents a cost-effective solution, achieving accuracies similar to the other methods with an overhead that is at most logarithmic in the total number of classes.
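A minimal sketch of the one-vs-rest aggregation strategy mentioned above (the simplest of the three), assuming each quantum binary model exposes a real-valued confidence score; the binary models themselves are stubbed out here and are purely illustrative.

```python
class OneVsRestAggregator:
    """Combine per-class binary scorers into a multinomial classifier.

    binary_models: dict mapping class label -> callable(x) returning a confidence score
    that x belongs to that class (e.g., a calibrated output of a quantum binary model).
    """
    def __init__(self, binary_models):
        self.binary_models = binary_models

    def predict(self, x):
        scores = {label: model(x) for label, model in self.binary_models.items()}
        return max(scores, key=scores.get)

# Stub binary models standing in for trained quantum classifiers (illustrative only).
models = {
    "class_a": lambda x: -abs(x - 0.0),
    "class_b": lambda x: -abs(x - 1.0),
    "class_c": lambda x: -abs(x - 2.0),
}
clf = OneVsRestAggregator(models)
print([clf.predict(x) for x in [0.1, 0.9, 2.2]])   # ['class_a', 'class_b', 'class_c']
```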