Ten-Second Electron-Spin Coherence in Isotopically Engineered Diamond

Solid-state spin defects are a promising platform for quantum networks. A key requirement is to combine long ground-state spin-coherence times with a coherent optical transition for spin-photon entanglement. Here, we investigate the spin and optical coherence of single nitrogen-vacancy (NV) centres in (111)-grown isotopically engineered diamond. Our diamond-growth process yields a precisely controlled $^{13}\mathrm{C}$ concentration and low-ppb nitrogen concentrations. Combined with the mitigation of 50 Hz noise using a real-time feedforward scheme and tailored decoupling sequences, this enables record defect-electron-spin coherence times of $T_2 = 6.8(1)$ ms for a Hahn echo and of $T_2^{DD} = 11.2(8)$ s under dynamical decoupling. In addition, we observe coherent optical transitions with a near-lifetime-limited homogeneous linewidth of 16.9(4) MHz and characterize the spectral diffusion dynamics. These results provide new avenues to investigate the incorporation of impurities in diamond and new opportunities for improved spin-qubit control for quantum networks and other quantum technologies.

When is randomization advantageous in quantum simulation?

We study the regimes in which Hamiltonian simulation benefits from randomization. We introduce a sparse-QSVT construction based on composite stochastic decompositions, where dominant terms are treated deterministically and smaller contributions are sampled stochastically. Crucially, we analyze how stochastic and approximation errors propagate through block-encoding and QSVT procedures. To benchmark this approach, we construct ensembles of random Hamiltonians with controlled coefficient dispersion, locality, and number of terms, designed to favor randomization, thereby providing an upper bound on its practical advantage. For Hamiltonians with many terms and highly inhomogeneous coefficient distributions, randomized methods reduce gate counts by up to an order of magnitude. However, this advantage is confined to moderate-precision regimes: as the target error decreases, deterministic methods become more efficient, with a crossover near $\varepsilon \sim 10^{-3}$. Although this regime partially overlaps with quantum chemistry Hamiltonians, realistic systems exhibit additional structure, such as commutation patterns, that is not captured by our model and is expected to further favor deterministic approaches.
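As an illustrative sketch (not the paper's construction) of a composite deterministic/stochastic decomposition: terms above a magnitude cutoff are kept for exact treatment, while the long tail is importance-sampled in proportion to its coefficients, in the spirit of qDRIFT. All names and the cutoff value are assumptions.

```python
import numpy as np

def composite_split(coeffs, cutoff):
    """Split H = sum_j c_j P_j into a deterministic head (|c_j| >= cutoff)
    and a stochastically sampled tail (|c_j| < cutoff)."""
    coeffs = np.asarray(coeffs, dtype=float)
    head = np.flatnonzero(np.abs(coeffs) >= cutoff)   # treated exactly
    tail = np.flatnonzero(np.abs(coeffs) < cutoff)    # sampled
    return head, tail

def sample_tail(coeffs, tail, n_samples, rng):
    """qDRIFT-style importance sampling: draw tail terms with probability
    proportional to |c_j|; each draw carries weight lambda / n_samples,
    with lambda the total tail one-norm."""
    weights = np.abs(coeffs[tail])
    lam = weights.sum()
    draws = rng.choice(tail, size=n_samples, p=weights / lam)
    return draws, lam / n_samples   # sampled term indices, per-step weight

rng = np.random.default_rng(7)
c = rng.standard_normal(200) * rng.uniform(0.01, 1.0, 200)  # inhomogeneous coefficients
head, tail = composite_split(c, cutoff=0.5)
draws, w = sample_tail(c, tail, n_samples=500, rng=rng)
print(len(head), "deterministic terms,", len(tail), "sampled terms, step weight", w)
```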

Quantum Simulation of Collective Neutrino Oscillations using Dicke States

In dense neutrino gases, which exist for instance in supernovae, the flavour states of different neutrinos may become entangled with one another. The theoretical description of such systems may therefore call for simulations on a quantum computer. Existing quantum simulations of simple toy systems are not optimal in the sense that they do not fully exploit the symmetries of the system. Here, we propose a new class of qubit-efficient algorithms based on Dicke states and the $su(2)$ spin algebra. We demonstrate the excellent performance of these algorithms both on classical and on quantum hardware.
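A minimal sketch of the qubit-efficiency idea: on the symmetric (Dicke) subspace, the $su(2)$ collective operators act on an $(N+1)$-dimensional space rather than a $2^N$-dimensional one. The Hamiltonian below is a generic one-axis-twisting-style stand-in for illustration, not the paper's neutrino model.

```python
import numpy as np
from scipy.linalg import expm

def collective_spin_ops(N):
    """su(2) collective spin operators in the symmetric (Dicke) irrep j = N/2.
    Basis |j, m>, m = -j..j: dimension N + 1 instead of 2^N."""
    j = N / 2
    m = np.arange(-j, j + 1)
    Jz = np.diag(m)
    # <m+1| J+ |m> = sqrt(j(j+1) - m(m+1))
    c = np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1))
    Jp = np.diag(c, k=-1)         # raising operator: basis index i -> i + 1
    Jx = (Jp + Jp.T) / 2
    return Jx, Jz

N, omega, mu, t = 20, 1.0, 2.0, 1.5
Jx, Jz = collective_spin_ops(N)
H = omega * Jz + (mu / N) * (Jx @ Jx)   # toy collective-interaction Hamiltonian
psi0 = np.zeros(N + 1); psi0[0] = 1.0   # all spins down: m = -j
psi_t = expm(-1j * H * t) @ psi0
print("<Jz>(t) =", (psi_t.conj() @ Jz @ psi_t).real)
```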

Optimal Quantum State Testing Even with Limited Entanglement

In this work, we consider the fundamental task of quantum state certification: given copies of an unknown quantum state $\rho$, test whether it matches some target state $\sigma$ or is $\epsilon$-far from it. For certifying $d$-dimensional states, $\Theta(d/\epsilon^2)$ copies of $\rho$ are known to be necessary and sufficient. However, the algorithm achieving this complexity makes fully entangled measurements over all $O(d/\epsilon^2)$ copies of $\rho$. Often, one is interested in certifying states to a high precision; this makes such joint measurements intractable even for low-dimensional states. Thus, we study whether one can obtain optimal rates for quantum state certification and related testing problems while only performing measurements on $t$ copies at once, for some $1 < t \ll d/\epsilon^2$. While it is well-understood how to use intermediate entanglement to achieve optimal quantum state learning, the only protocol known to achieve optimal testing is the one using fully entangled measurements. Our main result is a smooth copy complexity upper bound for state certification as a function of $t$, which achieves a near-optimal rate at $t = d^2$. In the high-precision regime, i.e., for $\epsilon < \frac{1}{\sqrt{d}}$, this is a strict improvement over the entanglement used by the aforementioned optimal protocol. We also extend our techniques to develop new algorithms for the related tasks of mixedness testing and purity estimation, and show tradeoffs achieving the optimal rates for these problems at $t = d^2$ as well. Our algorithms are based on novel reductions from testing to learning and leverage recent advances in quantum state tomography in a non-black-box fashion. We complement our upper bounds with smooth lower bounds that imply joint measurements on $t \geq d^{\Omega(1)}$ copies are necessary to achieve optimal rates for certification in the high-precision regime.

Exponential quantum advantage in processing massive classical data

Broadly applicable quantum advantage, particularly in classical data processing and machine learning, has been a fundamental open problem. In this work, we prove that a small quantum computer of polylogarithmic size can perform large-scale classification and dimension reduction on massive classical data by processing samples on the fly, whereas any classical machine achieving the same prediction performance requires exponentially larger size. Furthermore, classical machines that are exponentially larger yet below the required size need superpolynomially more samples and time. We validate these quantum advantages in real-world applications, including single-cell RNA sequencing and movie review sentiment analysis, demonstrating four to six orders of magnitude reduction in size with fewer than 60 logical qubits. These quantum advantages are enabled by quantum oracle sketching, an algorithm for accessing the classical world in quantum superposition using only random classical data samples. Combined with classical shadows, our algorithm circumvents the data loading and readout bottleneck to construct succinct classical models from massive classical data, a task provably impossible for any classical machine that is not exponentially larger than the quantum machine. These quantum advantages persist even when classical machines are granted unlimited time or if BPP=BQP, and rely only on the correctness of quantum mechanics. Together, our results establish machine learning on classical data as a broad and natural domain of quantum advantage and a fundamental test of quantum mechanics at the complexity frontier.

Control-centric quantum noise spectroscopy of time-ordered polyspectra

Precise environmental-noise characterisation in open quantum systems is a key step toward high-fidelity quantum control and targeted decoherence suppression in computing and sensing applications. Non-parametric quantum noise spectroscopy (QNS) provides a general-purpose, model-agnostic framework for estimating the spectral properties of an environment. The ability to perform such protocols under realistic constraints is key to their practical applicability. Notably, it is important to account for control constraints and understand how they limit the ability to learn about noise correlations as experiment-agnostic objects. We show how adopting a control-centric point of view allows one to recast the noise spectroscopy problem in such a way that (i) the central objects are now the time-ordered polyspectra, and (ii) control filter functions are no longer encumbered by time-ordering. In particular, we show that this approach enables the seamless generalisation of frequency-comb QNS protocols to arbitrary control scenarios without introducing additional control symmetries that effectively remove time-ordering from filter functions, improving estimation in typically pathological scenarios. We demonstrate the targeted reconstruction of the time-ordered polyspectra across classical Gaussian and quantum non-Gaussian environments via simulations.

Trotterization with Many-body Coulomb Interactions: Convergence for General Initial Conditions and State-Dependent Improvements

Efficiently simulating many-body quantum systems with Coulomb interactions is a fundamental question in quantum physics, quantum chemistry, and quantum computing, yet it presents unique challenges: the Hamiltonian is an unbounded operator (both kinetic and potential parts are unbounded); its Hilbert space dimension grows exponentially with particle number; and the Coulomb potential is singular, long-ranged, non-smooth, and unbounded, violating the regularity assumptions of many prior state-of-the-art many-body simulation analyses. In this work, we establish rigorous error bounds for Trotter formulas applied to many-body quantum systems with Coulomb interactions. Our first main result shows that for general initial conditions in the domain of the Hamiltonian, second-order Trotter achieves a sharp $1/4$ convergence rate with explicit polynomial dependence of the error prefactor on the particle number. The polynomial dependence on system size suggests that the algorithm remains quantumly efficient, even without introducing any regularization of the Coulomb singularity. Notably, although the result under general conditions constitutes a worst-case bound, this rate has been observed in prior work for the hydrogen ground state, demonstrating its relevance to physically and practically important initial conditions. Our second main result identifies a set of physically meaningful conditions on the initial state under which the convergence rate improves to first and second order. For hydrogenic systems, these conditions are connected to excited states with sufficiently high angular momentum. Our theoretical findings are consistent with prior numerical observations.
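For reference, the second-order (Strang) product formula under analysis splits the Hamiltonian $H = T + V$ into its kinetic and potential parts (notation ours):

$$ e^{-i(T+V)t} \;\approx\; \left( e^{-iT\delta/2}\, e^{-iV\delta}\, e^{-iT\delta/2} \right)^{n}, \qquad \delta = t/n, $$

with, per the abstract, an overall error scaling with exponent $1/4$ in the step size, i.e. $O(\mathrm{poly}(N)\,\delta^{1/4})$ for general initial states in the domain of the Hamiltonian; the precise norms and prefactors are as in the paper.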

Complexity phase transition for continuous-variable cluster state

Continuous-variable (CV) cluster states offer a promising platform for large-scale measurement-based quantum computations (MBQC). However, finite squeezing inevitably introduces Gaussian noise during MBQC. While fault-tolerant MBQC schemes exist in principle, they require the scalable incorporation of non-Gaussian resources, such as GKP states, which remain experimentally challenging. Consequently, a central question at this stage is how finite squeezing fundamentally constrains the intrinsic computational power of CV cluster states themselves. In this work, we address this question by analyzing the classical complexity of measurement-based linear optics (MBLO) implemented with such states, motivated by its near-term feasibility and recent experimental progress. We develop an explicit MBLO framework and examine how the squeezing level governs the complexity of the classical simulation of the resulting output states. Specifically, we identify squeezing-level thresholds that delineate classically tractable and intractable regimes, thereby revealing a squeezing-driven complexity phase transition. These findings advance our understanding of the squeezing resources necessary for meaningful quantum computation in current experimental regimes. Furthermore, they underscore the critical need to either scale the squeezing level or integrate error-correction schemes to achieve reliable, large-scale quantum computation with CV cluster states.

Optimal noisy quantum phase estimation with finite-dimensional states

Phase estimation in quantum interferometry is a major scenario in which quantum advantage is clearly revealed. Recently, the optimal finite-dimensional probe states (OFPSs) for phase estimation in two-mode quantum interferometry were derived in the absence of noise [J.-F. Qin et al., Phys. Rev. A 112, 052428 (2025)]. However, noise is inevitable in practice, and the previously obtained OFPSs may cease to be optimal. Hence, the forms of the true OFPSs in the presence of various noises remain open questions. Here, the noise of particle loss is studied, and the true OFPSs under this noise are investigated with the numerical algorithm known as constrained optimization by linear approximation (COBYLA). Furthermore, a two-step measurement strategy is proposed to realize the ultimate precision limit in practice. The validity of this strategy is confirmed by numerical simulation of practical experiments.
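A minimal sketch of the optimization loop, assuming SciPy's COBYLA implementation; the objective below is a toy stand-in for the paper's lossy-interferometer figure of merit, and normalization is enforced inside the cost because COBYLA handles only inequality constraints.

```python
import numpy as np
from scipy.optimize import minimize

D = 6  # photon-number cutoff: finite-dimensional probe

def neg_figure_of_merit(c):
    """Placeholder objective: the real target would be (minus) the quantum
    Fisher information of the probe under photon loss, as in the paper.
    Amplitudes are normalized in-cost so COBYLA sees an unconstrained search."""
    c = c / np.linalg.norm(c)
    return -np.sum(np.arange(D) * c**2)   # toy stand-in: mean photon number

x0 = np.ones(D) / np.sqrt(D)
res = minimize(neg_figure_of_merit, x0, method="COBYLA",
               options={"rhobeg": 0.2, "maxiter": 2000})
c_opt = res.x / np.linalg.norm(res.x)
print("optimized probe amplitudes:", np.round(c_opt, 3))
```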

Analysis of State Teleportation using Noisy Quantum Gates

Noise is a major challenge in quantum computing, affecting the reliability of quantum protocols. In this work, we analytically study the impact of various noise processes, such as depolarization, bit flip, and phase flip, on the quantum state teleportation protocol. Each noise process is modeled as a quantum channel and is applied individually to all qubits after the corresponding unitary operations to simulate realistic conditions. We evaluate the fidelity between the ideal and noisy teleported states to quantify the effect of noise. Our analysis shows that the fidelity decreases polynomially, in general, as the noise strength increases for all noise types, highlighting the sensitivity of state teleportation to different noise mechanisms. However, in the low noise regime, the fidelity decreases only linearly, indicating the robustness of the teleportation protocol. These results provide insight into error characterization and can inform strategies for noise mitigation in practical quantum computing applications.
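A self-contained sketch of this kind of analysis, assuming the deferred-measurement form of the teleportation circuit and a depolarizing channel applied to the qubits acted on after each gate; the noise parametrization and placement are illustrative. At $p = 0$ the fidelity is 1, and for small $p$ it falls off approximately linearly, consistent with the abstract.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron3(ms):
    return np.kron(np.kron(ms[0], ms[1]), ms[2])

def embed(U, q):                      # U on qubit q of 3 (q0 most significant)
    ms = [I2, I2, I2]; ms[q] = U
    return kron3(ms)

def ctrl(c, t, U):                    # controlled-U from qubit c to qubit t
    m0 = [I2, I2, I2]; m0[c] = np.diag([1, 0]).astype(complex)
    m1 = [I2, I2, I2]; m1[c] = np.diag([0, 1]).astype(complex); m1[t] = U
    return kron3(m0) + kron3(m1)

def depolarize(rho, qubits, p):       # rho -> (1-p) rho + (p/3) sum_P P rho P
    for q in qubits:
        out = (1 - p) * rho
        for P in (X, Y, Z):
            Pq = embed(P, q)
            out = out + (p / 3) * (Pq @ rho @ Pq)
        rho = out
    return rho

def teleport_fidelity(psi, p):
    state = np.kron(psi, np.array([1, 0, 0, 0], dtype=complex))  # |psi>|00>
    rho = np.outer(state, state.conj())
    # deferred-measurement teleportation; each gate followed by noise
    steps = [(embed(H, 1), [1]), (ctrl(1, 2, X), [1, 2]),   # Bell pair on q1,q2
             (ctrl(0, 1, X), [0, 1]), (embed(H, 0), [0]),   # Bell basis rotation
             (ctrl(1, 2, X), [1, 2]), (ctrl(0, 2, Z), [0, 2])]  # corrections
    for U, qs in steps:
        rho = depolarize(U @ rho @ U.conj().T, qs, p)
    rho_out = np.einsum('abiabj->ij', rho.reshape(2, 2, 2, 2, 2, 2))  # trace q0,q1
    return float(np.real(psi.conj() @ rho_out @ psi))

psi = np.array([np.cos(0.3), np.exp(0.7j) * np.sin(0.3)])
for p in (0.0, 0.01, 0.05, 0.1):
    print(p, round(teleport_fidelity(psi, p), 4))
```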

Hardware-Aware Quantum Support Vector Machines

Deploying quantum machine learning algorithms on near-term quantum hardware requires circuits that respect device-specific gate sets, connectivity constraints, and noise characteristics. We present a hardware-aware Neural Architecture Search (NAS) approach for designing quantum feature maps that are natively executable on IBM quantum processors without transpilation overhead. Using genetic algorithms to evolve circuit architectures constrained to IBM Torino native gates (ECR, RZ, SX, X), we demonstrate that automated architecture search can discover quantum Support Vector Machine (QSVM) feature maps achieving competitive performance while guaranteeing hardware compatibility. Evaluated on the UCI Breast Cancer Wisconsin dataset, our hardware-aware NAS discovers a 12-gate circuit using exclusively IBM native gates (6 ECR, 3 SX, 3 RZ) that achieves 91.23 % accuracy on 10 qubits, matching unconstrained gate search while requiring zero transpilation. This represents a 27 percentage point improvement over hand-crafted quantum feature maps (64 % accuracy) and approaches the classical RBF SVM baseline (93 %). We show that removing architectural constraints (fixed RZ placement) within hardware-aware search yields 3.5 percentage point gains, and that 100 % native gate usage eliminates decomposition errors that plague universal gate compilations. Our work demonstrates that hardware-aware NAS makes quantum kernel methods practically deployable on current noisy intermediate-scale quantum (NISQ) devices, with circuit architectures ready for immediate execution without modification.
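A compact sketch of the evolutionary loop under the stated native gate set, assuming a linear coupling map and simple insert/delete/replace mutations; the operators, rates, and encoding are illustrative, not the paper's implementation. The `fitness` callable would be, e.g., cross-validated QSVM accuracy minus a gate-count penalty.

```python
import math
import random

NATIVE_1Q = ["RZ", "SX", "X"]   # IBM Torino native single-qubit gates; ECR entangler

def random_gate(n_qubits, rng):
    if n_qubits > 1 and rng.random() < 0.3:
        q = rng.randrange(n_qubits - 1)
        return ("ECR", (q, q + 1), None)        # neighbouring pair (assumed linear map)
    name = rng.choice(NATIVE_1Q)
    angle = rng.uniform(0.0, 2 * math.pi) if name == "RZ" else None
    return (name, (rng.randrange(n_qubits),), angle)

def mutate(circuit, n_qubits, rng):
    child = list(circuit)
    r = rng.random()
    if r < 0.4 and len(child) > 1:
        child.pop(rng.randrange(len(child)))                           # delete
    elif r < 0.8:
        child.insert(rng.randrange(len(child) + 1),
                     random_gate(n_qubits, rng))                       # insert
    else:
        child[rng.randrange(len(child))] = random_gate(n_qubits, rng)  # replace
    return child

def evolve(fitness, n_qubits=10, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    population = [[random_gate(n_qubits, rng) for _ in range(12)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 4]          # truncation selection
        population = elite + [mutate(rng.choice(elite), n_qubits, rng)
                              for _ in range(pop_size - len(elite))]
    return max(population, key=fitness)
```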

Fast and Coherent Transfer of Atomic Qubits in Optical Tweezers using Fiber Array Architecture

Programmable neutral-atom arrays offer a promising route toward scalable quantum computing, where coherent qubit transfer enables non-local connectivity and reduces resource overhead. However, transfer speed and motional heating remain key bottlenecks for fast and deep quantum circuits. Here, we employ a fiber array neutral-atom quantum computing architecture with site-resolved control of trap depths to realize smooth amplitude exchange between static and moving traps, thereby enabling fast and coherent qubit transfer with ultralow motional heating. With a 10 $\mu$s in situ transfer between static and moving traps, we obtain a per-cycle heating rate of 0.156(9) $\mu$K, sustain over 500 cycles with negligible atom loss, and achieve a quantum state fidelity of 0.99992(5) per cycle. For inter-site transfer between two separated static traps, the operation takes 120 $\mu$s with 0.783(17) $\mu$K heating per transfer, and atom loss remains negligible for up to 100 repeated cycles with a fidelity of 0.9998(1) per transfer. Furthermore, through experimental studies of parallel transfer, we establish a model that elucidates the relationship between array inhomogeneity and the transfer heating rate. This fast, low-heating coherent transfer capability provides a practical route for improving both speed and fidelity in atom-shuttling based quantum computing.

Hybrid Quantum-Classical k-Means Clustering via Quantum Feature Maps

Clustering is one of the most fundamental tasks in machine learning, and the k-means algorithm is perhaps the most widely used clustering method. However, it suffers from several limitations, such as sensitivity to centroid initialization, difficulty capturing non-linear structure, and poor performance in high-dimensional spaces. Recent work has proposed improved initialization strategies and quantum-assisted distance computation, but the similarity metric itself has largely remained classical. In this study, we propose a quantum-enhanced variant of k-means that replaces the Euclidean distance with a quantum kernel derived from the inner product between feature-mapped quantum states. We employ multiple quantum feature maps, including entangled SU2 and ZZ circuits, to embed classical data into a higher-dimensional Hilbert space where cluster structures become more separable, and evaluate the method on the Iris and breast cancer datasets. Similarity between data points is computed through the inner product of the corresponding states. Our results show that this approach achieves improved clustering stability and competitive accuracy compared to the classical algorithm, with the SU2 feature map yielding an accuracy of 88.6 % on the Iris dataset and 91.0 % on the breast cancer dataset, despite operating on NISQ-feasible shallow circuits. These findings suggest that quantum kernels provide a richer similarity landscape than traditional distance metrics, offering a promising path toward more robust unsupervised learning in the NISQ era.
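A minimal sketch of the pipeline, assuming a simple product angle-encoding in place of the entangled SU2/ZZ maps: build the fidelity kernel, then run kernel k-means with implicit feature-space centroids.

```python
import numpy as np

def feature_state(x):
    """Product angle-encoding |phi(x)> = prod_i (cos(x_i/2)|0> + sin(x_i/2)|1>),
    a stand-in for the entangled SU2/ZZ maps used in the paper."""
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, np.array([np.cos(xi / 2), np.sin(xi / 2)]))
    return state

def quantum_kernel(Xs):
    states = np.array([feature_state(x) for x in Xs])
    return np.abs(states @ states.T) ** 2     # fidelity kernel |<phi(x)|phi(y)>|^2

def kernel_kmeans_assign(K, labels, k):
    """One assignment sweep with implicit centroids mu_c in feature space:
    ||phi(x) - mu_c||^2 = K_xx - 2 mean_{y in c} K_xy + mean_{y,z in c} K_yz."""
    n = K.shape[0]
    new = np.empty(n, dtype=int)
    for i in range(n):
        d = np.full(k, np.inf)
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size:
                d[c] = (K[i, i] - 2 * K[i, members].mean()
                        + K[np.ix_(members, members)].mean())
        new[i] = int(np.argmin(d))
    return new

rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(30, 4))       # e.g. rescaled Iris features
K = quantum_kernel(X)
labels = rng.integers(0, 3, size=30)
for _ in range(10):                            # iterate assignments to convergence
    labels = kernel_kmeans_assign(K, labels, 3)
print(labels)
```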

Non-variational supervised quantum kernel methods: a review

Quantum kernel methods (QKMs) have emerged as a prominent framework for supervised quantum machine learning. Unlike variational quantum algorithms, which rely on gradient-based optimisation and may suffer from issues such as barren plateaus, non-variational QKMs employ fixed quantum feature maps, with model selection performed classically via convex optimisation and cross-validation. This separation of quantum feature embedding from classical training ensures stable optimisation while leveraging quantum circuits to encode data in high-dimensional Hilbert spaces. In this review, we provide a thorough analysis of non-variational supervised QKMs, covering their foundations in classical kernel theory, constructions of fidelity and projected quantum kernels, and methods for their estimation in practice. We examine frameworks for assessing quantum advantage, including generalisation bounds and necessary conditions for separation from classical models, and analyse key challenges such as exponential concentration, dequantisation via tensor-network methods, and the spectral properties of kernel integral operators. We further discuss structured problem classes that may enable advantage, and synthesise insights from comparative and hardware studies. Overall, this review aims to clarify the regimes in which QKMs may offer genuine advantages, and to delineate the conceptual, methodological, and technical obstacles that must be overcome for practical quantum-enhanced learning.
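Two kernel constructions recur throughout this literature. In standard notation (our summary, following the review's terminology), with $\rho(x) = |\phi(x)\rangle\langle\phi(x)|$ the embedded state, $\mathrm{Tr}_{\bar{k}}$ the partial trace keeping only qubit $k$, and $\gamma > 0$ a bandwidth hyperparameter:

$$ k^{\mathrm{F}}(x, x') = \big|\langle \phi(x) | \phi(x') \rangle\big|^{2} = \mathrm{Tr}\!\left[\rho(x)\,\rho(x')\right], \qquad k^{\mathrm{PQ}}(x, x') = \exp\!\Big(-\gamma \sum_{k} \big\|\mathrm{Tr}_{\bar{k}}[\rho(x)] - \mathrm{Tr}_{\bar{k}}[\rho(x')]\big\|_{F}^{2}\Big). $$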

A Review of Variational Quantum Algorithms: Insights into Fault-Tolerant Quantum Computing

Variational quantum algorithms (VQAs) have established themselves as a central computational paradigm in the Noisy Intermediate-Scale Quantum (NISQ) era. By coupling parameterized quantum circuits (PQCs) with classical optimization, they operate effectively under strict hardware limitations. However, as quantum architectures transition toward early fault-tolerant (EFT) and ultimate fault-tolerant (FT) regimes, the foundational principles and long-term viability of VQAs require systematic reassessment. This review offers an insightful analysis of VQAs and their progression toward the fault-tolerant regime. We deconstruct the core algorithmic framework by examining ansatz design and classical optimization strategies, including cost function formulation, gradient computation, and optimizer selection. Concurrently, we evaluate critical training bottlenecks, notably barren plateaus (BPs), alongside established mitigation strategies. The discussion then explores the EFT phase, detailing how the integration of quantum error mitigation and partial error correction can sustain algorithmic performance. Addressing the FT phase, we analyze the inherent challenges confronting current hybrid VQA models. Furthermore, we synthesize recent VQA applications across diverse domains, including many-body physics, quantum chemistry, machine learning, and mathematical optimization. Ultimately, this review outlines a theoretical roadmap for adapting quantum algorithms to future hardware generations, elucidating how variational principles can be systematically refined to maintain their relevance and efficiency within an error-corrected computational environment.
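As a concrete instance of the gradient-computation step discussed above: for a cost $C(\boldsymbol{\theta}) = \langle\psi(\boldsymbol{\theta})|O|\psi(\boldsymbol{\theta})\rangle$ in which a parameter $\theta$ enters through a gate $e^{-i\theta P/2}$ with $P^{2} = I$, the parameter-shift rule yields the exact gradient from two circuit evaluations:

$$ \frac{\partial C}{\partial \theta} = \frac{1}{2}\left[C\!\left(\theta + \frac{\pi}{2}\right) - C\!\left(\theta - \frac{\pi}{2}\right)\right]. $$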

Informational Mpemba Effect for Fast State Purification in Non-Hermitian System

Quantum systems are inherently fragile to environmental fluctuations and decoherence, limiting their advantages in applications of quantum information and quantum computation. State purification offers a route to recover the purity of a system under noisy conditions. Here, we demonstrate rapid purification of initially mixed states by harnessing collective reservoir engineering in driven non-Hermitian qubit systems, together with multipartite entanglement generation in larger systems. We show that the onset of efficient purification-assisted entanglement generation is dictated by the degeneracy of collective subradiant modes, rather than by exceptional points. Moreover, the system dynamics manifests an informational Mpemba effect, i.e., a more mixed initial state reaches its steady state with unit purity at a faster rate, resembling the conventional Mpemba effect in which a hotter system cools more rapidly. These results reveal a unique advantage of driven non-Hermitian quantum systems with engineered collective dissipation, enabling enhanced purification efficiency and offering new opportunities for quantum engineering.

Investigation of Automated Design of Quantum Circuits for Imaginary Time Evolution Methods Using Deep Reinforcement Learning

Efficient ground-state search is fundamental to advancing combinatorial optimization and quantum chemistry. While the Variational Imaginary Time Evolution (VITE) method offers a useful alternative to the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimization Algorithm (QAOA), its implementation on Noisy Intermediate-Scale Quantum (NISQ) devices is severely limited by the gate counts and depth of manually designed ansatzes. Here, we present an automated framework for VITE circuit design using Double Deep Q-Networks (DDQN). Our approach treats circuit construction as a multi-objective optimization problem, simultaneously minimizing energy expectation values and circuit complexity. By introducing adaptive thresholds, we demonstrate significant hardware overhead reductions. In Max-Cut problems, our agent autonomously discovered circuits with approximately 37% fewer gates and 43% less depth than a standard hardware-efficient ansatz on average. For molecular hydrogen ($H_2$), the DDQN also reached the Full-CI limit while maintaining a significantly shallower circuit. These results suggest that deep reinforcement learning can help discover non-intuitive, optimal circuit structures, providing a pathway toward efficient, hardware-aware quantum algorithm design.
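A hypothetical sketch of the kind of multi-objective reward such a framework might use; the functional form, threshold handling, and coefficients are assumptions, not the paper's values.

```python
def reward(energy, n_gates, depth, e_threshold, lam=0.01, mu=0.01):
    """Illustrative RL reward for circuit construction: reward low energy
    while penalizing gate count and depth. The adaptive threshold
    e_threshold would be tightened as training progresses."""
    accuracy_term = 1.0 if energy <= e_threshold else -(energy - e_threshold)
    return accuracy_term - lam * n_gates - mu * depth
```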

Belief Propagation Convergence Prediction for Bivariate Bicycle Quantum Error Correction Codes

Decoding Bivariate Bicycle (BB) quantum error correction codes typically requires Belief Propagation (BP) followed by Ordered Statistics Decoding (OSD) post-processing when BP fails to converge. Whether BP will converge on a given syndrome is currently determined only after running BP to completion. We show that convergence can be predicted in advance by a single modulo operation: if the syndrome defect count is divisible by the code's column weight w, BP converges with high probability (100% at p <= 0.001, degrading to 87% at p = 0.01); otherwise, BP fails with probability >= 90%. The mechanism is structural: each physical data error activates exactly w stabilizers, so a defect count not divisible by w implies the presence of measurement errors outside BP's model space. Validated on five BB codes with column weights w = 2, 3, and 4, mod-w achieves AUC = 0.995 as a convergence classifier at p = 0.001 under phenomenological noise, dominating all other syndrome features (next best: AUC = 0.52). The false positive rate scales empirically as O(p^2.05) (R^2 = 0.98), confirming the analytical bound from Proposition 2. Among BP failures on mod-w = 0 syndromes, 82% contain weight-2 data error clusters, directly confirming the dominant failure mechanism. The prediction is invariant under BP scheduling strategy and decoder variant, including Relay-BP, the strongest known BP enhancement for quantum LDPC codes. These results apply directly to IBM's Gross code [[144, 12, 12]] and Two-Gross code [[288, 12, 18]], targeted for deployment in 2026-2028.
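The predictor itself reduces to a one-line check; a minimal sketch under the abstract's own description:

```python
import numpy as np

def predict_bp_convergence(syndrome, w):
    """Mod-w predictor: each physical data error activates exactly w
    stabilizers, so a defect count not divisible by the column weight w
    signals measurement errors outside BP's model space (likely failure)."""
    defects = int(np.count_nonzero(syndrome))
    return defects % w == 0   # True: run BP; False: route straight to BP+OSD

syndrome = np.array([1, 0, 1, 1, 0, 0, 1])    # toy defect pattern: 4 defects
print(predict_bp_convergence(syndrome, w=3))  # 4 % 3 != 0, predict failure
```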

Harnessing dark states: coherent control in coupled cavity-Rydberg-atom systems

The dark-state effect, caused by destructive interference, is not only an important fundamental research topic in atomic physics and quantum optics but also has wide potential applications in quantum physics and quantum information science. Using the arrowhead-matrix method, here we study the dark-state effect in a coupled cavity-Rydberg-atom system, in which $N$ Rydberg atoms with dipole-dipole interactions are coupled to a single-mode cavity field. We obtain the number and form of the dark states in certain excitation-number subspaces for the two-, three-, and four-atom cases, as well as in the single-excitation subspace for the general $N$-atom case. We also suggest characterizing the dark states by inspecting the populations of some specific quantum states, which can be detected in experiments. Furthermore, we analyze the dark-state effect in a realistic case, where both the atomic dipole-dipole interaction strengths and the atom-cavity-field coupling strengths depend on the positions of the atoms. Our findings pave the way for studying dark-state physics and applications in the cavity-Rydberg-atom platform.
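As a sketch of the structure involved (our notation, and assuming the atomic dipole-dipole block has been pre-diagonalized into collective modes with energies $\tilde{\omega}_k$ and effective cavity couplings $\tilde{g}_k$): in the single-excitation subspace, the Hamiltonian takes arrowhead form, and any collective mode with $\tilde{g}_k = 0$ decouples from the cavity, yielding a dark state:

$$ H^{(1)} = \begin{pmatrix} \omega_c & \tilde{g}_1 & \cdots & \tilde{g}_N \\ \tilde{g}_1 & \tilde{\omega}_1 & & \\ \vdots & & \ddots & \\ \tilde{g}_N & & & \tilde{\omega}_N \end{pmatrix}. $$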

Divide et impera: hybrid multinomial classifiers from quantum binary models

We investigate how to combine a collection of quantum binary models into a multinomial classifier. We employ a hybrid approach, adopting strategies such as one-vs-one, one-vs-rest, and a binary decision tree. We benchmark each method, emphasizing its computational overhead and its impact on the quantum advantage. By comparison against a classical binary model (generalized using the same approach), we show that the decision tree represents a cost-effective solution, achieving accuracies similar to the other methods with an overhead at most logarithmic in the total number of classes.
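A sketch of the decision-tree combination and the source of its logarithmic overhead; `binary_predict` stands in for a trained quantum binary model, and the balanced split is an assumption.

```python
def tree_classify(x, classes, binary_predict):
    """Binary decision tree over class labels: each internal node queries one
    binary model trained to separate the left half of `classes` from the
    right half, so a K-class prediction needs only about log2(K) model calls
    (vs K(K-1)/2 for one-vs-one and K for one-vs-rest)."""
    while len(classes) > 1:
        half = len(classes) // 2
        left, right = classes[:half], classes[half:]
        # binary_predict(x, left, right) -> True if x belongs to `left`
        classes = left if binary_predict(x, left, right) else right
    return classes[0]
```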

Photon pairs, squeezed light and the quantum wave mixing effect in a cascaded qubit system

We develop a theoretical description of quantum wave mixing (QWM) in a cascaded waveguide-QED system of two superconducting qubits, where the probe is driven by an external coherent tone and by the resonance fluorescence of a strongly driven source qubit. Starting from the field correlation functions of the source emission, we derive an effective master-equation treatment for the probe and identify the regime in which the incident fluorescence is characterized by anomalous correlations. When the coherent Rayleigh component of the source spectrum is suppressed, the probe equations of motion become equivalent to those for a qubit driven by a coherent tone and broadband squeezed light. This equivalence implies a selection rule for the peaks of the QWM spectrum, with a strong suppression of sidebands associated with processes involving an odd number of photons taken from the source field. Numerical simulations of the full cascaded two-qubit model for different ratios of radiative decay rates unambiguously confirm the participation of correlated photon pairs in QWM processes. The current research illustrates that the analysis of peak amplitudes can be used to probe photon statistics in the incident nonclassical field.

Optimized Gottesman-Kitaev-Preskill Error Correction via Tunable Preprocessing

The Gottesman-Kitaev-Preskill (GKP) code is a promising bosonic candidate for realizing fault-tolerant quantum computation. Among existing error-correction protocols for GKP code, the Steane-type scheme is a canonical and widely adopted paradigm, yet its intrinsic noise propagation pattern limits further performance improvement. In this work, we propose a preprocessing-based Steane-type (P-Steane) scheme, which introduces a tunable preprocessing stage with squeezing parameters $a$ and $b$ to actively reshape noise propagation, thereby constituting a parameter framework. This framework spans a spectrum of protocols beyond existing methods, reproducing the performance of both the ME-Steane scheme ($a=1$, $b=1$) and the teleportation-based scheme ($a=1/\sqrt{2}$, $b=\sqrt{2}$) as special cases. Crucially, in the small-noise regime and when the data qubit is noisier than the ancilla qubits, P-Steane scheme achieves the minimum product of position- and momentum-quadrature output noise variances when $2a = b$, and consistently outperforms the ME-Steane scheme within a specific squeezing-parameter range under this condition.

A Model Context Protocol Server for Quantum Execution in Hybrid Quantum-HPC Environments

The integration of large language models (LLMs) into scientific research is accelerating the realization of autonomous "AI Scientists." While recent advancements have empowered AI to formulate hypotheses and design experiments, a critical gap remains in the execution of these tasks, particularly in the domain of quantum computing (QC). Executing quantum algorithms requires not only generating code but also managing complex computational resources such as QPUs and high-performance computing (HPC) clusters. In this paper, we propose an AI-driven framework specifically designed to bridge this execution gap through the implementation of a Model Context Protocol (MCP) server. Our system enables an LLM agent to process natural language prompts submitted as part of a job, autonomously executing quantum computing workflows by invoking our tools via the MCP. We demonstrate the framework's capability by performing essential quantum algorithmic primitives, including sampling and computation of expectation values. Key technical contributions include the development of an MCP server for quantum execution, a pipeline for interpreting OpenQASM code, an automated workflow with CUDA-Q for the ABCI-Q hybrid platform, and an asynchronous execution pipeline for remote quantum hardware using the Quantinuum emulator via CUDA-Q. This work validates that AI agents can effectively abstract the complexities of hardware interaction through an MCP-based architecture, thereby facilitating the automation of practical quantum research.
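A minimal sketch of such a server, assuming the official `mcp` Python SDK (its FastMCP helper) and Qiskit with the Aer simulator are installed; the tool name and local-simulator backend are illustrative stand-ins, not the paper's ABCI-Q or Quantinuum pipelines.

```python
from mcp.server.fastmcp import FastMCP
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

mcp = FastMCP("quantum-execution")

@mcp.tool()
def sample_circuit(qasm: str, shots: int = 1024) -> dict:
    """Run an OpenQASM 2 circuit on a local simulator and return counts."""
    circuit = QuantumCircuit.from_qasm_str(qasm)
    backend = AerSimulator()
    job = backend.run(transpile(circuit, backend), shots=shots)
    return dict(job.result().get_counts())

if __name__ == "__main__":
    mcp.run()   # serve the tool over stdio for an LLM agent to invoke
```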

Scalable Neural Decoders for Practical Fault-Tolerant Quantum Computation

Quantum error correction (QEC) is essential for scalable quantum computing. However, it requires classical decoders that are fast and accurate enough to keep pace with quantum hardware. While quantum low-density parity-check codes have recently emerged as a promising route to efficient fault tolerance, current decoding algorithms do not allow one to realize the full potential of these codes in practical settings. Here, we introduce a convolutional neural network decoder that exploits the geometric structure of QEC codes, and use it to probe a novel "waterfall" regime of error suppression, demonstrating that the logical error rates required for large-scale fault-tolerant algorithms are attainable with modest code sizes at current physical error rates, and with latencies within the real-time budgets of several leading hardware platforms. For example, for the $[[144, 12, 12]]$ Gross code, the decoder achieves logical error rates up to $\sim 17\times$ lower than those of existing decoders, reaching logical error rates of $\sim 10^{-10}$ at a physical error rate of $p=0.1\%$, with 3-5 orders of magnitude higher throughput. This decoder also produces well-calibrated confidence estimates that can significantly reduce the time overhead of repeat-until-success protocols. Taken together, these results suggest that the space-time costs associated with fault-tolerant quantum computation may be significantly lower than previously anticipated.
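A generic sketch (not the paper's architecture) of a convolutional decoder mapping a space-time syndrome layout to per-logical-observable flip logits, assuming PyTorch; all sizes are illustrative.

```python
import torch
import torch.nn as nn

class ConvDecoder(nn.Module):
    def __init__(self, rounds: int, height: int, width: int, n_logical: int):
        super().__init__()
        self.features = nn.Sequential(       # input: (batch, rounds, H, W)
            nn.Conv2d(rounds, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # pool the spatial syndrome grid
        )
        self.head = nn.Linear(64, n_logical)  # one logit per logical observable

    def forward(self, syndrome):
        h = self.features(syndrome).flatten(1)
        return self.head(h)   # apply sigmoid for calibrated flip probabilities

model = ConvDecoder(rounds=12, height=12, width=12, n_logical=12)
batch = torch.bernoulli(torch.full((8, 12, 12, 12), 0.05))  # toy syndromes
print(model(batch).shape)    # torch.Size([8, 12])
```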

Per-Shot Evaluation of QAOA on Max-Cut: A Black-Box Implementation Comparison with Goemans-Williamson

The Quantum Approximate Optimization Algorithm (QAOA) has emerged as a promising approach for addressing combinatorial optimization problems on near-term quantum hardware. In this work, we conduct an empirical evaluation of QAOA on the Max-Cut problem, using the Goemans-Williamson (GW) algorithm as a classical baseline for comparison. Unlike many prior studies, our methodology treats QAOA implementations as black-box optimizers, relying solely on default parameter settings without manual fine-tuning. We evaluate specific off-the-shelf QAOA implementations under default settings, not the algorithmic potential of QAOA with optimized parameters. This reflects a more realistic use case for end users who may lack the resources or expertise for instance-specific optimization. To facilitate fair and informative evaluation, we construct benchmark instances using well-known graph generation models that emulate practical graph structures, avoiding synthetic constructions tailored to either quantum or classical algorithms. A central component of our analysis is a per-shot statistical framework, which tracks the quality of QAOA outputs as a function of the number of circuit executions. This enables probabilistic comparisons with the GW algorithm by examining when and how frequently QAOA surpasses classical performance baselines such as the GW expectation and lower bound. Our results provide insight into the practical applicability of QAOA for Max-Cut and highlight its current limitations, offering a framework that can guide the assessment and development of future QAOA implementations.
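A minimal sketch of the per-shot statistic, under an i.i.d.-shot assumption: estimate the single-shot probability q of meeting a classical baseline, from which the chance that the best of k shots succeeds is 1 - (1 - q)^k.

```python
import numpy as np

def per_shot_success_curve(cut_values, baseline, max_shots):
    """Probability that the best of k i.i.d. QAOA shots meets or beats a
    classical baseline (e.g., the GW expectation for the instance):
    P(best of k >= baseline) = 1 - (1 - q)^k, with q the single-shot
    empirical success frequency."""
    q = np.mean(np.asarray(cut_values) >= baseline)
    k = np.arange(1, max_shots + 1)
    return 1.0 - (1.0 - q) ** k

cuts = np.random.default_rng(1).binomial(40, 0.55, size=2000)  # toy cut sizes
curve = per_shot_success_curve(cuts, baseline=24, max_shots=100)
print(curve[0], curve[9], curve[99])   # success prob. after 1, 10, 100 shots
```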