Conventional methods of quantum simulation involve trade-offs that limit their applicability to the specific contexts where they are optimal. This paper demonstrates how different simulation methods can be hybridized to improve on the performance of known algorithms for interaction-picture simulations. These hybrid approaches show asymptotic improvements over the individual methods that comprise them and, further, make interaction-picture simulation methods practical in the near term. Physical applications of these hybridized methods yield a gate complexity scaling as log²Λ in the electric cutoff Λ for the Schwinger model and independent of the electron density for collective neutrino oscillations, outperforming the scaling of all current algorithms with these parameters. For the general problem of Hamiltonian simulation subject to dynamical constraints, these methods yield a query complexity independent of the penalty parameter λ used to impose an energy cost on time evolution into an unphysical subspace.
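As an illustration of the interaction-picture framing these methods build on (a generic numerical sketch, not the paper's hybrid algorithm), the following toy example checks that evolving a two-level system with the time-dependent generator H_I(t) = e^{iH₀t} V e^{−iH₀t} and transforming back reproduces ordinary Schrödinger-picture evolution. All matrices and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Toy two-level problem: H = H0 + V, with a "free" diagonal part H0
# and an off-diagonal perturbation V (all values illustrative).
H0 = np.diag([0.0, 2.0])
V = 0.3 * np.array([[0.0, 1.0], [1.0, 0.0]])
H = H0 + V

t_final, steps = 1.0, 1000
dt = t_final / steps
psi0 = np.array([1.0, 0.0], dtype=complex)

# Exact Schroedinger-picture evolution.
psi_exact = expm(-1j * H * t_final) @ psi0

# Interaction picture: evolve with H_I(t) = e^{iH0 t} V e^{-iH0 t},
# approximating the time-ordered exponential by a product of short steps.
U_I = np.eye(2, dtype=complex)
for k in range(steps):
    t = (k + 0.5) * dt  # midpoint rule for the time-dependent generator
    H_I = expm(1j * H0 * t) @ V @ expm(-1j * H0 * t)
    U_I = expm(-1j * H_I * dt) @ U_I

# Transform back: |psi(t)> = e^{-iH0 t} U_I |psi(0)>.
psi_int = expm(-1j * H0 * t_final) @ U_I @ psi0

fidelity = abs(np.vdot(psi_exact, psi_int)) ** 2
```

The two pictures agree up to the O(dt²) error of the product-formula approximation of the time-ordered exponential.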
We consider hierarchically implemented quantum error correction (HI-QEC), in which the fidelities of logical qubits are differentially optimized to enhance the capabilities of quantum devices in scientific applications. By employing qubit representations that propagate hierarchies in simulated systems to hierarchies in logical-qubit noise sensitivities, heterogeneity in the distribution of physical qubits among logical qubits can be systematically structured. For concreteness, we estimate HI-QEC's impact on surface-code resources in computing low-energy observables to a fixed precision, finding that reductions of up to ~60% in qubit requirements are possible in early error-corrected simulations. This heterogeneous distribution of physical qubits among logical qubits is identified as another element that can be optimized in the co-design process of quantum simulations of Standard Model physics.
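The flavor of resource estimate described above can be sketched with the standard surface-code scaling heuristic p_L ≈ A (p/p_th)^((d+1)/2) and the rotated-surface-code qubit count 2d² − 1. The constants, error budgets, and logical-qubit counts below are illustrative assumptions, not the paper's numbers; they only show how giving a few sensitive logical qubits a tighter error budget than the rest reduces the total physical-qubit count.

```python
def required_distance(p_phys, p_target, p_th=1e-2, A=0.1):
    """Smallest odd code distance d with A*(p_phys/p_th)**((d+1)/2) <= p_target.
    Standard surface-code scaling heuristic; constants are illustrative."""
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

def physical_qubits(d):
    # Rotated surface code: d*d data qubits plus d*d - 1 syndrome qubits.
    return 2 * d * d - 1

p = 1e-3  # assumed physical error rate
# Heterogeneous targets: a few "sensitive" logical qubits get a tight
# logical error budget, the rest a looser one (hypothetical numbers).
d_tight = required_distance(p, 1e-12)
d_loose = required_distance(p, 1e-8)

uniform = 20 * physical_qubits(d_tight)  # all 20 logical qubits at the tight target
hetero = 4 * physical_qubits(d_tight) + 16 * physical_qubits(d_loose)
savings = 1 - hetero / uniform
```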
Recent work conjectured that entanglement is minimized in low-energy hadronic scattering processes. It was shown that the minimization of the entanglement power (EP) of the low-energy baryon-baryon S-matrix implies novel spin-flavor symmetries that are distinct from large-N_c QCD predictions and are confirmed by high-precision lattice QCD simulations. Here the conjecture of minimal entanglement is investigated for scattering processes involving pions and nucleons. The EP of the S-matrix is constructed for the pi-pi and pi-N systems, and the consequences of minimization of entanglement are discussed and compared with large-N_c QCD predictions.
Remarkable advances in isolating, controlling and entangling quantum systems are transforming what was once a curious feature of quantum mechanics into a vehicle for disruptive scientific and technological progress. Pursuing the vision articulated by Feynman, a concerted effort across many areas of research and development is introducing prototypical digital quantum devices into the computing ecosystem available to domain scientists. Through interactions with these early quantum devices, the abstract vision of exploring classically-intractable quantum systems is evolving toward becoming a tangible reality. Beyond catalyzing these technological advances, entanglement is enabling parallel progress as a diagnostic for quantum correlations and as an organizational tool, both guiding improved understanding of quantum many-body systems and quantum field theories defining and emerging from the Standard Model. From the perspective of three domain science theorists, this article compiles “thoughts about the interface” on entanglement, complexity, and quantum simulation in an effort to contextualize recent NISQ-era progress with the scientific objectives of nuclear and high-energy physics.
In this work we present the Scaled QUantum IDentifier (SQUID), an open-source framework for exploring hybrid quantum-classical algorithms for classification problems. The classical infrastructure is based on PyTorch, and we provide a standardized design to implement a variety of quantum models with the capability of back-propagation for efficient training. We present the structure of our framework and provide examples of using SQUID on a standard binary classification problem from the popular MNIST dataset. In particular, we highlight the implications of the choice of output for variational quantum models on the scalability of gradient-based optimization.
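One way the choice of output interacts with gradient-based training can be illustrated with the parameter-shift rule for an expectation-valued output of a variational circuit. This is a generic plain-NumPy sketch, not the SQUID API: a single RY rotation on |0⟩ measured in Z, whose exact gradient is recovered from two shifted evaluations.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation(theta):
    """<Z> after RY(theta) acting on |0>; analytically equal to cos(theta)."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    Z = np.diag([1.0, -1.0])
    return float(psi @ Z @ psi)

def parameter_shift_grad(theta):
    # Exact gradient for gates generated by a Pauli operator:
    # evaluate the same circuit at theta +- pi/2.
    return 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))

theta = 0.7
grad = parameter_shift_grad(theta)  # equals d cos(theta)/d theta = -sin(theta)
```

Because each parameter requires two circuit evaluations, the cost of a full gradient scales linearly with the number of variational parameters, which is the scalability consideration the abstract alludes to.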
Disjoint regions of the latticized, massless scalar field vacuum become separable at large distances beyond the entanglement sphere, a distance that extends to infinity in the continuum limit. Through numerical calculations in one, two, and three dimensions, the radius of the entanglement sphere is found to be determined by the highest momentum mode of the field supported across the diameter, d, of two identical regions. As a result, the long-distance behavior of the entanglement is determined by the short-distance structure of the field. Effective field theories (EFTs), describing a system up to a given momentum scale Λ, are expected to share this feature, with regions of the EFT vacuum separable (or dependent on the UV completion) beyond a distance proportional to Λ. The smallest non-zero value of the entanglement negativity supported by the field at large distances is conjectured to be N ~ exp(−Λd), independent of the number of spatial dimensions. This phenomenon may be manifest in perturbative QCD processes.
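The separability of distant regions can be probed in the simplest setting, a massless harmonic chain with open boundaries, by computing the logarithmic negativity of two single-site regions from the ground-state covariance matrix. This is a minimal Gaussian-state sketch (ħ = 1, vacuum symplectic eigenvalue 1/2; chain size and site choices are illustrative assumptions): negativity is nonzero for adjacent sites but vanishes once the sites are well separated.

```python
import numpy as np
from scipy.linalg import sqrtm

def negativity_two_sites(N, i, j):
    """Logarithmic negativity between single sites i and j of the ground
    state of a massless harmonic chain with open boundary conditions."""
    # Coupling matrix K: discrete Laplacian on an open chain.
    K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    X = 0.5 * np.real(sqrtm(np.linalg.inv(K)))  # <q_a q_b> correlations
    P = 0.5 * np.real(sqrtm(K))                 # <p_a p_b> correlations
    idx = np.ix_([i, j], [i, j])
    Xr, Pr = X[idx], P[idx]
    # Partial transpose of a Gaussian state flips the sign of p_j.
    L = np.diag([1.0, -1.0])
    nu = np.sqrt(np.abs(np.linalg.eigvals(Xr @ (L @ Pr @ L))))
    # Log-negativity from symplectic eigenvalues below the vacuum value 1/2.
    return float(sum(max(0.0, -np.log(2 * n)) for n in nu))

en_near = negativity_two_sites(60, 29, 30)  # adjacent sites near the middle
en_far  = negativity_two_sites(60, 25, 35)  # sites ten lattice units apart
```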
Collective neutrino oscillations can potentially play an important role in transporting lepton flavor in astrophysical scenarios where the neutrino density is large; typical examples are the early universe and supernova explosions. It has been argued in the past that simple models of the neutrino Hamiltonian designed to describe forward scattering can support substantial flavor evolution on very short time scales t ≈ log(N)/(G_F ρ), with N the number of neutrinos, G_F the Fermi constant, and ρ the neutrino density. This finding is in tension with results for a similar but exactly solvable model, for which t ≈ √N/(G_F ρ) instead. In this work we provide a coherent explanation of this tension in terms of dynamical phase transitions (DPTs) and study the possible impact that a DPT could have on more realistic models of neutrino oscillations and on their mean-field approximation.
In astrophysical scenarios with large neutrino density, such as supernovae and the early universe, neutrino-neutrino interactions can give rise to collective flavor oscillations in the out-of-equilibrium dynamics of a neutrino cloud. The role of quantum correlations in these phenomena is not yet well understood, in large part due to complications in solving for the real-time evolution of the strongly coupled many-body system. Future fault-tolerant quantum computers hold the promise to overcome many of these limitations and provide direct access to the correlated neutrino dynamics. In this work, we present the first simulation of a small system of interacting neutrinos using current-generation quantum devices. We introduce a strategy to overcome limitations in the natural connectivity of the qubits and use it to track the evolution of entanglement in real time. The results show the critical importance of error-mitigation techniques for extracting meaningful entanglement measures from noisy, near-term quantum devices.
Collective neutrino oscillations play a crucial role in transporting lepton flavor in astrophysical settings, such as supernovae, where the neutrino density is large. In this regime, neutrino-neutrino interactions are important and simulations in mean-field approximations show evidence for collective oscillations occurring at time scales much larger than those associated with vacuum oscillations. In this work, we study the out-of-equilibrium dynamics of a corresponding spin model using Matrix Product States and show how collective bipolar oscillations can be triggered by quantum fluctuations if appropriate initial conditions are present. The origin of these flavor oscillations, absent in the mean-field, can be traced to the presence of a dynamical phase transition, which drastically modifies the real-time evolution of the entanglement entropy. We find entanglement entropies scaling at most logarithmically in the system size, suggesting that classical tensor network methods could be efficient in describing collective neutrino dynamics more generally.
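The class of spin models referenced above can be explored exactly for a handful of neutrinos. This sketch (all parameters, N = 4, the mixing angle, and the coupling μ, are illustrative assumptions, not the paper's setup) builds an all-to-all Hamiltonian with a one-body vacuum term along a tilted mixing axis plus a σ·σ pair coupling, evolves a product flavor state exactly, and reads off the flavor polarization of one neutrino while checking conservation of norm and energy.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def op_on(site, op, N):
    """Embed a single-qubit operator acting on `site` in an N-qubit register."""
    out = np.array([[1.0]], dtype=complex)
    for k in range(N):
        out = np.kron(out, op if k == site else I2)
    return out

N, theta, mu = 4, 0.2, 1.0                               # toy parameters
B = np.array([np.sin(2 * theta), 0.0, -np.cos(2 * theta)])  # vacuum mixing axis

H = np.zeros((2**N, 2**N), dtype=complex)
for i in range(N):
    w = 1.0 + 0.1 * i  # spread of vacuum oscillation frequencies
    H += 0.5 * w * (B[0] * op_on(i, X, N) + B[1] * op_on(i, Y, N) + B[2] * op_on(i, Z, N))
for i in range(N):
    for j in range(i + 1, N):
        H += (mu / (2 * N)) * sum(op_on(i, P, N) @ op_on(j, P, N) for P in (X, Y, Z))

# Start in the product "flavor" state |0000> and evolve for a fixed time.
psi = np.zeros(2**N, dtype=complex)
psi[0] = 1.0
e0 = np.real(psi.conj() @ H @ psi)
psi_t = expm(-1j * H * 5.0) @ psi
e_t = np.real(psi_t.conj() @ H @ psi_t)
pol = np.real(psi_t.conj() @ op_on(0, Z, N) @ psi_t)  # flavor polarization, neutrino 0
```

The same construction scales only to a few tens of spins classically, which is why the abstract's Matrix Product State methods (and their observed logarithmic entanglement growth) matter for larger systems.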
Maintaining local interactions in the quantum simulation of gauge field theories relegates most states in the Hilbert space to be unphysical—theoretically benign, but experimentally difficult to avoid. Reformulations of the gauge fields can modify the ratio of physical to gauge-variant states, often by classically preprocessing the Hilbert space and modifying the representation of the field on qubit degrees of freedom. This paper considers the implications of representing SU(3) Yang-Mills gauge theory on a lattice of irreducible representations in both a global basis of projected global quantum numbers and a local basis in which controlled-plaquette operators support efficient time evolution. Classically integrating over the internal gauge space at each vertex (e.g., color isospin and color hypercharge) significantly reduces both the qubit requirements and the dimensionality of the unphysical Hilbert space. Initiating tuning procedures that may inform future calculations at scale, the time evolution of one- and two-plaquette systems is implemented on one of IBM's superconducting quantum devices, and early benchmark quantities are identified. The potential advantages of qudit environments, with either constrained 2D hexagonal or 1D nearest-neighbor internal state connectivity, are discussed for future large-scale calculations.
Editors' Suggestion in Physical Review D.