Simulation of Collective Neutrino Oscillations on a Quantum Computer

Benjamin Hall, Alessandro Roggero, Alessandro Baroni, Joseph A. Carlson | arXiv:2102.12556

In astrophysical scenarios with large neutrino density, like supernovae and the early universe, the presence of neutrino-neutrino interactions can give rise to collective flavor oscillations in the out-of-equilibrium dynamics of a neutrino cloud. The role of quantum correlations in these phenomena is not yet well understood, in large part due to the difficulty of solving for the real-time evolution of the strongly coupled many-body system. Future fault-tolerant quantum computers hold the promise to overcome many of these limitations and provide direct access to the correlated neutrino dynamics. In this work, we present the first simulation of a small system of interacting neutrinos using current-generation quantum devices. We introduce a strategy to overcome limitations in the natural connectivity of the qubits and use it to track the evolution of entanglement in real time. The results show the critical importance of error-mitigation techniques for extracting meaningful entanglement measures from noisy, near-term quantum devices.
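
For orientation, the kind of model being simulated can be written down in a few lines: each neutrino is a flavor qubit evolving under one-body vacuum-oscillation terms plus all-to-all two-body neutrino-neutrino couplings. The sketch below evolves such a toy Hamiltonian exactly with classical linear algebra and tracks a single-neutrino entanglement entropy; the couplings, system size, and initial state are illustrative placeholders, not the parameters or circuits used on the hardware in the paper.

    # Minimal sketch: exact real-time evolution of a few "flavor qubits" under a
    # schematic collective-neutrino spin Hamiltonian, tracking the entanglement
    # entropy of one neutrino.  All couplings are placeholders.
    import numpy as np
    from scipy.linalg import expm

    I2 = np.eye(2)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.diag([1.0, -1.0]).astype(complex)

    def op(single, site, n):
        """Embed a single-qubit operator at `site` in an n-qubit register."""
        mats = [single if k == site else I2 for k in range(n)]
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    N = 4                    # number of neutrinos (qubits)
    omega, mu = 1.0, 2.0     # vacuum and nu-nu coupling strengths (placeholders)
    H = sum(0.5 * omega * op(sz, i, N) for i in range(N))
    for i in range(N):
        for j in range(i + 1, N):
            H = H + (mu / (2 * N)) * sum(op(s, i, N) @ op(s, j, N) for s in (sx, sy, sz))

    psi = np.zeros(2 ** N, dtype=complex)
    psi[int("0011", 2)] = 1.0   # two neutrinos in each initial flavor

    for t in np.linspace(0.0, 4.0, 9):
        psi_t = expm(-1j * H * t) @ psi
        # reduced density matrix of neutrino 0 and its entanglement entropy
        M = np.reshape(psi_t, (2, 2 ** (N - 1)))
        rho0 = M @ M.conj().T
        lam = np.linalg.eigvalsh(rho0)
        S = -sum(l * np.log2(l) for l in lam if l > 1e-12)
        print(f"t = {t:4.1f}   S_1(t) = {S:.3f}")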


Entanglement and Many-Body Effects in Collective Neutrino Oscillations

Alessandro Roggero | arXiv:2102.10188

Collective neutrino oscillations play a crucial role in transporting lepton flavor in astrophysical settings, such as supernovae, where the neutrino density is large. In this regime, neutrino-neutrino interactions are important, and simulations in mean-field approximations show evidence for collective oscillations occurring at time scales much larger than those associated with vacuum oscillations. In this work, we study the out-of-equilibrium dynamics of a corresponding spin model using Matrix Product States and show how collective bipolar oscillations can be triggered by quantum fluctuations under appropriate initial conditions. The origin of these flavor oscillations, absent in the mean-field approximation, can be traced to the presence of a dynamical phase transition, which drastically modifies the real-time evolution of the entanglement entropy. We find entanglement entropies scaling at most logarithmically in the system size, suggesting that classical tensor-network methods could be efficient in describing collective neutrino dynamics more generally.
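
The central diagnostic quoted above, the bipartite entanglement entropy, comes from the Schmidt spectrum across a cut of the chain, which is exactly what a Matrix Product State stores bond by bond. A brute-force state-vector sketch (with a placeholder state, not the paper's spin model) shows the computation:

    # Sketch: half-chain entanglement entropy from the Schmidt decomposition of a
    # state vector.  MPS methods track the same Schmidt spectrum locally at each
    # bond; here we use brute force on a small placeholder state.
    import numpy as np

    def half_chain_entropy(psi, n_qubits):
        """Von Neumann entropy (in bits) across the middle cut of an n-qubit state."""
        nA = n_qubits // 2
        M = np.reshape(psi, (2 ** nA, 2 ** (n_qubits - nA)))
        s = np.linalg.svd(M, compute_uv=False)      # Schmidt coefficients
        p = s ** 2
        p = p[p > 1e-12]
        return float(-np.sum(p * np.log2(p)))

    # Example: a GHZ-like state of 6 qubits has exactly 1 bit of half-chain entropy.
    n = 6
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    print(half_chain_entropy(psi, n))   # -> 1.0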


A Trailhead for Quantum Simulation of SU(3) Yang-Mills Lattice Gauge Theory in the Local Multiplet Basis

Anthony Ciavarella, Natalie Klco, Martin Savage | arXiv:2101.10227

Maintaining local interactions in the quantum simulation of gauge field theories relegates most states in the Hilbert space to being unphysical: theoretically benign, but experimentally difficult to avoid. Reformulations of the gauge fields can modify the ratio of physical to gauge-variant states, often through classically preprocessing the Hilbert space and modifying the representation of the field on qubit degrees of freedom. This paper considers the implications of representing SU(3) Yang-Mills gauge theory on a lattice of irreducible representations, in both a global basis of projected global quantum numbers and a local basis in which controlled-plaquette operators support efficient time evolution. Classically integrating over the internal gauge space at each vertex (e.g., color isospin and color hypercharge) significantly reduces both the qubit requirements and the dimensionality of the unphysical Hilbert space. Initiating tuning procedures that may inform future calculations at scale, the time evolution of one and two plaquettes is implemented on one of IBM's superconducting quantum devices, and early benchmark quantities are identified. The potential advantages of qudit environments, with either constrained 2D hexagonal or 1D nearest-neighbor internal state connectivity, are discussed for future large-scale calculations.
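
As a cartoon of evolving in a truncated irreducible-representation basis, the snippet below time-evolves a single plaquette kept in only two electric-irrep states; the electric energies and the magnetic (plaquette) matrix element are placeholders rather than the SU(3) values derived in the paper.

    # Toy sketch: real-time evolution of a single plaquette truncated to two
    # electric-irrep basis states.  The electric energies and the plaquette
    # (magnetic) matrix element are placeholders, not the SU(3) Clebsch-Gordan
    # values worked out in the paper.
    import numpy as np
    from scipy.linalg import expm

    g2 = 1.0                 # illustrative coupling
    E = np.diag([0.0, 3.0])  # placeholder electric energies of the two irreps
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])   # placeholder plaquette (magnetic) operator

    H = (g2 / 2.0) * E - (1.0 / (2.0 * g2)) * B

    psi0 = np.array([1.0, 0.0], dtype=complex)   # electric vacuum
    for t in np.linspace(0.0, 6.0, 7):
        psi_t = expm(-1j * H * t) @ psi0
        print(f"t = {t:3.1f}   P(excited irrep) = {abs(psi_t[1])**2:.3f}")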

Editors' Suggestion in Physical Review D.


Preparation of Excited States for Nuclear Dynamics on a Quantum Computer

Alessandro Roggero, Chenyi Gu, Alessandro Baroni, Thomas Papenbrock | arXiv:2009.13485

We study two different methods to prepare excited states on a quantum computer, a key initial step in the study of dynamics within linear response theory. The first method uses unitary evolution for a short time T = O(√(1−F)) to approximate the action of an excitation operator O with fidelity F and success probability P ≈ 1−F. The second method probabilistically applies the excitation operator using the Linear Combination of Unitaries (LCU) algorithm. We benchmark these techniques on emulated and real quantum devices, using a toy model for thermal neutron-proton capture. Despite its larger memory footprint, the LCU-based method is efficient even on current-generation noisy devices and can be implemented at a lower gate cost than a naive analysis would suggest. These findings show that quantum techniques designed to achieve good asymptotic scaling on fault-tolerant quantum devices might also provide practical benefits on devices with limited connectivity and gate fidelity.
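
To see why the LCU route is probabilistic, consider the textbook one-ancilla construction for O = c0·U0 + c1·U1: prepare the ancilla according to the coefficients, apply the unitaries controlled on the ancilla, un-prepare, and post-select on the ancilla returning to |0⟩. The sketch below simulates this with state vectors; the coefficients, unitaries, and input state are placeholders, not the excitation operator of the neutron-proton capture model.

    # Sketch of the one-ancilla Linear Combination of Unitaries (LCU) construction
    # for O = c0*U0 + c1*U1 on a single system qubit, simulated with state vectors.
    # Coefficients, unitaries, and the input state are placeholders.
    import numpy as np

    c = np.array([0.6, 0.4])                        # positive coefficients (placeholders)
    U0 = np.eye(2, dtype=complex)                   # placeholder unitaries
    U1 = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli X

    # PREPARE rotates the ancilla |0> -> (sqrt(c0)|0> + sqrt(c1)|1>)/sqrt(c0+c1).
    a = np.sqrt(c / c.sum())
    PREP = np.array([[a[0], -a[1]],
                     [a[1],  a[0]]], dtype=complex)

    # SELECT applies U0 (U1) when the ancilla, the first tensor factor, is |0> (|1>).
    SELECT = np.zeros((4, 4), dtype=complex)
    SELECT[:2, :2] = U0
    SELECT[2:, 2:] = U1

    psi = np.array([1, 0], dtype=complex)           # system input state (placeholder)
    state = np.kron(np.array([1, 0], dtype=complex), psi)   # |0>_anc |psi>
    state = np.kron(PREP.conj().T, np.eye(2)) @ SELECT @ np.kron(PREP, np.eye(2)) @ state

    # Post-selecting the ancilla in |0> leaves the unnormalized state O|psi>/(c0+c1);
    # the success probability is ||O|psi>||^2 / (c0+c1)^2.
    branch = state[:2]
    O = c[0] * U0 + c[1] * U1
    print("P(success) =", float(np.vdot(branch, branch).real))
    print("expected   =", float(np.linalg.norm(O @ psi) ** 2 / c.sum() ** 2))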


Geometry and entanglement in the scattering matrix

Silas Beane, Roland Farrell | arXiv:2011.01278

A formulation of nucleon-nucleon scattering is developed in which the S-matrix, rather than an effective field theory (EFT) action, is the fundamental object. Spacetime plays no role in this description: the S-matrix is a trajectory that moves between RG fixed points in a compact theory space defined by unitarity. This theory space has a natural operator definition, and a geometric embedding of the unitarity constraints in four-dimensional Euclidean space yields a flat torus, which serves as the stage on which the S-matrix propagates. Trajectories with vanishing entanglement are special geodesics between RG fixed points on the flat torus, while entanglement is driven by an external potential. The system of equations describing S-matrix trajectories is in general complicated; however, the very-low-energy S-matrix, which appears at leading order in the EFT description, possesses a UV/IR conformal invariance that renders the system of equations integrable and completely determines the potential. In this geometric viewpoint, inelasticity is in correspondence with the radius of a three-dimensional hyperbolic space whose two-dimensional boundary is the flat torus. This space has a singularity at vanishing radius, corresponding to maximal violation of unitarity. The trajectory on the flat-torus boundary can be explicitly constructed from a bulk trajectory with a quantifiable error, providing a simple example of a holographic quantum error-correcting code.
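
For orientation, in related work on S-matrix entanglement the low-energy s-wave nucleon-nucleon S-matrix is commonly parameterized in terms of the spin-singlet and spin-triplet phase shifts δ0 and δ1 as

    S(p) = ½ (e^{2iδ1(p)} + e^{2iδ0(p)}) 1 + ½ (e^{2iδ1(p)} − e^{2iδ0(p)}) SWAP ,

so that unitarity confines the two phases to a torus, and the S-matrix generates no spin entanglement precisely when it is proportional to 1 or to SWAP, i.e. when δ1 − δ0 = 0 or π/2 (mod π); these are the vanishing-entanglement geodesics referred to above.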


An Algorithm for Quantum Computation of Particle Decays

Anthony Ciavarella | arXiv:2007.04447

A quantum algorithm is developed to calculate decay rates and cross sections using quantum resources that scale polynomially in the system size, assuming similar scaling for state preparation and time evolution. This is done by computing finite-volume one- and two-particle Green's functions on the quantum hardware. Particle decay rates and two-particle scattering cross sections are extracted from the imaginary parts of these Green's functions. A 0+1 dimensional implementation of this method is demonstrated on IBM's superconducting quantum hardware for the decay of a heavy scalar particle to a pair of light scalars.
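
A classical cartoon of the extraction step helps fix ideas: couple a single "heavy" state to a dense band of two-particle states, compute G(ω) = ⟨h|(ω − H + iη)⁻¹|h⟩, and read the decay rate off the width of −Im G/π. The level spacing, coupling, and smearing η below are placeholders, not quantities from the paper's scalar-field implementation.

    # Cartoon of extracting a decay rate from the imaginary part of a Green's
    # function: one "heavy" state |h> coupled uniformly to a band of two-particle
    # states.  Couplings and spacings are placeholders; Fermi's golden rule gives
    # Gamma ~ 2*pi*g^2*rho for this toy model (the peak is further broadened by eta).
    import numpy as np

    M_heavy = 0.0          # heavy-state energy (placeholder)
    n_band = 400           # number of discretized two-particle states
    band = np.linspace(-2.0, 2.0, n_band)
    rho = n_band / (band[-1] - band[0])      # density of states
    g = 0.02                                 # uniform coupling (placeholder)

    H = np.zeros((n_band + 1, n_band + 1))
    H[0, 0] = M_heavy
    H[1:, 1:] = np.diag(band)
    H[0, 1:] = g
    H[1:, 0] = g

    eta = 0.02                               # small imaginary part (finite-volume smearing)
    omegas = np.linspace(-0.6, 0.6, 241)
    e = np.zeros(n_band + 1); e[0] = 1.0     # the heavy state |h>
    spectral = []
    for w in omegas:
        G = e @ np.linalg.solve((w + 1j * eta) * np.eye(n_band + 1) - H, e)
        spectral.append(-G.imag / np.pi)

    half_max = max(spectral) / 2
    above = omegas[np.array(spectral) > half_max]
    print("FWHM of -Im G/pi        ~", above[-1] - above[0])
    print("golden rule 2*pi*g^2*rho =", 2 * np.pi * g ** 2 * rho)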


Spectral Density Estimation with the Gaussian Integral Transform

Alessandro Roggero | arXiv:2004.04889

The spectral density operator ρ(ω) = δ(ω−H) plays a central role in linear response theory, as its expectation value, the dynamical response function, can be used to compute scattering cross sections. In this work, we describe a near-optimal quantum algorithm providing an approximation to the spectral density with energy resolution Δ and error ϵ using O(√(log(1/ϵ)(log(1/Δ)+log(1/ϵ)))/Δ) operations. This is achieved without using expensive approximations to the time-evolution operator, exploiting instead qubitization to implement an approximate Gaussian Integral Transform (GIT) of the spectral density. We also describe appropriate error metrics to assess the quality of spectral-function approximations more generally.
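
On a tiny system the quantity the algorithm targets can be written out classically: the GIT of the spectral density is the Gaussian-smeared distribution Σ_n |⟨n|ψ⟩|² exp(−(ω−E_n)²/2Δ²)/√(2πΔ²). The sketch below evaluates it for a placeholder Hamiltonian and state.

    # Classical reference for what the GIT-based algorithm estimates: the Gaussian-
    # smeared spectral density of a toy Hamiltonian in a toy state (placeholders).
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(8, 8))
    H = (A + A.T) / 2                     # toy Hermitian Hamiltonian (placeholder)
    E, V = np.linalg.eigh(H)

    psi = rng.normal(size=8)
    psi /= np.linalg.norm(psi)
    weights = np.abs(V.T @ psi) ** 2      # |<n|psi>|^2

    Delta = 0.3                           # target energy resolution
    def smeared_density(omega):
        gauss = np.exp(-((omega - E) ** 2) / (2 * Delta ** 2)) / np.sqrt(2 * np.pi * Delta ** 2)
        return float(weights @ gauss)

    for w in np.linspace(E.min(), E.max(), 7):
        print(f"omega = {w:6.2f}   rho_Delta(omega) = {smeared_density(w):.4f}")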


Geometric Quantum Information Structure in Quantum Fields and their Lattice Simulation

Natalie Klco, Martin Savage | arXiv:2008.03647

The upper limit to distillable entanglement between two disconnected regions of a massless noninteracting scalar field theory decays exponentially with a geometric decay constant. When regulated at short distances with a spatial lattice, this entanglement abruptly vanishes beyond a dimensionless separation, defining a negativity sphere. In two spatial dimensions, we determine this geometric decay constant between a pair of disks and the growth of the negativity sphere toward the continuum through a series of lattice calculations. Making the connection to quantum field theories in three spatial dimensions, and assuming such quantum information scales appear also in quantum chromodynamics (QCD), a new relative scale may be present in effective field theories describing the low-energy dynamics of nucleons and nuclei. We highlight potential impacts of the distillable entanglement structure on effective field theories, lattice QCD calculations, and future quantum simulations.
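
A one-dimensional analogue of the lattice calculation fits in a short script, since the free-field vacuum is Gaussian: the logarithmic negativity between two blocks of sites follows from the field and momentum two-point functions restricted to those sites, with the partial transpose implemented as a sign flip of the momenta in one block. The lattice size, regulator mass, and regions below are placeholders (the paper itself works with disks in two spatial dimensions).

    # Sketch (1D analogue of the paper's 2D disks): logarithmic negativity between
    # two blocks of a free lattice scalar field in its Gaussian vacuum.
    # H = 1/2 sum pi^2 + 1/2 phi.K.phi with K the lattice Laplacian plus mass term.
    import numpy as np
    from scipy.linalg import sqrtm

    N, m = 64, 1e-2                       # lattice sites and small regulator mass
    K = (2.0 + m ** 2) * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    K[0, -1] = K[-1, 0] = -1.0            # periodic boundary conditions

    Ksq = np.real(sqrtm(K))
    X = 0.5 * np.linalg.inv(Ksq)          # <phi_i phi_j>
    P = 0.5 * Ksq                         # <pi_i pi_j>

    # Adjacent blocks give nonzero negativity; pulling them apart by more than a
    # few sites makes it vanish identically (the lattice analogue of the
    # negativity sphere discussed above).
    A = list(range(0, 6))                 # region A sites (placeholder)
    B = list(range(6, 12))                # region B sites (placeholder)
    sites = A + B
    Xr = X[np.ix_(sites, sites)]
    Pr = P[np.ix_(sites, sites)]

    # Partial transpose on B flips the sign of B's momenta.
    R = np.diag([1.0] * len(A) + [-1.0] * len(B))
    Pt = R @ Pr @ R

    nu = np.sqrt(np.linalg.eigvals(Xr @ Pt).real)     # symplectic spectrum
    log_neg = float(np.sum([max(0.0, -np.log2(2 * v)) for v in nu]))
    print("logarithmic negativity =", log_neg)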


Quantum Algorithms for Simulating the Lattice Schwinger Model

Alexander Shaw, Pavel Lougovski, Jesse Stryker, Nathan Wiebe | arXiv:2002.11146

The Schwinger model (quantum electrodynamics in 1+1 dimensions) is a testbed for the study of quantum gauge field theories. We give scalable, explicit digital quantum algorithms to simulate the lattice Schwinger model in both NISQ and fault-tolerant settings. In particular, we perform a tight analysis of low-order Trotter formula simulations of the Schwinger model, using recently derived commutator bounds, and give upper bounds on the resources needed for simulations in both scenarios. In lattice units, we find that a Schwinger model on N/2 physical sites with coupling constant x^(-1/2) and electric field cutoff x^(-1/2) Λ can be simulated on a quantum computer for time 2xT using a number of T-gates or CNOTs in Õ(N^(3/2) T^(3/2) x^(1/2) Λ) for fixed operator error. This scaling with the truncation Λ is better than that expected from algorithms such as qubitization or QDRIFT. Furthermore, we give scalable measurement schemes and algorithms to estimate observables, which we cost in both the NISQ and fault-tolerant settings by assuming a simple target observable, the mean pair density. Finally, we bound the root-mean-square error in estimating this observable via simulation as a function of the diamond distance between the ideal and actual CNOT channels. This work provides a rigorous analysis of simulating the Schwinger model, while also providing benchmarks against which subsequent simulation algorithms can be tested.
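
The commutator bounds at the heart of the Trotter analysis can be illustrated on any small two-term Hamiltonian H = A + B: the first-order product-formula error over r steps is bounded by roughly (t²/2r)·‖[A,B]‖, and the sketch below compares that bound with the actual error for placeholder spin terms (not the Schwinger model Hamiltonian itself).

    # Toy check of first-order Trotter (product-formula) error against the
    # commutator bound ~ t^2/(2r) * ||[A,B]||, the kind of bound exploited in the
    # paper's resource analysis.  A and B are placeholder two-qubit terms.
    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.diag([1.0, -1.0]).astype(complex)

    A = np.kron(sz, sz)                                       # placeholder term
    B = 0.7 * (np.kron(sx, np.eye(2)) + np.kron(np.eye(2), sx))  # placeholder term

    t = 1.0
    exact = expm(-1j * t * (A + B))
    comm_norm = np.linalg.norm(A @ B - B @ A, 2)

    for r in (1, 2, 4, 8, 16):
        step = expm(-1j * t / r * A) @ expm(-1j * t / r * B)
        trotter = np.linalg.matrix_power(step, r)
        err = np.linalg.norm(exact - trotter, 2)
        bound = t ** 2 / (2 * r) * comm_norm
        print(f"r = {r:2d}   error = {err:.4f}   commutator bound = {bound:.4f}")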


Entanglement Rearrangement in Self-Consistent Nuclear Structure Calculations

Caroline Robin, Martin J. Savage, Nathalie Pillet | arXiv:2007.09157

Entanglement properties of ⁴He and ⁶He are investigated using nuclear many-body calculations, specifically the single-nucleon entanglement entropy and the two-nucleon mutual information and negativity. Nuclear wavefunctions are obtained by performing active-space no-core configuration interaction calculations using a two-body nucleon-nucleon interaction derived from chiral effective field theory. Entanglement measures within single-particle bases, namely the harmonic oscillator (HO), Hartree-Fock (HF), natural (NAT), and variational natural (VNAT) bases, are found to exhibit different degrees of complexity. Entanglement in both nuclei is found to be more localized within the NAT and VNAT bases than within a HO basis for the optimal HO parameters, and, as anticipated, a core-valence (tensor product) structure emerges from the full six-body calculation of ⁶He. The two-nucleon mutual information shows that the VNAT basis, which typically exhibits good convergence properties, effectively decouples the active and inactive spaces. We conclude that measures of one- and two-nucleon entanglement are useful in analyzing the structure of nuclear wave functions, in particular the efficacy of basis states, and may provide useful metrics toward developing more efficient schemes for ab initio computations of the structure and reactions of nuclei, and of quantum many-body systems more generally.
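
The measures quoted above reduce to a few lines of linear algebra once a wavefunction is in hand: single-orbital entanglement entropies come from one-orbital reduced density matrices, and the two-orbital mutual information is I(i:j) = S_i + S_j − S_ij. The sketch below evaluates them for a schematic pairing-type state with orbitals treated as qubits in the occupation basis; the state is a placeholder, not a ⁴He or ⁶He wavefunction.

    # Sketch of single-orbital entropies and two-orbital mutual information for a
    # toy 4-orbital state in the occupation-number basis (orbitals treated as
    # qubits, which suffices for this schematic pairing-type placeholder state).
    import numpy as np

    def rdm(psi, keep, n):
        """Reduced density matrix of the orbitals listed in `keep`."""
        T = np.reshape(psi, [2] * n)
        other = [i for i in range(n) if i not in keep]
        T = np.transpose(T, keep + other)
        T = np.reshape(T, (2 ** len(keep), 2 ** len(other)))
        return T @ T.conj().T

    def entropy(rho):
        lam = np.linalg.eigvalsh(rho)
        return float(-sum(l * np.log2(l) for l in lam if l > 1e-12))

    n = 4
    psi = np.zeros(2 ** n, dtype=complex)
    psi[int("1100", 2)] = np.sqrt(0.8)      # pair in orbitals 0,1
    psi[int("0011", 2)] = np.sqrt(0.2)      # pair in orbitals 2,3

    S = {i: entropy(rdm(psi, [i], n)) for i in range(n)}
    print("single-orbital entropies:", {k: round(v, 3) for k, v in S.items()})
    for (i, j) in [(0, 1), (0, 2)]:
        Sij = entropy(rdm(psi, [i, j], n))
        print(f"I({i}:{j}) = {S[i] + S[j] - Sij:.3f}")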