*Note: The contents of these manuscripts are the IP of the authors, and may not be reproduced, presented to funding agencies or private foundations or in other presentations without attribution.*

## Quantum Simulation for High-energy Physics

This is the first time that quantum simulation for high-energy physics (HEP) has been studied in the U.S. decadal particle-physics community planning exercise; indeed, until recently this was not considered a mainstream topic in the community. This speaks to the remarkable growth of the subfield over the past few years, stimulated by impressive advances in quantum information science (QIS) and associated technologies over the past decade, and by significant investment in this area by government and private sectors in the U.S. and other countries. High-energy physicists have quickly identified problems of importance to our understanding of nature at the most fundamental level, from the tiniest distances to cosmological extents, that are intractable with classical computers but may benefit from quantum advantage. They have initiated, and continue to carry out, a vigorous program in theory, algorithm, and hardware co-design for simulations of relevance to the HEP mission. This community whitepaper aims to bring this exciting and challenging area of research into the spotlight, and to elaborate on the promises, requirements, challenges, and potential solutions over the next decade and beyond.

This whitepaper is prepared for the topical groups CompF6 (Quantum computing), TF05 (Lattice Gauge Theory), and TF10 (Quantum Information Science) within the Computational Frontier and Theory Frontier of the U.S. Community Study on the Future of Particle Physics (Snowmass 2021).

## From asymptotic freedom to theta vacua: Qubit embeddings of the O(3) nonlinear sigma model

Conventional lattice formulations of $\theta$ vacua in the $1+1$-dimensional $O(3)$ nonlinear sigma model suffer from a sign problem. Here, we construct the first sign-problem-free regularization for \emph{arbitrary} $\theta$. Using efficient lattice Monte Carlo (MC) computations, we demonstrate how a Hamiltonian model of spin-$\tfrac12$ degrees of freedom on a two-dimensional spatial lattice reproduces both the infrared sector for arbitrary $\theta$ and the ultraviolet physics of asymptotic freedom. Furthermore, as a model of qubits on a two-dimensional square lattice with only nearest-neighbor interactions, it is naturally suited for studying the physics of $\theta$ vacua and asymptotic freedom on near-term quantum devices. Our construction generalizes to $\theta$ vacua in all $\mathbb{CP}(N-1)$ models, solving a long-standing sign problem.

## Some Conceptual Aspects of Operator Design for Quantum Simulations of Non-Abelian Lattice Gauge Theories

In the Kogut-Susskind formulation of lattice gauge theories, a set of quantum numbers resides at the ends of each link to characterize the vertex-local gauge field. We discuss the role of these quantum numbers in propagating correlations and generating entanglement that ensures each vertex remains gauge invariant, despite time evolution induced by operators with (only) partial access to each vertex Hilbert space. Applied to recent proposals for eliminating vertex-local Hilbert spaces in quantum simulation, we describe how the required entanglement is generated via delocalization of the time evolution operator with nearest-neighbor controls. These hybridizations, organized with qudits or qubits, exchange classical operator pre-processing for reductions in resource requirements that extend throughout the lattice volume.

*(Contribution to proceedings of the 2021 Quantum Simulation for Strong Interactions (QuaSi) Workshops at the InQubator for Quantum Simulation (IQuS))*

## Entanglement and correlations in fast collective neutrino flavor oscillations

Collective neutrino oscillations play a crucial role in transporting lepton flavor in astrophysical settings like supernovae and neutron star binary merger remnants, which are characterized by large neutrino densities. In these settings, simulations in the mean-field approximation show that neutrino-neutrino interactions can overtake vacuum oscillations and give rise to fast collective flavor evolution on time scales t ~ μ^{-1}, with μ proportional to the local neutrino density. In this work, we study the full out-of-equilibrium flavor dynamics in simple multi-angle geometries displaying fast oscillations. Focusing on simple initial conditions, we analyze the production of pair correlations and entanglement in the complete many-body dynamics as a function of the number N of neutrinos in the system.

Similarly to simpler geometries with only two neutrino beams, we identify three regimes: stable configurations with vanishing flavor oscillations, marginally unstable configurations with evolution occurring at long time scales τ~μ^{-1}√N and unstable configurations showing flavor evolution on short time scales τ~μ^{-1}log(N). We present evidence that these fast collective modes are generated by the same dynamical phase transition which leads to the slow bipolar oscillations, establishing a connection between these two phenomena and explaining the difference in their time scales.

We conclude by discussing a semi-classical approximation which reproduces the entanglement entropy at short to medium time scales and can potentially be useful in situations with more complicated geometries, where classical simulation methods start to become inefficient.

## Large-charge conformal dimensions at the O(N) Wilson-Fisher fixed point

Recent work using a large-charge effective field theory (EFT) for the O(N) Wilson-Fisher conformal field theory has shown that the anomalous dimensions of large-charge operators can be expressed in terms of a few low-energy constants (LECs) of the EFT. By performing lattice Monte Carlo computations at the O(N) Wilson-Fisher critical point, we compute the anomalous dimensions of large-charge operators up to charge Q=10, and extract the leading and subleading LECs of the O(N) large-charge EFT for N=2,4,6,8. To alleviate the signal-to-noise ratio problem present in the large-charge sector of conventional lattice formulations of the O(N) theory, we employ a recently developed qubit formulation of the O(N) nonlinear sigma models with a worm algorithm. This enables us to test the validity of the large-charge expansion, and the recent predictions of the large-N expansion for the coefficients of the large-charge EFT.

## Qubit regularized O(N) nonlinear sigma models

Motivated by the prospect of quantum simulation of quantum field theories, we formulate the O(N) nonlinear sigma model as a “qubit” model with an (N+1)-dimensional local Hilbert space at each lattice site. Using an efficient worm algorithm in the worldline formulation, we demonstrate that the model has a second-order critical point in 2+1 dimensions, where the continuum physics of the nontrivial O(N) Wilson-Fisher fixed point is reproduced. We compute the critical exponents ν and η for the O(N) qubit models up to N=8, and find excellent agreement with known results in the literature from various analytic and numerical techniques for the O(N) Wilson-Fisher universality class. Our models are suited for studying O(N) nonlinear sigma models on quantum computers up to N=8 in d=2,3 spatial dimensions.

## Basic Elements for Simulations of Standard Model Physics with Quantum Annealers: Multigrid and Clock States

We explore the potential of D-Wave’s quantum annealers for computing basic components required for quantum simulations of Standard Model physics. By implementing a basic multigrid (including zooming) and specializing Feynman-clock algorithms, D-Wave’s Advantage is used to study harmonic and anharmonic oscillators relevant for lattice scalar field theories and effective field theories, the time evolution of a single plaquette of SU(3) Yang-Mills lattice gauge field theory, and the dynamics of flavor entanglement in four neutrino systems.

## Multiproduct formulas for time-dependent Hamiltonian simulation

In this work we provide a new approach for approximating an ordered operator exponential using an ordinary operator exponential that acts on the Hilbert space of the simulation as well as a finite-dimensional clock register. We show that as the number of qubits used for the clock grows, the error in the ordered operator exponential vanishes, as well as the entanglement between the clock register and the register of the state being acted upon. Hence, the clock is a convenient device that allows us to translate results for simulating time-independent systems to the time-dependent case. As an application, we provide a new family of multiproduct formulas (MPFs) for time-dependent Hamiltonians that yield both commutator scaling and poly-logarithmic error scaling. This, in turn, means that this method outperforms existing methods for simulating physically local, time-dependent Hamiltonians. Finally, we apply the formalism to show how qubitization can be generalized to the time-dependent case, and show that such a translation can be practical if the second derivative of the Hamiltonian is sufficiently small.
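
The multiproduct idea can be illustrated classically for a time-independent Hamiltonian: a short Richardson-style combination of symmetric Trotter formulas cancels the leading error term. The NumPy sketch below (toy 2×2 Hamiltonian and assumed parameters; not the paper's time-dependent, clock-register construction) combines S₂(t) and S₂(t/2)² with coefficients −1/3 and 4/3:

```python
import numpy as np

def expmi(H, t):
    """exp(-i t H) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
A, B = X, Z          # non-commuting Hamiltonian pieces, H = A + B

def trotter2(t, m):
    """m repetitions of the symmetric (second-order) Trotter step."""
    step = expmi(A, t / (2 * m)) @ expmi(B, t / m) @ expmi(A, t / (2 * m))
    return np.linalg.matrix_power(step, m)

def mpf(t):
    """Two-term multiproduct formula: a1*S2(t) + a2*S2(t/2)^2.
    Coefficients solve a1 + a2 = 1, a1 + a2/4 = 0, cancelling the 1/m^2 error."""
    return -trotter2(t, 1) / 3 + 4 * trotter2(t, 2) / 3

t = 0.4
exact = expmi(A + B, t)
err_trotter = np.linalg.norm(trotter2(t, 2) - exact, 2)
err_mpf = np.linalg.norm(mpf(t) - exact, 2)
print(err_trotter, err_mpf)   # the MPF error should be much smaller
```

The same Vandermonde-type linear system yields higher-order formulas with more terms, which is what makes the poly-logarithmic error scaling quoted above possible.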

## Nematic Confined Phases in the U(1) Quantum Link Model on a Triangular Lattice: An Opportunity for Near-Term Quantum Computations of String Dynamics on a Chip

The U(1) quantum link model on the triangular lattice has two rotation-symmetry-breaking nematic confined phases. Static external charges are connected by confining strings consisting of individual strands with fractionalized electric flux. The two phases are separated by a weak first-order phase transition with an emergent almost exact SO(2) symmetry. We construct a quantum circuit on a chip to facilitate near-term quantum computations of the non-trivial string dynamics.

## Preparation of the SU(3) Lattice Yang-Mills Vacuum with Variational Quantum Methods

Studying QCD and other gauge theories on quantum hardware requires the preparation of physically interesting states. The Variational Quantum Eigensolver (VQE) provides a way of performing vacuum state preparation on quantum hardware. In this work, VQE is applied to pure SU(3) lattice Yang-Mills on a single plaquette and one-dimensional plaquette chains. Bayesian optimization and gradient descent were investigated for performing the classical optimization. Ansatz states for plaquette chains are constructed in a scalable manner from smaller systems using domain decomposition and a stitching procedure analogous to the Density Matrix Renormalization Group (DMRG). Small examples are performed on IBM’s superconducting Manila processor.

## Classical and Quantum Evolution in a Simple Coherent Neutrino Problem

The extraordinary neutrino flux produced in extreme astrophysical environments like the early universe, core-collapse supernovae and neutron star mergers may produce coherent quantum neutrino oscillations on macroscopic length scales. The Hamiltonian describing this evolution can be mapped into quantum spin models with all-to-all couplings arising from neutrino-neutrino forward scattering. To date many studies of these oscillations have been performed in a mean-field limit where the neutrinos time evolve in a product state.

We examine a simple two-beam model evolving from an initial product state and compare the mean-field and many-body evolution. The symmetries in this model allow us to solve the real-time evolution for the quantum many-body system for hundreds or thousands of spins, far beyond what would be possible in a more general case with an exponential number (2^{N}) of quantum states. We compare mean-field and many-body solutions for different initial product states and ratios of one- and two-body couplings, and find that in all cases in the limit of infinite spins the mean-field (product state) and many-body solutions coincide for simple observables. This agreement can be understood as a consequence of the spectrum of the Hamiltonian and the initial energy distribution of the product states. We explore quantum information measures like entanglement entropy and purity of the many-body solutions, finding intriguing relationships between the quantum information measures and the dynamical behavior of simple physical observables.
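
The competition between mean-field and many-body behavior can be probed in a minimal all-to-all spin model. The NumPy sketch below (illustrative couplings and system size, not the specific two-beam geometry of the paper) evolves a small product state exactly and measures the half-system entanglement entropy, a quantity that is identically zero in any mean-field product-state description:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site, n):
    """Embed a single-site operator at `site` in an n-spin Hilbert space."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

def neutrino_hamiltonian(n, omega=1.0, mu=5.0):
    """Toy forward-scattering Hamiltonian (assumed parameters):
    H = (omega/2) sum_i sz_i + (mu/2n) sum_{i<j} sigma_i . sigma_j."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        H += 0.5 * omega * op(sz, i, n)
        for j in range(i + 1, n):
            for s in (sx, sy, sz):
                H += (mu / (2 * n)) * op(s, i, n) @ op(s, j, n)
    return H

def entanglement_entropy(psi, n, cut):
    """Von Neumann entropy (in bits) of the first `cut` spins."""
    rho = np.outer(psi, psi.conj()).reshape(2**cut, 2**(n - cut), 2**cut, 2**(n - cut))
    rhoA = np.trace(rho, axis1=1, axis2=3)
    w = np.linalg.eigvalsh(rhoA)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

n = 4
H = neutrino_hamiltonian(n)
w, V = np.linalg.eigh(H)
psi0 = np.zeros(2**n, dtype=complex); psi0[0b0011] = 1.0   # |up up down down>
t = 1.0
psi_t = V @ (np.exp(-1j * t * w) * (V.conj().T @ psi0))
S = entanglement_entropy(psi_t, n, n // 2)
print("S_half =", S)   # nonzero: genuine many-body correlations develop
```

Exact diagonalization like this is limited to small n; the symmetric models discussed in the abstract are what allow hundreds or thousands of spins.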

## Causality and dimensionality in geometric scattering

The scattering matrix which describes low-energy, non-relativistic scattering of two species of spin-1/2 fermions interacting via finite-range potentials can be obtained from a geometric action principle in which space and time do not appear explicitly [arXiv:2011.01278]. In the case of zero-range forces, constraints imposed by causality, requiring that the scattered wave not be emitted before the particles have interacted, translate into non-trivial geometric constraints on scattering trajectories in the geometric picture. The dependence of scattering on the number of spatial dimensions is also considered; by varying from three to two spatial dimensions, the dependence on spatial dimensionality in the geometric picture is found to be encoded in the phase of the harmonic potential that appears in the geometric action.

## UV/IR symmetries of the S-matrix and RG flow

The low-energy S-matrix which describes non-relativistic two-body scattering arising from finite-range forces has UV/IR symmetries that are hidden in the corresponding effective field theory (EFT) action. It is shown that the S-matrix symmetries are manifest as geometric properties of the RG flow of the coefficients of local operators in the EFT action.

## Symmetries of the nucleon-nucleon S-matrix and effective field theory expansions

The s-wave nucleon-nucleon (NN) scattering matrix (S-matrix) exhibits UV/IR symmetries which are hidden in the effective field theory (EFT) action and scattering amplitudes, and which explain some interesting generic features of the phase shifts. These symmetries offer clarifying interpretations of existing pionless EFT expansions, and suggest starting points for novel expansions. The leading-order (LO) S-matrix obtained in the pionless EFT with scattering lengths treated exactly is shown to have a UV/IR symmetry which leaves the sum of s-wave phase shifts invariant. A new scheme, which treats effective-range corrections exactly, and which possesses a distinct UV/IR symmetry at LO, is developed up to next-to-LO (NLO) and compared with data.

## Quantum Error Correction with Gauge Symmetries

Quantum simulations of Lattice Gauge Theories (LGTs) are often formulated on an enlarged Hilbert space containing both physical and unphysical sectors in order to retain a local Hamiltonian. We provide simple fault-tolerant procedures that exploit such redundancy by combining a phase-flip error correction code with the Gauss’ law constraint to correct one-qubit errors for a Z_{2} or truncated U(1) LGT in 1+1 dimensions with a link flux cutoff of 1. Unlike existing work on detecting violations of Gauss’ law, our circuits are fault tolerant, and the overall error correction scheme outperforms a naive application of the [[5,1,3]] code. The constructions outlined can be extended to LGT systems with larger cutoffs and may be of use in understanding how to hybridize error correction and quantum simulation for LGTs in higher space-time dimensions and with different symmetry groups.
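
The phase-flip ingredient can be sketched in isolation. The NumPy toy below implements only the standard three-qubit phase-flip code (it omits the Gauss-law constraint and the fault-tolerance layers that are the point of the paper): encode a logical state, apply a single Z error, and correct it from the stabilizer syndrome.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
Hgate = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus, minus = Hgate @ ket0, Hgate @ ket1
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
I2 = np.eye(2, dtype=complex)

def kron(*ops):
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def encode(a, b):
    """Logical a|0> + b|1>  ->  a|+++> + b|--->  (three-qubit phase-flip code)."""
    return a * kron(plus, plus, plus) + b * kron(minus, minus, minus)

def apply_z(state, site):
    return kron(*[Z if k == site else I2 for k in range(3)]) @ state

def syndrome(state):
    """Eigenvalues (+1/-1) of the stabilizers X0X1 and X1X2."""
    s1 = np.real(state.conj() @ kron(X, X, I2) @ state)
    s2 = np.real(state.conj() @ kron(I2, X, X) @ state)
    return int(round(s1)), int(round(s2))

def correct(state):
    """Decode the syndrome to locate a single Z error and undo it."""
    table = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}
    site = table[syndrome(state)]
    return state if site is None else apply_z(state, site)

a, b = 0.6, 0.8
logical = encode(a, b)
corrupted = apply_z(logical, 1)      # a single phase-flip error on qubit 1
recovered = correct(corrupted)
print("fidelity =", abs(logical.conj() @ recovered)**2)   # ≈ 1.0
```

In the paper's setting the Gauss-law constraint supplies additional, free syndrome information on top of a code like this one.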

## Entanglement Structures in Quantum Field Theories: Negativity Cores and Bound Entanglement in the Vacuum

We present numerical evidence for the presence of bound entanglement, in addition to distillable entanglement, between disjoint regions of the one-dimensional non-interacting scalar field vacuum. To reveal the entanglement structure, we construct a local unitary operation that transforms the high-body entanglement of latticized field regions into a tensor-product *core* of mixed (1 x 1) pairs exhibiting an exponential negativity hierarchy and a separable *halo* with non-zero entanglement. This separability-obscured entanglement (SOE) is driven by non-simultaneous separability within the full mixed state, rendering unentangled descriptions of the halo incompatible with a classically connected core. We quantify the halo SOE and find it to mirror the full negativity as a function of region separation, and conjecture that SOE provides a physical framework encompassing bound entanglement. Similar entanglement structures are expected to persist in higher dimensions and in more complex theories relevant to high-energy and nuclear physics, and may provide a language for describing the dynamics of information in transitioning from quarks and gluons to hadrons.

## Nuclear two point correlation functions on a quantum computer

The calculation of dynamic response functions is expected to be an early application benefiting from rapidly developing quantum hardware resources. The ability to calculate real-time quantities of strongly correlated quantum systems is one of the most exciting applications that can easily reach beyond the capabilities of traditional classical hardware. Response functions of fermionic systems at moderate momenta and energies, corresponding roughly to the Fermi energy of the system, are a potential early application because the relevant operators are nearly local and the energies can be resolved in moderately short real time, reducing the spatial resolution and gate depth required.

This is particularly the case in quasielastic electron and neutrino scattering from nuclei, a topic of great interest in the nuclear and particle physics communities and directly related to experiments designed to probe neutrino properties. In this work we calculate response functions for a highly simplified nuclear model by computing a two-point real-time correlation function for a Fermi-Hubbard model in two dimensions, with three distinguishable nucleons on four lattice sites, on current quantum hardware, and we evaluate current error-mitigation strategies.

## Spectral density reconstruction with Chebyshev polynomials

Accurate calculations of the spectral density in a strongly correlated quantum many body system are of fundamental importance to study many-particle dynamics in the linear response regime. Typical examples are the calculation of inclusive and semi-exclusive scattering cross sections in atomic nuclei and transport properties of nuclear and neutron star matter.

Integral transform techniques have played an important role in accessing the spectral density in a variety of nuclear systems. However, their accuracy is in practice limited by the need to perform a numerical inversion which is often ill-conditioned.

In order to circumvent this problem, a quantum algorithm based on an appropriate expansion in Chebyshev polynomials was recently proposed. In the present work we build on this idea. We show how to perform controllable reconstructions of the spectral density over a finite energy resolution with rigorous error estimates, while allowing for efficient simulations on classical computers. We apply our idea to simple model response functions and comment on the applicability of the method to study realistic systems using scalable nuclear many-body methods.
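
The Chebyshev-moment reconstruction can be prototyped classically with the kernel polynomial method. The sketch below (a classical stand-in with assumed parameters, not the quantum algorithm) computes Chebyshev moments of a random Hermitian matrix via the three-term recurrence and reconstructs a Jackson-damped local density of states, which is non-negative and normalized by construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random Hermitian "Hamiltonian", rescaled so its spectrum lies in (-1, 1)
d = 200
A = rng.normal(size=(d, d)); Hm = (A + A.T) / 2
Hm /= np.linalg.norm(Hm, 2) * 1.05

psi = rng.normal(size=d); psi /= np.linalg.norm(psi)

# Chebyshev moments mu_k = <psi| T_k(H) |psi> via the recurrence
K = 120
t_prev, t_cur = psi, Hm @ psi
mu = [psi @ t_prev, psi @ t_cur]
for _ in range(2, K):
    t_prev, t_cur = t_cur, 2 * Hm @ t_cur - t_prev
    mu.append(psi @ t_cur)
mu = np.array(mu)

# Jackson damping suppresses Gibbs oscillations in the truncated expansion
k = np.arange(K)
g = ((K - k + 1) * np.cos(np.pi * k / (K + 1))
     + np.sin(np.pi * k / (K + 1)) / np.tan(np.pi / (K + 1))) / (K + 1)

def density(w):
    """Smoothed local density of states at frequency w in (-1, 1)."""
    T = np.cos(k * np.arccos(w))
    weights = np.where(k == 0, 1.0, 2.0)
    return (g * weights * mu * T).sum() / (np.pi * np.sqrt(1 - w**2))

ws = np.linspace(-0.95, 0.95, 9)
vals = [density(w) for w in ws]
grid = np.linspace(-0.999, 0.999, 4001)
total = sum(density(w) for w in grid) * (grid[1] - grid[0])
print("samples:", vals)
print("integrated weight:", total)   # close to 1
```

On a quantum device the moments would come from measurements rather than matrix-vector products; the classical post-processing is the same.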

## Hybridized Methods for Quantum Simulation in the Interaction Picture

Conventional methods of quantum simulation involve trade-offs that limit their applicability to specific contexts where their use is optimal. This paper demonstrates how different simulation methods can be hybridized to improve performance for interaction picture simulations over known algorithms. These approaches show asymptotic improvements over the individual methods that comprise them and further make interaction picture simulation methods practical in the near term. Physical applications of these hybridized methods yield a gate complexity scaling as log²Λ in the electric cutoff Λ for the Schwinger Model and independent of the electron density for collective neutrino oscillations, outperforming the scaling for all current algorithms with these parameters. For the general problem of Hamiltonian simulation subject to dynamical constraints, these methods yield a query complexity independent of the penalty parameter λ used to impose an energy cost on time evolution into an unphysical subspace.

## Hierarchical Qubit Maps and Hierarchical Quantum Error Correction

We consider hierarchically implemented quantum error correction (HI-QEC) in which the fidelities of logical qubits are differentially optimized to enhance the capabilities of quantum devices in scientific applications. By employing qubit representations that propagate hierarchies in simulated systems to those in logical qubit noise sensitivities, heterogeneity in the distribution of physical qubits among logical qubits can be systematically structured. For concreteness, we estimate HI-QEC’s impact on surface code resources in computing low-energy observables to a fixed precision, finding up to ~60% reductions in qubit requirements possible in early error corrected simulations. This heterogeneous distribution of physical-to-logical qubits is identified as another element that can be optimized in the co-design process of quantum simulations of Standard Model physics.

## Entanglement minimization in hadronic scattering with pions

Recent work conjectured that entanglement is minimized in low-energy hadronic scattering processes. It was shown that the minimization of the entanglement power (EP) of the low-energy baryon-baryon S-matrix implies novel spin-flavor symmetries that are distinct from large-N_c QCD predictions and are confirmed by high-precision lattice QCD simulations. Here the conjecture of minimal entanglement is investigated for scattering processes involving pions and nucleons. The EP of the S-matrix is constructed for the π-π and π-N systems, and the consequences of minimization of entanglement are discussed and compared with large-N_c QCD expectations.

## Standard Model Physics and the Digital Quantum Revolution: Thoughts about the Interface

Remarkable advances in isolating, controlling and entangling quantum systems are transforming what was once a curious feature of quantum mechanics into a vehicle for disruptive scientific and technological progress. Pursuing the vision articulated by Feynman, a concerted effort across many areas of research and development is introducing prototypical digital quantum devices into the computing ecosystem available to domain scientists. Through interactions with these early quantum devices, the abstract vision of exploring classically-intractable quantum systems is evolving toward becoming a tangible reality. Beyond catalyzing these technological advances, entanglement is enabling parallel progress as a diagnostic for quantum correlations and as an organizational tool, both guiding improved understanding of quantum many-body systems and quantum field theories defining and emerging from the Standard Model. From the perspective of three domain science theorists, this article compiles “thoughts about the interface” on entanglement, complexity, and quantum simulation in an effort to contextualize recent NISQ-era progress with the scientific objectives of nuclear and high-energy physics.

## Quantum Machine Learning with SQUID

In this work we present the Scaled QUantum IDentifier (SQUID), an open-source framework for exploring hybrid quantum-classical algorithms for classification problems. The classical infrastructure is based on PyTorch, and we provide a standardized design to implement a variety of quantum models with the capability of back-propagation for efficient training. We present the structure of our framework and provide examples of using SQUID on a standard binary classification problem from the popular MNIST dataset. In particular, we highlight the implications of the choice of output for variational quantum models for the scalability of gradient-based optimization.

## Entanglement Spheres and a UV-IR connection in Effective Field Theories

Disjoint regions of the latticized, massless scalar field vacuum become separable at large distances beyond the entanglement sphere, a distance that extends to infinity in the continuum limit. Through numerical calculations in one, two, and three dimensions, the radius of an entanglement sphere is found to be determined by the highest momentum mode of the field supported across the diameter, d, of two identical regions. As a result, the long-distance behavior of the entanglement is determined by the short-distance structure of the field. Effective field theories (EFTs), describing a system up to a given momentum scale Λ, are expected to share this feature, with regions of the EFT vacuum separable (or dependent on the UV-completion) beyond a distance proportional to Λ. The smallest non-zero value of the entanglement negativity supported by the field at large distances is conjectured to be N ~ exp(−Λd), independent of the number of spatial dimensions. This phenomenon may be manifest in perturbative QCD processes.

## Dynamical Phase Transitions in models of Collective Neutrino Oscillations

Collective neutrino oscillations can potentially play an important role in transporting lepton flavor in astrophysical scenarios where the neutrino density is large; typical examples are the early universe and supernova explosions. It has been argued in the past that simple models of the neutrino Hamiltonian designed to describe forward scattering can support substantial flavor evolution on very short time scales t ≈ log(N)/(G_F ρ), with N the number of neutrinos, G_F the Fermi constant, and ρ the neutrino density. This finding is in tension with results for a similar but exactly solvable model, for which t ≈ √N/(G_F ρ) instead. In this work we provide a coherent explanation of this tension in terms of Dynamical Phase Transitions (DPTs) and study the possible impact that a DPT could have in more realistic models of neutrino oscillations and their mean-field approximation.

## Simulation of Collective Neutrino Oscillations on a Quantum Computer

In astrophysical scenarios with large neutrino density, like supernovae and the early universe, the presence of neutrino-neutrino interactions can give rise to collective flavor oscillations in the out-of-equilibrium collective dynamics of a neutrino cloud. The role of quantum correlations in these phenomena is not yet well understood, in large part due to complications in solving for the real-time evolution of the strongly coupled many-body system. Future fault-tolerant quantum computers hold the promise to overcome many of these limitations and provide direct access to the correlated neutrino dynamics. In this work, we present the first simulation of a small system of interacting neutrinos using current-generation quantum devices. We introduce a strategy to overcome limitations in the natural connectivity of the qubits and use it to track the evolution of entanglement in real time. The results show the critical importance of error-mitigation techniques to extract meaningful results for entanglement measures using noisy, near-term quantum devices.

## Entanglement and Many-Body Effects in Collective Neutrino Oscillations

Collective neutrino oscillations play a crucial role in transporting lepton flavor in astrophysical settings, such as supernovae, where the neutrino density is large. In this regime, neutrino-neutrino interactions are important and simulations in mean-field approximations show evidence for collective oscillations occurring at time scales much larger than those associated with vacuum oscillations. In this work, we study the out-of-equilibrium dynamics of a corresponding spin model using Matrix Product States and show how collective bipolar oscillations can be triggered by quantum fluctuations if appropriate initial conditions are present. The origin of these flavor oscillations, absent in the mean-field, can be traced to the presence of a dynamical phase transition, which drastically modifies the real-time evolution of the entanglement entropy. We find entanglement entropies scaling at most logarithmically in the system size, suggesting that classical tensor network methods could be efficient in describing collective neutrino dynamics more generally.

## A Trailhead for Quantum Simulation of SU(3) Yang-Mills Lattice Gauge Theory in the Local Multiplet Basis

Maintaining local interactions in the quantum simulation of gauge field theories relegates most states in the Hilbert space to be unphysical—theoretically benign, but experimentally difficult to avoid. Reformulations of the gauge fields can modify the ratio of physical to gauge-variant states, often through classically preprocessing the Hilbert space and modifying the representation of the field on qubit degrees of freedom. This paper considers the implications of representing SU(3) Yang-Mills gauge theory on a lattice of irreducible representations in both a global basis of projected global quantum numbers and a local basis in which controlled-plaquette operators support efficient time evolution. Classically integrating over the internal gauge space at each vertex (e.g., color isospin and color hypercharge) significantly reduces both the qubit requirements and the dimensionality of the unphysical Hilbert space. Initiating tuning procedures that may inform future calculations at scale, the time evolution of one and two plaquettes is implemented on one of IBM’s superconducting quantum devices, and early benchmark quantities are identified. The potential advantages of qudit environments, with either constrained 2D hexagonal or 1D nearest-neighbor internal state connectivity, are discussed for future large-scale calculations.

Editors’ Suggestion in Physical Review D.

## Preparation of Excited States for Nuclear Dynamics on a Quantum Computer

We study two different methods to prepare excited states on a quantum computer, a key initial step to study dynamics within linear response theory. The first method uses unitary evolution for a short time *T* = O(√(1−*F*)) to approximate the action of an excitation operator *O* with fidelity *F* and success probability *P* ≈ 1−*F*. The second method probabilistically applies the excitation operator using the Linear Combination of Unitaries (LCU) algorithm. We benchmark these techniques on emulated and real quantum devices, using a toy model for thermal neutron-proton capture. Despite its larger memory footprint, the LCU-based method is efficient even on current-generation noisy devices and can be implemented at a lower gate cost than a naive analysis would suggest. These findings show that quantum techniques designed to achieve good asymptotic scaling on fault-tolerant quantum devices might also provide practical benefits on devices with limited connectivity and gate fidelity.
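
The LCU primitive itself is easy to verify numerically. The following NumPy sketch (a single ancilla and a hypothetical two-term combination O = 0.7·X + 0.3·Z, not the paper's excitation operator) implements PREPARE, SELECT, inverse PREPARE, and post-selects the ancilla on |0⟩:

```python
import numpy as np

# Target: apply O = c0*U0 + c1*U1 to |psi> probabilistically with one ancilla.
c = np.array([0.7, 0.3])                               # assumed coefficients
U0 = np.array([[0, 1], [1, 0]], dtype=complex)         # X
U1 = np.diag([1, -1]).astype(complex)                  # Z
psi = np.array([0.6, 0.8], dtype=complex)

lam = c.sum()
# PREPARE rotates the ancilla |0> to sqrt(c0/lam)|0> + sqrt(c1/lam)|1>
prep = np.array([[np.sqrt(c[0] / lam), -np.sqrt(c[1] / lam)],
                 [np.sqrt(c[1] / lam),  np.sqrt(c[0] / lam)]], dtype=complex)

# SELECT = |0><0| (x) U0 + |1><1| (x) U1  (ancilla is the first tensor factor)
select = np.zeros((4, 4), dtype=complex)
select[:2, :2] = U0
select[2:, 2:] = U1

circuit = np.kron(prep.conj().T, np.eye(2)) @ select @ np.kron(prep, np.eye(2))
out = circuit @ np.kron(np.array([1, 0], dtype=complex), psi)

# Post-select ancilla = |0>: the unnormalized system state is (O/lambda)|psi>
branch = out[:2]
O = c[0] * U0 + c[1] * U1
target = O @ psi / lam
print(np.allclose(branch, target))               # True
print("success probability:", np.linalg.norm(branch)**2)
```

The success probability is ||O|ψ⟩||²/λ², which is why keeping the one-norm λ of the coefficients small matters in practice.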

## Geometry and entanglement in the scattering matrix

A formulation of nucleon-nucleon scattering is developed in which the S-matrix, rather than an effective field theory (EFT) action, is the fundamental object. Spacetime plays no role in this description: the S-matrix is a trajectory that moves between RG fixed points in a compact theory space defined by unitarity. This theory space has a natural operator definition, and a geometric embedding of the unitarity constraints in four-dimensional Euclidean space yields a flat torus, which serves as the stage on which the S-matrix propagates. Trajectories with vanishing entanglement are special geodesics between RG fixed points on the flat torus, while entanglement is driven by an external potential. The system of equations describing S-matrix trajectories is in general complicated; however, the very-low-energy S-matrix that appears at leading order in the EFT description possesses a UV/IR conformal invariance which renders the system of equations integrable and completely determines the potential. In this geometric viewpoint, inelasticity is in correspondence with the radius of a three-dimensional hyperbolic space whose two-dimensional boundary is the flat torus. This space has a singularity at vanishing radius, corresponding to maximal violation of unitarity. The trajectory on the flat torus boundary can be explicitly constructed from a bulk trajectory with a quantifiable error, providing a simple example of a holographic quantum error correcting code.

## An Algorithm for Quantum Computation of Particle Decays

A quantum algorithm is developed to calculate decay rates and cross sections using quantum resources that scale polynomially in the system size, assuming similar scaling for state preparation and time evolution. This is done by computing finite-volume one- and two-particle Green’s functions on the quantum hardware. Particle decay rates and two-particle scattering cross sections are extracted from the imaginary parts of these Green’s functions. A 0+1-dimensional implementation of this method is demonstrated on IBM’s superconducting quantum hardware for the decay of a heavy scalar particle into a pair of light scalars.

## Spectral Density Estimation with the Gaussian Integral Transform

The spectral density operator *ρ*(*ω*)=*δ*(*ω*−*H*) plays a central role in linear response theory, as its expectation value, the dynamical response function, can be used to compute scattering cross-sections. In this work, we describe a near-optimal quantum algorithm providing an approximation to the spectral density with energy resolution Δ and error *ϵ* using O(√(log(1/*ϵ*)(log(1/Δ)+log(1/*ϵ*)))/Δ) operations. This is achieved without using expensive approximations to the time-evolution operator, instead exploiting qubitization to implement an approximate Gaussian Integral Transform (GIT) of the spectral density. We also describe appropriate error metrics to assess the quality of spectral function approximations more generally.
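The object the GIT algorithm approximates can be written down classically for a small system. The sketch below is our own toy example — a random Hermitian matrix standing in for *H* — showing the Gaussian-broadened density Σ_k |⟨k|ψ⟩|² N(ω; E_k, Δ) that replaces the δ-peaks at resolution Δ:

```python
import numpy as np

# Toy classical construction of a Gaussian-broadened spectral density.
# The quantum algorithm reaches this via qubitization; here we diagonalize.
rng = np.random.default_rng(0)
dim = 16
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2                      # toy Hermitian "Hamiltonian"
E, V = np.linalg.eigh(H)

psi = rng.normal(size=dim)
psi /= np.linalg.norm(psi)
weights = (V.T @ psi) ** 2             # spectral weights |<k|psi>|^2

Delta = 0.2                            # target energy resolution

def S(w):
    """Gaussian-broadened spectral density at frequency w."""
    g = np.exp(-(w - E) ** 2 / (2 * Delta ** 2)) / (np.sqrt(2 * np.pi) * Delta)
    return float(weights @ g)

# Broadening preserves the sum rule: the integral of S equals <psi|psi> = 1.
ws = np.linspace(E.min() - 5, E.max() + 5, 4001)
vals = np.array([S(w) for w in ws])
norm = vals.sum() * (ws[1] - ws[0])
print(round(norm, 4))                  # ~ 1.0
```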

## Geometric Quantum Information Structure in Quantum Fields and their Lattice Simulation

An upper limit to distillable entanglement between two disconnected regions of massless noninteracting scalar field theory has an exponential decay defined by a geometric decay constant. When regulated at short distances with a spatial lattice, this entanglement abruptly vanishes beyond a dimensionless separation, defining a negativity sphere. In two spatial dimensions, we determine this geometric decay constant between a pair of disks and the growth of the negativity sphere toward the continuum through a series of lattice calculations. Making the connection to quantum field theories in three spatial dimensions, and assuming such quantum-information scales appear also in quantum chromodynamics (QCD), a new relative scale may be present in effective field theories describing the low-energy dynamics of nucleons and nuclei. We highlight potential impacts of the distillable entanglement structure on effective field theories, lattice QCD calculations, and future quantum simulations.

## Quantum Algorithms for Simulating the Lattice Schwinger Model

The Schwinger model (quantum electrodynamics in 1+1 dimensions) is a testbed for the study of quantum gauge field theories. We give scalable, explicit digital quantum algorithms to simulate the lattice Schwinger model in both NISQ and fault-tolerant settings. In particular, we perform a tight analysis of low-order Trotter formula simulations of the Schwinger model, using recently derived commutator bounds, and give upper bounds on the resources needed for simulations in both scenarios. In lattice units, we find a Schwinger model on *N*/2 physical sites with coupling constant *x*^{-1/2} and electric field cutoff *x*^{-1/2}Λ can be simulated on a quantum computer for time 2*xT* using a number of *T*-gates or CNOTs in *Õ*(*N*^{3/2} *T*^{3/2} *x*^{1/2} Λ) for fixed operator error. This scaling with the truncation Λ is better than that expected from algorithms such as qubitization or QDRIFT. Furthermore, we give scalable measurement schemes and algorithms to estimate observables which we cost in both the NISQ and fault-tolerant settings by assuming a simple target observable—the mean pair density. Finally, we bound the root-mean-square error in estimating this observable via simulation as a function of the diamond distance between the ideal and actual CNOT channels. This work provides a rigorous analysis of simulating the Schwinger model, while also providing benchmarks against which subsequent simulation algorithms can be tested.
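The low-order product formulas whose costs are bounded above behave generically as in the following numpy illustration — a random two-qubit Hamiltonian of our own choosing, not the Schwinger model itself — where the first-order Trotter error shrinks roughly like 1/*n* in the number of steps *n*:

```python
import numpy as np

def u(H, t):
    """Exact e^{-iHt} for Hermitian H via diagonalization."""
    E, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * E * t)) @ V.conj().T

rng = np.random.default_rng(1)

def rand_herm(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

A, B = rand_herm(4), rand_herm(4)      # H = A + B, a random 2-qubit Hamiltonian
t = 1.0
exact = u(A + B, t)

errs = []
for n in (1, 2, 4, 8, 16):
    step = u(A, t / n) @ u(B, t / n)   # one first-order Trotter step
    errs.append(np.linalg.norm(np.linalg.matrix_power(step, n) - exact, 2))

print(["%.2e" % e for e in errs])      # spectral-norm error shrinks with n
assert errs[-1] < errs[0] / 2
```

The asymptotic prefactor is controlled by the commutator ‖[A,B]‖, which is exactly the quantity the tightened commutator bounds of the paper exploit.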

## Entanglement Rearrangement in Self-Consistent Nuclear Structure Calculations

Entanglement properties of 4He and 6He are investigated using nuclear many-body calculations, specifically the single-nucleon entanglement entropy, and the two-nucleon mutual information and negativity. Nuclear wavefunctions are obtained by performing active-space no-core configuration interaction calculations using a two-body nucleon-nucleon interaction derived from chiral effective field theory. Entanglement measures within single-particle bases, the harmonic oscillator (HO), Hartree-Fock (HF), natural (NAT) and variational natural (VNAT) bases, are found to exhibit different degrees of complexity. Entanglement in both nuclei is found to be more localized within NAT and VNAT bases than within a HO basis for the optimal HO parameters, and, as anticipated, a core-valence (tensor product) structure emerges from the full six-body calculation of 6He. The two-nucleon mutual information shows that the VNAT basis, which typically exhibits good convergence properties, effectively decouples the active and inactive spaces. We conclude that measures of one- and two-nucleon entanglement are useful in analyzing the structure of nuclear wave functions, in particular the efficacy of basis states, and may provide useful metrics toward developing more efficient schemes for ab initio computations of the structure and reactions of nuclei, and quantum many-body systems more generally.

## Loop, String, and Hadron Dynamics in SU(2) Hamiltonian Lattice Gauge Theories

The question of how to efficiently formulate Hamiltonian gauge theories is experiencing renewed interest due to advances in building quantum simulation platforms. We introduce a reformulation of an SU(2) Hamiltonian lattice gauge theory—a loop-string-hadron (LSH) formulation—that describes dynamics directly in terms of its loop, string, and hadron degrees of freedom, while alleviating several disadvantages of quantum-simulating the Kogut-Susskind formulation. This LSH formulation transcends the local loop formulation of d+1-dimensional lattice gauge theories by incorporating staggered quarks, furnishing the algebra of gauge-singlet operators, and being used to reconstruct dynamics between states that have Gauss’s law built into them. LSH operators are then factored into products of “normalized” ladder operators and diagonal matrices, priming them for classical or quantum information processing. Self-contained expressions of the Hamiltonian are given up to d=3. The LSH formalism makes little use of structures specific to SU(2), and its conceptual clarity makes it an attractive approach to apply to other non-Abelian groups such as SU(3).

## Systematically Localizable Operators for Quantum Simulations of Quantum Field Theories

Correlations and measures of entanglement in ground state wavefunctions of relativistic quantum field theories are spatially localized over length scales set by the mass of the lightest particle. We utilize this localization to design digital quantum circuits for preparing the ground states of free lattice scalar quantum field theories. Controlled rotations that are exponentially localized in their position-space extent are found to provide exponentially convergent wavefunction fidelity. These angles scale with the classical two-point correlation function, as opposed to the more localized mutual information or the hyper-localized negativity. We anticipate that further investigations will uncover quantum circuit designs with controlled rotations dictated by the measures of entanglement. This work is expected to impact quantum simulations of systems of importance to nuclear physics, high-energy physics, and basic energy sciences research.

## Fixed-Point Quantum Circuits for Quantum Field Theories

Renormalization group ideas and effective operators are used to efficiently determine localized unitaries for preparing the ground states of non-interacting scalar field theories on digital quantum devices. With these methods, classically computed ground states in a small spatial volume can be used to determine operators for preparing the ground state in a beyond-classical quantum register, even for interacting scalar field theories. Due to the exponential decay of correlation functions and the double exponential suppression of digitization artifacts, the derived quantum circuits are expected to be relevant already for near-term quantum devices.

## Quantum Computer Systems for Scientific Discovery

The great promise of quantum computers comes with the dual challenges of building them and finding their useful applications. We argue that these two challenges should be considered together, by co-designing full-stack quantum computer systems along with their applications in order to hasten their development and potential for scientific discovery. In this context, we identify scientific and community needs, opportunities, a sampling of a few use case studies, and significant challenges for the development of quantum computers for science over the next 2–10 years. This document is written by a community of university, national laboratory, and industrial researchers in the field of Quantum Information Science and Technology, and is based on a summary from a U.S. National Science Foundation workshop on Quantum Computing held on October 21–22, 2019 in Alexandria, VA.

## Quantum Computing for Neutrino-nucleus Scattering

Neutrino-nucleus cross section uncertainties are expected to be a dominant systematic in future accelerator neutrino experiments. The cross sections are determined by the linear response of the nucleus to the weak interactions of the neutrino, and are dominated by energy and distance scales of the order of the separation between nucleons in the nucleus. These response functions are potentially an important early physics application of quantum computers. Here we present an analysis of the resources required and their expected scaling for scattering cross section calculations. We also examine simple small-scale neutrino-nucleus models on modern quantum hardware. In this paper, we use variational methods to obtain the ground state of a three nucleon system (the triton) and then implement the relevant time evolution. In order to tame the errors in present-day NISQ devices, we explore the use of different error-mitigation techniques to increase the fidelity of the calculations.

## SU(2) Non-Abelian Gauge Field Theory in One Dimension on Digital Quantum Computers

An improved mapping of one-dimensional SU(2) non-Abelian gauge theory onto qubit degrees of freedom is presented. This new mapping allows for a reduced unphysical Hilbert space. Insensitivity to interactions within this unphysical space is exploited to design more efficient quantum circuits. Local gauge symmetry is used to analytically incorporate the angular momentum alignment, leading to qubit registers encoding the total angular momentum on each link. Results of multi-plaquette calculations on IBM’s quantum hardware are presented.

## Short-depth circuits for efficient expectation value estimation

The evaluation of expectation values *Tr*[*ρO*] for some pure state *ρ* and Hermitian operator *O* is of central importance in a variety of quantum algorithms. Near-optimal techniques developed in the past require a number of measurements *N* approaching the Heisenberg limit *N*=O(1/*ϵ*) as a function of target accuracy *ϵ*. The use of Quantum Phase Estimation (QPE), however, requires long circuit depths *C*=O(1/*ϵ*), making its implementation difficult on near-term noisy devices. The more direct strategy of Operator Averaging is usually preferred, as it can be performed using *N*=O(1/*ϵ*^{2}) measurements and no additional gates besides those needed for the state preparation. In this work we use a simple but realistic model to describe the bound state of a neutron and a proton (the deuteron) and show that the latter strategy can require an overly large number of measurements in order to achieve a reasonably small relative target accuracy *ϵ*_{r}. We propose to overcome this problem using a single step of QPE and classical post-processing. This approach leads to a circuit depth *C*=O(*ϵ*^{m}) (with *m*≥0) and to a number of measurements *N*=O(1/*ϵ*^{2+n}) for 0<*n*≤1. We provide detailed descriptions of two implementations of our strategy, for *n*=1 and *n*≈0.5, and derive appropriate conditions that a particular problem instance has to satisfy in order for our method to provide an advantage.
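The measurement-count arithmetic behind Operator Averaging is easy to make concrete. The sketch below uses illustrative numbers of our own (a ±1-valued observable, not the deuteron model of the paper) to show the *N* ∼ Var(*O*)/*ϵ*² shot count and to check it by direct sampling:

```python
import numpy as np

# Reaching standard error eps on <O> by direct sampling needs
# N ~ Var(O)/eps^2 measurements; the values below are illustrative.
p1 = 0.1                       # prob of outcome -1 for a +/-1-valued observable
mean = (1 - p1) - p1           # <O> = 0.8
var = 1 - mean ** 2            # Var(O) = 1 - <O>^2 = 0.36

for eps_r in (0.1, 0.01):      # relative target accuracy
    eps = eps_r * abs(mean)
    print(f"eps_r = {eps_r}: N ~ {var / eps ** 2:.0f} shots")

# Empirical check at eps_r = 0.1 (N = 56 shots): the scatter of the
# estimated mean over many repetitions matches eps = 0.08.
rng = np.random.default_rng(2)
N = int(var / (0.1 * abs(mean)) ** 2)
samples = rng.choice([1.0, -1.0], size=(2000, N), p=[1 - p1, p1])
emp_err = samples.mean(axis=1).std()
print(f"empirical error: {emp_err:.3f}")
```

Tightening the relative accuracy by 10× multiplies the shot count by 100×, which is the blow-up the single-QPE-step strategy above is designed to soften.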

## Minimally-Entangled State Preparation of Localized Wavefunctions on Quantum Computers

Initializing a single site of a lattice scalar field theory into an arbitrary state requires O(2^{n_Q}) entangling gates on a quantum computer with *n*_{Q} qubits per site. It is conceivable that initializing instead to functions that are good approximations to the desired states may have utility in reducing the number of required entangling gates. In the case of a single site of a non-interacting scalar field theory, initializing to a symmetric exponential wavefunction requires *n*_{Q}−1 entangling gates, compared with the 2^{n_Q−1}+*n*_{Q}−3+δ_{n_Q,1} required for a symmetric Gaussian wavefunction. In this work, we explore the initialization of 1-site (*n*_{Q}=4), 2-site (*n*_{Q}=3) and 3-site (*n*_{Q}=3) non-interacting scalar field theories with symmetric exponential wavefunctions using IBM’s quantum simulators and quantum devices (Poughkeepsie and Tokyo). With the digitizations obtainable with *n*_{Q}=3, 4, these tensor-product wavefunctions are found to have large overlap with a Gaussian wavefunction, and provide a suitable low-noise initialization for subsequent quantum simulations. In performing these simulations, we have employed a workflow that interleaves calibrations to mitigate systematic errors in production. The calibrations allow tolerance cuts on gate performance, including the fidelity of the symmetrizing Hadamard gate, both in vacuum (|0⟩^{⊗n_Q}) and in medium (*n*_{Q}−1 qubits initialized to an exponential function). The results obtained in this work are relevant to systems beyond scalar field theories, such as the deuteron radial wavefunction, 2- and 3-dimensional cartesian-space wavefunctions, and non-relativistic multi-nucleon systems built on a localized eigenbasis.
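The gate-count formulas quoted above can be tabulated directly; this small sketch simply evaluates them (formulas taken from the abstract, values of *n*_{Q} illustrative):

```python
# Entangling-gate counts for preparing one lattice site with n_Q qubits
# (delta_{n_Q,1} is a Kronecker delta), as quoted in the abstract.
def gates_exponential(nq):
    return nq - 1                      # symmetric exponential wavefunction

def gates_gaussian(nq):
    return 2 ** (nq - 1) + nq - 3 + (1 if nq == 1 else 0)

for nq in (1, 2, 3, 4):
    print(nq, gates_exponential(nq), gates_gaussian(nq))
# at n_Q = 4 (the 1-site study above): 3 entangling gates versus 9
```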

## Solving Gauss’s Law on Digital Quantum Computers with Loop-String-Hadron Digitization

We show that using the loop-string-hadron (LSH) formulation of SU(2) lattice gauge theory (arXiv:1912.06133) as a basis for digital quantum computation easily solves an important problem of fundamental interest: implementing gauge invariance (or Gauss’s law) exactly. We first discuss the structure of the LSH Hilbert space in *d* spatial dimensions, its truncation, and its digitization with qubits. Error detection and mitigation in gauge theory simulations would benefit from physicality “oracles,” so we decompose circuits that flag gauge-invariant wavefunctions. We then analyze the logical qubit costs and entangling gate counts involved with the protocols. The LSH basis could save or cost more qubits than a Kogut-Susskind-type representation basis, depending on how the latter is digitized and on the spatial dimension. The numerous other clear benefits encourage future studies into applying this framework.

## Entanglement Suppression and Emergent Symmetries of Strong Interactions

Entanglement suppression in the strong interaction S-matrix is shown to be correlated with approximate spin-flavor symmetries that are observed in low-energy baryon interactions, the Wigner SU(4) symmetry for two flavors and an SU(16) symmetry for three flavors. We conjecture that dynamical entanglement suppression is a property of the strong interactions in the infrared, giving rise to these emergent symmetries and providing powerful constraints on the nature of nuclear and hypernuclear forces in dense matter.

## Oracles for Gauss’s Law on Digital Quantum Computers

Formulating a lattice gauge theory using only physical degrees of freedom generically leads to non-local interactions. A local Hamiltonian is desirable for quantum simulation, and this is possible by treating the Hilbert space as a subspace of a much larger Hilbert space in which Gauss’s law is not automatic. Digital quantum simulations of this local formulation will wander into unphysical sectors due to errors from Trotterization or from quantum noise. In this work, oracles are constructed that use local Gauss law constraints to projectively distinguish physical and unphysical wave functions in Abelian lattice gauge theories. Such oracles can be used to detect errors that break Gauss’s law.

## Simulations of Subatomic Many-Body Physics on a Quantum Frequency Processor

Simulating complex many-body quantum phenomena is a major scientific impetus behind the development of quantum computing, and a range of technologies are being explored to address such systems. We present the results of the largest photonics-based simulation to date, applied in the context of subatomic physics. Using an all-optical quantum frequency processor, the ground-state energies of light nuclei including the triton (3H), 3He, and the alpha particle (4He) are computed. Complementing these calculations and utilizing a 68-dimensional Hilbert space, our photonic simulator is used to perform sub-nucleon calculations of the two-body and three-body forces between heavy mesons in the Schwinger model. This work is a first step in simulating subatomic many-body physics on quantum frequency processors—augmenting classical computations that bridge scales from quarks to nuclei. [Image (left) was created by Pavel Lougovski.]

## Digitization of Scalar Fields for NISQ-Era Quantum Computing

Qubit, operator, and gate resources required for the digitization of lattice λϕ^{4} scalar field theories onto quantum computers in the NISQ era are considered, building upon the foundational work by Jordan, Lee, and Preskill. The Nyquist-Shannon sampling theorem, introduced in this context by Macridin, Spentzouris, Amundson, and Harnik building on the work of Somma, provides a guide with which to evaluate the efficacy of two field-space bases, the eigenstates of the field operator, as used by Jordan, Lee, and Preskill, and the eigenstates of a harmonic oscillator, in describing 0+1- and 1+1-dimensional scalar field theory. We show how techniques associated with improved actions, which are heavily utilized in lattice QCD calculations to systematically reduce lattice-spacing artifacts, can be used to reduce the impact of the field digitization in λϕ^{4}, but are found to be inferior to a complete digitization-improvement of the Hamiltonian using a Quantum Fourier Transform. When the Nyquist-Shannon sampling theorem is satisfied, digitization errors scale as |log|log|*ϵ*_{dig}||| ∼ *n*_{Q} (the number of qubits describing the field at a given spatial site) for the low-lying states, leaving the familiar power-law lattice-spacing and finite-volume effects that scale as |log|*ϵ*_{latt}|| ∼ *N*_{Q} (the total number of qubits in the simulation). We find that fewer than *n*_{Q}=10 qubits per spatial lattice site are sufficient to reduce digitization errors below noise levels expected in NISQ-era quantum devices for both localized and delocalized field-space wavefunctions. For localized wavefunctions, *n*_{Q}=4 qubits are likely to be sufficient for calculations requiring modest precision.
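The quoted double-logarithmic scaling can be illustrated for a single 0+1-dimensional site. The sketch below is our own minimal example — a digitized harmonic oscillator with a Fourier-exact kinetic term and a Nyquist-balanced grid spacing, standing in for one field site — in which the ground-energy digitization error collapses as *n*_{Q} grows:

```python
import numpy as np

def ground_energy_error(nq):
    """Digitize one harmonic 'field' site on 2**nq points and compare the
    ground energy to the continuum value 1/2 (hbar = omega = 1)."""
    N = 2 ** nq
    dphi = np.sqrt(2 * np.pi / N)                 # Nyquist-balanced spacing
    phi = (np.arange(N) - N // 2) * dphi          # field values on the grid
    p = 2 * np.pi * np.fft.fftfreq(N, d=dphi)     # conjugate momenta
    F = np.fft.fft(np.eye(N), axis=0) / np.sqrt(N)  # unitary DFT matrix
    H = F.conj().T @ np.diag(p ** 2 / 2) @ F + np.diag(phi ** 2 / 2)
    return abs(np.linalg.eigvalsh(H)[0] - 0.5)

errs = [ground_energy_error(nq) for nq in (2, 3, 4)]
print(["%.1e" % e for e in errs])
# each extra qubit roughly squares the accuracy: |log eps_dig| ~ 2**n_Q
assert errs[0] > errs[1] > errs[2] and errs[2] < 1e-6
```

The exponential-in-2^{n_Q} error decay seen here is the single-site analogue of |log|log|*ϵ*_{dig}||| ∼ *n*_{Q}, and is what makes *n*_{Q} ≲ 10 qubits per site plausible.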

## Gauss’s Law, Duality, and the Hamiltonian Formulation of U(1) Lattice Gauge Theory

Quantum computers have the potential to explore the vast Hilbert space of entangled states that play an important role in the behavior of strongly interacting matter. This opportunity motivates reconsidering the Hamiltonian formulation of gauge theories, with a suitable truncation scheme to render the Hilbert space finite-dimensional. Conventional formulations lead to a Hilbert space largely spanned by unphysical states; given the current inability to perform large scale quantum computations, we examine here how one might restrict wave function evolution entirely or mostly to the physical subspace. We consider such constructions for the simplest of these theories containing dynamical gauge bosons—U(1) lattice gauge theory without matter in d=2,3 spatial dimensions—and find that electric-magnetic duality naturally plays an important role. We conclude that this approach is likely to significantly reduce computational overhead in d=2 by a reduction of variables and by allowing one to regulate magnetic fluctuations instead of electric. The former advantage does not exist in d=3, but the latter might be important for asymptotically-free gauge theories.

## Linear Response on a Quantum Computer

The dynamic linear response of a quantum system is critical for understanding both the structure and dynamics of strongly-interacting quantum systems, including neutron scattering from materials, photon and electron scattering from atomic systems, and electron and neutrino scattering by nuclei. We present a general algorithm for quantum computers to calculate the dynamic linear response function with controlled errors and to obtain information about specific final states that can be directly compared to experimental observations.

## Quantum-classical computation of Schwinger model dynamics using quantum computers

We present a quantum-classical algorithm to study the dynamics of the two-spatial-site Schwinger model on IBM’s quantum computers. Using rotational symmetries, total charge, and parity, the number of qubits needed to perform the computation is reduced by a factor of ∼5, removing exponentially-large unphysical sectors from the Hilbert space. Our work opens an avenue for the exploration of other lattice quantum field theories, such as quantum chromodynamics, where classical computation is used to find symmetry sectors in which the quantum computer evaluates the dynamics of quantum fluctuations.

## Ground States via Spectral Combing on a Quantum Computer

A new method is proposed for determining the ground state wave function of a quantum many-body system on a quantum computer, without requiring an initial trial wave function that has good overlap with the true ground state. The technique of Spectral Combing involves entangling an arbitrary initial wave function with a set of auxiliary qubits governed by a time-dependent Hamiltonian, resonantly transferring energy out of the initial state through a plethora of avoided level crossings into the auxiliary system. The number of avoided level crossings grows exponentially with the number of qubits required to represent the Hamiltonian, so that the efficiency of the algorithm does not rely on any particular energy gap being large. We give an explicit construction of the quantum gates required to realize this procedure and explore the results of classical simulations of the algorithm on a small quantum computer with up to 8 qubits. We show that for certain systems, and at comparable accuracy, Spectral Combing requires fewer quantum gates to realize than the Quantum Adiabatic Algorithm.