July 15, 2016

Workshop on Quantum Algorithms and Devices

8:30 AM – 5:30 PM

Location: Redmond, WA, USA

  • Speaker: Aram Harrow

    Can quantum computers solve optimization problems much more quickly than classical computers? One major piece of evidence for this proposition has been the fact that Quantum Annealing (QA, also known as adiabatic optimization) finds the minimum of some cost functions exponentially more quickly than Simulated Annealing (SA), which is arguably a classical analogue of QA. These include a toy model, called “Hamming weight with a spike”, where SA gets stuck in a local minimum and QA uses quantum mechanical tunneling to find the correct answer.

    Our work considers a classical algorithm known as Simulated Quantum Annealing (SQA), which relates certain quantum systems to classical Markov chains. By proving that these chains mix rapidly, we show that SQA runs in polynomial time on the “Hamming weight with a spike” problem in much of the parameter regime where QA achieves exponential advantage over SA. While our analysis only covers this toy model, it can be seen as evidence against the prospect of exponential quantum speedup using tunneling.

    This is based on arXiv:1601.03030, which is joint work with Elizabeth Crosson.
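    The “Hamming weight with a spike” cost function is simple to write down. Below is a minimal sketch of it together with a simulated-annealing baseline; the spike location, spike height, and annealing schedule here are illustrative toy choices, not the parameters analyzed in the paper.

```python
import math
import random

def spike_cost(x, spike_loc, spike_height):
    """Toy 'Hamming weight with a spike' cost: the Hamming weight |x|,
    plus a barrier of height spike_height at |x| == spike_loc. The global
    minimum is the all-zeros string; an annealer walking down from high
    weight must cross the spike to reach it."""
    w = sum(x)
    return w + (spike_height if w == spike_loc else 0)

def simulated_annealing(n, steps, seed=0):
    """Single-bit-flip Metropolis dynamics with a linear inverse-temperature
    ramp -- a classical SA baseline, not the paper's SQA algorithm."""
    rng = random.Random(seed)
    spike_loc, spike_height = n // 4, int(math.sqrt(n))
    x = [1] * n                          # start at maximum Hamming weight
    best = spike_cost(x, spike_loc, spike_height)
    for t in range(steps):
        beta = 0.1 + 4.9 * t / steps     # anneal from hot to cold
        i = rng.randrange(n)
        y = x[:]
        y[i] ^= 1
        delta = (spike_cost(y, spike_loc, spike_height)
                 - spike_cost(x, spike_loc, spike_height))
        if delta <= 0 or rng.random() < math.exp(-beta * delta):
            x = y
        best = min(best, spike_cost(x, spike_loc, spike_height))
    return best
```

    For small n the barrier is crossed easily; the exponential SA slowdown discussed in the talk appears only when the spike is tall and wide relative to the temperature scale.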

  • Speaker: Stephen Jordan

    Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises the question of whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here, we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1- and L2-normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in full generality. In practice, however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization heuristics in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.
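    The substochastic idea, walkers that diffuse, die, or spawn according to their energy, can be sketched on a toy MAX-2-SAT instance. This is a loose illustration of the population dynamics only; the schedule and survival rule below are invented for the example and are not the authors' Substochastic Monte Carlo implementation.

```python
import random

def unsat_count(x, clauses):
    """Energy = number of unsatisfied clauses. A clause is a pair of
    literals (var, val), satisfied if any x[var] == val."""
    return sum(1 for c in clauses if not any(x[v] == val for v, val in c))

def substochastic_walk(clauses, n, pop=32, steps=400, seed=1):
    """Toy population dynamics: each walker either diffuses (random bit
    flip) or faces an energy-dependent cull; the population is refilled
    by copying random survivors. Low-energy walkers proliferate."""
    rng = random.Random(seed)
    m = len(clauses)
    walkers = [[rng.randrange(2) for _ in range(n)] for _ in range(pop)]
    best = min(unsat_count(w, clauses) for w in walkers)
    for t in range(steps):
        a = t / steps                    # schedule: diffusion -> selection
        survivors = []
        for w in walkers:
            if rng.random() < 1 - a:     # diffusion step
                w = w[:]
                w[rng.randrange(n)] ^= 1
                survivors.append(w)
            elif rng.random() >= unsat_count(w, clauses) / (m + 1):
                survivors.append(w)      # survives the energy-based cull
        if not survivors:                # restart if the population dies out
            survivors = [[rng.randrange(2) for _ in range(n)]]
        walkers = [survivors[rng.randrange(len(survivors))][:]
                   for _ in range(pop)]
        best = min(best, min(unsat_count(w, clauses) for w in walkers))
    return best
```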

  • Speaker: Fernando Brandao

    In this talk I presented a quantum algorithm for solving semidefinite programs (SDPs). It has worst-case running time O((nm)^(1/2) r), with n and r the dimension and sparsity of the input matrices, respectively, and m the number of constraints. In contrast, any classical method requires at least Omega(nm) time to solve the problem. I discussed how for some instances the algorithm might offer even exponential speed-ups, if the quantum Gibbs states of Hamiltonians given by linear combinations of the input matrices of the SDP can be prepared efficiently on a quantum computer.

    As an application I showed how the Goemans-Williamson SDP for approximating max-cut can be solved quantum-mechanically in time linear in the number of vertices of the graph. This gives a speed-up over the best classical method, which requires linear time in the number of edges.

    Based on joint work with Krysta Svore.
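    Whichever machine solves the Goemans-Williamson SDP, the final randomized-rounding step is classical and simple: pick a random hyperplane through the origin and cut the graph by which side each vertex's vector falls on. A minimal sketch, where the hand-picked unit vectors stand in for an actual SDP solution:

```python
import random

def hyperplane_cut(vectors, edges, rng):
    """Goemans-Williamson rounding: draw a random Gaussian vector g and
    assign each vertex to a side by the sign of <v_i, g>; the cut value
    is the number of edges crossing the resulting bipartition."""
    d = len(vectors[0])
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    side = [sum(v[k] * g[k] for k in range(d)) >= 0 for v in vectors]
    return sum(1 for u, w in edges if side[u] != side[w])
```

    For the 4-cycle, the vectors (+e1, -e1, +e1, -e1) form an optimal SDP solution, and every hyperplane recovers the maximum cut of 4.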

  • Speaker: Dmitri Maslov

    In this talk, I discussed ion-trap quantum computing and, specifically, how to program and efficiently compile physical-level experiments for an ion-trap quantum machine developed by researchers at the University of Maryland. I reported a complete strategy, consisting of the full set of steps taken to sequentially decompose a quantum algorithm/circuit from a higher-level specification into progressively lower-level specifications until the physical-level specification is reached. The different stages are structured so as to best assist with the optimization of the resulting physical-level implementation, taking into account numerous optimization criteria: minimizing the number of expensive two-qubit gates, minimizing the number of less expensive single-qubit gates, optimizing the runtime, minimizing the overall circuit error, and optimizing classical control sequences. Two types of compilation were discussed: the first allows executing a given quantum algorithm most efficiently by selecting the physical qubits to work with, and the second provides a fully parametrized physical-level circuit that can be executed on any set of qubits (moreover, those parametrized circuits sometimes have free parameters that can be set arbitrarily). I illustrated the efficiency of this compilation approach by comparing it to previously known results. I also reported the results of some physical-level experiments, providing strong motivation for further study of, and the development of tools for, the trapped-ion quantum information processing platform.
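    One ingredient of such a pipeline, reducing gate counts, can be illustrated with a minimal peephole pass that cancels adjacent inverse gate pairs. This is a generic compiler illustration with a tiny invented gate set, not the Maryland toolchain.

```python
# Self-inverse gates and inverse pairs; an illustrative gate set only.
INVERSE = {"H": "H", "X": "X", "CNOT": "CNOT", "T": "Tdg", "Tdg": "T"}

def cancel_inverse_pairs(circuit):
    """Stack-based peephole pass over a list of (gate_name, qubits) pairs:
    drop adjacent gates on the same qubits that multiply to the identity.
    Cancelling a pair can expose a new adjacent pair, which the stack
    handles automatically."""
    out = []
    for name, qubits in circuit:
        if out and out[-1][1] == qubits and INVERSE.get(out[-1][0]) == name:
            out.pop()
        else:
            out.append((name, qubits))
    return out

def two_qubit_count(circuit):
    """The expensive-gate metric: number of gates acting on two qubits."""
    return sum(1 for _, qubits in circuit if len(qubits) == 2)
```

    Note that the pass only removes pairs that are exact inverses; T followed by T is left alone, since it equals S rather than the identity.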

  • Speakers: Jeremy O’Brien and Terry Rudolph

    Of the various approaches to quantum computing, photons are appealing for their low-noise properties and ease of manipulation at the single-photon level, while the challenge of entangling interactions between photons can be met via measurement-induced non-linearities. However, the real excitement with this architecture is the promise of ultimate manufacturability: all of the components, including sources, detectors, filters, switches, and delay lines, have been implemented on chip, and increasingly sophisticated integration of these components is being achieved. We discussed the opportunities and challenges of a fully integrated photonic quantum computer.

  • Speaker: Barbara Terhal

    We discussed the need for fast and parallel decoding in implementing quantum error correction. We reviewed the issue of local decoding from the perspective of self-correction. We reported on an optimized implementation of the local Harrington decoder for the 2D toric code where we observe a threshold of 0.14% for noiseless parity checks and a threshold of 0.086% for equal-rate qubit and parity check errors. We reported on an implementation of a local decoder for the 4D toric code, exhibiting a threshold of 2.1% for noiseless parity checks and a threshold of 1.6% for equal-rate qubit and parity check errors.
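    The role of parity checks is easiest to see in the classical repetition code, a one-dimensional toy far simpler than the 2D and 4D toric codes above. The sketch below illustrates what a syndrome is and what a (global, non-local) decoder does; it is not the Harrington decoder.

```python
def parity_checks(bits):
    """Syndrome of the repetition code: check i fires when neighbouring
    bits i and i+1 disagree, localizing the endpoints of error chains."""
    return [bits[i] ^ bits[i + 1] for i in range(len(bits) - 1)]

def majority_decode(bits):
    """Global majority vote recovers the encoded bit whenever fewer than
    half the bits are flipped."""
    return int(sum(bits) > len(bits) // 2)
```

    Noisy parity checks, the "equal-rate qubit and parity check errors" regime quoted above, make decoding substantially harder, which is why the reported thresholds drop in that setting.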

  • Speaker: David Poulin

    Non-abelian anyons have drawn much interest due to their suspected existence in two-dimensional condensed matter systems and for their potential applications in quantum computation. In particular, a quantum computation can in principle be realized by braiding and fusing certain non-abelian anyons. These operations are expected to be intrinsically robust due to their topological nature. Provided the system is kept at a temperature T lower than the spectral gap, the density of thermal excitations is suppressed by an exponential Boltzmann factor. In contrast to the topological protection, however, this thermal protection is not scalable: thermal excitations appear at constant density for any non-zero temperature, and so their presence is unavoidable as the size of the computation increases. Thermally activated anyons can corrupt the encoded data by braiding or fusing with the computational anyons.

    In the present work, we generalize a fault-tolerant scheme introduced by Harrington for the toric code to the setting of non-cyclic modular anyons. We prove that the quantum information encoded in the fusion space of a non-abelian anyon system can be preserved for arbitrarily long times with poly-log overhead. In particular, our model accounts for noise processes which lead to the creation of anyon pairs from the vacuum, anyon diffusion, and anyon fusion, as well as errors in topological charge measurements.

    Based on joint work with Guillaume Dauphinais.

  • Speaker: Scott Aaronson

    In the near future, it will likely become possible to perform special-purpose quantum computations that, while not immediately useful for anything, are plausibly hard to simulate using a classical computer. These experiments will be a scientific milestone, but they raise urgent challenges for theoretical computer science. In particular: on what grounds should we believe that a given quantum system really is hard to simulate classically? Does classical simulation become easier as a quantum system becomes noisier? And how do we verify the results of such an experiment?

    I discussed recent results and open problems about these questions, focusing on random quantum circuits and general results about the complexity of quantum sampling, while also mentioning progress on my and Alex Arkhipov’s BosonSampling proposal.

    Based partly on joint work with Lijie Chen and with Daniel Brod.
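    The classical-hardness question starts from the fact that brute-force simulation stores 2^n amplitudes. A minimal statevector simulator makes the scaling concrete; this illustrates only the naive exponential cost, not the conjectured hardness of the sampling problems themselves.

```python
import math

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of a statevector of length 2**n.
    Every gate application touches all 2**n amplitudes, so both memory
    and time grow exponentially with the number of qubits."""
    new = state[:]
    for i in range(len(state)):
        if (i >> q) & 1 == 0:
            j = i | (1 << q)
            a, b = state[i], state[j]
            new[i] = gate[0][0] * a + gate[0][1] * b
            new[j] = gate[1][0] * a + gate[1][1] * b
    return new

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def uniform_superposition(n):
    """H on every qubit of |0...0> gives amplitude 2**(-n/2) everywhere."""
    state = [0.0] * (2 ** n)
    state[0] = 1.0
    for q in range(n):
        state = apply_1q(state, H, q)
    return state
```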

  • Speaker: Lior Eldar

    Quantum entanglement is considered the “source” of quantum algorithmic speedup, yet it is very fragile and hard to maintain in the presence of noise. In this work we ask what kind of interaction topology is sufficient as the underlying topology of a locally-defined quantum system (local Hamiltonian), so that it has a robust form of quantum entanglement in a well-defined sense, or quantum hardness-of-approximation [NLTS; Freedman and Hastings ’13].

    We focused on topologies with a large isoperimetric constant (namely, “expanding” topologies), and showed that while topologies that are strong enough for classical hardness-of-approximation (namely, expander graphs) are not necessarily quantum-robust, a certain high-dimensional variant called a systole/co-systole expander [Evra, Kaufman ’16], which is intensively studied in algebraic topology nowadays, would in fact allow one to establish quantum hardness-of-approximation, if such an object exists.

    Joint work with Aram Harrow.

  • Speaker: Rolando Somma

    One of the best-known problems that a quantum computer is expected to solve more efficiently than a classical one is the simulation of quantum systems. While significant work has considered the case of discrete, finite-dimensional quantum systems, the study of fast quantum simulation methods for continuous-variable systems has received little attention. In this talk, I presented quantum methods to simulate the time evolution of certain continuous-variable quantum systems. Building on recent results about product formulas for Lie groups, I first described a way to simulate the time evolution of the quantum harmonic oscillator (QHO) that results in a superpolynomial speedup over the corresponding classical method. Our algorithms can also be used to prepare eigenstates of the QHO and may be of independent interest. I then considered the simulation of more complex quantum systems, such as the quartic potential. In those cases, our method results in a polynomial quantum speedup. Generalizations of our results, and connections to the so-called fractional Fourier transform, a generalization of the Fourier transform used in signal analysis, were also discussed.
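    The flavour of product-formula simulation is easy to show in the smallest possible case, a single qubit with Hamiltonian X + Z. This is a toy first-order Lie-Trotter demonstration; the talk's results concern continuous-variable systems and more sophisticated product formulas.

```python
import math

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_pauli(P, theta):
    """exp(-i*theta*P), exact because P squares to the identity:
    exp(-i*theta*P) = cos(theta) I - i sin(theta) P."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c * I2[i][j] - 1j * s * P[i][j] for j in range(2)]
            for i in range(2)]

def exact(t):
    """exp(-i*t*(X+Z)), using that (X+Z)/sqrt(2) squares to the identity."""
    r = math.sqrt(2)
    c, s = math.cos(r * t), math.sin(r * t)
    return [[c * I2[i][j] - 1j * (s / r) * (X[i][j] + Z[i][j])
             for j in range(2)] for i in range(2)]

def trotter(t, n):
    """First-order product formula (exp(-i X t/n) exp(-i Z t/n))^n."""
    step = matmul(exp_pauli(X, t / n), exp_pauli(Z, t / n))
    U = I2
    for _ in range(n):
        U = matmul(U, step)
    return U

def trotter_error(t, n):
    """Max entrywise deviation from the exact evolution; since X and Z
    do not commute, the error is nonzero but shrinks as O(1/n)."""
    E, T = exact(t), trotter(t, n)
    return max(abs(E[i][j] - T[i][j]) for i in range(2) for j in range(2))
```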

  • Speaker: Nathan Wiebe

    We showed how a quantum computer can be employed to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discussed how quantum computers can augment classical-computer simulations for such problems, to significantly increase their accuracy and enable hitherto intractable simulations. Detailed resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. This demonstrates that quantum computers will realistically be able to tackle important problems in chemistry that are both scientifically and economically significant.