Open Access. Published by De Gruyter, September 18, 2020, under the CC BY 4.0 license.

Quantum computing and simulation

Where we stand and what awaits us

  • Juan Ignacio Cirac
From the journal Nanophotonics

Abstract

Quantum computers and simulators can have an extraordinary impact on our society. Despite the remarkable progress of recent years, there are still great challenges to be met and new opportunities to be discovered.

1 Introduction

Very recently, Google announced the construction of a quantum processor based on superconducting qubits that is able to perform a certain task much faster than any existing supercomputer [1]. This announcement was preceded by many others from different research groups reporting milestones in quantum computing on different platforms. In fact, in the last few years, the panorama in this field of research changed abruptly when leading technological companies expressed their intention to build such devices and public funding agencies around the globe approved generous support to construct and develop them. Research has gone beyond universities and other research institutions and is now also pursued in companies, which has triggered new dynamics. The field of quantum computing is nowadays attracting the attention of science, the media, industry, politics, and society in general. After many years of research, we are living in very agitated times, with a strong worldwide effort to build such devices, as they promise a variety of applications. We repeatedly read in diverse media how quantum computers are going to impact the pharmaceutical industry, medicine, finance, energy production, or climate change. Although there is an obvious exaggeration in such news and, in fact, in many cases there is no (or very little) evidence of such an impact, most scientists working on the subject believe that quantum computers will complement and, in some areas, supersede supercomputers and will strongly impact our society. However, to achieve this, there are still many challenges and obstacles to overcome. In this short article, I want to highlight some of the opportunities quantum computers may offer us, as well as important challenges we are facing. The intention is not to review the state of the art of the different platforms or of quantum algorithms; there exist excellent reviews on those subjects (see, for example, Refs. [2–6]).

To analyze the power and possibilities of existing and future devices, one has to distinguish between different concepts that sometimes lead to confusion and misleading statements. The first is a scalable quantum computer, which can reliably run arbitrary quantum algorithms expressed as a sequence of elementary quantum logic gates. The second is a quantum device which can do the same except that errors accumulate, so that it can only obtain reliable results for limited sizes (or a maximum number of quantum gates) and thus cannot be scaled up to perform arbitrary computations. The third is an analog quantum device, which does not operate with quantum gates but rather evolves according to some given dynamics that can be engineered to some extent. Such devices cannot solve general problems; however, they are easier to build.

2 Scalable quantum computers

A quantum computer is a device that is able to implement quantum algorithms based on a universal set of gates acting on quantum bits (qubits) with “almost no error”. There is a variety of problems, ranging from the simulation of materials or chemical processes to optimization, for which very efficient quantum algorithms have been developed, so that they can be solved by a quantum computer far faster than by any classical one. These have a variety of applications in drug or material design, industrial processes, or data processing, just to name a few. The statement regarding the absence of errors is very important but also very subtle, because such a device is based on the laws of quantum physics and is thus not deterministic: when we measure at the end of the computation, we can obtain different results with different probabilities. By almost no error, we mean that the probability of obtaining each possible outcome in practice should be very close to the ideal one dictated by the laws of quantum physics. In a real device, errors will certainly occur, as any interaction with the environment or any tiny imperfection will distort the probabilities of the different outcomes. In fact, those errors accumulate during the computation, which makes it extraordinarily difficult to build a quantum computer. Furthermore, as we make it bigger, errors become more likely, so that it seems impossible to scale it to the sizes required for most applications. Fortunately, more than 20 years ago, it was discovered that those errors can be mitigated or even corrected, so that it should be possible, at least in principle, to scale up a quantum computer while keeping the condition of “almost no error”. There is, however, a high price to pay: for each qubit, one has to add a number of extra qubits to correct the errors. In addition, the error-correction procedure can only operate if the error per quantum gate (or time step) is below a threshold, which is of the order of 10⁻². This leads to an overhead in the number of qubits; that is, the number of required qubits has to be multiplied by a factor of the order of 10³–10⁴, depending on the magnitude of the errors produced at each gate. Just to give an order of magnitude, most quantum algorithms that provide a speed-up with respect to classical ones become useful starting from about 10³–10⁴ qubits, so that with the overhead, one will require as many as 10⁶–10⁸ qubits. The device created by Google, for instance, has 53 qubits that have to operate under extreme conditions of low temperature and isolation, so that, indeed, building a full-fledged quantum computer poses a real scientific and technological challenge.
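To put these orders of magnitude together, here is a minimal sketch that simply multiplies the algorithm sizes quoted above by the error-correction overhead factors; the specific pairings of the two figures are illustrative, not a calculation from the article:

```python
# Back-of-the-envelope sketch of the qubit overhead discussed above.
# The overhead factors (1e3-1e4) and algorithm sizes (1e3-1e4 logical
# qubits) are the order-of-magnitude figures from the text; how they
# combine in a given implementation is an illustrative assumption.

def physical_qubits(logical_qubits: float, overhead_factor: float) -> float:
    """Physical qubits needed once error correction multiplies each
    logical qubit by an overhead factor."""
    return logical_qubits * overhead_factor

for logical in (1e3, 1e4):
    for overhead in (1e3, 1e4):
        print(f"{logical:.0e} logical x {overhead:.0e} overhead "
              f"-> {physical_qubits(logical, overhead):.0e} physical qubits")
# The output spans 1e6 to 1e8 physical qubits, matching the estimate above.
```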

Although there are problems for which a quantum computer can achieve an exponential speed-up (as a function of the number of qubits), there are others in which the advantage is more modest. The first category includes some specific problems, such as factorization or the discrete logarithm, and the simulation of quantum many-body systems. The second includes most optimization problems. The extra overhead will also decrease the advantage of quantum computers in solving some of those problems, especially those in the second category. Indeed, apart from increasing the number of qubits to correct the errors, more operations are required to perform the computation and the corrections, so that the problem size at which the quantum computer offers a speed-up with respect to a classical computer may occur at a point where the execution time is extremely large and thus impractical.
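The following back-of-the-envelope sketch illustrates why. It assumes a quadratic (Grover-type) speed-up and lumps all fault-tolerance costs into a single illustrative “slowdown” factor per logical gate; none of the constants come from the article:

```python
# Crossover estimate for a quadratic speed-up. All constants are
# illustrative assumptions: a classical machine performs one elementary
# operation per nanosecond, and each fault-tolerant logical quantum gate
# is 'slowdown' times slower than a classical operation (this factor
# lumps together error-correction overhead and slower physical gates).

T_CLASSICAL_OP = 1e-9  # seconds per classical operation (assumed)

for slowdown in (1e4, 1e6, 1e8):
    # Classical cost: N ops. Quantum cost: slowdown * sqrt(N) op-equivalents.
    # The quantum device wins only once N exceeds slowdown**2.
    n_star = slowdown ** 2
    t_at_crossover = slowdown * n_star ** 0.5 * T_CLASSICAL_OP
    print(f"slowdown {slowdown:.0e}: quantum wins for N > {n_star:.0e}, "
          f"runtime at the crossover ~ {t_at_crossover:.0e} s")
```

Since the runtime at the break-even point grows quadratically with the slowdown, large error-correction overheads can push the useful regime of such modest speed-ups toward impractically long computations.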

Despite the obvious difficulty of scaling up existing technologies, there are different paths by which that task can be simplified. First, key technologies may appear on the way toward the construction of scalable quantum computers, in a similar way that transistors accelerated the development of classical computers. In addition, the combination of technologies may also lead to significant improvements, more compact devices, and smaller errors. There is also room for improvement in the development of error-correcting codes, which may be better adapted to the specific errors that appear in different implementations, giving rise to much smaller overheads. For instance, overhead factors of the order of a few tens or hundreds may make scaling up a much simpler and more doable task in the near future. This would also improve the effectiveness of some of the quantum algorithms for optimization.

3 Noisy intermediate-scale quantum devices

Machines that can implement quantum algorithms but do not correct for errors (in a fault-tolerant way) are colloquially called noisy intermediate-scale quantum (NISQ) devices [7]. So far, they have been constructed on different platforms, including trapped ions, superconducting qubits, cold atoms, photons, quantum dots, vacancy centers in diamond, and phosphorus donors embedded in silicon. The first two are the most advanced, although cold atoms and photons are rapidly catching up. As the errors grow with the system size, these devices are not scalable and thus unable to solve most of the problems a quantum computer could. However, the Google experiment clearly showed that, even with only 53 qubits and errors per gate of the order of 0.3%, they can still outperform classical computers in a certain task. Despite the fact that this was an academic problem with no practical application, there might be some relevant problems where such noisy devices can help. In particular, one can expect that NISQ devices with up to a few hundred qubits and with errors per gate below 0.1% will be built in the near future, and those may find some specific applications. Although it is hard to envision such applications, the fact that these devices can be operated in the cloud [8] will certainly open the minds not only of scientists but also of students or entrepreneurs who may find other uses for them. Constructing such devices and finding useful applications is a very active field of research and development, and several start-up companies have been created to build both the hardware and the software required to operate them.

A very active field of research with NISQ devices is that of variational algorithms, which apply to optimization problems in a broad sense, where one wants to find a string of bits (or a state of qubits) that minimizes a cost function. For instance, in the traveling salesman problem, the bits encode the different orders in which cities are visited, and the cost function encodes the total distance traveled. The cost function can also be the expectation value of the energy, so that the problem is to find the ground-state energy of a given Hamiltonian and thus solve problems in physics or chemistry. Variational algorithms create a state according to a quantum circuit, where the quantum gates that are applied depend on some parameters, which are optimized to minimize the cost function. Although it is not possible to rigorously predict the success of this procedure, it may give good results in practice. However, there are some challenges that need to be better understood and overcome. For example, the optimization can become difficult in practice because of the presence of many local minima, or it may require an enormous number of measurements to compute the cost function, which may reduce the applicability of these algorithms.
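To make the variational loop concrete, here is a minimal, classically simulated sketch. The two-site transverse-field Ising Hamiltonian, the Ry/CNOT ansatz, and the choice of optimizer are illustrative assumptions rather than anything prescribed in the text:

```python
# Minimal sketch of a variational quantum algorithm, simulated exactly
# with NumPy/SciPy instead of running on hardware.
import numpy as np
from scipy.optimize import minimize

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
# Toy Hamiltonian: two-site transverse-field Ising model (an assumption).
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def ry(theta):
    """Single-qubit rotation about the y-axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def ansatz_state(params):
    """|psi(params)>: rotations, an entangling gate, then rotations."""
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1.0                                   # start from |00>
    psi = np.kron(ry(params[0]), ry(params[1])) @ psi
    psi = CNOT @ psi
    psi = np.kron(ry(params[2]), ry(params[3])) @ psi
    return psi

def cost(params):
    """Cost function: energy expectation value in the ansatz state."""
    psi = ansatz_state(params)
    return float(np.real(psi.conj() @ H @ psi))

result = minimize(cost, x0=np.random.uniform(0, np.pi, 4), method="COBYLA")
print("variational energy:  ", result.fun)
print("exact ground energy: ", np.linalg.eigvalsh(H)[0])
```

On a real NISQ device, the cost function would be estimated from many repeated measurements rather than computed exactly, which is precisely where the local minima and the enormous number of measurements mentioned above become limiting.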

This kind of device can also pave the way toward scalable quantum computers. Although with current technologies there are obvious limitations to their sizes, as well as to the quality of the quantum gates, one may still be able to introduce, stepwise, specific error-correction (or error-prevention) schemes adapted to the individual platforms, until eventually the devices become fault tolerant. The whole process has to be accompanied by the development of methods to debug the errors, validate the results, and benchmark the different technologies. It is hard to predict whether this will happen progressively or whether there will be a “quantum winter” in which advances in this direction are very slow. In view of the media attention and the high expectations that quantum computers have raised, the latter could be very damaging for the field, at least in the short to medium term.

4 Analog quantum devices

This is another class of machines, sometimes called analog quantum simulators. They can be viewed as analog quantum computers that cannot implement a universal set of gates and do not have the capability of correcting errors, but can nevertheless solve some specific problems more efficiently than classical devices. They are especially suited for quantum many-body problems, which abound in condensed matter physics, high-energy physics, and chemistry. Addressing those problems with classical computers requires resources (computer time and memory) that scale exponentially with the system size, that is, with the number of subsystems or the total volume. The reason is that quantum systems can be in superpositions of different configurations, and to specify their quantum state (and thus be able to compute its properties), one needs to assign a complex number to each configuration. Even in the simplest case of two-level systems, the number of configurations scales as 2ᴺ, so that one needs to compute and store that many complex numbers, leading to the exponential memory and time requirements. Already for N = 30, this figure is so big that even supercomputers cannot cope with it. This obstacle was already pointed out by Feynman [9] about 40 years ago and, in fact, he proposed using a quantum computer to address such problems. However, for some of them, a quantum computer is not strictly required; one can take a different system which can be controlled to the extent that it behaves as the original model one wants to analyze. For instance, to solve the Hubbard model in two dimensions, which describes the motion of electrons in solids and is a firm candidate to account for high-Tc superconductivity, one may use cold atoms trapped in optical lattices that hop and interact with each other according to that model, emulating the electrons in the solid. In a sense, this setup imitates a real solid but with a magnified lattice structure. The larger distances between the atoms (or other systems) make quantum simulators more controllable and easier to measure. The number of atoms required equals the number of electrons and thus does not grow exponentially with it. An experiment with atoms can therefore allow us to answer questions about the Hubbard model which are not reachable with classical computers, such as whether it features the physical properties found in high-Tc superconductors, thus demonstrating that, indeed, that model can describe such an intriguing phenomenon. One could also use a simulator to address problems in lattice gauge theories, where fermions represent matter and bosons the gauge fields, or in chemistry, where electrons can be represented by fermionic atoms, so that one can study the geometric configuration of molecules in equilibrium, their physical properties, or even learn about the chemical reactions needed in the production of some drugs.
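The exponential cost is easy to quantify. The short sketch below counts only the memory needed to store the 2ᴺ complex amplitudes of an N-qubit state (16 bytes per double-precision complex number); the chosen values of N are arbitrary:

```python
# Memory needed just to store a quantum state classically: one complex
# amplitude per configuration, 16 bytes each (complex128).
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    bytes_needed = 16 * amplitudes
    print(f"N = {n:2d} qubits: 2^{n} = {amplitudes:.1e} amplitudes, "
          f"{bytes_needed / 2**30:.1e} GiB")
```

Already at N = 50, storing the state alone would require on the order of ten million gibibytes, and simulating its time evolution is more expensive still.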

The main advantage of analog quantum devices with respect to quantum computers is that their operating conditions are easier to reach, as they do not require error correction. However, this means that the results of the simulation will not be perfect, and one may wonder why they can then be useful. The reason is that in some of those problems we are interested in learning about specific observables, where the presence of a few errors will scarcely affect the result. For instance, in condensed matter physics we are typically interested in observables such as the energy, the magnetization, or the superconducting density. If, in the final state after the simulation, a small percentage of qubits contain errors, we will still be able to retrieve those properties with sufficient precision. As an example, to know whether the Hubbard model supports d-wave superconductivity, it may suffice to measure the corresponding observable with about 10% precision. However, the simulation may produce more errors than expected because of mismatches in the experimental parameters, or the errors may accumulate during the dynamics in such a way that the error at the end is much larger than anticipated. The rigorous formalization of this way of reasoning is still missing, and it goes beyond what computer scientists typically analyze. Furthermore, there are no simple ways of verifying that the simulation is correct, because we cannot solve the problem with classical computers. Here, new ways of verification can be conceived. For instance, one may attempt to find the solution of the problem with different simulators, or with different algorithms, to gain confidence in the result. Besides, there may be other applications of analog quantum devices beyond quantum simulation, since there may be other problems where a final state with a small percentage of errors still provides sufficient precision to solve them.
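A simple Monte Carlo estimate illustrates this robustness. In the sketch below, the all-up product state, the independent bit-flip errors, and the 5% error rate are illustrative assumptions, not a model of any specific experiment:

```python
# Monte Carlo sketch: a few-percent rate of single-qubit errors barely
# shifts an averaged observable such as the magnetization.
import numpy as np

rng = np.random.default_rng(0)
n_qubits, p_error, n_shots = 100, 0.05, 1000

# Ideal state: all qubits up, so the magnetization per qubit is +1.
# Error model: each qubit flips independently with probability p_error.
flips = rng.random((n_shots, n_qubits)) < p_error
magnetization = np.mean(np.where(flips, -1.0, 1.0))

print("ideal magnetization: 1.0")
print(f"measured with {p_error:.0%} flip errors: {magnetization:.3f}")
```

The estimated magnetization is shifted from the ideal value by roughly twice the error rate, so a few-percent rate of errors still leaves the observable within the roughly 10% precision discussed above.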

On the experimental side, analog quantum devices are very advanced. Atoms in optical lattices or in tweezers are a leading technology, together with trapped ions. In the first setup, several hundred atoms can be well controlled, and their interactions can be tailored to mimic specific models in condensed matter physics. Experiments with about 50 trapped ions are also available. These sizes go beyond what can be simulated with classical supercomputers. Other simulations have been performed with photons, quantum dots, or superconducting devices. In any case, one can expect that in the next few years we will be able to address some relevant problems in condensed matter physics and, perhaps, in high-energy physics or quantum chemistry as well.

5 Conclusions

Quantum computers have enormous potential to revolutionize many areas of our society. However, this requires building devices that are scalable, which means pushing existing technologies well beyond their current limits or developing new ones, as well as improving error-correction techniques. All of this requires a great deal of funding, as well as close collaboration between industry and research centers. In addition, it is imperative to identify further problems for which quantum computers can become a fundamental tool.

During the last few years, there has been great progress in the construction of NISQ devices on different platforms, and this has culminated in the announcement of the quantum advantage obtained by Google. These computers, and those to be developed in the short term, will not be able to run most quantum algorithms, as they make errors and are not scalable. However, it is very likely that they will give rise to new applications, as they have demonstrated that they are capable of performing a specific task more efficiently than a supercomputer. Collaboration between scientists and industry may be key to finding such applications.

One of the most relevant uses of quantum computers is the simulation of the complex quantum systems that appear in fields such as condensed matter physics, high-energy physics, or chemistry. For this task, it is often not necessary to build a scalable quantum computer; an analog device, called a quantum simulator, is sufficient. Such devices are well developed and can help us solve fundamental problems in physics, or in the design of materials or drugs. To this end, apart from the construction of the devices on different platforms, it is necessary to develop new methods of verification, benchmarking, and debugging.

The field of quantum computing was propelled about 25 years ago by the discovery of quantum algorithms that outperform classical ones, as well as by the identification of several physical systems with which to build quantum computers. Since then, the field has advanced enormously, and it is now possible to build devices that would have been unthinkable a couple of decades ago. However, there is still a very long way to go, full of excitement and, probably, many surprises.


Corresponding author: Juan Ignacio Cirac, Max-Planck Institute of Quantum Optics, Hans-Kopfermannstr. 1, D-85748 Garching, Germany, E-mail:

  1. Author contribution: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: None declared.

  3. Conflict of interest statement: The author declares no conflicts of interest regarding this article.

References

[1] F. Arute, K. Arya, R. Babbush, et al., “Quantum supremacy using a programmable superconducting processor,” Nature, vol. 574, pp. 505–510, 2019. https://doi.org/10.1038/s41586-019-1666-5.

[2] I. Buluta and F. Nori, “Quantum simulators,” Science, vol. 326, pp. 108–111, 2009. https://doi.org/10.1126/science.1177838.

[3] I. Georgescu, S. Ashhab, and F. Nori, “Quantum simulation,” Rev. Mod. Phys., vol. 86, p. 153, 2014. https://doi.org/10.1103/revmodphys.86.153.

[4] Special issue on “Quantum simulation,” Nat. Phys., vol. 8, pp. 263–299, 2012. https://doi.org/10.1038/nphys2258.

[5] Special issue on “Quantum simulation,” Ann. Phys., vol. 525, no. 10–11, pp. 739–888, 2013.

[6] A. Montanaro, “Quantum algorithms: an overview,” NPJ Quantum Inf., vol. 2, p. 15023, 2016. https://doi.org/10.1038/npjqi.2015.23.

[7] J. Preskill, “Quantum computing in the NISQ era and beyond,” Quantum, vol. 2, p. 79, 2018. https://doi.org/10.22331/q-2018-08-06-79.

[8] www.ibm.com/quantum-computing (no date).

[9] R. Feynman, “Simulating physics with computers,” Int. J. Theor. Phys., vol. 21, pp. 467–478, 1982. https://doi.org/10.1007/bf02650179.

Received: 2020-06-29
Accepted: 2020-08-31
Published Online: 2020-09-18

© 2020 Juan Ignacio Cirac, published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
