Paper leaks showing a quantum computer doing something a supercomputer can’t

https://arstechnica.com/?p=1573565

Artist's impression of quantum supremacy.

Disney / Marvel Studios

Mathematically, it’s easy to demonstrate that a working general-purpose quantum computer can easily outperform classical computers on some problems. Demonstrating this with an actual quantum computer, however, has been another issue entirely. Most of the quantum computers we’ve made don’t have enough qubits to handle the complex calculations where they’d clearly outperform a traditional computer. And scaling up the number of qubits has been complicated by issues of noise, crosstalk, and the tendency of qubits to lose their entanglement with their neighbors. All of which raised questions as to whether the theoretical supremacy of quantum computing can actually make a difference in the real world.

Over the weekend, the Financial Times claimed that Google researchers had demonstrated “quantum supremacy” in a draft research paper that had briefly appeared on a NASA web server before being pulled. But the details of what Google had achieved were left vague. In the interim, Ars has acquired copies of the draft paper, and we can confirm the Financial Times’ story. More importantly, we can now describe exactly what Google suggests it has achieved.

In essence, Google is sampling the behavior of a large group of entangled qubits—53 of them—to determine the statistics that describe a quantum system. This took roughly 30 seconds of qubit time, or about 10 minutes in total once you add in communications and control traffic. But determining those statistics—which one would do by solving the equations of quantum mechanics—simply isn’t possible on the world’s current fastest supercomputer.

A quantum problem

The problem tackled by Google involved sending a random pattern into the qubits and, at some later time, repeatedly measuring things. If you do this with a single qubit, the results of the measurements will produce a string of random digits. But, if you entangle two qubits, then a phenomenon called quantum interference starts influencing the string of bits generated using them. The result is that some specific arrangements of bits become more or less common. The same holds true as more bits are entangled.
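A toy example makes the effect concrete (this is entirely illustrative; none of it is from the paper): a lone qubit in superposition samples like a fair coin, while two entangled qubits show interference that suppresses some bitstrings outright.

```python
import random

# Toy sampler over measurement-outcome distributions. A Bell pair
# (Hadamard then CNOT) has interference that cancels the "01" and "10"
# amplitudes, so those bitstrings never show up in the samples.

def sample_counts(probs, n=1000):
    """Sample n bitstrings from a distribution over basis states."""
    states = list(probs)
    weights = [probs[s] for s in states]
    counts = {}
    for bits in random.choices(states, weights, k=n):
        counts[bits] = counts.get(bits, 0) + 1
    return counts

one_qubit = {"0": 0.5, "1": 0.5}                          # fair coin
bell_pair = {"00": 0.5, "01": 0.0, "10": 0.0, "11": 0.5}  # entangled pair

print(sample_counts(one_qubit))  # roughly even mix of "0" and "1"
print(sample_counts(bell_pair))  # only "00" and "11" ever appear
```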

For a small number of bits, it’s possible for a classical computer to calculate the interference pattern, and thus the probabilities of different outcomes from the system. But the problem gets ugly as the number of bits goes up. By running smaller problems on the world’s current most powerful supercomputer, the research team was able to estimate that the calculations would fail at about 14 qubits simply because the computer would run out of memory. If run on Google’s cloud compute services, pushing the calculations up to 20 qubits would cost 50 trillion core-hours and consume a petawatt-hour of electricity.
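The memory wall comes from the exponential size of the state being simulated: a direct (statevector) simulation has to store 2^n complex amplitudes for n qubits. A quick back-of-the-envelope sketch (my arithmetic, not the paper’s):

```python
# Memory needed to hold a full n-qubit statevector: 2**n complex
# amplitudes at 16 bytes each (double-precision real + imaginary parts).

def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (20, 40, 53):
    gib = statevector_bytes(n) / 2**30
    print(f"{n} qubits: {gib:,.3f} GiB")
# 53 qubits works out to 128 PiB, far beyond any machine's RAM.
```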

Based on that, it would seem a system with about 30 qubits would be sufficient to indicate superior quantum performance over a traditional non-quantum supercomputer. So, naturally, the researchers involved built one with 54 qubits, just to be sure. One of them turned out to be defective, leaving the computer with 53.

These qubits were similar to the designs other companies have been working on: superconducting loops of wire in which current can circulate in either of two directions. They were linked to microwave resonators that could be used to control each qubit using light of the appropriate frequency. The qubits were laid out in a grid, with connections going from each internal qubit to four of its neighbors (those on the edge of the grid had fewer connections). These connections could be used to entangle two neighboring qubits, with sequential operations drawing in ever more qubits until the entire chip was entangled.
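The connectivity described above can be sketched as a simple grid model (an illustration of the wiring pattern, not the actual chip layout, which differs in its details):

```python
# Each qubit sits at a grid site and couples to its nearest neighbors;
# sites on the boundary have fewer couplers. A 6x9 grid gives the chip's
# 54 sites (an illustrative geometry, not the chip's exact one).

ROWS, COLS = 6, 9

def neighbors(r, c):
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(nr, nc) for nr, nc in candidates
            if 0 <= nr < ROWS and 0 <= nc < COLS]

degrees = [len(neighbors(r, c)) for r in range(ROWS) for c in range(COLS)]
# 54 sites; internal qubits have 4 neighbors, corner qubits only 2.
print(len(degrees), max(degrees), min(degrees))
```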

Unforced errors

Notably absent from this setup is error correction. Over time, qubits tend to lose their state, and thus lose their entanglement. This process is somewhat stochastic, so it may happen early enough to destroy the results of a computation. With more qubits, obviously, this becomes a greater risk. But estimating the system’s overall error rate requires comparing its behavior to computed descriptions of that behavior, and we’ve already established that we can’t compute this system’s behavior.

To work around this, the research team started by observing the behavior of a single qubit. Among other things, this revealed that different qubits on the chip had error rates that could vary by more than a factor of 10. They then went on to test combinations of two qubits and saw that the error rates were largely a combination of the two error rates of the individual qubits. Not only did this make it easier to estimate the error rates of much larger combinations, but it showed that the hardware used to connect qubits, and the process used to entangle them, didn’t create significant sources of additional errors.
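In other words, errors compose multiplicatively: if each qubit’s operations succeed independently, the fidelity of a pair is the product of the individual fidelities, and for small error rates the pair’s error rate is roughly the sum of the two. A sketch with made-up numbers:

```python
# Hypothetical per-qubit error rates (illustrative values, not Google's
# measured numbers).
e1, e2 = 0.002, 0.005

# If errors are independent, fidelities multiply...
pair_fidelity = (1 - e1) * (1 - e2)
pair_error = 1 - pair_fidelity

# ...so for small rates, the combined error is close to the simple sum.
print(round(pair_error, 5), e1 + e2)  # 0.00699 vs 0.007
```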

That said, the error rate is not particularly impressive. “We can model the fidelity of a quantum circuit as the product of the probabilities of error-free operation of all gates and measurements,” the researchers write. “Our largest random quantum circuits have 53 qubits, 1113 single-qubit gates, 430 two-qubit gates, and a measurement on each qubit, for which we predict a total fidelity of 0.2 percent.”
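That quoted model can be checked with quick arithmetic. The gate counts below are from the quote; the per-operation error rates are illustrative guesses on my part, not the paper’s measured values, chosen only to show that error rates of a fraction of a percent per gate compound to a total fidelity on the order of the quoted 0.2 percent.

```python
# Fidelity as the product of per-operation success probabilities:
# F = (1 - e_1q)**N_1q * (1 - e_2q)**N_2q * (1 - e_meas)**N_meas

n_1q, n_2q, n_meas = 1113, 430, 53   # gate/measurement counts from the paper

# Assumed per-operation error rates (illustrative, not measured values):
e_1q, e_2q, e_meas = 0.0015, 0.006, 0.035

fidelity = ((1 - e_1q) ** n_1q
            * (1 - e_2q) ** n_2q
            * (1 - e_meas) ** n_meas)
print(f"predicted total fidelity: {fidelity:.2%}")  # on the order of 0.2%
```

A total fidelity around 0.2 percent means only about one run in 500 is error-free, which is why the statistics have to be gathered over many repetitions.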

The Supremes

So clearly, this hardware is not the makings of a general-purpose quantum computer, or at least not one you can trust. We needed error-corrected qubits before these results; we still need them after. And it’s possible to argue that this was less “performing a computation” than simply “repeatedly measuring a quantum system to get a probability distribution.”

But that seriously understates what’s going on here. Every calculation that’s done on a quantum computer will end up being a measurement of a quantum system. And in this case, there is simply no way to get that probability distribution using a classical computer. With this system, we can get it in under 10 minutes, and most of that time is spent in processing that doesn’t involve the qubits. As the researchers put it, “To our knowledge, this experiment marks the first computation that can only be performed on a quantum processor.”

Just as importantly, it shows that there’s no obvious barrier to scaling up quantum computations. The hard part is the work needed to set a certain number of qubits in a specific state and then entangle them. There was no obvious slowdown, no previously unrecognized physical issue that kept this from happening as the number of qubits went up. This should provide a bit of confidence that there’s nothing fundamental that will keep quantum computers from happening.

Recognizing the error rate, however, the researchers suggest that we’re not seeing the dawn of quantum computing, but rather what they call “Noisy Intermediate Scale Quantum technologies.” And in that sense, they very well may be right, in that just last week IBM announced that in October, it would be making a 53-qubit general-purpose quantum computer available. This won’t have error correction either, so it’s also likely to be unreliable (though IBM’s qubits may have a different error rate than Google’s). But it raises the intriguing possibility that Google’s result could be confirmed using IBM’s machine.

In the meantime, this particular system’s only obvious use is to produce a validated random number generator, so there’s not much in the way of obvious follow-ups. Rumors indicate that the final version of this paper will be published in a major journal within the next month, which probably explains why it was pulled offline so quickly. When the formal publication takes place, we can expect that Google and some of its competitors will be more interested in talking about the implications of this work.

via Ars Technica https://arstechnica.com

September 24, 2019 at 08:16AM
