SANTA BARBARA, California—Early this autumn, a paper leaked on a NASA site indicating Google engineers had built and tested hardware that achieved what's termed "quantum supremacy," completing calculations that would be impossible on a traditional computer. The paper was quickly pulled offline, and Google remained silent, leaving the rest of us to speculate about their plans for this device and any follow-ons the company might be preparing.
That speculation ended today, as Google released the final version of the paper that had leaked. But perhaps more significantly, the company invited the press to its quantum computing lab, talked about its plans, and gave us time to chat with the researchers behind the work.
The supremacy result
"I'm not going to bother explaining the quantum supremacy paper—if you were invited to come here, you probably all read the leaked paper," quipped Hartmut Neven, the head of Google's Quantum AI lab. But he found it hard to resist the topic entirely, and the other people who talked with reporters were more than happy to expand on Neven's discussion.
Google's Sergio Boixo explained the experiment in detail, describing how a random source was used to configure the gates among the qubits, after which the system's output was measured. The process was then repeated a few million times. While a normal computer will produce the same output given the same starting configuration, qubits exist in superpositions of values that make their measured output probabilistic, meaning the result of any single measurement can't be predicted. With enough measurements, however, it's possible to build up the probability distribution of the outputs.
Calculating that distribution is possible on a classical computer for a small number of qubits. But as the total number of qubits goes up, it becomes impossible to do so within the lifetime of existing supercomputing hardware. In essence, Google was asking a quantum computer to tell it what a quantum computer would do in a situation that's difficult for a traditional computer to predict.
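For a sense of the workflow, here's a toy version of the sampling procedure written with Cirq, Google's own development framework (discussed further below). It shrinks the problem to four qubits and a shallow circuit so it runs on an ordinary simulator; the real experiment used 53 qubits, a much deeper circuit, and a tailored gate set.

```python
# A toy version of random circuit sampling, using Google's Cirq framework.
# Everything is scaled way down so it runs on a laptop simulator.
import math
import random
from collections import Counter

import cirq

random.seed(0)
qubits = cirq.LineQubit.range(4)

# Build a shallow "random" circuit: random single-qubit rotations
# interleaved with entangling CZ gates between neighboring qubits.
ops = []
for layer in range(5):
    for q in qubits:
        ops.append(cirq.rz(random.uniform(0, 2 * math.pi)).on(q))
        ops.append(cirq.ry(random.uniform(0, 2 * math.pi)).on(q))
    offset = layer % 2
    for a, b in zip(qubits[offset::2], qubits[offset + 1::2]):
        ops.append(cirq.CZ(a, b))
circuit = cirq.Circuit(ops + [cirq.measure(*qubits, key='m')])

# Repeated measurement: each run yields one bitstring; many runs
# build up the output probability distribution.
result = cirq.Simulator().run(circuit, repetitions=10_000)
counts = Counter(tuple(bits) for bits in result.measurements['m'])
for bitstring, n in counts.most_common(5):
    print(bitstring, n / 10_000)
```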
(And doing so with a computer that has a high error rate. When asked, Google engineers indicated that errors alter the probability distribution in a way they could detect when the test was run with a moderate number of qubits.)
Google staff admitted that the problem was specifically chosen because quantum computers can produce results even when they have a high error rate. But, as researcher Julian Kelly put it, "if you can't beat the world's best classical computer on a contrived problem, you're never going to beat it on something useful." Boixo highlighted that the problem provided a useful test in its own right, showing that the system's overall error rate could be predicted by a simple linear extrapolation from the errors involved in setting and reading out individual pairs of qubits.
This seemingly indicates that there's no additional fragility caused by the increasing complexity of the system. While this had been shown before for smaller collections of qubits, Google's hardware extends those earlier measurements by a factor of 10¹³.
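The extrapolation Boixo described amounts to assuming that errors are independent, so the expected fidelity of a whole circuit is simply the product of the fidelities of its individual operations. A back-of-the-envelope sketch in Python (the error rates and operation counts here are illustrative, not the paper's exact figures):

```python
# If errors are independent, a circuit's expected fidelity is the product
# of the fidelities of every individual gate and readout operation.
# These numbers are illustrative, not the paper's exact figures.
e1 = 0.001   # single-qubit gate error rate
e2 = 0.005   # two-qubit gate error rate
em = 0.03    # per-qubit readout error rate

n_single, n_double, n_qubits = 1000, 400, 53
fidelity = (1 - e1) ** n_single * (1 - e2) ** n_double * (1 - em) ** n_qubits
print(f"predicted circuit fidelity: {fidelity:.4f}")  # ~0.0099, about 1%
```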
Google and its hardware
None of that, however, explains how Google ended up with a quantum computing research project to begin with. According to various people, the work was an outgrowth of academic research going on at the nearby University of California, Santa Barbara. A number of the Google staffers retain academic positions there and have grad students who work on projects at Google. This relationship was initiated by Google, which started looking into the prospect of doing its own work on quantum computing at about the same time the academics were looking for ways to expand beyond the work that traditionally took place at universities.
Google's interest was spurred by its AI efforts. There are a number of potential applications of quantum computing in AI, and the company had already experimented a bit on a D-Wave quantum annealer. But gate-based quantum computers hadn't matured enough to run much more than demonstrations. So, the company decided to build its own. To do so, it turned to superconducting qubits called transmons—the same choice that others in the field, like IBM, have made.
The hardware itself is a capacitor linked to a superconducting Josephson junction, in which a large collection of electrons behaves as a single quantum object. Each qubit acts like an oscillator, with its two possible output values corresponding to the oscillator being still or in motion. The hardware is quite large by quantum standards, which makes it relatively easy to control—you can bring wires right up next to it, something you can't do with individual electrons.
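Nobody at the event wrote out the underlying math, but the textbook Hamiltonian for this kind of circuit (a charging term from the capacitor plus the junction's cosine potential) captures the oscillator picture described above:

```latex
% Standard transmon Hamiltonian (textbook form; not from Google's talk).
% E_C: charging energy of the capacitor; E_J: Josephson energy;
% \hat{n}: number of Cooper pairs on the island; \hat{\varphi}: junction phase.
H = 4 E_C \left(\hat{n} - n_g\right)^2 - E_J \cos\hat{\varphi}
% In the transmon regime, E_J \gg E_C, giving a weakly anharmonic
% oscillator whose two lowest energy levels serve as the qubit's 0 and 1.
```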
Google has its own fabs, and the company makes the wiring and qubits on separate chips before combining them. But the challenges don't end there. The chip's packaging plays a role in shielding it from the environment, and it brings the control and readout signals in from external hardware—Google's Jimmy Chen noted that the packaging is so important that a member of that team was given the honor of being first author on the supremacy paper.
The control and readout wires are made of a superconducting niobium-titanium alloy, which makes them one of the most expensive individual parts of the whole assembly, according to Pedram Roushan. These wires connect the chip to external control hardware, with five wires required for every two qubits. (That wiring requirement is starting to create problems, as we'll get to later.)
The external control hardware for quantum computers is rather extensive. As Google's Evan Jeffrey described it, traditional processors contain circuitry that helps control the processor's behavior in response to relatively sparse external inputs. That's not true of quantum processors—every aspect of their control has to be provided from external sources. Currently, Google's setup loads all the control instructions into extremely low-latency external hardware and then executes them multiple times. Even so, Jeffrey told Ars, as the complexity of the instructions has risen with the number of qubits, the amount of time the qubits spend idle has climbed from 1% to 5%.
Chen also described how simply putting the hardware together isn't the end of the challenge. While the individual qubits are designed to be identical, small flaws or impurities and the local environment can all alter the behavior of individual qubits. As a result, each qubit has its own frequency and error rate, and those have to be determined before a given chip can be used. Chen is working on automating this calibration process, which currently takes a day or so.
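To make that concrete, here's a deliberately simplified sketch of what a single calibration step might look like: sweep a drive frequency, record the qubit's response, and locate the resonance. The measure_response function is a hypothetical stand-in that fakes a noisy resonance peak so the script actually runs; Google's real calibration stack is far more involved.

```python
# A toy illustration of per-qubit frequency calibration: sweep a drive
# frequency, record the response, and pick out the resonance peak.
# `measure_response` is a hypothetical stand-in for real control hardware.
import numpy as np

rng = np.random.default_rng(0)

def measure_response(freq_ghz, true_freq=6.372, linewidth=0.004):
    # Fake a Lorentzian resonance with a little measurement noise.
    signal = 1.0 / (1.0 + ((freq_ghz - true_freq) / linewidth) ** 2)
    return signal + rng.normal(0, 0.02)

freqs = np.linspace(6.3, 6.45, 301)
responses = np.array([measure_response(f) for f in freqs])

# Crude estimate: take the frequency with the strongest response.
qubit_freq = freqs[np.argmax(responses)]
print(f"calibrated qubit frequency: {qubit_freq:.4f} GHz")
```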
What's coming, hardware-wise
The processor that handled the quantum supremacy experiment is based on a hardware design called Sycamore, and it has 53 qubits (due to one non-functional device in a planned array of 54). That's actually a step down from the company's earlier Bristlecone design, which had 72 qubits. But Sycamore has more connections among its qubits, and that better fits with Google's long-term design goals.
Google refers to the design goal as the "surface code," an error-correction scheme whose focus is enabling fault-tolerant quantum computing. The surface code, as Google's Marissa Giustina described it, requires nearest-neighbor coupling, so the Sycamore design lays out its qubits in a square grid. All but the edge qubits have connections to their four neighbors.
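Cirq, Google's development framework, exposes this layout directly through its GridQubit class. A short sketch (the 4x4 grid here is for illustration; Sycamore's actual grid is larger):

```python
# Sketch of a Sycamore-style square grid using Cirq's GridQubit.
# Interior qubits have four nearest neighbors; edges and corners fewer.
import cirq

grid = cirq.GridQubit.square(4)  # a 4x4 grid for illustration

def neighbors(q, qubits):
    # Two grid qubits are coupled if they sit one step apart.
    return [p for p in qubits
            if abs(p.row - q.row) + abs(p.col - q.col) == 1]

for q in grid:
    print(q, "->", len(neighbors(q, grid)), "neighbors")
# Corners print 2, edges 3, and interior qubits 4: the nearest-neighbor
# coupling the surface code relies on.
```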
But the layout isn't the only issue that stands between Google and error-correcting qubits. Google Hardware Lead John Martinis said that you also need two-qubit operations to have an error rate of about 0.1% before error correction is realistically possible. Right now, that figure stands at roughly 0.3%. The team is confident it can be brought down, but they're not there yet.
Another issue is wiring. Error correction requires multiple qubits to act as a single logical qubit, which means a lot more control wires for each logical qubit in use. And, right now, that wiring is physically large compared to the chip itself. That will absolutely have to change to add significant numbers of additional qubits to the chips, and Google knows it. The wiring problem "is boring—it's not a very exciting thing," quipped Martinis. "But it's so important that I've been working on it."
Error correction also requires a fundamental change in the control hardware and software. At the moment, controlling the chip generally involves sending a series of operations, then reading out the results. But error correction requires more of a conversation, with constant sampling of the qubit state and corrective commands issued as needed. For this to work, Jeffrey noted, latency will really need to come down.
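Schematically, the control loop changes from "send instructions, read results" to something like the following sketch, where every function is a hypothetical placeholder rather than a real API:

```python
# A schematic of the "conversation" error correction requires: the
# controller repeatedly reads out syndrome qubits and pushes corrections
# back within the same cycle, which demands very low latency.
# All functions here are hypothetical placeholders, not a real API.
import random

def measure_syndromes():
    # Stand-in: randomly flag a detected error ~10% of the time
    # on each of eight syndrome qubits.
    return [i for i in range(8) if random.random() < 0.1]

def apply_correction(syndrome):
    print(f"correcting error flagged by syndrome qubit {syndrome}")

for cycle in range(10):
    for syndrome in measure_syndromes():  # read out ancilla qubits
        apply_correction(syndrome)        # feed a fix back, same cycle
```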
Overall, the future of Google's hardware was best summed up by Kelly, who said, "lots of things will have to change, and we're aware of that." Martinis said that, as they did when moving away from the Bristlecone design, they're not afraid to scrap something that's currently successful: "We go to conferences and pay attention, and we're willing to pivot if we find we need to."
Software, too
While the quantum supremacy result was a bit of a contrived problem, it actually has a potentially useful application, in that the processor will produce a set of truly random digits, one that could be audited and verified if needed. But while potentially valuable, that's unlikely to provide a large enough market to justify Google's investment here. So developing additional software applications is going to be essential for the success of this project in the long term.
A number of other companies have approached this issue by providing a cloud interface to their quantum hardware, even if said hardware had too few qubits to do anything useful. The goal was to allow people to gain experience working with a particular form of quantum processor and encourage the development of the libraries and toolkits that will make future software development easier. Martinis was frank about this, saying, "resources dictated we do [the] powerful processor first and open it up to services later."
A Google quantum cloud service, however, is likely to launch relatively soon. Nobody would give a deadline, but the talk seemed to indicate that next year is a likely target. Google has already put together an open source development framework for its hardware called Cirq, and the company built a quantum chemistry simulation toolkit called OpenFermion on top of it. Yet without hardware to run the resulting software on, these tools aren't going to pull researchers away from other platforms.
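For a flavor of what developing against such a service might look like, here's a minimal Cirq program: it builds a two-qubit entangled state and samples it on the bundled simulator, the same workflow a cloud backend would presumably expose.

```python
# A minimal Cirq program: build a two-qubit Bell-state circuit and
# sample it repeatedly on Cirq's built-in simulator.
import cirq

a, b = cirq.LineQubit.range(2)
circuit = cirq.Circuit([
    cirq.H(a),             # put the first qubit in superposition
    cirq.CNOT(a, b),       # entangle the pair
    cirq.measure(a, b, key='m'),
])

result = cirq.Simulator().run(circuit, repetitions=100)
print(result.histogram(key='m'))  # roughly half 0 (|00>) and half 3 (|11>)
```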
Dave Bacon, who leads the development of these toolkits, said that one of the challenges is that nobody's sure of the right abstractions to make at this point. With classical computers, he said, you don't have to care about transistor physics. At the current stage—Google's calling it NISQ, for noisy intermediate-scale quantum—you need to squeeze the most out of limited hardware, and that level of abstraction may not be appropriate.
Bacon said that Google expected the same three uses that other quantum computing companies are focusing on: simulation of quantum systems like complex chemicals and biomolecules; machine learning; and optimization problems. Google has been hosting annual symposia with researchers in the field to find out what they'd be interested in doing, and Bacon said that these experiences emphasized that scaling up the number of qubits would be critical to bringing additional algorithms into play. But he echoed people outside of Google, acknowledging that nobody is really certain how useful these algorithms will be when the processors are still error-prone.
Still, Bacon highlighted one major change that the hardware has already enabled. A decade ago, he used to have to generate mathematical proofs to show a given algorithm would work as intended. "Now, I don't have to be smart," Bacon said. "I can just run it and see what happens."
The quantum computing landscape
Google's quantum supremacy announcement came at a time when other companies have already had cloud-based quantum processors available for well over a year (and D-Wave's quantum annealer is substantially older). Google's Neven dismissed this as a bit irrelevant, saying, "quantum computing is a marathon—we've tried to avoid petty competitiveness—it's not company vs. company, it's humankind vs. nature."
But not all his competitors would agree. IBM, for example, having heard that this announcement was forthcoming, took issue with the declaration of quantum supremacy. Its objection was twofold. First, IBM has decided that any declaration of quantum supremacy is inappropriate in an era of error-prone computations. But its second issue was less semantic and more technical.
Google's argument for quantum supremacy focused on the claim that a simulation of its processor's behavior would take 10,000 years on a state-of-the-art supercomputer. But IBM noted that Google's argument was based in part on memory starvation, and supercomputers have hard disks that can hold temporary data during the computations. If that disk space is factored in, IBM argues, the calculation could take as little as 2.5 days. At a couple of minutes, the quantum processor beats that handily, but there's still the chance that algorithm optimizations will cut the margin considerably.
This is somewhat similar to the experience D-Wave had, where every indication of a quantum advantage was quickly matched by computer scientists returning to the classical computing algorithms and finding ways of extracting speedups. And Google, to an extent, has expected this. Neven told us that the company had already funded "red team" researchers in academia to try to do similar optimizations.
To an extent, this is a misdirection. Google will undoubtedly be able to add qubits, which will slow the classical simulation down even further. At the same time, IBM's reaction does suggest that the existing players in the field are starting to get sensitive to competition, even from a rival that doesn't yet have a marketable product.
There is the possibility that this competition will end up focusing on developer mindshare. Each company's processor will undoubtedly have distinctive hardware characteristics. As Bacon said, right now, software development involves talking fairly directly to the hardware. But he also told Ars that the differences aren't so substantial that we're likely to see any sort of vendor lock-in to a given processor unless some radically different technology begins to dominate, such as photon- or cold-atom-based computations.
All of which suggests that, quantum supremacy or not, Google's entry into this market won't provide an immediate shake-up. Instead, it's likely to push everyone to accelerate their work on increasing the number of qubits and managing the error rate. In that sense, Neven's characterization of this being "humankind vs. nature" may not be so far off.