Excuse me a moment—I am going to be bombastic, overexcited, and possibly annoying. The race is run, and we have a winner in the future of quantum computing. IBM, Google, and everyone else can turn in their quantum computing cards and take up knitting.
OK, the situation isn’t that cut and dried yet, but a recent paper has described a fully programmable, chip-based optical quantum computer. That idea presses all my buttons, and until someone restarts me, I will talk of nothing else.
Love the light
There is no question that quantum computing has come a long way in 20 years. Two decades ago, optical quantum technology looked like the way forward. Storing information in a photon's quantum states (as an optical qubit) was easy. Manipulating those states with standard optical elements was also easy, and measuring the outcome was relatively trivial. Quantum computing was just a new application of existing quantum experiments, and those experiments had shown the ease of use of the systems and gave optical technologies the early advantage.

What is a qubit?
A qubit is a bit of quantum information. A normal bit can have two states, zero or one, and it is in one or the other of those states at all times. A two-state qubit also has just two states, zero and one, but it is not restricted to being in one of them. Instead, the qubit can be in both states simultaneously, a condition called a superposition state, and it collapses to a definite one or zero only when it is measured (read out). A quantum computation is about manipulating the chance of obtaining a one or a zero at readout.

In the optical system I'm describing here, it's possible to create qubits with more than two states. A qubit may then be in state zero, one, two, or three, and the superposition assigns a probability to each of these, so more information can be encoded in a single qubit. In this case, each qubit is a superposition of containing between zero and three photons. The quantum computer can process up to eight of these qubits at a time. The readout of the processing is a combination of the number of photons and where on the device the photons exit (there are eight different exits they can choose from).
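To make the readout picture concrete, here's a minimal sketch, not a model of the paper's actual hardware, of a four-level "qudit" encoded as zero to three photons. The state is just a list of four amplitudes, the chance of reading out each photon number is the squared magnitude of its amplitude, and each measurement shot yields exactly one outcome:

```python
import random
from collections import Counter

# Illustrative four-level qudit: amplitudes for containing 0, 1, 2, or 3
# photons. An equal superposition puts the same weight on each count.
amplitudes = [0.5, 0.5, 0.5, 0.5]

# Born rule: the probability of each readout is the squared magnitude
# of its amplitude. For a valid state, these must sum to 1.
probs = [abs(a) ** 2 for a in amplitudes]
assert abs(sum(probs) - 1) < 1e-9

# "Measuring" collapses the superposition: each shot gives one photon
# count, drawn according to the probabilities above.
shots = Counter(random.choices(range(4), weights=probs, k=10_000))
print(dict(shots))  # roughly 2,500 counts for each of 0, 1, 2, 3
```

The point of the sketch: a quantum computation doesn't hand you the amplitudes directly; it shapes them so that the outcomes you sample at readout are biased toward the answer you want.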
But one key to quantum computing (or any computation, really) is the ability to change a qubit's state depending on the state of another qubit. This turned out to be doable but cumbersome in optical quantum computing. Typically, a two- (or more) qubit operation is a nonlinear operation, and optical nonlinear processes are very inefficient. Linear two-qubit operations are possible, but they are probabilistic, so you need to repeat your calculation many times to be confident you know which answer is correct.

A second critical feature is programmability. You don't want to have to build a new computer for every computation you wish to perform. Here, optical quantum computers really seemed to fall down: an optical quantum computer could be easy to set up and measure, or it could be programmable, but not both.

In the meantime, private companies bet on being able to overcome the challenges faced by superconducting transmon qubits and trapped-ion qubits. In the first case, engineers could make use of all their experience with printed circuit board layout and radio-frequency engineering to scale the number and quality of the qubits. In the second, engineers banked on being able to scale the number of qubits, already knowing that the qubits were high-quality and long-lived. Optical quantum computers seemed doomed.
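The cost of those probabilistic linear gates is easy to see with a back-of-the-envelope simulation. If a gate succeeds with probability p per attempt (and a herald detector tells you which attempts to keep), you expect about 1/p attempts per useful gate. The p = 1/9 below is the success probability often quoted for a simple post-selected linear-optical two-qubit gate; treat it as illustrative rather than a figure from the paper:

```python
import random

# Assumed success probability per attempt for a heralded/post-selected
# linear-optical two-qubit gate (illustrative value, not from the paper).
p = 1 / 9

def attempts_until_success():
    """Keep retrying the probabilistic gate until the herald says it worked."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

# Average over many gates: the expected number of attempts is 1/p = 9.
trials = [attempts_until_success() for _ in range(20_000)]
print(sum(trials) / len(trials))  # close to 9
```

And that overhead compounds: a circuit with many such gates needs every one of them to succeed, which is a big part of why purely linear-optical schemes looked so unpromising for deep computations.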