


MIT Physicists Score a Big Victory in Quantum Error Correction

Information stored in the quantum bits known as qubits is so fragile that just the act of observing it turns it to junk.

How is it possible to detect an error in a piece of data if you aren't allowed to actually look at what the data is? This is the fundamental question behind quantum error detection, and it would seem to be, by definition, an insurmountable barrier: information stored in the form of the quantum bits known as qubits is so fragile that just the act of observing or measuring it turns it to junk.


It's a deep question and one that lies at the very heart of quantum computing. At the Association for Computing Machinery's Symposium on Theory of Computing in June, researchers from MIT, Google, the University of Sydney, and Cornell University will present a new solution to this problem that's based not on measuring the actual values of qubits, but on measuring the relationships between different qubits, a sort of indirect but revealing observation.

Error detection is among the least flashy corners of computer science and engineering, and the algorithms can even seem a bit arcane. Information is lost within any given system as a matter of course, not through hacking or some malicious intervention, but simply as a feature of electrical hardware: noise. It comes in many different forms, some more avoidable than others, but at the very least, any electronics not operating at absolute zero are subject to thermal noise. The result is that bits get lost or corrupted.

To look is to destroy

There are lots of ways of detecting these errors and even correcting them. A common and beautifully simple low-level example is the parity bit: count up all of the "1" bits in a binary string, then append an extra bit whose value makes the total number of 1s odd or even, depending on whether the scheme uses odd or even parity. So, under odd parity, if the string contains an even number of 1s, the parity bit is set to 1; if the count is already odd, it's set to 0. If a single bit later flips, the count's parity changes and the error is revealed.
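To make that concrete, here is a minimal Python sketch of adding and checking an odd-parity bit (the function names are just for this illustration):

```python
# Minimal sketch of odd parity: the extra bit is chosen so that the total
# number of 1s in the transmitted string is odd.
def add_odd_parity(bits):
    ones = sum(bits)
    parity_bit = 0 if ones % 2 == 1 else 1   # make the total count of 1s odd
    return bits + [parity_bit]

def check_odd_parity(bits_with_parity):
    return sum(bits_with_parity) % 2 == 1    # True means no error detected

byte = [1, 0, 1, 1, 0, 0, 1, 0]              # four 1s -> even, so the parity bit is 1
sent = add_odd_parity(byte)
print(check_odd_parity(sent))                # True: looks clean
sent[3] ^= 1                                 # a single bit flips in transit
print(check_odd_parity(sent))                # False: the error is detected
```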


Obviously, this isn't foolproof (it misses errors that flip an even number of bits), but it does a really good job most of the time, since most bit loss or corruption affects only a single bit per byte. But it involves actually counting the bits and "looking" at the data. With quantum data, we have no such luxury: to look is to destroy. This is fundamental.

Quantum error correction looked all but impossible until 1995, when Peter Shor, a mathematics professor at MIT and the brain behind the encryption-defeating Shor's algorithm, came up with the first workable correction scheme.

It goes a bit like this, courtesy of a traffic light metaphor from Aram Harrow, a physics professor at MIT and one of the researchers behind the new error correction scheme.

"A traffic light can be red or green," he says. "At a normal intersection, the four lights will be either RGRG or GRGR. This is an example of a (classical) error-correcting code because if you can't see one of the lights you can figure out what color it should be by looking at the other lights. In this case, you wouldn't say that 'one traffic light exists in four locations,' but instead you could say that the information about whose turn it is to drive is redundantly encoded in many locations."

"One difference between this and a quantum code," Harrow adds, "is in a classical code the information could be extracted from any one of many different locations (assuming there has been no error)‐e.g. seeing any one of the four traffic lights is enough—where in a quantum code the information cannot be extracted without accessing many different locations."


So the question becomes not "What is the value of this qubit?" but "Does this qubit agree with this other qubit?" It seems easy enough, but it comes with a significant limitation: the number of errors that can be corrected must be less than the square root of the total number of qubits being analyzed. A code spread across 10,000 qubits, for example, could correct fewer than 100 errors.

In the most basic form of quantum error correction, called the three-qubit repetition code, we take some qubit and entangle it with two others, so that rather than a single particle occupying a combination of "1" and "0" states, we have a combination of "111" and "000" states.

The data is sent, and at the other end an operation is performed on the second and third qubits (leaving the first alone); the result corresponds to one of the possible errors that might crop up in a combination of "111" and "000." This is only detection, however; the correction part comes courtesy of Shor, who devised a scheme that uses nine qubits rather than three.
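The quantum version can't be captured in a few lines of classical code, but the logic of the syndrome can. The sketch below is a toy classical model of the three-qubit bit-flip code (the function names are my own, and it ignores superpositions and phase errors entirely); it shows how two pairwise parity checks locate a single flipped qubit without ever reading out the encoded value.

```python
# Toy classical sketch of the three-qubit bit-flip code. The syndrome is built
# from parity comparisons ("does qubit A agree with qubit B?"), never from
# reading a qubit's value directly.

def encode(bit):
    """Repeat the logical bit across three physical bits: |000> or |111>."""
    return [bit, bit, bit]

def syndrome(block):
    """The two parity checks: does qubit 0 agree with 1? Does 1 agree with 2?"""
    return (block[0] ^ block[1], block[1] ^ block[2])

def correct(block):
    """Each syndrome points to the single flip that explains it (or to none)."""
    flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(block)]
    if flip is not None:
        block[flip] ^= 1
    return block

codeword = encode(1)        # logical "1" becomes [1, 1, 1]
codeword[2] ^= 1            # noise flips the third qubit: [1, 1, 0]
print(syndrome(codeword))   # (0, 1): the checks disagree, pointing at qubit 2
print(correct(codeword))    # [1, 1, 1]: fixed without ever reading the logical value
```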

"These measurement always have the form 'Does A disagree with B?,'" explains Harrow. "Except it might be, instead of A and B, A B C D E F G, a whole block of things. Those types of measurements, in a real system, can be very hard to do. That's why it's really desirable to reduce the number of qubits you have to measure at once."

Harrow and his group's new error correction scheme beats the square-root limit by giving quantum computations their own memories, in a way. That is, as some qubit or collection of qubits bounces around a quantum computer, it's always changing states and values through time. The new method gives every time increment in the computation its own bank of qubits representing the target qubit (the one whose correctness we're trying to check) at that particular time. So, instead of having three or nine qubits traveling together through time, they're stretched out across time.

In this scheme, error detection and correction compare qubits that are entangled from one stage to the next, sort of like reading down a timeline: if an earlier stage doesn't match a later one, you've found your error. The catch is that implementing something like this, where every step in a given computation has its own bank of qubits, means a whole lot of added complexity.

Harrow is nonetheless confident: "Almost all of the sparse schemes started out with not very many logical qubits, and then people figured out how to get a lot more. Usually, it's been easier to increase the number of logical qubits than to increase the distance—the number of errors you can correct. So we're hoping that will be the case for ours, too."

The group's work is explained further in a paper posted to the arXiv pre-print server.