Physicists Succeed In Compressing Quantum Data for the First Time

Storing infinite information in a single particle.

A group of Canadian and Japanese physicists has for the first time compressed quantum information. Specifically, the team managed to take the data packaged into three qubits and stuff it down into a two-qubit container. Granted, we're not exactly talking about the Library of Babel here, but the road toward something as powerful and daunting as real-world quantum computing is littered with proofs of concept.

The new quantum "compression" scheme, a description of which is slated for publication in a forthcoming issue of Physical Review Letters, has the added benefit of offering a new window into the peculiar, highly counter-intuitive world of quantum information.

For one thing, there is the matter of quantum information flatly resisting the compression techniques used on classical bits. Classical bits are simple, deterministic 1s and 0s, stored in memory systems as long strings. It's possible to take the information stored in these strings and restate it in ways that require less memory.

An easy example is the image information for a simple red square. Uncompressed, that information might consist of data for each individual pixel, e.g. "1: red, 2: red, 3: red" and so on for potentially tens of thousands of pixels. We might instead record the same information as "1024 pixels wide, 750 pixels tall: red." This is the idea behind lossless compression: exploiting statistical redundancy.
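For a feel of how that works in practice, here's a toy Python sketch of run-length encoding, one of the simplest lossless tricks; the pixel list is a made-up stand-in for real image data:

    def run_length_encode(pixels):
        """Collapse runs of identical values into [value, count] pairs."""
        encoded = []
        for pixel in pixels:
            if encoded and encoded[-1][0] == pixel:
                encoded[-1][1] += 1
            else:
                encoded.append([pixel, 1])
        return encoded

    def run_length_decode(encoded):
        """Expand [value, count] pairs back into the original pixel list."""
        return [value for value, count in encoded for _ in range(count)]

    image = ["red"] * 1024                     # one row of a solid red square
    compressed = run_length_encode(image)
    print(compressed)                          # [['red', 1024]] -- far less to store
    assert run_length_decode(compressed) == image

Nothing is lost here: the decode step rebuilds the original row exactly, which is what makes the scheme lossless.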

Lossy compression is a bit stranger. Instead of restating redundant information in smaller terms, lossy compression makes educated guesses about what information it can just trash. If there are some red pixels in an image, but not enough to be perceived by the human eye, lossy compression just says fuck it and throws them out.
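In the same toy spirit, a lossy sketch might quantize nearly identical shades into a single bucket and accept that the fine detail is gone for good. The bucket size below is an arbitrary assumption, standing in for a real perceptual model:

    def lossy_quantize(values, bucket=16):
        """Snap each 8-bit value to the center of its bucket; detail is lost."""
        return [(v // bucket) * bucket + bucket // 2 for v in values]

    row = [200, 201, 199, 202, 200]   # shades too close for the eye to tell apart
    print(lossy_quantize(row))        # [200, 200, 200, 200, 200] -- one shade survives

The payoff is that the now-identical values compress beautifully downstream; the cost is that no decode step can ever recover the originals.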

Both of these compression schemes need to know definitively that a pixel is red and only red (for example) in order to make the necessary determinations. In the quantum world, however, a piece of information is never so definite. A bit of information in the world of classical physics is either a 1 or a 0, true or false, yes or no. In the quantum world, the bit becomes a qubit, which is both a 1 and a 0 and the whole range of possibilities in between; a qubit is a probabilistic definition of information, rather than the bit's deterministic one. So: classical compression doesn't work so well in the quantum world.
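To put that difference in code: here's a minimal classical sketch (not a real quantum simulation) of what a qubit's description holds. Instead of one definite value, a pair of amplitudes encodes a whole probability distribution; real-valued amplitudes are assumed for simplicity:

    import math

    bit = 1                                        # a classical bit: one definite value

    theta = 0.7                                    # an arbitrary example state
    amp0, amp1 = math.cos(theta), math.sin(theta)  # amplitudes for 0 and 1
    p0, p1 = amp0 ** 2, amp1 ** 2                  # squared amplitudes give probabilities
    print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")     # 'both' 0 and 1 until measured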

To get information from a qubit, it's necessary to take a measurement of it. This has the effect of determining an exact, singular state (a 1 or 0, and not both) while tossing out all of the other probabilistic information contained in the unmeasured qubit. This state wasn't some hidden property of the qubit, but rather one of many states that could have occurred as the result of a measurement. If you were to back up time and do the same measurement again, the result might be different. It's hard to call information redundant if you can't ever say for sure what that information is.
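A few lines of Python capture the flavor of that non-repeatability, with measurement treated as a random draw from the qubit's distribution (a classical stand-in, not the real physics):

    import random

    def measure(p1):
        """Collapse: return 1 with probability p1, else 0."""
        return 1 if random.random() < p1 else 0

    p1 = 0.36                               # the unmeasured qubit's full description
    print([measure(p1) for _ in range(5)])  # e.g. [0, 1, 0, 0, 1] -- same state, different answers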

The new quantum compression scheme was first proposed by physicists Martin Plesch and Vladimír Bužek back in 2010. The team behind the current work, led by the University of Toronto's Aephraim Steinberg, is the first to demonstrate it.

Essentially, if we have a system of qubits all in the same state (with the same probability distributions), we have identical qubits, even though we might get different results upon measuring the individual qubits. Strangely enough, particles in the quantum world can be both identical and distinct at the same time.

Storing all of the quantum information contained by these systems takes up a whole lot of memory space; as more qubits are added, memory needs rise exponentially. To compress these qubits, the idea is to take just one of these identical (yet distinct) qubits and trash the others. That one qubit is put in storage, and because all of the original qubits had the same probability distributions, decompression is just a matter of taking the stored qubit's state and applying it to a bunch of fresh qubits. These new qubits, sharing the state of the stored qubit, are effectively the same qubits as the ones we trashed during compression.
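As a loose caricature of that bookkeeping, consider the sketch below. Real quantum states can't simply be read out and copied like this (the actual Plesch-Bužek protocol is a quantum circuit acting on the qubits themselves), so this only mirrors the store-one, re-prepare-many logic described above:

    def compress(identical_states):
        """Keep one representative description of the shared state."""
        return identical_states[0]

    def decompress(stored_state, n):
        """Prepare n fresh qubits in the stored state."""
        return [stored_state] * n

    ensemble = [(0.8, 0.6)] * 3          # three qubits sharing one state description
    stored = compress(ensemble)          # put a single state into storage...
    restored = decompress(stored, 3)     # ...and re-prepare the ensemble later
    assert restored == ensemble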

"Possible use of the scheme is a quantum memory," Plesch and Bužek wrote in their 2010 paper. "Having more copies of a single-qubit state, it might be very reasonable to compress them into a state of only a few qubits, which will be more easily protected against decoherence. If the copies are needed again, we perform the decompression transformation."

Now we can go ahead and do computing tasks with our resurrected qubits, those tasks consisting of different sorts of measurements to get different sorts of information. "This way you can store the qubits until you know what question you're interested in," Steinberg told Physics World. "Then you can measure x if you want to know x; and if you want to know z, you can measure z—whereas if you don't store the qubits, you have to choose which measurements you want to do right now."
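Here's a sketch of that "choose your question later" point, using the standard single-qubit probabilities for z- and x-basis measurements; real amplitudes and an arbitrary example state are assumed:

    import math, random

    def measure_z(a0, a1):
        """Z-basis: outcome 0 with probability a0^2."""
        return 0 if random.random() < a0 ** 2 else 1

    def measure_x(a0, a1):
        """X-basis: outcome '+' with probability (a0 + a1)^2 / 2."""
        return "+" if random.random() < (a0 + a1) ** 2 / 2 else "-"

    a0, a1 = math.cos(0.3), math.sin(0.3)   # a stored, identically prepared state
    print(measure_z(a0, a1))                # ask the z question...
    print(measure_x(a0, a1))                # ...or the x question, as needed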

Finally, it's all just interesting to ponder: a look inside a world of awesome and barely comprehensible possibilities. "Even if I gave you a billion identically prepared photons, you could get different information from each one," Steinberg said. "To describe their states completely would require infinite classical information."