British researchers are developing new camera tech to give us eyes in impossible places.
Image: Heriot-Watt University
Your fancy DSLR’s shutter speed has got nothing on this: a team of researchers is developing a camera that can capture images—videos, to be precise—at the speed of light. That makes it the fastest camera there is, and it comes with an intriguing application: seeing objects hidden around corners.
I saw the camera at the Royal Society’s Summer Science Exhibition, where Jonathan Leach from Heriot-Watt University explained that it’s fast enough to see individual beams of light move through space. To demonstrate, take a look at this image of a laser that I took with my regular smartphone camera. The laser looks like a static green line bouncing off surfaces and criss-crossing over itself, just as it does to the human eye.
The camera, however, sees the laser beam moving through the air as the light bounces from one surface to another. Here’s a snippet of video it captured of that same laser beam:
What you’re seeing there is 15 billion frames per second, with an effective exposure time of 67 picoseconds (a picosecond is a trillionth of a second).
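Those two figures are really one and the same: at 15 billion frames per second, each frame can last at most one fifteen-billionth of a second. A quick sanity check of the arithmetic:

```python
# Exposure time implied by the frame rate quoted above:
# at 15 billion frames per second, each frame lasts 1 / 15e9 seconds.
FRAME_RATE = 15e9  # frames per second

frame_duration_s = 1.0 / FRAME_RATE
frame_duration_ps = frame_duration_s * 1e12  # convert seconds to picoseconds

print(f"{frame_duration_ps:.0f} ps per frame")  # prints "67 ps per frame"

# For a sense of scale: how far light travels during a single frame
SPEED_OF_LIGHT = 299_792_458  # metres per second
distance_per_frame_m = SPEED_OF_LIGHT * frame_duration_s
print(f"light travels {distance_per_frame_m * 100:.1f} cm per frame")  # ~2 cm
```

In other words, between one frame and the next, a pulse of light moves only about two centimetres—which is why the beam appears to crawl through the air in the video.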
So how does that enable the camera to see round corners? We’re not talking about sticking the sensor around a corner and operating it at a right angle, like the smart guns we saw earlier this month; the camera should be able to detect objects hidden behind a wall without being physically in the line of sight. It's a similar goal to that of a camera developed by MIT, but the way it works is a bit different, and Leach is confident they'll be able to get it to work in real time.
It sees around corners by recording the scattered light cast by an object hidden from view. Say there’s a person around the corner: the laser bounces off whatever surface (like a wall) to hit them, and the light then bounces back via the same route. The camera’s 32 by 32 pixel sensor—the part that looks a bit like a golden postage stamp—is super-sensitive, and each pixel can detect individual photons that come back to it.
Computer algorithms figure out where the object is based on the time it takes the photons to do this there-and-back journey. As the researchers explain in their video, a “timer” is started for each pixel when the pulse of light leaves the laser, and stops when a photon gets back to it. As light scatters differently off different objects, you can work out what it is—in our example, a person.
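The core of that there-and-back calculation is simple time-of-flight arithmetic: half the round-trip time, multiplied by the speed of light, gives the one-way path length. A minimal sketch (the function name and the 20-nanosecond example are illustrative, not from the researchers):

```python
C = 299_792_458  # speed of light, metres per second

def path_length_m(round_trip_time_s: float) -> float:
    """One-way path length implied by a pixel's round-trip photon timing.

    The photon travels out to the hidden object and back along (roughly)
    the same route, so the one-way distance is half the total flight time
    multiplied by the speed of light.
    """
    return C * round_trip_time_s / 2

# A photon that returns 20 nanoseconds after the laser pulse fired
# travelled a one-way path of roughly 3 metres
print(path_length_m(20e-9))
```

The real reconstruction is harder than this, of course—each of the 1,024 pixels sees photons arriving from many scattering paths, and the algorithms have to combine all those timings to localise the object—but the per-photon distance estimate comes down to this one line.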
“A human or a dog or a car has a unique signature that our camera can detect,” said Leach.
The researchers have run simulations of the camera looking around corners, and the next step is to test it in a lab setting. Leach explained that it’s more effective in the dark, where the laser’s path isn’t confused by stray photons from other sources. The signal is also stronger when the hidden object is bigger and whiter, and when the material the laser bounces off to reach it is good at scattering light.
“If the wall that we were reflecting it off was not a wall, it was a mirror, life would be a lot easier,” he said.
But even though a lot of photons might not make it back to the camera, it doesn’t need many to be useful. “In terms of applications, you don’t need that many photons to work out how far away something is,” said Leach.
The applications he foresees range from the obvious military uses, to search and rescue missions where the camera could locate trapped people in a building, to medical imaging, where it could enable new types of endoscopy. He also suggested that a system like this could be integrated into cars, so you could be warned that a kid is running into the road round the corner when you’re about to reverse-park there.
At the same exhibition, Matthew Edgar from the University of Glasgow showed off another pretty cool piece of imaging equipment: a camera that captures images using just one pixel. In the race to cram more and more megapixels into consumer cameras, the idea seems rather counterintuitive, but Edgar explained that their one-pixel camera is able to see colours of light beyond the visible spectrum, like infrared. The point, he said, was that most camera sensors are made of silicon, which is great for detecting visible light, but not much else.
The camera with just one pixel
As it can see all types of infrared (unlike some other infrared imaging devices), the one-pixel camera is able to take images through things like tinted screens and smoke. Edgar took a picture of what looked like a black screen, to reveal a head behind it. The added bonus of using just one pixel is that it’s really inexpensive, in contrast to the kind of infrared gadgets you get on space agencies’ telescopes.
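A common way single-pixel cameras work (the Glasgow design may differ in its details) is to display a sequence of structured light masks and record one total-intensity reading per mask; with enough masks, the image can be recovered by inverting that set of measurements. Here’s a minimal simulation of the idea, assuming Hadamard masks built by Sylvester’s construction:

```python
import numpy as np

N = 8  # an 8x8 "scene" means 64 unknown pixel values to recover

# Build a 64x64 Hadamard matrix; each row is one +/-1 measurement mask.
# (In hardware, the -1 entries are realised as differential measurements.)
H = np.array([[1.0]])
for _ in range(6):  # Sylvester's construction doubles the size: 1 -> 64
    H = np.block([[H, H], [H, -H]])

# The scene the single pixel never images directly: a bright square
scene = np.zeros((N, N))
scene[2:6, 2:6] = 1.0

# Display each mask in turn; the lone pixel records one weighted
# total-intensity value per mask
measurements = H @ scene.ravel()

# Hadamard matrices satisfy H @ H.T = n * I, so inverting the
# measurement process is just a transpose and a rescale
reconstruction = (H.T @ measurements / 64).reshape(N, N)
print(np.allclose(reconstruction, scene))  # prints "True"
```

The practical appeal is exactly what Edgar described: the expensive, exotic part of the system is a single detector element, so it can be a material sensitive to infrared without the cost of fabricating a full megapixel array of it.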
As with Leach’s camera, the military applications are obvious, but it’s also able to image gases—to detect leaks of methane, for example—and to see through some paints to reveal hidden messages or sketches on the canvas.
The Glasgow team also developed a system that takes 3D photos with a single camera, as opposed to the usual setup of several. If the other cameras help give us eyes in places it would otherwise be impossible to see, this one helps us image what we can see with a bit more realism.
The only problem: we don’t really have a medium for viewing 3D images yet, and for now a 3D TV and some cinema specs are still required.