ScreenAvoider Uses Deep Machine Learning to Keep Digital Displays Private

Indiana University computer scientists offer an automated system to keep you from leaking your own data.

Dec 3 2014, 1:00pm

Image: I HQ/Flickr

A new image recognition and cloaking system developed by researchers at Indiana University offers a way of masking our computer and smartphone screens from camera surveillance. Dubbed ScreenAvoider, the technology aims to secure those unique information-portholes into users' private lives, first by identifying vulnerable screens and then auto-censoring them.

The catch is that ScreenAvoider is currently a technology meant more to protect you from yourself than from the lurking cameras of others—largely because your cameras are lurking more and more.

The problem is real: as wearable "lifelogging" tools like Google Glass continue to scale up, a somewhat subtle privacy threat arises. As lifeloggers go about their usual screen-based business, their cameras will automatically be gathering and storing images at a rate of several per minute.

These auto-logged images may include captures of screens, possibly containing sensitive information. These images are then beamed to some cloud-based storage drive, where security may be less assured. ScreenAvoider, which is described in a paper posted this week to the arXiv pre-print server, serves to redact that sensitive information.

"These wearable cameras can collect thousands of images every day, many of which may capture private activities (like using the restroom) or information (like catching private documents)," the paper explains. "Many of these devices communicate with cloud-based applications, so that images are automatically shared with the cloud provider, and software features make it as easy to share images as it is to collect them."

The researchers, led by Indiana University Privacy Lab founder Apu Kapadia and computer vision researcher and professor David Crandall, point to the recent iCloud breach, in which around 500 celebrity photos were swiped from Apple's servers.

"These [sharing] features raise obvious privacy concerns, as research shows that users themselves often mistakenly disclose information electronically (through 'misclosures'). This problem is exacerbated with large, unwieldy collections of lifelogging images, any of which may contain risks to privacy," Kapadia and his team write.

The technology behind ScreenAvoider is based on deep learning, a newish branch of machine learning built on many-layered neural networks. An algorithm is trained on natural data sets not just to recognize monitors, but to classify what's on those monitors. Deep learning schemes let the system learn which visual features matter at the same time that it learns to classify images using those features. It would be as if I had never seen a cat before and had to work out what makes a cat a cat while simultaneously sorting various animals into cats and non-cats.
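The paper doesn't spell out its architecture in this article, but the core idea of learning features and a classifier together can be sketched in miniature. The toy network below (entirely hypothetical, not the authors' model) trains a single hidden layer on synthetic "screen" versus "no screen" feature vectors, learning its own internal representation as it learns the decision boundary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for image features: "screen" samples cluster
# around +1, "no screen" samples around -1. The real system trains on
# actual lifelogging photos, not vectors like these.
screens = rng.normal(loc=1.0, scale=0.3, size=(50, 8))
others = rng.normal(loc=-1.0, scale=0.3, size=(50, 8))
X = np.vstack([screens, others])
y = np.array([1] * 50 + [0] * 50).reshape(-1, 1)

# One hidden layer plus a sigmoid output: the hidden representation
# and the classifier are learned jointly, which is the point of the
# cat analogy above.
W1 = rng.normal(size=(8, 4)) * 0.1
W2 = rng.normal(size=(4, 1)) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):  # plain batch gradient descent
    h = np.tanh(X @ W1)          # learned hidden features
    p = sigmoid(h @ W2)          # predicted P(screen)
    grad_out = (p - y) / len(X)  # cross-entropy gradient at the output
    dW2 = h.T @ grad_out
    dW1 = X.T @ ((grad_out @ W2.T) * (1 - h ** 2))  # backpropagate
    W2 -= 0.5 * dW2
    W1 -= 0.5 * dW1

accuracy = float(np.mean((sigmoid(np.tanh(X @ W1) @ W2) > 0.5) == y))
print(accuracy)  # easily separable toy data, so accuracy approaches 1.0
```

A real deep network stacks many such layers and convolutions, which is exactly why, as the researchers note below, the training is so resource intensive.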

Image: Kapadia

Kapadia and team note that this sort of processing, which uses many more computational layers than conventional machine learning schemes, is highly resource intensive and really only practical on graphics processing units (GPUs), a division of computational labor that is becoming more and more common as computing tasks get deeper and denser.

To make the system more practical for everyday computers, the team devised an alternative system with lower demands, called ScreenTag. "This ScreenTag system, which is complementary to ScreenAvoider, dynamically creates and renders a machine-readable visual code overlaid on the computer's display, that contains information about which applications are running on the system," they write. "This way, lifelogging photos taken of the monitor include a 'watermark' that is easier for the lifelogging system to detect and interpret."
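The paper doesn't give the encoding ScreenTag uses, but the idea of packing the running-application list into a compact, checksummed payload that an on-screen visual code could carry can be sketched like this (the format, app names, and policy set here are all hypothetical):

```python
import base64
import json
import zlib

def encode_tag(running_apps):
    """Pack the list of running apps into a compact checksummed string
    that a rendered visual code (e.g. a barcode) could carry."""
    body = json.dumps(sorted(running_apps)).encode()
    checksum = zlib.crc32(body)
    return base64.b32encode(checksum.to_bytes(4, "big") + body).decode()

def decode_tag(tag):
    """Recover the app list on the lifelogging side, rejecting tags
    mangled by a blurry or partial capture."""
    raw = base64.b32decode(tag)
    checksum, body = raw[:4], raw[4:]
    if zlib.crc32(body) != int.from_bytes(checksum, "big"):
        raise ValueError("corrupted tag")
    return json.loads(body)

SENSITIVE = {"banking", "email"}  # hypothetical user policy

tag = encode_tag(["email", "editor"])
apps = decode_tag(tag)
discard = bool(SENSITIVE & set(apps))
print(apps, discard)  # a photo showing this tag would be discarded
```

The appeal of the watermark approach is that decoding a visual code is far cheaper than running a deep classifier over every captured frame.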

The theory is that something like ScreenAvoider will ultimately help protect users from other users as well as their own lifelogging cameras. If someone has ScreenAvoider enabled to censor their own information, it follows that the same sort of information recorded from the screens of others would be censored as well. Granted, that's not terribly helpful if some other user wants to capture your information, but it's something, anyway.

"We advocate a sociotechnical approach where people with such cameras have a general sense of 'propriety' and can specify rules such as 'I'm willing to discard 10 percent of my images if they capture other people's monitors'," Kapadia told me. "After all, many thousands of images a day may be captured, and discarding 10 percent of the images may not have a negative impact to the wearer. Our previous work found that lifeloggers indeed exhibit such feelings of propriety."
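The rule Kapadia describes is simple enough to express as a filter over a day's captures. This sketch (hypothetical field names and a made-up day of images, not the authors' code) discards images flagged as capturing someone else's monitor, up to the user's stated 10 percent budget:

```python
def apply_propriety_rule(images, budget=0.10):
    """Drop images flagged as capturing other people's monitors, as
    long as total discards stay within the user's chosen budget."""
    max_discards = int(len(images) * budget)
    kept, discarded = [], 0
    for img in images:
        if img["captures_other_monitor"] and discarded < max_discards:
            discarded += 1
        else:
            kept.append(img)
    return kept

# A made-up day: 100 captures, 5 of which show someone else's screen.
day = [{"id": i, "captures_other_monitor": i % 20 == 0}
       for i in range(100)]
kept = apply_propriety_rule(day)
print(len(kept))  # 95: all five flagged images fit within the budget
```

Since the flagged images here stay well under the 10 percent ceiling, the wearer loses only the captures that threatened someone else's privacy.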

In any case, there are tools already on the market for thwarting image-snoops, including filters, as well as in-development technologies that attempt to make entire environments secure from image eavesdropping. One proposal in particular works by identifying the camera sensors in a room and then blinding them with pulses of light. It may seem like overkill, but the cloaking of personal information displayed on screens in public remains an open problem.

"Of course, if somebody wants to maliciously capture your monitor, our approach may actually exacerbate the problem as it helps them find images with monitors in them," Kapadia said. "In that case one may have to rely on other solutions that for example: detect cameras facing the screen and take some action, e.g. warning the user, or shoot a beam of light at the camera to wash out the image."

In August, Kapadia and his Privacy Lab received a $1.2 million grant from the National Science Foundation to study privacy as it relates to wearable technology.