Researchers Created Fake 'Master' Fingerprints to Unlock Smartphones

It’s the same principle as a master key, but applied to biometric identification with a high rate of success.
Image: Fake, AI-generated fingerprints

AI can generate fake fingerprints that work as master keys for smartphones that use biometric sensors. According to the researchers who developed the technique, the attack can be launched against individuals with “some probability of success.”

Biometric IDs seem to be about as close to a perfect identification system as you can get. These types of IDs are based on the unique physical traits of individuals, such as fingerprints, irises, or even the veins in your hand. In recent years, however, security researchers have demonstrated that it is possible to fool many, if not most, forms of biometric identification.


In most cases, spoofing biometric IDs requires making a fake face or finger vein pattern that matches an existing individual. In a paper posted to arXiv earlier this month, however, researchers from New York University and Michigan State University detailed how they trained a machine learning algorithm to generate fake fingerprints that can serve as a match for a “large number” of real fingerprints stored in databases.

Known as DeepMasterPrints, these artificially generated fingerprints are similar to the master key for a building. To create a master fingerprint, the researchers fed an artificial neural network—a type of computing architecture loosely modeled on the human brain that “learns” based on input data—real fingerprints from over 6,000 individuals. Although the researchers were not the first to consider creating master fingerprints, they were the first to use a machine learning algorithm to create working master prints.

A “generator” neural net then analyzed these fingerprint images so it could begin producing its own. These synthetic fingerprints were then fed to a “discriminator” neural net that determined whether they were genuine or fake. If they were judged fake, the generator adjusted its parameters and tried again. This process was repeated thousands of times until the generator was able to reliably fool the discriminator—a setup known as a generative adversarial network, or GAN.
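To make that back-and-forth concrete, here is a minimal GAN training loop in PyTorch. This is an illustrative toy under assumed shapes and layer sizes, not the DeepMasterPrints architecture from the paper; in particular, the real_batch placeholder below returns random noise where real fingerprint images would go.

```python
# A minimal GAN training loop sketching the generator/discriminator dynamic
# described above. Toy example: fully connected nets on flattened 32x32
# "patches" stand in for convolutional nets on real fingerprint images.
import torch
import torch.nn as nn

LATENT_DIM = 64          # size of the random seed the generator starts from
IMG_PIXELS = 32 * 32     # assumed stand-in for a flattened fingerprint patch

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def real_batch(n=32):
    # Placeholder: random noise standing in for real fingerprint images.
    return torch.rand(n, IMG_PIXELS) * 2 - 1

for step in range(1000):
    real = real_batch()
    noise = torch.randn(real.size(0), LATENT_DIM)
    fake = generator(noise)

    # Train the discriminator to label real images 1 and generated images 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(real.size(0), 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(real.size(0), 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator to make the discriminator call its output real.
    # "Adjusting the image" in the text is really adjusting these weights.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    opt_g.step()
```

In a real pipeline both nets would be convolutional and the training data would be the fingerprint images described above; this sketch only shows the adversarial loop itself.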


The master prints generated by the researchers were specifically designed to target the type of fingerprint sensor found in most modern smartphones. These capacitive fingerprint scanners usually only take partial readings of fingerprints when they are placed on the sensor. This is mostly for convenience, since it would be impractical to require a user to place their finger on the sensor the exact same way each time they scan their print. The convenience of partial fingerprint readings comes at the cost of security, which is convenient for a sneaky AI.
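To see why that matters for security, consider a toy sketch of partial matching. Everything here is an assumption for illustration (prints as raw pixel arrays, naive normalized correlation as the matcher, invented sensor sizes and thresholds); real matchers compare minutiae and ridge features instead. The point is the any(): a candidate print succeeds if it resembles any one of the stored partial templates, not the whole finger.

```python
# Toy model of partial fingerprint matching on a small capacitive sensor.
# Assumed sizes, threshold, and pixel-correlation matcher are illustrative
# only; production matchers work on minutiae/ridge features, not raw pixels.
import numpy as np

SENSOR = 16  # assumed sensor window: 16x16 pixels of a 64x64 print

def crops(print_img, size=SENSOR, stride=8):
    """All partial readings a small sensor could take of one full print."""
    h, w = print_img.shape
    return [print_img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]

def similarity(a, b):
    """Naive normalized correlation between two patches."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-9
    return float((a * b).sum() / denom)

def unlocks(candidate, enrolled_partials, threshold=0.6):
    # The attacker only needs to resemble ANY stored partial reading;
    # that "any" is the surface a master print exploits.
    return any(similarity(candidate, p) > threshold for p in enrolled_partials)

rng = np.random.default_rng(0)
full_print = rng.random((64, 64))      # stand-in for an enrolled finger
enrolled = crops(full_print)           # the partial templates a phone keeps
attacker_patch = rng.random((SENSOR, SENSOR))
print(unlocks(attacker_patch, enrolled))
```

The more partial templates a device stores per finger, and the more fingers it enrolls, the more chances a single synthetic patch has to clear the threshold somewhere.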

The researchers used two types of fingerprint data for training their neural nets. One data set used “rolled” fingerprints, which consist of images scanned from prints that were inked on paper. The other data set was generated from the capacitive sensors that are used to digitally capture a fingerprint. Overall, the system was significantly better at spoofing capacitive prints than rolled prints at all three security levels.


Each security level is defined by its false match rate (FMR), or the probability that the sensor will incorrectly identify a fingerprint as a match. The highest level of security only incorrectly identifies a match 0.01 percent of the time, the middle tier has an FMR of 0.1 percent, and the lowest level of security has an FMR of one percent.
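In implementation terms, a security level like this is just a stricter accept threshold on the matcher's similarity score. The sketch below assumes a synthetic impostor-score distribution purely for illustration; a real vendor would calibrate the cutoff on measured impostor comparisons.

```python
# Turning a target false match rate (FMR) into a score threshold.
# The impostor-score distribution here is simulated; real systems measure it.
import numpy as np

rng = np.random.default_rng(1)
impostor_scores = rng.normal(0.3, 0.1, 100_000)  # scores of non-matching pairs

def threshold_for_fmr(scores, fmr):
    """Pick the cutoff so only an `fmr` fraction of impostor attempts pass."""
    return float(np.quantile(scores, 1.0 - fmr))

for fmr in (0.01, 0.001, 0.0001):  # the 1%, 0.1%, and 0.01% tiers above
    t = threshold_for_fmr(impostor_scores, fmr)
    print(f"FMR {fmr:.2%}: accept scores above {t:.3f}")
```

A master print beats this setup not by breaking the threshold but by being an unusually high-scoring impostor against many different enrolled prints at once, which is why its match rate can far exceed the FMR.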


At the lowest of the three security tiers, the researchers were able to fool the sensor with their master fingerprints up to 76 percent of the time for digital prints. While this is impressive, the researchers also note that it is “unlikely” that any fingerprint sensor operates at such a low level of security. In the middle security tier, where a sensor incorrectly identifies a match 0.1 percent of the time—which the researchers described as a “realistic security option”—they were able to spoof digital fingerprints 22 percent of the time. As the researchers noted, this is a “much higher number of (impostor) matches than what the FMR would lead one to expect.”

At the highest level of security, the researchers note that the master print is “not very good” at spoofing the sensor—it fooled the sensor less than 1.2 percent of the time.

While this research doesn’t spell the end of fingerprint ID systems, the researchers said it will force the designers of these systems to rethink the tradeoff between convenience and security.


Correction: A previous version of this article stated the work was done in collaboration with the University of Michigan when in fact it was done with Michigan State University. Motherboard regrets the error.