
    Computers Can Tell If You're a Hipster

    Written by Meghan Neal, contributing editor


    Machines are slowly learning to think like humans, which unfortunately means they're learning our annoying tendency to slap dubious, stereotypical labels on people.

    A team of computer scientists at the University of California, San Diego has developed an algorithm that filters people into one of eight social subcultures: biker, country, Goth, heavy metal, hip hop, hipster, raver, or surfer. (These are society's most popular subcultures, per Wikipedia.) To train its stereotypical ways, researchers fed the computer thousands of group-shot photos and taught it to analyze the parts of each image that could reveal social clues, like hairstyle or tattoos: eyes, top of head, neck, torso, and arms.

    At first, researchers told the algorithm which attributes would land someone in a certain category—this guy has a beard and is wearing a plaid shirt, so he's probably a hipster, and so forth. Then they tested the machine to see if it could figure it out on its own. The algorithm turned out to be right 48 percent of the time—not great, but much better than random chance, which only got it right 9 percent of the time.
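To see why 48 percent beats a chance baseline, here's a toy sketch of that train-then-test idea. This is not the study's actual method (which analyzed real image regions); every attribute value below is invented for illustration, and the "signature" classifier is just a stand-in for whatever model the researchers used:

```python
import random

random.seed(0)

TRIBES = ["biker", "country", "goth", "heavy metal",
          "hip hop", "hipster", "raver", "surfer"]

# Hypothetical per-tribe "attribute signatures": one coarse feature per
# body region the study looked at (eyes, top of head, neck, torso, arms).
# All numbers here are made up for the sake of the demo.
SIGNATURES = {tribe: [random.random() for _ in range(5)] for tribe in TRIBES}

def make_example(tribe):
    """Simulate one person's extracted features: tribe signature plus noise."""
    return [v + random.gauss(0, 0.15) for v in SIGNATURES[tribe]]

def classify(features):
    """Nearest-signature classifier: pick the tribe whose signature is closest."""
    def dist(sig):
        return sum((a - b) ** 2 for a, b in zip(features, sig))
    return min(TRIBES, key=lambda t: dist(SIGNATURES[t]))

# Evaluate on fresh simulated examples the classifier has never seen.
trials = 800
hits = sum(classify(make_example(t)) == t
           for t in random.choices(TRIBES, k=trials))
accuracy = hits / trials
chance = 1 / len(TRIBES)  # 12.5% if all eight tribes are equally likely

print(f"accuracy: {accuracy:.0%}  vs chance: {chance:.0%}")
```

Note the chance baseline here is 1-in-8 (12.5 percent); the study's reported 9 percent presumably reflects an uneven mix of tribes in its real dataset.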


    Cool. Computers will soon be fairly adept at cementing social molds and bursting the idealist bubble that every individual is a unique snowflake. But for what? Well, societal subcultures, or "urban tribes" to use the academic nomenclature, are one of those things that humans recognize intuitively but algorithms don't pick up on. Teaching machines to infer that kind of information from visual patterns is a relatively new area of machine learning, and of computer vision specifically, and one that's especially interesting to artificial intelligence researchers.

    What's more, the torrent of images pouring into the web—some 300 million a day are posted to Facebook alone—means there's a huge untapped database of potential insight into human behavior. Developing a machine that can recognize what's going on in a photograph by putting it into context could unlock that well of information, which could translate into lucrative business opportunities for web companies, and better features for users: more relevant search results (and subsequently, ads), and better recommendations and content on social networks.

    It's the same reason the tech titans in Silicon Valley are pouring money into an artificial intelligence arms race. And there are other interesting projects working on teaching computers human intuition. Over at Carnegie Mellon University, researchers are working on developing a machine with “common sense.” It looks at thousands of online images a day and learns to make associations between objects, slowly piling up a basic store of knowledge about the world to draw conclusions about things it was never programmed to know.

    The UCSD researchers are tackling the same problem from the social identity perspective. But do we really want Google or Facebook to lump us into a boilerplate group and serve up content based solely on the photos we uploaded from the last party we went to? It's a subtle difference from the personal profile these sites have already amassed on us, but one that could make social networks online look more and more like a high school lunch room.