
Researchers Developed a New, Even More Convincing Way to Write Fake Yelp Reviews

A new machine learning method keeps AI on track when writing fake reviews.

Aside from being pretty vulnerable to trolls with a vendetta, Yelp is susceptible to abuse through fake reviews. A 2016 report showed that Yelp labels approximately 25 percent of reviews as suspicious or not recommended, and the business-review platform announced that year that it would start working with the New York Attorney General to prosecute fake reviewers.

Previously, fake reviews have had a hint of uncanny-valley bot-speak, sometimes slipping non sequiturs or nonsense phrases into the text. Mika Juuti, a doctoral student at Aalto University in Finland, and a team of researchers developed a new way to make algorithmically generated reviews more believable.


It’s based on research from 2017, when a team from the University of Chicago made algorithmically generated restaurant reviews that plagiarism detection services couldn’t spot. They wrote things like, “The food here is freaking amazing, the portions are giant. The cheese bagel was cooked to perfection and well prepared, fresh & delicious! The service was fast. Our favorite spot for sure! We will be back!” Pretty convincing. I, too, enjoy my bagels cooked to perfection.

The only problem with this method was that the AI tended to let its mind wander. Sometimes, a reference to a different city, food, or state slipped in, which flagged the bot-written review as fake to detection systems. Juuti developed a new way to keep the AI on track, using a machine learning technique called neural machine translation.

It uses a text sequence that follows “review rating, restaurant name, city, state, and food tags,” according to an Aalto press release—a combination that keeps the AI focused on the review at hand, without making it too stiff and bot-like.
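To make the idea concrete, here is a minimal sketch of what such a conditioning sequence might look like. The exact field formatting and the `make_context` helper are assumptions for illustration; only the field order (rating, restaurant name, city, state, food tags) comes from the press release. In a neural machine translation setup, the model would be trained to "translate" strings like this into fluent review text.

```python
def make_context(rating, name, city, state, tags):
    # Hypothetical formatting of the conditioning sequence described
    # in the press release: rating, restaurant name, city, state, tags.
    return f"{rating} {name} {city} {state} {' '.join(tags)}"

# The "source sentence" an NMT model would translate into a review:
src = make_context(5, "Garaje", "San Francisco", "CA", ["mexican", "tacos"])
print(src)  # 5 Garaje San Francisco CA mexican tacos

# Training pairs would then look like (context, real_review); at
# generation time, a new context keeps the model anchored to the
# right restaurant, city, and cuisine.
```

Because every generated review is conditioned on this compact context, the model has far less room to drift toward a different city or dish mid-review.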

The study, presented at the European Symposium on Research in Computer Security this month, asked participants to read real reviews written by humans and fake machine-generated reviews. The researchers then asked participants to identify the fake ones. “Up to 60 percent of the fake reviews were mistakenly thought to be real,” Juuti said.

Alongside the more focused AI, the researchers developed a fake-spotting tool that they say is better than what’s currently available. “Machine-aided fake review detection is necessary since human users are ineffective in identifying fake reviews,” they write in the study. “Robust detection of fake reviews is thus still an open problem.”