Old School 'Sniffing' Attacks Can Still Reveal Your Browsing History

The way that major browsers store history and structure links leaves them vulnerable to old school ‘sniffing’ attacks, according to new research from the University of California San Diego.

Nov 2 2018, 1:00pm

Image: Composed by Caroline Haskins. Google Chrome and Firefox logos from Wikimedia Commons. Puppy image from Pexels

Most modern browsers—including Chrome, Firefox, and Edge, and even security-focused Firefox variants such as FuzzyFox and DeterFox—have vulnerabilities that allow the hosts of malicious websites to extract hundreds to thousands of URLs from a user's web history, per new research from the University of California San Diego.

What's worse, the vulnerabilities are built into the way these browsers store history and style links, meaning that major structural changes will have to take place in these browsers in order to protect user privacy. The only browser immune to the attacks was Tor Browser, which does not keep track of a user's internet history.

In a statement provided to Motherboard via email, senior engineering manager of Firefox security Wennie Leung said that Firefox will “prioritize our review of these bugs based on the threat assessment.” Google spokesperson Ivy Choi told Motherboard in an email that they are aware of the issue and are "evaluating possible solutions."

In a phone call with Motherboard, lead researcher Michael Smith said that the repairs will take several months to a year to implement, because the researchers are still determining the best possible fix—and recommending it to browser vendors—before anything is pushed to users.

“We’re currently working on collecting more data about the implications of implementing the fix—like how many websites actually use the visited links feature, how widespread would the effect be?” Smith said. “Can we put upon the fix ourselves, and then do a user study [to] see if people are annoyed, if people even notice the change.”

The vulnerabilities have to do with why, for instance, unclicked links appear blue while visited links appear violet: a different set of style rules applies to a link depending on whether it has been visited or not. A bad actor building a web page can exploit those styling differences—and the different rendering work they trigger—by “sniffing,” or inferring, your browsing history. In essence, sniffing means finding and exploiting proxies that reveal your web history.
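The mechanism itself is ordinary CSS: browsers apply separate style rules to visited and unvisited links, along the lines of this minimal stylesheet fragment:

```css
/* The browser picks which rule applies by consulting your history. */
a:link    { color: blue; }   /* link not in browsing history */
a:visited { color: purple; } /* link found in browsing history */
```

Because which rule fires depends on your history, any observable side effect of applying the style—such as how long it takes to render—can leak whether you have visited the link.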

As outlined in the UC San Diego report, this sniffing could happen in a couple of ways: an attacker could force the browser to repeatedly load complex images or image transformations that differ based on whether you've visited a link or not, which would create drastic differences in the loading time for each. With this strategy, an attacker can test 60 sensitive URLs per second.
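The inference step of such an attack is simple once a timing difference exists. The toy simulation below (illustrative only—not the researchers' actual attack code, and the millisecond figures are invented) shows the idea: render each candidate link, time it, and classify anything slow as "visited."

```python
import random

# Hypothetical render times: a visited link triggers expensive styling.
VISITED_MS = 18.0
UNVISITED_MS = 2.0
THRESHOLD_MS = 10.0  # decision boundary between the two cases


def render_time(url, history):
    """Simulate how long the browser takes to render one link."""
    base = VISITED_MS if url in history else UNVISITED_MS
    return base + random.uniform(-1.0, 1.0)  # small measurement jitter


def sniff(urls, history):
    """Infer visited status purely from the timing side channel."""
    return {url: render_time(url, history) > THRESHOLD_MS for url in urls}


# The attacker never reads `history` directly—only the timings.
history = {"https://example.com/a"}
guesses = sniff(["https://example.com/a", "https://example.com/b"], history)
```

The real attacks are more elaborate (they must defeat browser mitigations and filter out noise), but they reduce to the same classification step.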

In Google Chrome, the actor could also exploit what’s called a bytecode cache, which speeds up the loading time for revisiting a link that you’ve already visited. By embedding a special script in a web page, the actor can test how long it takes for a web page to load and infer whether you’ve visited it or not. Actors can probe 3,000 URLs per second with this method. But when the researchers reported the vulnerability to Google, the company marked the issue as “security-sensitive” but “low-priority.”

The researchers detected another vulnerability in Google Chrome's Paint API, a feature released this year that makes it easier to transform visual elements. The company repaired that vulnerability after the researchers reported it.

Smith told Motherboard that he was able to identify all of these vulnerabilities in just a week.

"It's time to actually sit down and fix this problem."

Sniffing is far from new: the method was discovered in 2002, and it largely faded from focus in 2010, when web browser developers like Mozilla restructured how browsers handle visited-link styling to make sniffing more difficult. However, the problem never completely disappeared.

In a phone call with Motherboard, UC San Diego security researcher Deian Stefan said that since browsers are constantly updating and adding new features for developers, old flaws can gradually become gaping vulnerabilities.

“The biggest deal for us was the low bandwidth attacks are still feasible today,” Stefan told Motherboard. “As users, we really don’t think about these things too much. But it’s time to actually sit down and fix this problem.”

Smith told Motherboard that after submitting security reports to the vulnerable browser vendors, the researchers began communicating with Google, Mozilla, and the CSS Working Group, the body that assesses and proposes changes to the web's styling language.

Stefan told Motherboard that the way that user history is shared between web pages would have to fundamentally change, in essence altering how the web actually works.

“In this new world, visited links [won’t] work the way they do today,” Stefan said. “Basically, if you click a link on Google.com, Google [will get] to know that you clicked on that link. But then when you go to a different website, like Facebook, and you happen [to] come across the same link, it will look as if you never clicked on that link before. That’s fundamentally different than the way it is today.”

The problem is that there are some advantages to the way links work now. For instance, if a link has been visited before—which the browser knows based on your web history—the browser can speed up loading and help the page appear faster. Plus, it's sometimes visually useful to know whether or not you've clicked on something before in, say, a Google search or on your Reddit home page.

However, Smith told Motherboard that it’s unclear whether these perks outweigh the security and privacy vulnerabilities, since each new browser update continues to exacerbate these structural problems.

“We keep adding stuff to web browsers without a more formalized approach to how each new thing intersects,” Smith said. “We’re just going to keep introducing more and more holes, which is why we’d like to see more structural-level defenses, versus more ad-hoc patching that will be broken later by some other new addition.”