Your Feed is All You: The Nuanced Art of Personalization at Facebook

How do we increase engagement with user interfaces without upping the creepiness factor so often associated with today's personalization?

Perhaps more than any other organization, Facebook faces the challenge of personalizing digital information at a grand scale (as of July 2016, monthly users topped out at about 1.71 billion, almost a quarter of the world's total population). Over the past few years, Facebook users have likely experienced both more ad content and more "relevant" stories that relate to past searches, clicks, and other measurements of customer interaction (more on how this works a little later).


Hussein Mehanna is the director of engineering for the Core Machine Learning group at Facebook, which is part of the Applied Machine Learning organization that steers the development of algorithms for Facebook's artificial intelligence. Mehanna has led a team that he says is making a conscious effort to reduce the "creep" factor as they continue to improve their more personalized user interface, which users know as the News Feed.

"Facebook today in terms of personalization leverages a lot of social signals… the challenge is doing this at scale, doing this for 1 billion users for (on average) 1,500 stories per user every day," says Mehanna. As a consequence of massive amounts of incoming data, the task of creating algorithms to better measure a consumer's behavior and interests (two years ago the algorithm was said to measure over 100,000 factors) is becoming increasingly complex.

***

After getting some controversial attention over its old algorithm EdgeRank, Facebook introduced its newest algorithm, Facebook FYI, and continually posts updates to its newsroom blog explaining how the algorithm determines what shows up in your News Feed. While social signals (such as who you interact with, whether you hide certain posts, etc.) are important, Mehanna says future efforts at personalization will need algorithms that can look "beyond" the obvious. That will be necessary to develop a much sharper understanding of exactly what you like about a particular product, service, or story, and when you like it, since people's interests are not static and often change over time.
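
As a rough, purely illustrative picture of how those explicit social signals might nudge an affinity score up or down, consider the toy update rule below; the signal names and step sizes are assumptions, since the article doesn't describe Facebook's actual update logic.

```python
# Toy update rule: explicit signals nudge a 0..1 affinity score up or down.
# Signal names and step sizes are assumptions for illustration only.
SIGNAL_DELTAS = {
    "like": +0.10,
    "comment": +0.25,
    "share": +0.30,
    "hide_post": -0.50,   # negative signals are rarer, so they weigh more
    "unfollow": -1.00,
}

def update_affinity(current: float, signal: str) -> float:
    """Apply one interaction signal, clamping affinity to [0, 1]."""
    return min(1.0, max(0.0, current + SIGNAL_DELTAS.get(signal, 0.0)))

affinity = 0.5
for signal in ["like", "comment", "hide_post"]:
    affinity = update_affinity(affinity, signal)
print(round(affinity, 2))  # 0.35: one hide outweighs a like plus a comment
```

In this sketch a hide outweighs a like and a comment combined, reflecting the intuition that negative signals are rarer and more deliberate.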


Understanding the true sense of "engagement" with content will be critical, and this is where Mehanna says deep learning may have its real advantage. He gives an example from his own Facebook News Feed.

"I have stumbled across a video for Messi," he says, "which is this very famous player. It was a video about clips of him playing soccer and doing some amazing tackles when he was a kid all the way until he became a professional player. It was about 15 minutes long or more and I watched all of that—the video itself is interesting… So that is a kind of content I would like to see, but I don't want the system to confuse that with me liking soccer, so what is it in the video that the user likes—that is a very hard problem and we need to solve that."

There are plenty of questions that might be relevant when trying to form an initial understanding of why Mehanna watched the video. Did he watch a 15-minute soccer video in his Facebook feed because he loves soccer, or is he just a fan of the player being highlighted? Maybe he just loves elite sports performance in general? Or was he simply interested in the cinematography and unique camera angles? When it comes to understanding motivations, deep learning is still "coarse" in its applications to personalization, especially in rich media like video. Understanding video (especially in real time) is something that Twitter, Clarifai, and other AI companies are taking seriously, but it's a tough nut to crack.
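
One way to see why this is hard: a video-understanding model might tag the Messi clip with several plausible aspects at once, and a single viewing can't reveal which aspect actually drove the interest. The sketch below simply spreads credit across every detected aspect; the tags, confidence values, and learning rate are hypothetical.

```python
# Hypothetical aspect tags from a video-understanding model, with the
# model's confidence that each aspect is present in the Messi clip.
video_aspects = {
    "soccer": 0.95,
    "messi": 0.90,
    "athletic_highlights": 0.80,
    "cinematography": 0.40,
}

def credit_watch(interests: dict, aspects: dict, watch_fraction: float,
                 learning_rate: float = 0.1) -> dict:
    """Spread one watch event's credit across all detected aspects,
    weighted by how much of the video was actually watched."""
    updated = dict(interests)
    for aspect, confidence in aspects.items():
        updated[aspect] = updated.get(aspect, 0.0) + \
            learning_rate * confidence * watch_fraction
    return updated

interests = credit_watch({"soccer": 0.2}, video_aspects, watch_fraction=1.0)
print(interests)  # every aspect gets a nudge; none is singled out as "the" reason
```

Only repeated evidence across many videos would let such a system separate a Messi fan from a soccer fan from a cinematography buff, which is exactly the disambiguation problem Mehanna describes.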


Being able to personalize a user's feed with exactly the right kind of content at the right time is no small task, and Facebook has made efforts to make sense of "influencers" that shape your News Feed, including your network connections and activity (for example, factors such as people with whom you interact frequently and the stories in which they're interested can help shape the content that you see). A lot of today's deep learning technology needs significant amounts of data to make any real sense of a user's interests; at present, it's prohibitive to teach these "personalization" systems how to comprehend the complexities and subtleties of human motivations or capture refined interests. Furthermore, many users get easily bored, which is why it's important to better understand how human interests change over time, even in the short-term. "I get extremely exhausted after reading something political or unpleasant news, it becomes sort of draining. Now, it's still important, I still want to read it, but I don't want my News Feed to be jammed with content like that," explains Mehanna.
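
One simple way to model that short-term drift is to let every interest signal decay over time unless it's refreshed by new activity. The exponential form and the 30-day half-life below are assumptions for illustration, not anything Facebook has disclosed.

```python
import math

# Sketch: halve a signal's influence every HALF_LIFE_DAYS days unless it is
# refreshed by new activity. The half-life value is an assumed parameter.
HALF_LIFE_DAYS = 30.0

def decayed_weight(weight: float, days_since_signal: float) -> float:
    """Exponentially decay a signal's weight by its age in days."""
    return weight * math.exp(-math.log(2) * days_since_signal / HALF_LIFE_DAYS)

print(round(decayed_weight(1.0, 0), 2))   # 1.0  -- fresh signal, full strength
print(round(decayed_weight(1.0, 30), 2))  # 0.5  -- one half-life later
print(round(decayed_weight(1.0, 90), 2))  # 0.12 -- three half-lives later
```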

I don't want the system to confuse that with me liking soccer, so what is it in the video that the user likes—that is a very hard problem and we need to solve that.

Another veil that has yet to be lifted is reading into the habit of the quick scroll. People tend to interact passively with digital content, scrolling through their News Feed quickly or staying for just a few seconds. How do you know if a person even liked the story or the ad in these cases? "This is not necessarily a machine learning problem, but it's a problem that affects our ability in understanding a user's reaction toward the content," says Mehanna. While these engagement issues are ones that Facebook is looking to conquer, Mehanna has a couple of important ideas on developing technologies that can help avoid the pitfalls of "too-personalized" content, starting with a new genre of burgeoning technology: chatbots.
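
A plausible, purely illustrative way to grade the quick scroll is to fold dwell time in with explicit actions, turning passive behavior into a graded signal rather than a binary like/no-like; the thresholds and scores below are invented for the example.

```python
# Purely illustrative: map passive and active behavior onto a rough 0..1
# engagement score instead of a binary like/no-like. Thresholds are invented.
def engagement_from_dwell(seconds_on_screen: float,
                          clicked: bool, liked: bool) -> float:
    if liked:
        return 1.0                  # explicit positive signal
    if clicked:
        return 0.8                  # strong implicit interest
    if seconds_on_screen >= 10:
        return 0.5                  # lingered: probably read it
    if seconds_on_screen >= 2:
        return 0.2                  # paused briefly: weak interest
    return 0.0                      # scrolled straight past

for dwell, clicked, liked in [(0.5, False, False), (4, False, False),
                              (12, False, False), (3, True, False)]:
    print(f"{dwell}s clicked={clicked} liked={liked} -> "
          f"{engagement_from_dwell(dwell, clicked, liked)}")
```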


***

Chatbots, or what Mehanna refers to as a "personalized butler" (perhaps a subtle reference to Mark Zuckerberg's annual goal-oriented efforts), will become an extended layer of preference, functionality, and personalization in the Facebook experience. Mehanna describes a chatbot that is far different from the automated voices that we blankly chat back and forth with when calling into a customer hotline. "Imagine yourself calling a call center… where the agent doesn't really understand you… versus a butler who really knows you well. I think personalization has the ability of changing these two-way interactions between human beings and automated agents to become extremely personalized," Mehanna explains.

This is a whole new area of tech exploration, and bridging the gap between a system that feels like it "knows too much" and one that more closely resembles a knowledgeable personal assistant will likely help remove the discomfort factor. Advances in natural language and socially calibrated algorithms will allow for a more natural way to interact with these entities. These systems will also be able to explain why they brought up a particular subject or suggested a specific action, like buying flowers for your significant other.
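
A minimal sketch of what "explaining why" could look like in practice: every suggestion carries a human-readable reason that can be surfaced on demand. The trigger, data fields, and wording below are hypothetical, not a real assistant API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical example: every suggestion carries a human-readable reason
# that the chatbot can surface when asked "why did you bring this up?"
@dataclass
class Suggestion:
    action: str
    reason: str  # shown to the user on request, never hidden

def suggest(event: dict) -> Optional[Suggestion]:
    """Return a suggestion (with its reason) for a calendar-style event."""
    if event.get("type") == "anniversary" and event.get("days_away", 99) <= 3:
        return Suggestion(
            action="Order flowers for your partner",
            reason=f"Your anniversary is in {event['days_away']} days, "
                   "and you ordered flowers for it last year.",
        )
    return None

s = suggest({"type": "anniversary", "days_away": 2})
if s:
    print(s.action)
    print("Why? " + s.reason)
```

The design choice that matters here is that the reason is generated alongside the suggestion, not reconstructed after the fact, so the "butler" can always answer the question of why it brought something up.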

Active users of Facebook and of other companies' recommender systems (Amazon is an obvious example) will often tell you that the current level of personalization makes them feel uncomfortable precisely because they don't understand how the system makes recommendations. Even if it's an object of interest, the feeling of being "watched," of a system being able to make too specific a suggestion, is a tad too creepy for some people. "The moment you feel that a personalized system is uncomfortable or creepy, then that's not a personalized system," says Mehanna. People want to know how and why a system is making a recommendation, and Mehanna believes there's no reason that there shouldn't be this kind of transparency.


During our interview, I brought up The Wall Street Journal's now-famous "Blue Feed, Red Feed" coverage, which highlights the extreme differences in news coverage between Facebook users who "like" conservative fan pages and those who "like" liberal ones. Mehanna claims that people are responsible for their feeds, and that Facebook's interference ("veering conservatives to be more liberal, or liberals to be more republican, or people who like soccer to like more volleyball") in a person's own crafted experience would not be right. He emphasizes the value of users knowing that they, not Facebook, control the experience.

When you like or unlike a page, take an interactive quiz, or click on an ad, the system responds. Mehanna believes you need the ability to tell a system to forget about a particular search or posting or click, to have the volition to say "do not suggest this or that anymore." While undoing clicks with direct demands isn't yet possible, you can clear your Facebook search history (similar to Google), a fact that I would surmise most people don't know. To do so, click the main page drop-down arrow, select "more" under the activity log, then select the search option, which will show you an entire history by date and allow you to erase individual searches or the entire archive. Not only do you need to know where to look, but this must be done manually on an ongoing basis in order to keep a clean slate.


Regardless of the current roadblocks, Mehanna emphasizes the need to build user control into user interface systems like Facebook from the very beginning, and believes it's an essential part of a personalized system that adapts to your biases rather than those of the person who wrote the system. "Many algorithms today are not being built this way," says Mehanna. "All of the machine learning gurus that I talk to don't really value transparency, but this is important for the user's sake." While it's unclear whether Facebook is developing algorithms that have such built-in control options, it's a claim worth keeping in mind as user interfaces and other technologies like chatbots are developed at a more rapid rate.

***

While more information on controls is being made public to users, it might be argued that Facebook could do a better job of publicizing these updates. In other words, if you don't avidly check the Facebook newsroom, you're missing out on updates being made to its News Feed algorithms and are using Facebook "in the dark," without knowing how the interface works. Much of Facebook's algorithmic machinery remains in a black box, but users have been given more tools over the past couple of years to help control what they see on Facebook, which aligns with Mehanna's stated emphasis on transparency. That being said, it seems to be in Facebook's best interest (both for making revenue and for avoiding scaring away users) to keep such news in its newsroom blog, where concerned users can go for updates, while the vast majority of users interact with the interface in a more automated fashion.

The Facebook News Feed team recently shed light on how its latest algorithm, Facebook FYI, ranks and displays "personally informative stories," changes that took into account feedback from its Feed Quality Program (which apparently includes tens of thousands of crowdsourced surveys from around the world asking people what they like to see in their News Feeds). Not into an ad? One of Facebook's most recent updates expands its ad preference controls, allowing users to opt out of seeing ads from specific businesses or organizations (Facebook also includes an explanation of why ads are necessary in the first place, acknowledging that ad revenue helps support the organization, in case you didn't already know that collecting and selling personal information is how its business model runs). Other settings allow users to alter who can view their posts and who can tag them in images, among other "control" options.

The control factor is really a universal concept across technologies, believes Mehanna. As with content that seems to read your mind in your News Feed, or chatbots that you can't get to stop suggesting make-up gifts for a soon-to-be ex-spouse, many of the problems arise when people don't know what a machine will do next. Take, for instance, the time that Mehanna test drove his friend's self-driving Tesla. "It's very smart, you're not quite sure why it's making the decisions that it is, and you don't quite trust it," he says.

Mehanna notes that there are a lot of opportunities to test out experiences in which smart technologies, including chatbots, vocally (or otherwise) suggest a next action or explain a current course of action, returning again to his emphasis on transparency and control. "There's a lot of opportunities, chatbots are a growing genre of technologies, and this is an area where human beings can interact with them in a multi-modality form," says Mehanna. On the flip side, there's also the reality that consumers are not often "conveniently" informed of all the privacy and personalization options available to them, a good reminder that well-informed consumers do their homework and find out just how much control they can leverage.

Taking the bird's-eye view, it's clear Facebook's principle of "transparency" only goes so far. Though Facebook discloses what it must in order to maintain trust with users, it certainly has experiments and systems that users are simply not privy to, and never will be. On the one hand, you can't blame Facebook for maintaining the competitive advantage it has earned, and the billions it pays out in company-wide salaries. On the other hand, a skeptic might see Facebook's talk of transparency as a feint: just enough openness to keep people quiet.

The same might be said of "control." While Facebook should be sensitive to users' concerns about how their data is paired with advertising and content, it can't hand control over to users entirely. No one can remove all advertisements from their Facebook experience; that option might just put Facebook out of business.