If My Smartphone Was Really That Smart It Would Lie To Me

Sometimes, when it comes to gadgets, honesty isn't the best policy.

"What you'll see in a lot of the literature is discussion of how HCI [human-computer interaction] should have magic, elements of theater," said Eytan Adar, associate professor of information and computer science at the University of Michigan.

But what that magic and theater really is, and what Adar has been at the forefront of identifying and exploring, is a lie.

Your progress bar telling you how much battery you have left or how much longer your download will take—that's a lie. The button in the elevator that's supposed to close the doors—that's a lie. Design tricks like having your deleted files disappear in a puff of smoke? Those are all lies.

But that's not necessarily a bad thing: there are times when you want your computer or smartphone to lie to you, and the question of when that's acceptable is going to be major fodder for discussions of technological ethics in the near future. Adar thinks he came up with the term for these lies: benevolent deception.

Adar co-authored what might be the earliest overall view of the concept of benevolent deception, in partnership with two members of the Microsoft Research team (Adar sometimes consults for tech companies when he's not teaching).

"Though it has been asserted that 'good design is honest,' deception exists throughout human-computer interaction research and practice," the paper begins.

It goes on to lay out what deception looks like in the context of human-computer interaction, along with some examples of when it's malevolent (hacking, say) or ethically questionable, as opposed to for the user's benefit.

This is a discussion that's going to become huge in the very, very near future, as near as, like, tomorrow, when artificial intelligence couples with data about us that's readily available and can influence our behavior incredibly easily. It's important to lock down what's good and what's bad in the world of tech deception; to eliminate it entirely is already well beyond impossible, and also, like most elements of tech that could destroy the world, deception is not totally undesirable.

Most instances of benevolent deception are boring, simple ways to make the very broad conclusions of incredibly complicated and interrelated computational sequences understandable to us dummies who can't even code. Instead of going to the command line to delete a file, we drag an icon of a folder to an icon of a trash can. Even the command line represents a layer of simplification that could arguably be considered deceptive.

"Deception in the modern sense has evolved around that to hide some of the complexity and make it much more tolerable for the end user to sort of engage with the systems they're experiencing," said Adar. "We don't understand how a lot of the underlying technologies work these days, which I think is quite different than in the past."

I suggested more computer science education might help address that issue, and Adar was enthusiastic—"I would love for that to be the case. I'm a professor, I would love for people to be learning these things," he said—but it just isn't feasible for any normal human to learn enough about things like search algorithms, ranking algorithms, and artificial intelligence to make that a sensible solution.

Much of Adar's research focuses on instances of benevolent deception that are ethically clear. One of them is an arm-wrestling robot designed to help with physical therapy for people rehabilitating their arms. "People have mental blocks about how much they think they can move their arm," Adar said. So if the user was told that he moved his arm 10 degrees yesterday, today the arm-wrestling robot might lie and tell him he's hit 9 degrees when in fact he's already at 10. The user will think, well, I made it to 10 yesterday, so clearly I can try a bit harder, and presto, the user makes it to 11. But that's a somewhat unusual example in that it's so clear-cut. Who could take issue with helping someone regain full use of their arm by any means possible?
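To make the mechanism concrete, here's a minimal sketch of how that kind of underreporting could work. It's purely illustrative, not Adar's actual system; the function name, the one-degree shave, and the threshold are all assumptions.

```python
def reported_angle(actual_deg, yesterdays_best_deg, shave_deg=1.0):
    """Benevolent deception, roughly as described: once the patient reaches
    yesterday's best, report slightly less than the true angle so the
    'I did 10 yesterday, surely I can do one more' instinct kicks in."""
    if actual_deg >= yesterdays_best_deg:
        return actual_deg - shave_deg
    # Below yesterday's mark there's nothing to nudge past, so be honest.
    return actual_deg


# Yesterday's best was 10 degrees; today the patient is already at 10,
# but the display reads 9, so they push on toward a real 11.
print(reported_angle(10.0, 10.0))  # -> 9.0
```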

In a post for Medium, writer Casey Johnston lays out a very modern claim (Adar's paper was written in 2013; it is already well out of date in places) for the benefits of benevolent deception. She, like many of us, is always late to things. Google Maps's public transit directions within New York City are often flawed due to the much larger and more frustrating flaws inherent in the New York City subway system. Google provides a time estimate for getting from one place to another, but it gives a best-case scenario, which hardly ever happens due to a combination of user error and the pranks played by such garbage train lines as the G and the C. What Johnston advocates for is a version of Google Maps that lies to her, tells her to leave her house earlier than it actually thinks she needs to, to allow for those complications.

That's a thorny proposition, because even if it's ethical, you'd probably have to tell Google that you want this sort of feature, which makes it hard to actually deceive you. "Once you've committed in that fashion, then you know that you're being lied to," said Adar. "Now whether you're willing to suspend disbelief, that's a different thing. It's possible in some scenarios that you'll say, I know it's lying to me, but I'm just going to pretend it's the truth."
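For a sense of what that opt-in lie might look like in practice, here's a hedged sketch. Nothing here is a real Google Maps feature; the function, the "lie to me" flag, and the ten-minute padding are all placeholders for illustration.

```python
from datetime import datetime, timedelta

def suggested_departure(arrive_by, best_case_trip, pad_minutes=10, lie_to_me=False):
    """If the user has opted in, quietly pad the best-case transit estimate
    so the suggested departure time already absorbs the usual delays."""
    trip = best_case_trip + timedelta(minutes=pad_minutes) if lie_to_me else best_case_trip
    return arrive_by - trip


arrive_by = datetime(2015, 6, 1, 9, 0)   # want to be somewhere at 9:00 a.m.
best_case = timedelta(minutes=35)        # the app's optimistic estimate

print(suggested_departure(arrive_by, best_case))                  # honest: leave at 8:25
print(suggested_departure(arrive_by, best_case, lie_to_me=True))  # padded: leave at 8:15
```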

That gets scarier once you look at how exactly Google could deceive us, if it wanted to. Google knows your appointments (thanks, Google Calendar), knows your location (thanks, Google Maps), and can remember all of it and make connections between the data it gets. Google could very easily figure out that you are a late person, and lie to you to try to get you to stop being late. Is that okay? "If it was doing it without your permission, like if the designer or the company had made some decision like, 'this is the kind of person you should be,' I guess that would be ethically questionable," said Adar. "If you've decided that's the kind of person you want to be, you want to be on time, and the interface is helping you do that through deception, I think that's probably more ethically okay."

What if the government institutes benevolent deception in cars, to tell you you're moving faster than you are in order to slow you down? Or a bank tells you you have less money than you do, in order to spur you to save? Surely saving your money and driving more safely are desirable goals, but are we okay with having our institutions and technology lie to us to achieve them?

Even in the case of the arm-wrestling robot, a benevolent deception if ever there was one, what happens if the user figures out that the robot has been lying to prod him into moving more? What happens when we can't trust our own tech? That's partly why Adar says that transparency and honesty in tech are generally the best option. But you can bet that Apple and Google (and Facebook, and Amazon, and Microsoft) are batting around the concept in high-level boardrooms right now, thinking about ways to trick users that go far beyond little user-interface tricks like the recycling bin. In the past few years we the tech-using public have shown a remarkable talent for ignoring the ways in which tech might be screwing us—privacy, safety, and information freedom come to mind. But what if we don't know when tech is screwing us? And even weirder: what if we like when it does?

Jacked In is a series about brains and technology. Follow along here.