
Who's Guilty When a Brain-Controlled Computer Kills?

Brain-machine interfaces are opening up a host of ethical dilemmas for researchers.

Earlier this year, Elon Musk unveiled his grand plan for Neuralink, his foray into creating a brain-machine interface that would allow users to control computers with their minds. While this announcement immediately spawned fantasies of a future that looked uncomfortably like Ghost in the Shell, other versions of brain-machine interfaces (BMIs) have been in use for a while. Already, researchers are using BMIs for everything from restoring limb movement for paralyzed patients to racing drones.


Existing BMI technology is impressive, but still in its infancy. According to a new paper published today in Science by an international team of researchers, it's not too early to start addressing the host of ethical issues that are raised when people control computers with their minds.

"Although effortless interactions between mind and machine seem intuitively appealing, creating direct links between a digital machine and our brain may dangerously limit or suspend our capacity to control the interaction between 'inner' personal worlds and outer worlds," the authors write. "For many, such a scenario raises fundamental, even existential, fears—including the fear of losing privacy and autonomy and of self-dissolution."

Although the authors recognize that fears of machine-orchestrated self-dissolution might seem exaggerated considering the current state of the art in BMI tech, they also note that "given the exponential growth of the field over the last decades, we should anticipate that technological feasibilities might change rapidly."

Just like the rise of self-driving cars revived the old "trolley problem" of moral philosophy, BMIs come with their own suite of ethical issues. First among them is the problem of responsibility. The authors acknowledge that BMIs might be seen as just another tool, but unlike a hammer, these tools have autonomous components built into them. If you smash a window with a hammer, it's clear that the fault lies with the person who wielded it. But if you merely think about smashing a window with a hammer while wired into a BMI, without any intention of actually doing it, and that thought triggers your autonomous robot to go break the nearest window with its onboard hammer, it's much less clear who is at fault.


To settle accountability before the first injury from human-robot symbiosis actually occurs, the authors imagine a system that requires the user to approve or veto any action the machine they are interfacing with is about to take. This is roughly analogous to a driver who has the option to hit the brakes when a pedestrian steps in front of their car: the choice they make ultimately determines whether they are at fault in an accident. In the authors' proposal, BMI users would approve or veto robot actions through an eye-tracking system. This would be less effective in the case of a faulty robot, but manufacturers and the law already account for the risk of defective products; a new risk-assessment regime would simply have to be created for BMI tech.
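As a rough illustration of how such a veto gate could work, here is a minimal Python sketch: a decoded action is held until an eye-tracking confirmation arrives, and silence within the window counts as a veto. The names (eye_tracker_confirms, robot) and the two-second window are assumptions for illustration, not anything specified in the paper.

```python
import time

# Hypothetical veto gate: a decoded action runs only after the user
# explicitly approves it (e.g., via an eye-tracking dwell signal).
# eye_tracker_confirms and robot are illustrative stand-ins, not a
# real BMI or robotics API.

CONFIRM_WINDOW_S = 2.0  # assumed time the user has to approve an action

def gated_execute(decoded_action, eye_tracker_confirms, robot):
    """Run a decoded action only after explicit user approval."""
    deadline = time.monotonic() + CONFIRM_WINDOW_S
    while time.monotonic() < deadline:
        if eye_tracker_confirms(decoded_action):
            robot.execute(decoded_action)  # approved: the user owns the action
            return True
        time.sleep(0.05)  # poll the confirmation signal
    robot.cancel(decoded_action)  # no approval in time: default to veto
    return False
```

The design choice that matters for accountability is the default: if the user does nothing, nothing happens, which mirrors the driver who always has the option to hit the brakes.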


A second major issue is privacy. BMIs have the potential to expose far more neural data than a user is comfortable with. We already store vast portions of our personal lives on computers that are vulnerable to hacking, and there's not much reason to think that BMIs would be any less exposed.

Researchers have already demonstrated the ability to hack implants like insulin pumps and cardiac defibrillators, both of which could result in the death of their user if manipulated with malicious intent. A hacker could also intercept and manipulate neural signals once they have been digitized and transmitted wirelessly, over Bluetooth or Wi-Fi for instance. Unfortunately, as the authors write, "there is, to our knowledge, no established technological solution to this problem."


According to the authors, BMI companies themselves pose some of the greatest threats to users' privacy. There's little reason to think that brain data won't be bought and sold just like the rest of the personal information we already hand over on the internet, and the authors argue that companies building BMI technology must develop clear ethical guidelines for how their users' brain data will be stored and used.

Read More: How Hackers Could Get Into Your Head With Brain Malware

Still, the researchers write that there are lessons to be learned from current technologies, which use encryption protocols to protect data in transit. The question, then, is what kind of encryption will be needed to protect our brainwaves? The authors of the Science paper call for more research into secure neural engineering, or neurosecurity, to guard against unauthorized manipulation of neural data, or "brainjacking."
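To make the borrowing concrete, here is a minimal sketch of what one such lesson could look like in practice: authenticated encryption (AES-GCM, via Python's cryptography library) applied to a packet of neural samples before it leaves the device over Bluetooth or Wi-Fi. The packet framing and the session_id field are invented for illustration; real BMI telemetry formats are device-specific.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative only: encrypt and authenticate one packet of neural
# samples so it can't be read or silently altered in transit.

key = AESGCM.generate_key(bit_length=256)  # provisioned once per device pairing
aead = AESGCM(key)

def seal_packet(samples: bytes, session_id: bytes) -> bytes:
    """Encrypt and authenticate a packet of neural samples."""
    nonce = os.urandom(12)  # must be unique per packet under the same key
    ciphertext = aead.encrypt(nonce, samples, session_id)  # session_id is authenticated, not hidden
    return nonce + ciphertext

def open_packet(packet: bytes, session_id: bytes) -> bytes:
    """Decrypt a packet; raises InvalidTag if it was tampered with."""
    nonce, ciphertext = packet[:12], packet[12:]
    return aead.decrypt(nonce, ciphertext, session_id)
```

Authenticated encryption rejects tampered packets outright, though it does nothing against a compromised endpoint on either side of the link, which is part of why the authors say no established solution yet exists.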

For now, there are more ethical questions about BMIs than answers. This technology shows promise in helping treat everything from paralysis to concentration disorders, and a number of other possible applications are likely to emerge as BMIs become more sophisticated.

But if we want to avoid a future where the latest global malware wave results in millions of people getting brainjacked until they can pay a bitcoin ransom, we need to start laying the foundations for human-machine symbiosis now.