
We Need Bug Bounties for Bad Algorithms

In the age of algorithmic decision-making, we need to incentivize algorithmic auditors to become our new immune system.

Amit Elazari Bar On is a doctoral law candidate (J.S.D.) at UC Berkeley School of Law, a Center for Long-Term Cybersecurity (CLTC) grantee at the Berkeley School of Information, and a member of AFOG, the Algorithmic Fairness and Opacity Working Group at Berkeley. In 2017, Amit was a CTSP Fellow.

We are told that opaque algorithms and black boxes are going to control our world, shaping every aspect of our lives. We are warned that without accountability and transparency, and without better laws, humanity is doomed to a future of machine-generated bias and deception. From calls to open the black box to the limitations of explanations of inscrutable machine-learning models, the regulation of algorithms is one of the most pressing policy concerns in today’s digital society.


Rising to the challenge, regulators, scholars, and think tanks have begun devising legal, economic, and technical mechanisms to foster fairness, accountability, and transparency in algorithmic decision-making. From antitrust and fiduciary law to tort and anti-discrimination law, academic journals are filling with innovative regulatory mechanisms to hold decision-makers and “evil” corporations accountable.

This is great, but the discussion is still missing a critical piece: while policymakers devise better laws to hold the people behind bad algorithms accountable, those who are actually best positioned to uncover the operations of bad algorithms risk legal liability. Currently, US anti-hacking laws prevent algorithm auditors from discovering what’s inside the black box and reporting their findings. What we need is not just better laws. We need a market that will facilitate a scalable, crowd-based system of auditing, one that uncovers “bias” and “deception” bugs and attracts a new class of white-hat hackers: algorithmic auditors. They are the immune system for the age of algorithmic decision-making.

Algorithmic auditing is a growing discipline, practiced by researchers specializing in computer science and human-computer interaction. These auditors employ a variety of methods to tinker with algorithms and uncover how they work, and their research has already sparked public discussions and regulatory investigations into some of the most dominant and powerful algorithms of the Information Age. From Uber and Booking.com to Google and Facebook, to name a few, these friendly auditors have already uncovered bias and deception in the algorithms that control our lives.
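
To give a concrete flavor of what such an audit can measure, here is a minimal Python sketch (illustrative only, not drawn from the author’s work; all data is made up) that computes a disparate impact ratio, the statistic behind the “four-fifths rule” often used as a first screen for potential discrimination:

```python
# Minimal sketch of one common audit measurement: the disparate impact
# ratio ("four-fifths rule"). All data here is illustrative.

def positive_rate(outcomes):
    """Fraction of audited decisions that were favorable (True)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of favorable-outcome rates between two audited groups.

    Values below roughly 0.8 are often treated as a red flag under the
    four-fifths rule used in US employment-discrimination practice.
    """
    return positive_rate(group_a) / positive_rate(group_b)

if __name__ == "__main__":
    # Hypothetical audit results: True = favorable algorithmic decision.
    group_a = [True, False, False, True, False, False, True, False]  # 3/8
    group_b = [True, True, False, True, True, False, True, True]     # 6/8
    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 -> potential "bias bug"
```

A ratio this far below 0.8 is exactly the kind of finding a friendly auditor would want to report, and exactly the kind of reporting our laws currently chill.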


Yet there is a catch. Effective algorithmic auditing often requires techniques such as “sock-puppeting” or scraping that raise legal difficulties, as a recent decision from the DC district court exposed. In fact, algorithmic auditors and security researchers have much in common when it comes to the murky landscape of US anti-hacking law. In both disciplines, the law is ill-equipped, it undermines vital research that our society desperately needs, and legal threats are on the rise. In both disciplines, the law still allows corporations to prevent researchers from doing their job. Yet while the barriers may be similar, the markets for security “bugs” and bias “bugs” are very different.
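
To illustrate what “sock-puppeting” looks like in its simplest form, here is a hedged Python sketch using the requests library: it fetches the same page while presenting two synthetic personas and compares the responses. The URL and persona signals are hypothetical, and running anything like this against a real service raises exactly the legal questions discussed above:

```python
# Illustrative "sock-puppet" audit: query the same page as two synthetic
# personas and diff the results. The endpoint and persona signals are
# hypothetical; a real audit must also contend with terms of service
# and anti-hacking law.
import requests

URL = "https://example.com/search?q=hotel"  # hypothetical target

PERSONAS = {
    "persona_mobile_us": {
        "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X)",
        "Accept-Language": "en-US",
    },
    "persona_desktop_de": {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Accept-Language": "de-DE",
    },
}

def fetch_as(persona_headers):
    """Fetch the page while presenting one persona's browser signals."""
    resp = requests.get(URL, headers=persona_headers, timeout=10)
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    pages = {name: fetch_as(headers) for name, headers in PERSONAS.items()}
    page_a, page_b = pages.values()
    if page_a != page_b:
        print("Personas received different content: worth a closer look.")
    else:
        print("No difference observed for this query.")
```

Real audits are far more careful than this sketch, controlling for caching, A/B tests, and noise across many repeated queries, but the basic move is the same: vary who the algorithm thinks it is serving, and watch what changes.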

In security, the industry was able to create a vibrant market for white-hat hacking using contracts, in which companies pay millions to external researchers instead of (generally) suing them. These bug bounty programs incentivize researchers to conduct security research and report security bugs in exchange for monetary and reputational rewards, fueling research across a multitude of platforms, paying millions of dollars to tens of thousands of friendly hackers, and keeping the data of billions of users (more) secure.

From Silicon Valley giants to the Department of Homeland Security, bug bounties are exploding, and a new trend of privacy bug bounties is emerging. Indeed, friendly hackers are an important part of the internet’s immune system, and even regulators and government agencies are beginning to recognize that.


And yes, this legal market for external, crowd-based security and privacy research operates within the current boundaries of the law, with no legal reform on the horizon. But in artificial intelligence, while calls for algorithmic fairness and accountability are rising, auditors face liability under outdated laws. They risk cease-and-desist letters for their meaningful contributions toward an “open-box” society.

From this paradoxical reality emerges the idea of a market and legal framework that incentivizes scalable, crowd-based auditing of algorithms, involving users and auditors alike. In other words, we need to apply the immune-system approach that already exists in security to algorithms as well.

Introducing the “Algorithmic Bug Bounty”: a framework that applies Linus’s Law (“given enough eyeballs, all bugs are shallow”) to algorithmic bias and deception, harnessing legal, reputational, and monetary incentives to create a market for “bias bugs,” free from capture and corporate corruption. This bug bounty program is geared toward users as well. Users are not just passive subjects of algorithmic decision-making; they can also act as agents of change by reporting potential fairness or deception “bugs.” If we want to hold decision-makers accountable, we need better enforcement, and enforcement means auditing. Only a market solution can scale crowd-based auditing of algorithms in the private sector.

Yes, admittedly, it will take years to establish for algorithmic auditing the kind of monetary and market incentives we now finally have for security vulnerabilities.

It will take years until our laws and our society create a reality where a “bias bug” is as costly as a massive data breach. Laws will need to change. Intermediaries similar to hackers’ platforms, ones that enable small and mid-size businesses to adopt algorithmic bug bounties, will need to emerge. Best practices and standards will need to be written, and we might start with programs that offer no monetary incentives (akin to coordinated vulnerability disclosure programs) or with “private” algorithmic bug bounties, where only a handful of external researchers are invited to test the system. It takes time to build a market.

Still, if there is one thing to be learned from decades of technological development, it’s that it “takes a crowd,” a community, to effect change. The sooner we embrace that, the better we will be able to truly “open the black box.”

The author would like to thank Keren Elazari, Christo Wilson, and Motahhare Eslami. These ideas will be explored in depth in a paper the author is working on: Amit Elazari Bar On, Christo Wilson, and Motahhare Eslami, ‘Beyond Transparency – Fostering Algorithmic Auditing and Research’ (working draft to be presented at PLSC 2018).