Many neural networks, including those used by the financial industry, are so complex that even their own creators can't say exactly how they work.
Image: Guy Torsher
Forecast is a series exploring the future of AI and automation in a variety of different sectors—from the arts to city building to finance—to find out what the latest developments might mean for humanity's road ahead. We'll hear from Nikolas Badminton, David Usher, Jennifer Keesmaat, Heather Knight, Madeline Ashby and Director X, among others. Created by Motherboard in partnership with Audi.
In the world of finance, every second counts. With billions of dollars traded every day on stock markets around the world, financial institutions look for every competitive edge they can get.
Enter artificial intelligence. Some financial firms use AI to model scenarios for their capital plans, while others use it to scan for stock patterns or even as a simple chatbot to assist clients with day-to-day banking.
Yet many of these neural networks and machine learning systems are so complex that even their own creators are unsure why a program makes a certain recommendation. AI is a so-called "black box": data is fed into the algorithm, it's analyzed, and an answer pops out, often leaving the programmers who built the system unsure of exactly how it got there.
While discrimination based on factors such as race, sex, or marital status is illegal when applying for a mortgage, for example, the lack of transparency within AI models makes such discrimination more difficult to track. And in the United States, you are required by law to be informed why your credit card application is rejected, something that becomes nearly impossible if you don't know how the AI is making its decisions. Other studies have found AI can be biased when it comes to race or sex, just like the humans who designed it.
One University of Waterloo (UW) PhD candidate has developed an AI program he says shines a light into that black box—the "white box" method—revealing the inner workings of the AI and allowing us to better understand what the computer is learning, how it is analyzing data, and why decisions are made.
The software, known as CLEAR (CLass-Enhanced Attentive Response) Trade, analyzes decisions made by these AI algorithms and reverse engineers them to provide important insight into why a program chose a specific option.
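To give a rough sense of what "reverse engineering" a decision can mean (this is an illustrative sketch only, not CLEAR Trade's actual method, and all names in it are hypothetical), one common approach is to perturb each input to a model and measure how much the prediction moves. The inputs whose removal changes the output the most are the ones "driving" the decision:

```python
# Illustrative sketch of perturbation-based attribution, the general family
# of techniques for explaining a model's output. Not the CLEAR Trade code;
# the "model" here is a made-up weighted sum of toy trading signals.

def predict_up_score(features):
    # Hypothetical stand-in model: higher score = stock predicted to go up.
    weights = {"momentum": 0.6, "volume_change": 0.3, "volatility": -0.4}
    return sum(weights[name] * value for name, value in features.items())

def attribute(features):
    # Zero out each feature in turn and record how far the score drops.
    base = predict_up_score(features)
    impact = {}
    for name in features:
        perturbed = dict(features, **{name: 0.0})
        impact[name] = base - predict_up_score(perturbed)
    return impact

sample = {"momentum": 1.2, "volume_change": 0.5, "volatility": 0.8}
print(attribute(sample))
```

Printing the per-feature impacts is the kind of explanation an analyst could hand to a client: not just "the model says buy," but which signals pushed it there.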
"Our motivation was to provide more insight to the analyst. Right now [AI] says if the stock will go up or down, but gives no explanation," said Devinder Kumar, chatting with me over the phone from Waterloo. He is just completing his first year of his PhD at UW, about an hour's drive west of Toronto.
Exactly how many transactions are performed using AI is a proprietary secret closely guarded by these institutions, according to futurist and tech researcher Nikolas Badminton. "Globally I think you have millions of trades actually happening that have been augmented by AI in one way or another." If that's true, it further highlights the importance of knowing what's going on under the hood.
The rollout of the CLEAR Trade program and others like it is important, given new rules set to take effect in the European Union. The European General Data Protection Regulation (GDPR) will be implemented in May 2018, part of which relates directly to AI and the finance sector.
Article 15 gives individuals the right to meaningful information about the logic involved in any automated decision that relates to them, as well as its consequences. Article 22 gives individuals the right not to be subject to decisions based solely on that automated processing.
"[When] you walk into a bank they cannot say they've denied you service or denied you a mortgage because the model says so. They need to provide an explanation to the client," said Kumar. "In Europe it's certainly becoming a law, and in a couple of years it'll become a best practice, at least, in North America."
The GDPR will affect businesses and financial institutions outside of Europe as well, as the law applies to any company that offers goods or services to EU residents, even those based in Canada. The quicker researchers can develop and execute these programs, the better. Kumar himself hopes to begin field testing within six months to a year.
The "white box" approach to AI is "really important from a financial institution perspective, because you can't just have this black box of not knowing how the decision is made. That adds an element of risk," said Badminton.
So just how many financial institutions are working on developing a white box approach to explainable AI?
"They all are," Badminton said, adding they've always been at the leading edge of new technology to remain one step ahead of hackers or other cyber criminals for decades.
"It's not good enough to walk into a boardroom and say, 'These are the decisions we've made, this is the portfolio, here are the hedge funds, here are the futures we're investing in. We're just trusting our computer system.'"