


Google’s Handling of Public Health Data Should Serve as a Cautionary Tale, Report Says

Academic paper assesses DeepMind’s ‘inexcusable’ approach to British patients’ data privacy.
Image: cykocurt/Flickr

Time and again it has been shown that massive, uncontrolled retention of public data can lead to serious breaches of privacy and trust. And in no sector can this cause more problems than in healthcare, where reams of personally identifiable patient information are subject to hacks and leaks.

With this in mind, Google-owned AI subsidiary DeepMind's 2015 entrance into AI-powered healthcare has been characterized as less a canary in a coal mine than a bull in a china shop by a new academic report that criticizes DeepMind's approach to patient privacy in its work with the UK's National Health Service (NHS).


"DeepMind had not built a piece of healthcare software in its entire existence, and so, to have it just be walked over to by physicians and have an entire hospital's worth of identifiable patient data given to them on trust seems a bit much to me," the paper's co-author, Hal Hodson, told Motherboard yesterday.

It was July 2015 when doctors from the British public hospitals within the Royal Free London NHS Foundation Trust asked Google's artificial intelligence subsidiary DeepMind to develop software that uses NHS patient data. Just four months later, on November 18, personally identifiable information on 1.6 million NHS patients had been transferred to third-party servers processing data for DeepMind's initiative: an app for doctors called Streams that should have required only data on specific patients at risk of acute kidney injury.

Then, an April 2016 investigation carried out by Hodson, who was then working at New Scientist, revealed that DeepMind's access to patient data far exceeded what the company's public relations department had let on, with DeepMind holding patient data including information on HIV, drug overdoses, and abortions dating back five years.

It was only after New Scientist's article "that any public conversation occurred about the nature, extent and limits of the DeepMind-Royal Free data transfer," states Hodson in the report.


This flagrant disregard for data transparency conventions is a warning to be learned from, the paper concludes.

"By doing it so badly at the start, it will be harder for them going forward, but I also think potentially it's good," Hodson told Motherboard. "There's now a lot more scrutiny, people are paying attention to this stuff."

If lessons are not learned from DeepMind's alleged mistakes, the consequences could be dire, including criminal breaches of privacy, computational errors that could lead to misdiagnosis, and ultimately, the risk of mistreatment. A DeepMind spokesperson told Motherboard today that the paper "completely misrepresents" the reality of how the NHS is using technology to process data.

But to establish trust, DeepMind needs to admit it made mistakes, according to Hodson. "The paper doesn't talk about now. The paper talks about the beginning, which is important because of the trust reasons," Hodson told Motherboard.

He admits there has been a marked change in the quality of the privacy and policy documents that set out the rules for how patient data is handled between 2015 and now.

"[DeepMind] has improved, and I think it should be trying to do this stuff. But natural monopolies only work as long as there's oversight and regulation, and currently, the existing oversight and regulation appears to not be very strong."
