
An AI Will Decide Which Criminals in the UK Get Bail

But it's not yet clear if the tool is more accurate than real humans.
Helsinki street art. Image: Alessio MumboJumbo/Flickr

Get arrested in Durham, England, and artificial intelligence could help decide whether you're held in custody or sent home—but it's not yet clear if the algorithm is more accurate than police officers when it comes to assessing whether someone is likely to reoffend.

Durham Constabulary has worked with academics from the University of Cambridge to develop the Harm Assessment Risk Tool (HART), an algorithm that analyses crime data and predicts whether an arrested suspect is likely to pose a risk if released from custody.


The tool is similar to those recently rolled out in the United States.

"The basic logic is to use the prior histories of thousands of people arrested and processed in Durham to forecast the level of risk of high harm they will cause by criminal acts within two years after they are arrested," Professor Lawrence Sherman, director of the Cambridge Centre for Evidence Based Policing, told Motherboard in an email.

Custody sergeants will be shown a rating of low, medium, or high risk, and will use it to help decide whether a suspect should be released on bail.

HART was trained on five years of data, including suspects' offending history, gender, and postcode. It was let loose on actual cases in 2013, and researchers found HART's predictions that a suspect was a low risk were accurate 98 percent of the time, while forecasts that they were high risk were accurate 88 percent of the time. However, there is no baseline data on the accuracy of human officers' decisions to compare against.
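To make the setup concrete, here is a minimal sketch of how a tool like this could be trained and scored. It is not HART's actual implementation: the random-forest model, the file name, and every feature and column name are illustrative assumptions.

```python
# Minimal sketch of training a three-class custody risk model.
# Everything here (model choice, file, feature names) is assumed,
# not taken from HART's real implementation.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical records: one row per historical arrest, labelled
# "low"/"medium"/"high" by the harm caused in the following two years.
df = pd.read_csv("custody_history.csv")
features = ["prior_offences", "offence_type", "gender", "age", "postcode_area"]
X = pd.get_dummies(df[features])   # one-hot encode categorical columns
y = df["two_year_harm_label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Per-class precision is the kind of figure quoted above: of the
# suspects forecast "low" (or "high"), how many really were.
print(classification_report(y_test, model.predict(X_test)))
```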

That human baseline is only being studied now, noted Dr Geoffrey Barnes, a University of Cambridge researcher and director of criminology at the Western Australia Police, who also helped develop HART. In an email to Motherboard, he said full results will take two more years to collect.

Alongside that, Durham is running a "wilful blindness" exercise, in which officers aren't shown HART's prediction but instead make their own forecast of whether the suspect will reoffend within two years and whether the offence will be serious, so researchers can see whether humans and the machine make the same call. So far, HART and officers agree only 56 percent of the time. Durham will have to wait two years to see which is correct more often.
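The agreement figure itself is simple to compute once both forecasts are recorded side by side. A toy illustration, with invented forecasts standing in for the real data:

```python
# Toy percentage-agreement calculation; both lists of forecasts are
# invented, standing in for officers' blind calls and HART's output.
officer = ["low", "high", "medium", "low", "high", "low", "medium", "low"]
hart    = ["low", "medium", "medium", "low", "high", "medium", "high", "high"]

matches = sum(o == h for o, h in zip(officer, hart))
print(f"Agreement: {matches / len(officer):.0%}")  # -> "Agreement: 50%" here
```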


Still, HART will be rolled out to Durham Constabulary within the next three months, according to Sheena Urwin, head of criminal justice at Durham Constabulary.

Such AI-based risk assessments have been used in justice and policing before. Last year, ProPublica analysed the risk scores of 7,000 people in Florida who were assessed using a similar system, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), and found that only 20 percent of those predicted to go on to commit violent crimes actually did.

Alongside such inaccuracies, ProPublica reported racial disparities: the algorithm was more likely to falsely flag black defendants as future troublemakers than white defendants.

HART doesn't directly include race in its calculations, but Urwin said the constabulary was aware of the problem of race by proxy, where race isn't recorded directly but can be inferred from other characteristics, such as addresses. HART does use postcode data.

Urwin told Motherboard over the phone that postcodes are a "good predictor for us", but, with bias in mind, noted that the system uses only the first four characters of a postcode, so suspects are judged by a broader area rather than a specific street. But as the ProPublica investigation showed, such 'coarse data' may actually be worse.
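Truncating a postcode that way is a one-line operation. A small sketch follows; the postcode is an example, and since the article doesn't say whether the space counts as one of the four characters, this version drops it:

```python
# Coarsen a full UK postcode to its first four characters, so a model
# sees a district-sized area rather than a street. The example postcode
# is invented; HART's exact truncation rule isn't public.
def coarsen_postcode(postcode: str, chars: int = 4) -> str:
    return postcode.replace(" ", "").upper()[:chars]

print(coarsen_postcode("DH1 3LE"))  # -> "DH13"
```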

The aim with HART is to use data to keep low-risk offenders out of custody, instead pushing them towards services to encourage them away from a life of crime, Urwin explained over the phone. "That's really why we got into this algorithmic forecasting tool in the first place. And we wanted to do that in an evidence-based framework, with academic rigour, and in an open transparent way," she said.

That's why Durham Constabulary is working with independent academics, and why the research results will be published. This openness is key to ensuring these supposedly altruistic AI-driven programmes are watched closely for embedded bias.
