
UK cops run machine learning trials on live police operations. Unregulated. What could go wrong? – report

RUSI: How about some codes of practice, transparency, for starters?

The use of machine learning algorithms by UK police forces is unregulated, with little research or evidence that new systems work, a report has said.

The police, not wanting to get left behind in the march of progress or miss out on an opportunity to save some pennies, are keen to test out new technologies.

But the willingness to get started and the disparate nature of policing across the UK often mean there is a lack of overall guidance or governance on their use.

In a report (PDF), published today, defence think tank RUSI called for greater regulation, and for codes of practice focused on fairness and proportionality to govern trials carried out in live operational environments.


Although algorithms are used by cops in a variety of ways – perhaps the best known being automated facial recognition and crime hotspot mapping – the report focused on the uses that most affect individuals: for instance, tools that identify which people are more likely to reoffend, such as Durham Police's Harm Assessment Risk Tool (HART).

The report pointed out that, as with much nascent technology, it is hard to predict the overall impact of the use of ML-driven tools by police, and the systems may have unintended, and unanticipated, consequences.

This is exacerbated by a lack of research, meaning it's hard to definitively say how systems influence police officers' decision-making in practice or how they impact on people’s rights. The RUSI report also pointed to a limited evidence base on the efficacy or efficiency of different systems.

One of the main concerns is algorithmic bias – as the report said, even if a model doesn't include a variable for race, other measures, such as postcodes, can act as proxies for it. Durham Police recently mooted removing a postcode measure from HART.
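To see how a proxy works, consider the minimal sketch below. It uses entirely synthetic data and hypothetical postcodes ("AB1" and "CD2"), not the report's figures or HART's actual variables: a naive risk score built only from postcode-level arrest rates still ends up ranking one demographic group as higher risk, because where people live correlates with who they are and with how heavily their area has historically been patrolled.

# Illustrative sketch only: synthetic data and made-up postcodes, not any
# real police model. Shows how a postcode feature can stand in for a
# protected characteristic the model never sees.
import random

random.seed(0)

population = []
for _ in range(10_000):
    postcode = random.choice(["AB1", "CD2"])
    # Residential segregation: group membership correlates with postcode.
    group = "X" if random.random() < (0.8 if postcode == "AB1" else 0.2) else "Y"
    # Historic arrest data over-records one postcode (heavier patrolling),
    # even though we assume true offending is the same everywhere.
    arrested = random.random() < (0.30 if postcode == "AB1" else 0.10)
    population.append((postcode, group, arrested))

# A naive "risk score" that never sees group membership: each person is
# scored with the arrest rate observed in their postcode.
def postcode_rate(pc):
    outcomes = [was_arrested for pc_i, _, was_arrested in population if pc_i == pc]
    return sum(outcomes) / len(outcomes)

scores = {pc: postcode_rate(pc) for pc in ("AB1", "CD2")}

# Average risk score by group, even though group was never an input.
def mean_score(target_group):
    member_scores = [scores[pc] for pc, grp, _ in population if grp == target_group]
    return sum(member_scores) / len(member_scores)

print("risk score by postcode:", scores)
print("average score, group X:", round(mean_score("X"), 3))  # noticeably higher
print("average score, group Y:", round(mean_score("Y"), 3))

Nothing about group membership is fed into the score, yet the gap between groups appears anyway – which is the report's point about proxy variables.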

Others include the fact that models rely on police data – which can be incomplete, unreliable and continually updated – to make predictions, and that they can fail to distinguish between the likelihood of someone offending and the likelihood of them simply being arrested, which is influenced by many other factors.

It's going on, in the field, but no one knows about it

But any such concerns haven’t stopped police from trying things out – and the report's authors expressed concern that these trials are going ahead, in the field, without a proper regulatory or governance framework.

The RUSI report also identified a lack of openness when it comes to such trials. The furore over police use of automated facial recognition technology – high rates of inaccuracy were only revealed through a Freedom of Information request – exemplifies this.

It called for the Home Office to establish codes of practice to govern police trials “as a matter of urgency” (although the department's lacklustre approach to biometrics – it took half a decade to draw up a 27-page strategy – doesn't bode well here).

The report also recommended a formal system of scrutiny and oversight, and a focus on ensuring accountability and intelligibility.

In this context, the use of black box algorithms, where neither the police nor the person can fully understand – or challenge – how or why a decision has been made, could damage the transparency of the overall justice process.

Different machine learning methods provide different levels of transparency, the report noted, and as such it suggested the regulatory framework should set minimum standards for transparency and intelligibility.
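As a rough illustration of what differing levels of transparency look like in practice (a hypothetical sketch, not any system named in the report): a simple points-based scorer can print every factor that contributed to its output, so an officer or the person scored can see and contest the reasoning, whereas a large learned model typically hands back only a number.

# Hypothetical, simplified "transparent" scorer; not a real police tool.
# Every contribution to the score can be listed, inspected and challenged.
WEIGHTS = {"prior_arrests": 2, "missed_court_dates": 3, "age_under_25": 1}

def transparent_score(person):
    contributions = {factor: WEIGHTS[factor] * value
                     for factor, value in person.items() if factor in WEIGHTS}
    return sum(contributions.values()), contributions

score, reasons = transparent_score(
    {"prior_arrests": 2, "missed_court_dates": 0, "age_under_25": 1})
print("score:", score)
for factor, points in reasons.items():
    print(f"  {factor}: +{points}")  # each factor is visible, so it can be contested

# A black-box model, by contrast, exposes only the final number: its internals
# are thousands of learned weights that can't be read off as reasons, which is
# the intelligibility gap the report wants minimum standards to address.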

It also emphasised the importance of keeping humans involved: forces need to demonstrate that a person has provided meaningful review of each decision, to ensure algorithms are only used to support, not make, decisions.

But the report noted officers might be unwilling to contradict a model that claims a high level of accuracy. (One needs only look at the continued presence of lie detectors in the US – despite a weight of evidence against them – to understand people’s willingness to accept that a piece of kit is “right”.)

As such, the report called for a process to resolve disagreements, and for public sector procurement agreements for ML algorithms to place requirements on providers – including that the provider be able to retroactively deconstruct the algorithm and to provide an expert witness.

The report also noted the need to properly train police officers – not just so that they can use the kit, but so they fully understand its inherent limitations and can interpret the results in a fair and responsible way.

It recommended that the College of Policing develop a course for officers, along with guidance on the use of ML tools and on how to explain them to the public.

Commenting on the report, Michael Veale, a UCL academic whose focus is on responsible public sector use of ML, emphasised the need to build up evidence about whether such interventions work, and added that the government should back these efforts.

“It may surprise some readers to know that there is still a What Works Centre for Policing that could, if properly funded, play this role,” he said.

“Algorithmic interventions need testing against other investments and courses of action — not just other algorithms, and not just for bias or discrimination — to establish where priorities should lie.”

Veale also warned that there is wider organisational use of predictive technologies in policing, such as for determining staffing levels, timetables, patrols and areas for focus.

“We’ve learned from the experience with New Public Management [a model developed in the '80s to run government bodies] and the NHS over the last few decades of the danger that the gaming associated with target culture can bring — look at Mid Staffs.

"We need to be very careful that if these new technologies are put into day-to-day practices, they don’t create new gaming and target cultures,” he said. ®

