British cops told to scrap 'discriminatory' algorithms in policing

Predictive plod practices bake bias into systems people don't understand, says Liberty

Human rights group Liberty is urging UK cops to stop using predictive policing programs that put a "technological veneer of legitimacy" on existing biased practices.

In a report published yesterday, Liberty said that, at the very least, forces should be more transparent about their use of algorithms in carrying out police work.

The report centres on two types of predictive policing programs – predictive mapping, where police data is used to identify crime "hotspots", and individual risk assessment, where police have used algorithms to try to predict whether someone is likely to commit a crime.
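
To give a rough sense of what the mapping variety involves, here is a deliberately simplified Python sketch, not any force's actual system: it simply counts historical incidents per map grid cell and flags the busiest cells as "hotspots". The data and cell names are invented for illustration.

```python
# A deliberately simplified sketch of "predictive mapping": count historical
# incidents per map grid cell and flag the busiest cells as "hotspots".
# The records and cell names here are invented for illustration.
from collections import Counter

# Hypothetical historical police records: (grid_cell, offence)
records = [
    ("cell_12", "burglary"),
    ("cell_12", "theft"),
    ("cell_07", "theft"),
    ("cell_12", "burglary"),
    ("cell_03", "assault"),
]

def hotspots(records, top_n=2):
    """Rank grid cells by how many past incidents were recorded there."""
    counts = Counter(cell for cell, _ in records)
    return counts.most_common(top_n)

print(hotspots(records))  # [('cell_12', 3), ('cell_07', 1)]
# Cells that were patrolled (and recorded) most heavily in the past score
# highest, so they attract yet more patrols in future: the feedback loop the
# report is worried about.
```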

Liberty said, however, that forces' predictive policing methods are inherently biased – mapping programs rely on "problematic historical arrest data", while individual programs "encourage discriminatory profiling and result in opaque decision-making".

This is compounded by a "severe lack of transparency" about the use of the techniques, Liberty said, urging the cops to "fully disclose" information about how and when predictive policing is used.

The group submitted Freedom of Information requests to 90 forces in the UK, and found that 14 had deployed, were deploying, or planned to deploy predictive policing.

The rise in the use of algorithms for policing comes as forces face tough budget cuts and closer scrutiny from human rights activists.

Liberty said society should be wary of increasing its reliance on big data, saying it creates a culture that "allows the state to monitor us even more closely and build up intrusive profiles" of people.

Critics have also consistently warned that the use of predictive policing could have unintended and unanticipated consequences, especially when it comes to the data used to build the algorithms.

That data could be outdated, incomplete and unreliable, or contain indirect markers for race, such as postcodes. This risks "entrenching pre-existing discrimination", Liberty warned. Baking such data into software also risks it being treated as neutral when it isn't: because many people don't understand how the systems work, they take on a veneer of objectivity, the group added.

This "lends unwarranted legitimacy to biased policing strategies that disproportionately focus on BAME and lower income communities", the report said.

Efforts to boost human oversight or intervention were "not sufficient" to meet its concerns, said Liberty, arguing that the challenges in ensuring police don't defer to the algorithm are "so significant they cannot be met in the short term by training or guidelines".

Liberty also called on forces to re-evaluate their use of data for policing, and told London's Metropolitan Police to carry out a full review of its Gangs Matrix database, which was recently slammed as "unjustifiably excessive" by the UK's data protection watchdog.

Report author Hannah Couchman said that predictive policing was already failing society for a number of reasons. For instance, she said, it feeds into a bigger surveillance system, bolsters the notion of "pre-criminality" and puts a "glossy sheen" of technology on the processes.

"And it fails us because it focuses on technology + big data as the solution to policing problems which are deeper, systemic issues requiring a much more considered, radical + compassionate response," she said on Twitter.

Her report argued that investment in digital solutions should focus on the development of programs that addressed wider issues of bias in the criminal justice system. ®
