Predicting Murder & Ignoring Misconduct: How the UK’s New AI Policing Tool Misses the Mark

Clearly satisfied with the steps taken to root out the systemic cultural issues in their ranks, the Metropolitan Police have now turned their hand to a new project: using AI to predict who might murder.

The so-called “murder prediction tool”, officially titled “Sharing Data to Improve Risk Assessment”, was first reported by The Guardian in early April 2025. It’s a collaboration between the Ministry of Justice, the Home Office, and Greater Manchester Police.

Not to worry, though; it’s only in the “research phase”.

Nonetheless, documents obtained by Statewatch through Freedom of Information requests reveal that data from up to 500,000 individuals, including victims, witnesses, and people with safeguarding concerns, has already been shared to train the new model.

This is historical data, drawn from a criminal justice system long criticised for systemic racism, class bias, and misogyny. Training a predictive model on such tainted data will only reproduce and amplify those biases, now wrapped in the false objectivity of an algorithm.

Strangely, no AI would have been required to predict that a police officer would kidnap, rape, and murder Sarah Everard in 2021.

Equally, no predictive model is necessary to anticipate who is most likely to breach data protections to stalk women. It turns out that, overwhelmingly, it’s the police themselves.

Apparently, building a “homicide prediction” AI is judged a better use of public resources than rooting out the rot that still festers within the police.

Why work to keep existing police powers in check when you could instead turbocharge them, carrying forward all the historical bias, discrimination, and misconduct that’s already embedded in the system?

This is not innovation. It’s automation of injustice, paid for by the victims.