Artificial intelligence is being designed to help investigate and predict crimes. Predictive policing software is built around algorithms that identify where crime is likely to take place. The software aims to make policing fairer and more accountable, but it is not without its critics or concerns about bias.
The idea that smart, emotionless AI machines are capable of making decisions free of bias is fading fast. A recent investigation that undermined this perception came from ProPublica, which concluded that the data driving an algorithm used to determine whether a convicted criminal is likely to reoffend is heavily biased against minorities. The findings were disputed by Northpointe, the company that created the algorithm. This clash of opinion sparked debate over whether even the smartest machines can be trusted to be free of ‘human error’.
CrimeScan software was developed several years ago around the idea that crime spreads like a disease and can therefore be predicted through data. This data includes crime reports, 911 calls, and seasonal and weekly trends. It is hoped the software could help anticipate outbreaks of crime or monitor escalating patterns of violence between rival gangs. Predictive policing predates CrimeScan, however, with the program PredPol coming to prominence in the LAPD. Much like CrimeScan, PredPol identifies areas where crimes are more likely to occur in a given period. Using statistical data to spot patterns of criminal behaviour, it is now used by more than 60 police departments.
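The article does not disclose how CrimeScan or PredPol actually weigh their inputs, but the idea of scoring areas from historical incident data can be illustrated with a minimal, hypothetical sketch: count past incidents per map cell, weighting recent reports more heavily, then rank the cells. The function names, decay weighting, and toy data below are assumptions for illustration, not the vendors' methods.

```python
from collections import Counter

def hotspot_scores(incidents, decay=0.9):
    """Score grid cells by recency-weighted incident counts.

    incidents: list of (cell, days_ago) pairs, a stand-in for the
    geocoded crime reports and 911 calls the article mentions.
    Recent incidents count for more via decay ** days_ago.
    """
    scores = Counter()
    for cell, days_ago in incidents:
        scores[cell] += decay ** days_ago
    return scores

def top_hotspots(incidents, k=2, decay=0.9):
    """Return the k highest-scoring cells, best first."""
    return [cell for cell, _ in hotspot_scores(incidents, decay).most_common(k)]

# Toy data: (grid cell id, days since the incident was reported)
reports = [("A1", 0), ("A1", 1), ("B2", 0), ("C3", 30), ("A1", 2)]
print(top_hotspots(reports))  # "A1" ranks first: three recent incidents
```

Even this toy version shows where bias can creep in: the ranking only reflects where incidents were recorded, so areas that are patrolled (and reported on) more heavily will score higher regardless of underlying crime rates.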
CrimeScan and PredPol both claim their software is smarter and more accurate than human perception at predicting where crimes will occur, although predicting who will commit them remains beyond its reach. The algorithms can also reflect how certain areas are perceived: bias may arise from police decisions rather than from the crime data itself. An area with a bad reputation could prompt officers patrolling it to be more aggressive in making arrests. As with anything data-driven, there is a danger that humanity will be lost in the equation. The data that drives the predictions is often unknown to the officers, let alone to the identified suspects whose lives are affected, so officers will need to put their own judgment ahead of any numbers produced.
Daniel Neill, the creator of CrimeScan, says: “We need to show that not only can we predict crime, but also that we can actually prevent it. If you just throw the tool over the wall and hope for the best, it never works that well. A tool can help police officers make good decisions, but I don’t believe machines should be making decisions. They should be used for decision support. I do understand that, in practice, that’s not something that happens all the time.”
It is certainly easier to have faith in the logic of the programming than in the data it is fed. It is daunting to think of innocent people who could fall victim to a faulty algorithmic profile, and many would say the ethical concerns outweigh the practical uses of the AI.
A recent Japanese anime, ‘Psycho-Pass’, gives an interesting insight into where software such as CrimeScan could be heading. Set in 22nd-century Japan, it depicts the Sibyl System, which determines the threat level of citizens by examining their mental state for criminal intent, a reading known as their Psycho-Pass.