complex phenomena are and will continue to be impossible to predict—earthquakes are a good example. But will it be possible to predict other risk events? In the best-case scenario, only very slightly. Let us have a look.

In fields where we think we can make predictions, and where we have been doing so for quite some time—the economy and finance, the weather, seismology, etc.—the results of the forecasts have been pretty catastrophic, especially when it comes to evaluating risk. Here is the proof: before the subprime crisis, all the rating agencies (Standard & Poor's, Moody's, and so forth) had in their possession quantitative estimates of the risk that house buyers would default on their loans. These risks were assessed by means of quantitative analysis, a discipline derived from the physics of probabilities that is supposed to "scientifically" control the risks of trading. But when the crisis broke, the real risk turned out to be two hundred times worse than the agencies' predictions! As one expert chuckled, thinking you are risk-free on the strength of rating-agency estimates is tantamount to smearing yourself with sunscreen to protect yourself from a nuclear blast.

None of this prevented the cyber Madoffs from conjuring up a new trick, which in the US is called crime prediction or predictive policing—PredPol if you want to be cool. This predictive analytics software is supposed to be able to forecast crimes and even, why not, political crises, revolutions, and enemy attacks.

Let us read the articles that the press devotes to predictive policing. According to one paper, it is "the software that can predict crimes … XXX (the name of the software) has arrived in the UK." We also learn that it can "forecast where and when criminals might strike" or "predict where burglaries, robberies, and assaults will take place in the future … with convincing results." Now, the cyber Madoffs (who, according to the gullible media, claim to "predict crimes") use the same type of quantitative analysis and the same "predictive algorithms" that brought Wall Street crashing down!

And how is this crime-predicting software powered? By "an algorithm designed to forecast where and when a crime will occur using a database of past offenses," or by "using historical statistics" and "criminal databases from the 1960s." Without exception, each of these different types of software draws its reference material from the past; they are all powered by data that is strictly retrospective. The software sucks up anything it finds lying around on the internet, and the ensuing mass—processed largely by guesswork—is dressed up in high-sounding, high-tech names such as big data or data mining before the raw material is fed through an algorithm cruncher.

And the results are plausible: there is likely to be a temporary improvement in police performance, until the villains turn the displacement effect to their advantage (as is their wont). But this has nothing to do with predictive power—and here is why.

The crucial question—the question that our media, in their excitement, ignore (knowingly or not)—is: Does our knowledge of the past mean we can predict the future? Take an example: Does yesterday's weather guarantee what the weather will be like tomorrow? Obviously not, because there is an element of uncertainty,