It’s the question everyone is asking: could the Sydney siege have been predicted, and therefore prevented, based on the past behaviour of gunman Man Haron Monis?
Monis’ troubled history was well known to media and the police. He was on bail for being an accessory to the murder of his ex-wife, faced more than 50 sexual and indecent assault charges and had a conviction for sending abusive letters to families of deceased Australian soldiers.
The self-proclaimed Iranian cleric died Tuesday following the police break-up of the 16-hour siege in a Sydney cafe. Cafe manager Tori Johnson and barrister Katrina Dawson also died following the gun battle.
What happened in this case will be the subject of much investigation but Prime Minister Tony Abbott says Australians have a right to ask why Monis was “entirely at large in the community”, with New South Wales Premier Mike Baird adding: “We are all outraged that this guy was on the street.”
Similar questions are asked following other cases where crimes are committed by someone known to police with a history of bad behaviour, violence or abuse.
But can we predict if and when such a person is likely to commit any further crimes?
Experts and predictions
In many areas of life we rely upon experts to make predictions and decisions based on those predictions – which is often referred to as clinical prediction.
A psychiatrist might be asked to predict the chances that an offender will re-offend if released into the community. This information might be used at a parole hearing.
But for a very long time, there have been attempts to supplement and indeed replace this process with actuarial prediction, based purely on data and statistical analysis.
An example comes from the early work of US sociologist Ernest Watson Burgess, who in 1928 proposed 12 factors to be used in predicting parole violations, including type of offence, parental and marital status, criminal type, social type, community factors, previous criminal record, and statements from the trial judge and prosecuting attorney. This was one of the first efforts to use data to predict parole violations.
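Burgess’s method amounted to a simple additive checklist: count the favourable factors present and treat a higher total as lower risk. A minimal sketch of that idea in Python — the factor names below are illustrative stand-ins, not Burgess’s exact 1928 definitions:

```python
# A Burgess-style additive score: one point for each favourable factor
# present in a case. Higher totals are taken to indicate lower risk of
# parole violation. Factor names are hypothetical illustrations only.

FAVOURABLE_FACTORS = [
    "first_offender",
    "stable_marital_status",
    "favourable_judge_statement",
    "no_prior_criminal_record",
]

def burgess_score(case: dict) -> int:
    """Count how many favourable factors apply to a case."""
    return sum(1 for factor in FAVOURABLE_FACTORS if case.get(factor, False))

case = {"first_offender": True, "no_prior_criminal_record": True}
print(burgess_score(case))  # -> 2
```

The appeal of such a scheme is its transparency: every point in the total can be traced back to a named, checkable fact about the case.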
The trouble with experts
There are good reasons for not relying solely on experts and instead relying on formal (actuarial) models that combine data to make predictions for us.
First, people are prone to bias in their judgements, and one of the best known is the hindsight bias. This is the tendency, after an event has occurred, to overestimate the probability that you would have correctly predicted it.
This bias can lead us to become overconfident in our ability to predict the outcome of events. It stops us from learning what the useful indicators are that we should pay attention to that might lead to accurately predicting an outcome.
Second, expertise is no guarantee of prediction accuracy. US psychologist Paul E Meehl reviewed 20 studies that compared clinical judgements of psychiatrists and psychologists with a regression model (a statistical model that combines predictor variables to find the best combination for predicting an outcome variable).
There was not a single study in which the clinician outperformed the statistical model in making predictions.
Further studies of psychiatrists and psychologists in a psychiatric facility trying to predict the dangerousness of 40 newly admitted male patients showed similarly poor results.
The clinicians’ judgements accounted for only 12% of the variance in the data, compared with 82% for a linear regression model using the same information.
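The “percentage of the data accounted for” in comparisons like this is the proportion of variance explained, usually written R². A minimal sketch in plain Python of how that figure is computed for a simple one-predictor regression — the numbers below are made up for illustration, not the study’s actual ratings:

```python
# Fit a least-squares line to (predictor, outcome) pairs and report R^2,
# the share of outcome variance the fitted line accounts for.
# The data points are hypothetical, for illustration only.

def fit_line(xs, ys):
    """Return intercept a and slope b of the least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def r_squared(xs, ys):
    """1 minus (residual sum of squares / total sum of squares)."""
    a, b = fit_line(xs, ys)
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [1, 2, 3, 4, 5, 6]          # hypothetical predictor scores
ys = [1.2, 1.9, 3.2, 3.8, 5.1, 6.0]  # hypothetical outcome ratings
print(round(r_squared(xs, ys), 3))
```

A model accounting for 82% of the variance would score 0.82 on this measure; the clinicians’ 12% corresponds to 0.12.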
So can statistics predict a crime?
Results like these have led to large efforts to develop and validate actuarial (statistical) methods for predicting violence.
One of the most comprehensive and well regarded approaches is the Classification of Violence Risk (COVR). This uses statistical methods to classify people into five risk groups (ranging from very low risk to very high risk).
This approach was developed for use in clinical populations and so may well be of little value for predicting violence in the general population. It does at least provide a set of criteria for assessment and a formal model.
But is it accurate? The proponents of the approach state that it is, but others have pointed to a need to understand the margins of error. Further, there is a debate about the procedures used to compare the accuracy of these methods.
But prediction is hard, especially when there is a very low incidence of the event that we are trying to predict.
In 1955, Meehl and colleague Albert Rosen defined the condition under which a diagnostic test is efficient: prediction using the test must be better than prediction using only the raw base rates.
By raw base rates we mean the rate at which the thing we are trying to predict occurs in the population. Violent gun death in Australia is thankfully rare: about 0.2 per 100,000 residents. The rate may be even lower if we restrict attention to events involving people with mental illness.
At present there are no psychometric instruments that consistently pass the criterion of efficiency with a base rate as low as this.
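Meehl and Rosen’s point can be made concrete with Bayes’ rule. The sketch below assumes a hypothetical screening test with 99% sensitivity and 99% specificity — far better than any real instrument — applied at the article’s base rate of 0.2 per 100,000:

```python
# Positive predictive value: of the people a test flags as high risk,
# what fraction actually are? Sensitivity and specificity here are
# hypothetical, optimistic assumptions; only the base rate (0.2 per
# 100,000) comes from the article.

def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(actually violent | flagged as violent), via Bayes' rule."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(0.2 / 100_000, 0.99, 0.99)
print(f"{ppv:.4%}")  # about 0.02% of flagged people are true positives
```

Even with those optimistic figures, roughly one in 5,000 flagged individuals would be a genuine case: the tiny base rate swamps the test’s accuracy, which is exactly why such low-incidence events resist screening.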
Further, we need to be very careful about stereotyping the mentally ill as potentially “dangerous”. It simply is not the case that all people with serious mental illnesses are prone to violence.
There are very specific factors that govern the complex relationship between mental illness and violence. We need to understand those factors and work to prevent people from experiencing them.
The consequences of using prediction to prevent crime are explored in the 2002 movie Minority Report.
If actively pursued, this approach could become a self-fulfilling prophecy: greater surveillance produces higher crime detection rates in certain areas, which feed back into the statistical models, which in turn direct yet more surveillance at those areas.
We also have to consider the risks of false positives. Are we willing to take an approach where innocent people are incorrectly classified as being at high risk of committing a violent act? What might be the unintended consequences of this approach?
Care needs to be taken in ensuring that we don’t follow a path that will lead us to a false sense of security.
So could incidents such as the Sydney siege have been predicted? Probably not, but the need to believe that we can predict and control our world will still remain.
This article was originally published on The Conversation as "Could the Sydney siege have been predicted and prevented?".