Risk is a much misunderstood word. In a technical sense, it is the probability of something happening multiplied by the consequences when it does [see post on Risk Definition, September 20th, 2012]. Tight regulation and good engineering could reduce the probability of earthquakes induced by fracking, and such earthquakes tend not to produce structural damage, i.e. the consequences are low; so perhaps it is reasonable to conclude that the risks are low, because two small quantities multiplied together do not produce a big quantity [see last week’s post on ‘Fracking’, 28th August, 2013].
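To make the arithmetic concrete, here is a tiny Python sketch of that definition. The numbers are invented purely for illustration, not real estimates:

```python
# Illustrative numbers only (assumed, not measured values).
p_event = 1e-4          # probability of a damaging induced earthquake (assumed)
consequence = 50_000.0  # cost if it happens, in pounds (assumed)

risk = p_event * consequence  # technical definition: risk = probability x consequence
print(f"Risk (expected loss): £{risk:.2f}")  # small x small = small
```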
The more common definition of risk is the probability of a loss, injury or damage occurring, i.e. severity is ignored. Probability is used to describe the frequency of occurrence of an event. A classic example is tossing a fair coin, which will come down heads 50% of the time. This is a simple game of chance that can be played repeatedly to establish the frequency of the event. It is impractical to use this approach to establish the probability of fracking causing an earthquake, so instead engineers and scientists must simulate the event using computer models. One approach is to generate a set of models, each based on a slightly different set of realistic conditions and assumptions, and look at what percentage of them predict earthquakes; this percentage can be equated to the probability of a fracking-induced earthquake. When the sets of conditions are generated randomly, this approach is known as Monte Carlo simulation. Weather forecasters use simulations of this type to predict the probability of rain or sunshine tomorrow.
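For readers who like code, here is a minimal Monte Carlo sketch in Python. The “model” is a toy threshold rule and the parameter ranges are invented, so this illustrates only the shape of the technique, not real fracking physics:

```python
import random

def model_predicts_quake(pressure, fault_stress):
    # Toy stand-in for a geomechanical model: predicts a quake when
    # injection pressure plus pre-existing fault stress crosses a threshold.
    return pressure + fault_stress > 1.5

n_runs = 100_000
quakes = 0
for _ in range(n_runs):
    # Draw each run's conditions at random from plausible ranges (assumed).
    pressure = random.uniform(0.0, 1.0)
    fault_stress = random.uniform(0.0, 1.0)
    if model_predicts_quake(pressure, fault_stress):
        quakes += 1

# The fraction of runs predicting a quake is the estimated probability.
print(f"Estimated probability of an induced quake: {quakes / n_runs:.3f}")
```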
The reliability of a simulation depends on the model adequately describing the physical world. We can test this (a process known as validating the model) by comparing predicted outcomes with real-world outcomes [see post on 18th September, 2012 on ‘model validation’]. The quality of the comparison can be expressed as a level of confidence, usually as a percentage. Crudely speaking, this percentage can be equated to the frequency with which the model will correctly predict an event, i.e. the probability that the model is reliable; so if we are 90% confident then we would expect the model to correctly predict an event 9 times out of 10. In other words, there would be a 10% ‘risk’ that the model is wrong.
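Again as a sketch, here is how that confidence figure might be computed from a set of predicted and observed outcomes; the data below are made up to give exactly the 90% case described above:

```python
# Crude validation sketch with invented data: compare the model's
# predictions against real-world outcomes and report the hit rate.
predicted = [True, False, True, True, False, True, False, True, True, True]
observed  = [True, False, True, False, False, True, False, True, True, True]

hits = sum(p == o for p, o in zip(predicted, observed))
confidence = hits / len(observed)  # fraction of correct predictions

print(f"Confidence: {confidence:.0%}")                      # here: 90%
print(f"Chance the model is wrong: {1 - confidence:.0%}")   # the 10% 'risk'
```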
In practice we cannot easily calculate the probability of a fracking-induced earthquake because it is such a complex process. Validating a model of fracking is also a challenge because real examples are scarce, so establishing confidence is difficult. As a consequence, we tend to be left weighing unquantified risks in a subjective manner, which is why there is so much debate.
If you made it this far – well done and thank you! If you want more on weather forecasting and extending these ideas to economic forecasting see John Kay’s article in the Financial Times on August 14th, 2013 entitled ‘Spotting a banking crisis is not like predicting the weather’ [ http://www.ft.com/cms/s/0/fdd0c5bc-0367-11e3-b871-00144feab7de.html#axzz2dNrTKPDy ].
I completely agree that probability and risk are intertwined and should inform decision-makers. However, public and published perceptions of risk are usually based not on probabilities but on single, dramatic events.