A study described in the book Controlling the Controllable (Groeneweg, 2002, pp. 88–89, experiment 2) looked at the ability of incident analysts to distinguish relevant from irrelevant information. The results are intriguing and may put our own incident analyses into perspective.
Participants were divided into two main groups: a ‘trained’ group that received instructions on incident analysis using fault trees (N=15) and a group that received no instructions (N=15). Each participant was given a set of 128 event descriptions related to a fictional incident concerning a ferry. Of these events, 107 were irrelevant to the causation of the incident and 21 were relevant. The participants had to select the relevant events.
The original study also included a small group of experienced investigators. For these, no statistical differences were found. This may have been due to the small group size. For reasons of brevity, I have omitted that part of the study.
In the untrained group, on average, 8.2 relevant events were correctly identified, which means that about 60% of the relevant information remained unidentified. In the same group, an average of 12.5 irrelevant events were incorrectly selected as relevant.
The trained group, on average, correctly identified 12 relevant events, which means that 48% of the relevant information still went unreported. This difference from the untrained group proved statistically significant. In the same group, an average of 12.4 irrelevant events were incorrectly reported as relevant.
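The figures above can be read as a simple signal-detection exercise. A minimal sketch of the arithmetic, using the rounded group averages quoted in this article (the helper function `rates` and the percentage formatting are my own; the study's exact means, and therefore its own reported percentages, may differ slightly from what these rounded inputs produce):

```python
# Recall ("hit rate") and precision, computed from the mean number of
# correctly selected relevant events (hits) and incorrectly selected
# irrelevant events (false alarms). 21 of the 128 events were relevant.
RELEVANT = 21

def rates(hits: float, false_alarms: float) -> tuple[float, float]:
    """Return (recall, precision) for a group's average selections."""
    recall = hits / RELEVANT                  # share of relevant events found
    precision = hits / (hits + false_alarms)  # share of selections that were relevant
    return recall, precision

for label, hits, fa in [("untrained", 8.2, 12.5), ("trained", 12.0, 12.4)]:
    recall, precision = rates(hits, fa)
    print(f"{label}: recall={recall:.0%}, precision={precision:.0%}")
```

Note that even the trained group's precision stays below 50%: roughly every other selected event was actually irrelevant, which is the dilution effect discussed below.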
What does that tell us?
First, it is comforting to see that the group that received some training identified more of the relevant information compared to the untrained group. This seems to suggest that training does indeed pay off.
But more interestingly, a relatively high percentage of relevant events remained unidentified. Even in the trained group, 48% was still missed. The pool of selected events was further diluted by a large number of erroneously selected irrelevant events. And these results were obtained under laboratory conditions, with cleanly written event descriptions that were identical for all participants. In an actual investigation, the ambiguity of the information is likely to be much higher, and as a result the percentage of identified relevant data is probably even lower.
This raises the question: if, on average, we find less than half of the relevant information, is all incident investigation futile? If our sole goal is to describe everything that happened at that location during the incident, then yes, perhaps we are doomed to be incomplete. But if our goal is to find ways to improve our organization, a lot can still be pieced together from that 50% through careful analysis.
Next time you’re looking at an incident report, keep in mind that you might only be looking at part of the story. But at the same time, if that analysis leads to demonstrable improvements, it may just have been worth it.