

could therefore be suggested that failures that include elements of organisational culpability or reputation risk, particularly due to human failure, could create reluctance to engage with and learn from failure; initial responses from BP's CEO to the Deepwater Horizon incident are an example of this [18,19].

               2.2.2 How to motivate engagement with learning from failure
Organisational learning is seen by many as a strategic tool for organisational success and for enhancing organisations' efficiency [20]. Furthermore, the importance of Knight and Pretty's [9] work is again underlined here through its clear demonstration of the link between how well organisations managed failures and their financial standing. Encouraging organisations to engage with such thinking, providing examples, and supporting the use of simulations and scenarios should help organisations begin to understand the value of such learning.

               2.2.3 What is unlearning from failures and how can it benefit organisations?
Unlearning from failures can be conceptualised at three levels. Labib [21] used the theoretical lens of Mahler's [22] categories of organisational unlearning to illustrate how this applies to disasters. In Mahler's view, there are three types of lessons that cause unlearning for organisations: (1) lessons not learned; (2) lessons learned only superficially; and (3) lessons learned and then subsequently unlearned. Labib [21] then provided three examples from disasters to illustrate how each of these types of unlearning has occurred.

There are plenty of examples where similar types of incidents keep occurring; such repetition arises because organisations have no memory, since frequent personnel changes result in a "brain drain", to use Kletz's [23] term. Our proposed modelling, as we will show below, provides a concise visual representation of the causal factors as a simplified mental model, which is easier to remember than the narratives of incident reports that usually run to hundreds of pages. Such a modelling approach also helps to establish the relationships among the causal factors, and hence provides a visual assessment of the vulnerability (weak or blind spots) in the system, thus informing our analysis of safety barriers.
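As a purely illustrative sketch of this idea (not the paper's FTA model, and every factor name below is hypothetical), the causal factors of an incident and the "leads to" relationships among them can be held in a small directed graph and queried for causal paths to the top (undesired) event:

# Illustrative sketch only: a hypothetical set of causal factors for an incident,
# held as a directed graph of "cause leads to effect" relationships.
from collections import defaultdict

edges = [
    ("maintenance_backlog", "valve_failure"),
    ("valve_failure", "pressure_build_up"),
    ("alarm_ignored", "pressure_build_up"),
    ("pressure_build_up", "explosion"),  # top (undesired) event
]

graph = defaultdict(list)
for cause, effect in edges:
    graph[cause].append(effect)

def reaches(node, target, graph):
    """Return True if a failure at `node` has a causal path to `target`."""
    stack, seen = [node], set()
    while stack:
        current = stack.pop()
        if current == target:
            return True
        if current not in seen:
            seen.add(current)
            stack.extend(graph[current])
    return False

# A crude vulnerability check: basic causes with a causal path to the top event
basic_causes = {c for c, _ in edges} - {e for _, e in edges}
for cause in sorted(basic_causes):
    if reaches(cause, "explosion", graph):
        print(f"{cause} has a causal path to the top event -> candidate weak spot")

Even at this toy scale the point stands: a handful of named nodes and edges is easier to recall, and to inspect for weak or blind spots, than a several-hundred-page narrative report.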


               2.3 Theories of failure
According to Labib [10], learning or unlearning from failure can be linked to a wide range of theories. Organisations can learn from failures through case studies or storytelling. However, they might be confronted with the concept of narrative fallacy [7]. This theory indicates that humans often search for explanations to the point where they manufacture them. Two other approaches address aspects of decision making: organisations can either be too risk-averse or too risk-seeking. This is illustrated by the Swiss Cheese Model (SCM) introduced by Reason [24]. This conceptual model provides a simple illustration of the defences in place: if all guards fail, the whole system fails. The basic idea of the SCM is that the layers/slices of cheese represent the numerous system barriers that exist in the organisation, in the form of procedures, checklists and human checks for preventing hazards, while defects or loopholes in the system are represented by holes in the cheese layers. The model visualises incidents as the result of an accumulation of multiple failures in barriers, or defences, represented as an alignment of holes in successive slices; hence, it is a simplified model of the dynamics of accident causation. In other words, a failure occurs when holes align, that is, when safety barriers fail simultaneously (loopholes in the system). In this respect, safety barriers are analogous to the body's immune system. More details about the SCM, its evolution and its limitations can be found in [25]. The model depicts the cheese slices as barriers of protection, with the number of slices indicating the level of risk aversion. Such modelling is easy to understand, but its simplicity has also been criticised for not adequately representing the relationships between different causal factors. Our proposed approach is intended to address this by providing more insight into causal relationships.
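As a minimal sketch of this dynamic, assuming independent barriers that each have a hypothetical 10% chance of failing on demand (figures chosen purely for illustration, not taken from the paper), the SCM intuition can be reproduced with a short simulation: a hazard only becomes an incident when every slice has a hole in the same place, and adding slices (greater risk aversion) drives the incident rate down roughly as p^n.

# Illustrative sketch only: a toy Monte Carlo reading of the Swiss Cheese Model.
# Barrier counts and hole probability are invented figures for illustration.
import random

random.seed(1)

def incident_rate(num_barriers, hole_probability, trials=100_000):
    """Fraction of hazards that pass every barrier, i.e., all holes align."""
    incidents = 0
    for _ in range(trials):
        if all(random.random() < hole_probability for _ in range(num_barriers)):
            incidents += 1
    return incidents / trials

for slices in (1, 2, 3, 4):
    print(f"{slices} slice(s): incident rate ~ {incident_rate(slices, 0.1):.5f}")

Note that the simulation treats the holes as independent; as the criticism above suggests, the SCM itself says little about how the causal factors behind those holes relate to one another, which is the gap the proposed approach, and the FTA discussed next, aim to address.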
The FTA, on the other hand, works to understand, or predict, what can cause the final unwanted event to happen by working from the undesired event at the top of the FTA,