Six myths about safety

This working paper by Denis Besnard and Erik Hollnagel is a little gem. The paper presents six myths about safety, which I summarize in the table below.



Disclaimer: Besnard and Hollnagel note that for each of these myths entire books could be (and in some cases already have been) written. Their paper provides only a summary of these issues, and my table reduces the arguments even further.

For each myth, the authors state the underlying assumptions, offer an alternative perspective, and add illustrations or notes; the entries below follow that structure.
1. Human error is the largest single cause of accidents and incidents.
‘Human error’ is an artifact of a traditional engineering view, which treats humans as if they were (fallible) machines and overlooks the role played by working conditions in shaping performance.
‘Human error’ is a loaded term that implies some form of wrongdoing and asks for a culprit to be found. It is heavily influenced by the hindsight bias (Woods et al., 1994).
Human error is often a symptom of constraints imposed by the context as well as a cause of accidents and incidents.
If we consider a safe – or even an ultra-safe – system, then there will be at least 9,999 cases of normal performance for every failure, which makes accidents very rare. If so-called human error is the cause of the few events that go wrong, then what is the cause of all the events that go right?
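The arithmetic behind that ratio can be sketched in a few lines of Python. The 1-in-10,000 failure probability is an illustrative assumption consistent with the 9,999-to-1 figure above, not a statistic reported in the paper:

```python
# Illustrative sketch only: assume an "ultra-safe" system with
# one failure per 10,000 events.
failure_probability = 1e-4

# For every failure there are (1 - p) / p cases of normal performance.
normal_per_failure = (1 - failure_probability) / failure_probability
print(round(normal_per_failure))  # → 9999
```

The point of the ratio is simply that focusing an explanation on the single failure ignores the overwhelmingly larger population of events that went right under the same conditions.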
2. Systems will be safe if people comply with the procedures they have been given.
The assumption is that following the procedure will not only get the job done, but will also get it done well, i.e., safely, and that departing from the procedures constitutes a risk.
Actual working situations usually differ from what the procedures assume. Procedures are inherently underspecified both in scope and in depth.
Procedures cannot cover all possible configurations a worker might face when performing a task, nor can they completely describe what a worker has to do, how, and when.
Compliance with procedures may be detrimental to both safety and efficiency. Procedures should be used intelligently.
Examples of procedures being deliberately disregarded in order to respond proactively to the potential evolution of a catastrophic situation abound.
3. Safety can be improved by barriers and protection; more layers of protection results in higher safety.
The assumption is that multiple active and passive safety systems (e.g., ABS, ESP, crumple zones, safety belts, airbags in cars) protect against harm.
People become comfortable with a certain level of risk. If technology is introduced to enhance safety people may use it to increase performance efficiency while keeping the perceived level of risk constant.
Adding protection invariably increases the complexity of the system. The added components or functions can not only fail themselves, but may also significantly increase the number of combinations that can lead to unwanted and undesired outcomes.
Technology is not value neutral. Additional protection may change people’s behaviour so that the intended improvements fail to materialise.
One illustration of the risk homeostasis hypothesis described here is the introduction of anti-lock braking systems (ABS) in the automotive industry. A large-scale study by Aschenbrenner and Biehl (1994; quoted by Wilde, 1994) showed that taxi drivers whose cars were equipped with ABS tended to drive more aggressively in curves.
4. Accident analysis can identify the root cause (the ‘truth’) of why the accident happened.
First, it is assumed that events that happen in the future will be a repetition of events that happened in the past (Cf. the procedure myth).
Second, it is assumed that outcomes of specific events are bimodal; i.e., outcomes are either correct or incorrect.
Third, it is assumed that the cause-effect relations can be exhaustively described.
Unlike technical systems, neither individual nor collective human performance is bimodal, i.e., it does not simply either work or fail. Performance varies considerably.
Unwanted outcomes in socio-technical systems do not necessarily have clear and identifiable causes.
A simple method will output results faster than a complicated one, and the results will often be familiar ones. The increase in efficiency is, however, offset by a reduction in thoroughness (Hollnagel, 2009a).
5. Accident investigation is the logical and rational identification of causes based on facts.
The assumption is that investigators are neutral, detached evaluators, and that only the evidence and the characteristics of the case at hand (its complexity, severity, potential for learning, etc.) are taken into account.
Accident investigation guidelines embody a set of assumptions about how accidents happen and what the important factors are.
Investigations become a trade-off between what can be done and what should be done: a trade-off between efficiency and thoroughness.
The need to establish responsibilities can bias investigations, to the extent that the causes of the unwanted event become a secondary issue.
Accident investigation is a social process, where causes are constructed rather than found. An accident investigation must always be systematic, hence follow a method or a procedure.
Root cause analysis (RCA) conforms to the What-You-Look-For-Is-What-You-Find (WYLFIWYF) principle (Lundberg et al., 2009).
As Woods et al. (1994, p. xvii) put it, “attributing error to the actions of some person, team, or organization is fundamentally a social and psychological process and not an objective, technical one.”
6. Safety always has the highest priority and will never be compromised.
The assumption is that safety is an absolute priority in the sense that it cannot be compromised.
Safety has financial implications that cannot be ignored and it is understandable that costs do have an influence on the choice and feasibility of safety measures.
The benefits of safety investments are usually only potential and distant in time.
A further complication is that safety performance is often measured by the relative reduction in the number of cases where things go wrong rather than as an increase in the number of cases where things go right.
This means that there is less and less to ‘measure’ as safety improves.
Safety comes first if the organization can afford it. If not, safety is traded off against economy.
Safety will be as high as affordable – from a financial and ethical perspective.
In 2004, BP Texas City had the lowest injury rate in its history, nearly one-third the average of the oil refinery sector.
In the following year, on March 23, a major explosion occurred in an isomerisation unit at the site, killing 15 workers and injuring more than 170 others. This was the worst US industrial accident in over a decade.
In an assessment of ‘safety behaviour and culture’ at BP Texas City, employees were asked to rank their perception of the priorities at the site, using a set of given options. The top three choices were Making money, Cost/budget and Production, respectively. Major incident and Security came only in fifth and seventh position.

The authors make a series of other important points in the paper that are worth highlighting:

  • Instead of defining safety as a system property, safety should be seen as a process.
  • Safety is something that a company does, rather than something that it has.
  • The focus of safety work should be on what goes right rather than on what goes wrong, i.e., first find out how we keep safe most of the time.
  • Simple, context-free performance indicators such as fatality rates or accident tallies cannot measure safety.
  • An alternative measurement of safety would be one that accounts for the various parameters it actually relates to: technical control of the process at hand, available resources, social acceptance, and so on.
  • Or, as proposed by resilience engineering, a measure of the abilities to respond, to monitor, to anticipate, and to learn.
