What is your company doing to make things even safer where you are? In this article, Chris Langer from CIRAS argues that a safe working environment is not a by-product of robust business processes, but must be deliberately engineered through conscious effort. Once engineered, a safe working environment can bring very obvious knock-on business benefits too.

To effectively engineer a safe working environment, there are several key areas of interest worth exploring in more detail. These form an ‘Engineering Safety Quadrant’ (Langer 2014).

1. More effective organisational learning

For the benefit of business continuity arrangements alone, it makes perfect sense to learn effectively from past safety incidents (or failings) where organisational resolve has been tested. But learning shouldn’t be limited to historical events. There are plenty of lessons to be learned from real-time safety events which are potentially transferable to other parts of the business. And that says nothing of all the lessons that could be learned from other organisations, and even other industries.

“All the knowledge required to prevent the disaster already existed in another part of the company.”

The Deepwater Horizon oil well blowout and explosion in the Gulf of Mexico in 2010 is a good example, one that has implications for learning in the railway industry. All the knowledge required to prevent the disaster already existed in another part of the company. Four months earlier, and several thousand miles away in the north Atlantic, there had been a very similar incident which, fortunately, had not resulted in a blowout. The same precursors to a major disaster were present, but staff in the Gulf of Mexico never got to hear about this major near-miss. If they had heard in time, 11 lives might have been saved and an environmental catastrophe prevented.

With this in mind, but much closer to home, there is no doubt learning in the railway could be more joined up. Indeed, Head of CIRAS Paul Russell said to me recently, “There are multiple platforms in the rail industry to learn from, but it doesn’t feel like there is one concerted effort. RSSB is a natural place for this to be brought together and I know that efforts are under way to improve that. Until then, there are so many pockets of learning, all good in their own right, but not necessarily joined up”.

2. Switching to a focus on positive outcomes

Historically, there has been a huge emphasis on learning from railway incidents. This is essential, of course, so that lessons can be learned and repeat failures avoided. But learning solely from failure encourages us to think about safety in rather negative terms. We can also learn from successful outcomes in the form of good safety practice. Typically, large safety-critical organisations will have areas of average safety performance and some pockets of excellence. We can tap into this potential gold mine of excellence by actively finding out what is going right, rather than waiting for things to go wrong.

The question that really needs answering is: What are we doing right around here to achieve excellent safety performance? We often forget to ask when things are running safely and smoothly, but that is probably the best time to probe.

This will require a shift in our thinking. In the railway industry, for example, there is a strong focus on learning the lessons from signals passed at danger (SPADs). But why wait for an incident? There is plenty to learn from events which go smoothly. For every SPAD, there are thousands of instances of train drivers stopping safely under controlled conditions. We can learn as much from these instances as we can from SPADs.

It’s just that we don’t routinely look for things that go well and the behaviours associated with them. By analogy, we wouldn’t learn to ride a bicycle by watching someone repeatedly fall off it. So why do we often assume learning from SPADs is the best way of encouraging safer driving? It’s certainly one way, but only part of the picture.

3. Talking safety

In the digital age, it is perhaps surprising that reporters to CIRAS often complain that modern safety communications leave a lot to be desired. Issuing a railway employee with a tablet to receive safety information does not always mean the message is clearly understood. Just because the information is electronically received does not automatically mean it will be mentally processed by the user. Our reporters tell us that face-to-face safety briefings have become less common. They lament the fact that they cannot ask questions, challenge assumptions, and clarify their safety understanding in a shared environment.

It is relatively easy to put this situation right with some good old-fashioned human interaction. We’re social creatures by nature, after all, and this is surely still the best way of creating a shared understanding of what’s required to do one’s job safely. Instead of waiting for problems to arrive on the doorstep, we can proactively go out to meet them. We are only going to learn why people think and behave the way they do by talking to them.

4. Promoting organisational resilience

Organisational resilience happens by design rather than coincidence. In safety terms, it can be defined as an organisation’s ability to recover quickly from a potentially unsafe situation. It involves empowering staff to make safer decisions. How often do line managers actually praise their staff for making well-reasoned decisions in tough operating environments? Praising good safety decisions and behaviour increases the likelihood that good judgment will be exercised in future too. Talk of failure may be warranted occasionally, but it should largely be outweighed by talk of resilience to effectively reinforce the desired behaviour. We need to show staff how to succeed, rather than dwell on their failures.

Where human error is discussed in connection with a breakdown in system safety, we need to avoid taking a judgmental stance. It’s always easier to view an adverse event as the logical outcome of a chain of events traceable back to a single cause. This is what is meant by ‘hindsight bias’, or in plain English, ‘being wise after the event’. The chain of events may seem to make sense in retrospect, but this can create a superficial understanding. The truth is that in complex operating environments, there may be several different pathways to failure.

For example, an operator may have been faced with several equally bad options in an unfavourable situation. We have to get inside their head at the time of an incident to see how they really saw the situation unfolding. By getting inside the operator’s head in a blame-free atmosphere, we can help identify a safety system’s inherent vulnerabilities. We can then ‘shore up’ the system’s defences to increase overall resilience. However, we needn’t wait for an incident or near-miss report to take action. Conversations with those who have first-hand experience of the operational environment can be a source of insight into system vulnerabilities at a much earlier stage.

In reality, getting inside an operator’s head to see how they perceive the safety risks and system vulnerabilities around them is fraught with difficulty. It can only be achieved in a trusting atmosphere. Blame-free, confidential reporting is one important way of gathering this largely hidden intelligence for the purpose of engineering organisational resilience.

The Engineering Safety Quadrant (Langer 2014)

Promote effective organisational learning
• Transfer safety lessons from one part of the organisation to another
• Learn from other organisations in the same industry
• Learn from organisations in different, but related industries

Focus more on positive outcomes
• Learn from successful outcomes, not just failure
• Ask: What is going right around here?
• Identify pockets of safety excellence and use them as examples

Talk safety
• Talk ‘face-to-face’ about safety far more
• Avoid making the assumption that electronic transmission of information is the same as communication
• Provide plenty of opportunity for staff to challenge assumptions in interactive forums

Promote organisational resilience
• Get inside the operator’s head
• Find out how staff recover effectively from unsafe situations, and praise good decisions
• Highlight system vulnerabilities in a trusting atmosphere: use confidential reporting to uncover hidden intelligence