The burning platform fallacy

Why organisations wait for disasters to invest in safety — and what that delay really costs
There is a phrase that circulates in organisational change management circles: the burning platform. The idea is that people only change when the cost of staying the same exceeds the cost of changing. As metaphors go, it is vivid. As strategies go, it is catastrophic.
Most organisations approach safety culture the same way. They invest in human factors training after the fatality. They commission a review of their incident reporting system after the regulator calls. They start asking questions about psychological safety after someone quits and explains why in their exit interview. The trigger is always reactive. The learning is always late.
This is not a moral failure. It is a structural one. When nothing has visibly gone wrong, it is genuinely difficult to make the case for change. Boards want to see return on investment. Operations managers are under pressure to deliver output. The implicit logic — we have been doing this for years and nobody has died — feels like evidence of a functioning system. It is not. It is the absence of a visible failure, which is a very different thing.
The absence of a visible failure is not evidence of a functioning system. It is evidence that the failure has not surfaced yet.
James Reason, whose Swiss cheese model most safety professionals will recognise, made this point with quiet precision. Unsafe acts and latent conditions accumulate in systems long before any accident occurs. The holes in the cheese are always present. The trajectory through them simply requires the right alignment of circumstances — pressure, distraction, ambiguity, a shortcut that has worked a hundred times before. When that alignment occurs, organisations call it an accident. In reality, the conditions for it existed long before the event.
Erik Hollnagel takes this further still. In his framework, the vast majority of operations succeed — not because the system is perfectly designed, but because people adapt, adjust, compensate, and absorb variability in real time. This is what Safety-II thinking calls everyday performance: the constant, invisible effort of human beings making things work despite the gaps between how procedures are written and how work actually gets done. The system appears stable not because it is stable, but because people are working hard to hold it together.
When organisations finally do respond to a crisis, they almost always focus on the individual closest to the event. The person who made the decision, took the shortcut, missed the signal. This is psychologically satisfying — it provides a cause, a resolution, and the comfort of exceptionalism. It was them. It would not be us. The problem with this framing is that it leaves the system intact. The conditions that shaped the decision, the pressures that made the shortcut rational, the culture that made speaking up feel unsafe — all of these remain unchanged. The next person inherits the same environment and faces the same choices.
The cost of waiting for a burning platform is not just financial, though the financial case is compelling enough. HSE data consistently shows that the uninsured costs of workplace incidents — lost productivity, investigation time, reputational damage, staff turnover, litigation — typically run at eight to thirty-six times the insured costs. The human cost is harder to quantify but no less real. People carry the weight of incidents that could have been prevented. Teams fragment. Trust corrodes.
Leaders within organisations also need to consider psychosocial harm before a physical event materialises. Someone who is stressed or fatigued will perform worse, and is more likely to miss critical information or fail to register its relevance.
The question worth asking is not "what will it take to change?" That framing already concedes too much. The better question is: what would it mean to understand how your organisation actually works, before circumstances force you to find out the hard way?
Organisations that answer that question proactively — that invest in understanding the gap between their written procedures and their actual practice, that build cultures where people can raise concerns without fear, that treat near-misses as information rather than near-disasters — these organisations do not wait for the platform to burn. They learn before the cost of learning becomes existential.
That is not idealism. It is the most practical thing an organisation can do.