The Personal MBA

Master the Art of Business

A world-class business education in a single volume. Learn the universal principles behind every successful business, then use these ideas to make more money, get more done, and have more fun in your life and work.

What Are 'Normal Accidents'?

The theory of Normal Accidents is best expressed as a universal proverb: "shit happens." The more complex a system is, the higher the probability of something eventually going wrong.

Overreacting to Normal Accidents is counterproductive: if you want the system to fail less, making it more complex doesn't help.

The best way to avoid Normal Accidents is to analyze breakdowns when they happen, learn from them, and create contingency plans in case they happen again in the future.

Normal Accidents are the reason you should keep your systems as loose as possible (without affecting their performance). Accidents will happen; it's just a matter of time.

Josh Kaufman Explains 'Normal Accidents'

The Space Shuttle, a vehicle capable of exiting the bounds of Earth's gravity with human travelers aboard, is clearly an extremely complex system.

Strapping a highly engineered airplane to two solid rocket boosters and an external tank holding hundreds of thousands of gallons of extremely volatile liquid hydrogen and oxygen is the epitome of a highly Interdependent system. Any error has the chance of cascading catastrophically, and every time the Shuttle is launched, millions of things could potentially go wrong.

In 1986, the Space Shuttle Challenger suffered a catastrophe: an O-ring seal in one of its solid rocket boosters froze the night before launch, becoming extremely brittle. When the seal was superheated during takeoff, it failed. The Challenger exploded seventy-three seconds after liftoff, killing everyone aboard.

It's tempting to believe that it's possible to create a system in which nothing ever goes wrong. Real-life systems always prove otherwise; count on it.

The theory of Normal Accidents is a more formal way of expressing a universal proverb: shit happens.

In a tightly coupled system, small risks accumulate to the point where errors and accidents are inevitable. The larger and more complex the system, the higher the likelihood that something will eventually go very, very wrong.
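One way to see why (a back-of-the-envelope sketch, not from the book): if a system has N independent components and each fails on a given run with some tiny probability p, the chance that at least one of them fails is 1 - (1 - p)^N, which climbs toward certainty as N grows. Here is a short Python illustration, assuming a hypothetical one-in-ten-thousand failure rate per component:

# Toy model: odds that *something* fails in an N-component system.
# Assumes independent components, each failing with probability p per run;
# real tightly coupled systems are worse, because failures are correlated.

def chance_of_any_failure(n_components: int, p_failure: float = 0.0001) -> float:
    """Probability that at least one of n components fails on a single run."""
    return 1 - (1 - p_failure) ** n_components

for n in (10, 1_000, 100_000, 1_000_000):
    print(f"{n:>9,} components -> {chance_of_any_failure(n):.1%} chance of a failure")

At that failure rate, ten components fail about 0.1% of the time, a thousand fail about 9.5% of the time, and a million-part system fails on essentially every run. That is the arithmetic behind the earlier observation that millions of things could potentially go wrong on every launch.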

Overreacting to Normal Accidents is counterproductive. When something goes wrong, our instinctive response is to become hypersensitive, locking things down and adding more controls to prevent the unfortunate event from happening again.

This response actually makes things worse: locking things down and adding more systems only serves to make the system more tightly coupled, increasing the risk of future accidents.

NASA's response to the Challenger tragedy is extremely instructive: instead of shutting the program down completely or adding more systems that could compound the issue, NASA engineers recognized the inherent risk and focused on solutions that would minimize the chance of the failure recurring without adding more components that could potentially fail.

The best way to avoid Normal Accidents is to analyze breakdowns or "close calls" when they happen. Instead of going into the systems equivalent of Threat Lockdown, which can create even bigger issues in the long term, looking at near-misses can provide crucial insight into hidden Interdependencies. By analyzing the issue, it's possible to construct contingency plans in the event a similar situation arises in the future.

In 2003, the Space Shuttle Columbia suffered a catastrophe of a different sort: the carbon-fiber heat shields designed to protect the Shuttle as it re-entered the Earth's atmosphere failed, and the Shuttle disintegrated. Again, NASA focused on how to prevent the issue from happening again without making the system even more tightly coupled. When the Space Shuttle Discovery suffered damage to its heat shields on takeoff a few years later, NASA engineers were prepared, and the crew landed safely.

Normal Accidents are a compelling reason to keep the systems you rely on as loose as you possibly can. There are many positive things to be said for systems, but expecting zero failures is unrealistic in the extreme.

Loose systems may not be as efficient, but they tend to last longer and fail less catastrophically. The more complex a system is and the longer it operates, the more likely it is to suffer a major failure. It's not a matter of if; it's a matter of when.

Be watchful for system failure, and be prepared to respond to it quickly.

"The problem is not that there are problems. The problem is expecting otherwise and thinking that having problems is a problem."

Theodore Rubin, psychiatrist and columnist


From Chapter 9: Understanding Systems


https://personalmba.com/normal-accidents/



About Josh Kaufman

Josh Kaufman is an acclaimed business, learning, and skill acquisition expert. He is the author of two international bestsellers: The Personal MBA and The First 20 Hours. Josh's research and writing have helped millions of people worldwide learn the fundamentals of modern business.
