Read the case study
Read the class discussion

On 20th April 2010, the Deepwater Horizon drilling rig disaster shocked the world. It was a drastic and unfortunate event, one that now stands as a testament to the importance of good analysis, planning, development, testing and implementation of any system - software, hardware, physical, or possibly even psychological.

Personally, this disaster highlighted the importance of early detection of issues. [Though out of context] As the saying goes, “stop it at the start” - it’s much easier to approach and address a small issue than to confront a full-blown problem. Being the psychologist/philosopher that I am, with too many things on my plate… When dealing with people, do not allow time to mutate your thoughts. Get to the bottom of whatever doubt you have about someone else, and do it p r o n t o.

Anyway, I digress…


My group got together and decided that the biggest contributors to the Deepwater Horizon disaster were the following:

  • Overlooking and ignoring small issues - the avalanche effect
    One issue leads into another, which leads to yet another.
    Before you know it, the once seemingly small issue has brought on an onslaught of problems arguably worse than the original.
    Though alarms were going off in the control room, little (if any) response was given.

  • Safety fatigue
    You could describe safety fatigue as becoming too comfortable in your own security.
    While it is good to have safeguards and measures, we should remind ourselves not to grow complacent with what we have, but to always be on our guard.

  • People
    Pointing fingers and blaming others doesn’t help to solve any problem. Nevertheless, the human factor is still quite important to examine. Long shift times, possible miscommunication during handovers, the conflicting interests of the companies involved - there are so many ways communication may have failed.

  • Standards
    Supposedly, at the time there was no authority or standard governing the testing procedures… and that already sounds bad on its own.


We addressed these issues with the following recommendations:

  • Training - “If you don’t know what you’re doing, you don’t know what you’re forgetting to do”.
    Both awareness and knowledge are crucial to anything.
    Without awareness, you wouldn’t know what to do with your knowledge,
    and without knowledge, yelling and complaining won’t be at all helpful.

  • Reducing the presence of failsafes
    With fewer safeguards, the drilling crew may have been more wary and cautious.

  • Rethinking retribution
    It is quite possible that the small issues that arose during the rig’s operation were not brought to light for fear of punishment.
    Realistically, if you want open communication, you must first develop a culture of humility. If fines for minor infractions were lowered, a lesser sense of ‘discipline’ or penalty would allow for better communication and openness between parties.


Other groups recommended:

  • Alternative ways to stop the oil flow, rather than just cutting the drill pipe.
  • Further testing procedures, perhaps even simulations.
  • External auditing of the system, and also of the staff.

Leaving today’s session, it’s evident that there was no single root cause of the Deepwater Horizon disaster.
You can’t ever be 100% certain of anything these days, and that’s why it is so important to plan and prepare; ultimately, if we do not, we put the future at risk.