Source: Treehugger
The NY Times has put together a detailed look at the issues and failures behind the Deepwater Horizon disaster (see In Gulf, It Was Unclear Who Was in Charge of Rig). In Saturday's article, Ian Urbina catalogues two kinds of failures: system failures for which BP and its partners were responsible, and failures of the regulatory framework that governs (or fails to govern) offshore oil and gas development in the Gulf of Mexico. I want to focus on the risk management failures he identifies.
Several weeks ago, I looked at the news reports available at that time and argued that the Deepwater Horizon disaster provided some valuable lessons for managing the risks inherent in complex undertakings like drilling for deepwater oil and gas deposits. As noted in that post, I think Thomas Homer-Dixon's The Ingenuity Gap provides an excellent basis for understanding how the problems we face, such as this blowout, can outrun our ability to deal with them. Echoing these arguments, the article quotes Tad Patzek of the University of Texas, Austin, who concludes that:
"It's a very complex operation in which the human element has not been aligned with the complexity of the system."
And Mr. Urbina goes on to identify four ways in which BP and its partners failed to adequately manage the system's complexity:
- Who was in charge?
Federal investigators heard conflicting answers as to who was in charge of the rig and its activities in the days and hours leading up to the explosion. There was an inherent conflict in the relationship between BP and the drill rig owner, Transocean (and likely Halliburton as well): BP paid a daily leasing fee for the rig, so time was a cost for BP and a source of revenue for the contractors.
In Mr. Urbina's words: "Amid this tangle of overlapping authority and competing interests, no one was solely responsible for ensuring the rig's safety, and communication was a constant challenge."
A related issue was the apparent confusion over the delegation of responsibilities once something went wrong. The rig captain was upset that one employee pressed a distress button without authorization and that another failed to contact shore for help.
- Failure to Plan for Identified Risks.
BP asked for and received permission to exempt the project from the regulatory requirement for a rigorous environmental review. Even after risk assessments had identified a "worst case" blowout as one that might produce 250,000 barrels of oil per day (the current blowout is spewing 12,000 to 19,000 barrels per day, or possibly more; it is hard to imagine the devastation the worst case would have caused), there was no response plan to address that level of risk, nor did the company have on hand the equipment it had indicated would be used to respond to a blowout.
The NY Times notes that the rig's spill response plan included a web link for a contractor that directed users to an Asian shopping website, suggesting that the plan was out of date and/or had not been adequately checked.
- Ignoring its own risk management policies and designs.
BP engineers had to get company permission to use equipment, including casings, that deviated from BP's own design and safety policies. BP elected to proceed with well casings that had a greater risk of collapsing and used cement in a way that did not meet Halliburton's "best practices".
The Minerals Management Service (the federal regulator) had "highly encouraged" (but did not require) companies to have backup systems to trigger blowout preventers in case of emergency; BP did not have them.
- Not responding to failures and indicators that something was wrong.
Well before the blowout and explosion on April 20th, the rig had been experiencing "kicks" (pulses of gas or pressurized liquid into the well bore that, if not controlled, become blowouts); the blowout preventer was found to be leaking fluids at least three times; despite noticing cementing problems in the well bore, BP skipped a quality test of the casing; and tests before the blowout indicated an abnormal buildup in pressure. A picture of persistence in the face of any and all indicators that something was wrong.
Complexity demands rigorous management systems if it is going to be managed successfully.
In the introduction to its 2009 Sustainability Report (Safety section), BP indicates that it follows a "systematic approach":
"BP constantly seeks to improve its safety performance through the procedures, processes and training programs that we implement in pursuit of our goal of ‘no accidents, no harm to people and no damage to the environment’ ......We are carrying forward our efforts on process safety, which is an integral part of our operating management system (OMS) and ingrained within our capability programs."
The evidence presented by Mr. Urbina suggests that while the systems exist on paper, BP is having difficulty implementing them in its U.S. operations.
So to reiterate what I wrote several weeks ago, the lessons (for risk management) coming out of the Deepwater Horizon disaster seem to be:
- have adequate systems in place (but this isn't enough);
To be fair, it's hard to find a major company in the oil and gas business that hasn't gone to the effort of developing risk management systems for safety, process safety, environment, and other key risks inherent in its business. Similarly, they all have crisis management, emergency response, pandemic, and other disaster response plans on the books.
- make sure they are up to date;
However, it's the implementation where many management systems start to come unglued. It's not enough to put the binders on the shelf or the plans on an intranet site. If systems and plans are up to date, they don't have links to Asian shopping sites.
- make sure they are understood and used by everyone involved; and
It's hard work to communicate with and train employees regularly to ensure they understand and can use the systems and plans in their everyday activities. It's even harder to make sure that the systems continue to be used when time or cost pressures intervene - everybody loves a good shortcut.
- make sure the performance and process data used in decision making is the right data, and that it is not ignored under the pressures of the moment.
It's important to take the time before a project begins to understand what indicators and measures might help us understand the process we are managing. And then it's even more important to pay attention to those indicators once they arise. What's the point of constructing systems and plans to manage complexity if we ignore the indicators that tell us something about the complex environment we are trying to manage?
Perhaps a more succinct way of putting all of this is to follow the old Boy Scout motto: Be Prepared.