Offshore Safety Management


At the PSM Report we publish four blog series, of which this is one. These series include:

  • Plant Design and Operations
  • Offshore Safety Management
  • The Age of Limits

The posts here are related to the contents of the book Offshore Safety Management, the second edition of which was published in 2014 (purchasing information is available here).

Posts at this blog, organized by chapter title, include:

Chapter 1 — Risk Management

Chapter 2 — Major Offshore Events

Chapter 3 — Safety Offshore

Chapter 4 — Regulations and Standards

Chapter 5 — Safety and Environmental Management Systems

Chapter 6 — Contractors

Chapter 7 — Implementing SEMS

Chapter 8 — Safety Cases

Chapter 9 — Formal Safety Analysis


Deepwater Horizon Fire


Almost two years have passed since the explosion and fire on the Deepwater Horizon drilling rig. But most of us still vividly remember the tragedy in which eleven men died and almost a billion dollars' worth of equipment went to the bottom of the ocean. The ruptured well then leaked an estimated five million barrels of oil into the Gulf of Mexico over a period of nearly three months, leading to extensive environmental damage and economic loss. (The event also further established the authority of The Oil Drum; the timeliness and quality of its postings and comments were unrivalled.)

Incidents of the magnitude of Deepwater Horizon (DWH) often lead to a fundamental rethink in the affected industry as to how such an event could have happened and what needs to be done to prevent a recurrence. The manner in which such a rethink is organized is often along the following lines.

  1. What happened? What was the timeline of events that led to the catastrophe? This phase of the investigation requires deductive analysis (think Sherlock Holmes or Hercule Poirot) and is generally much more difficult than it sounds, not least because most people jump to an early conclusion and then fixate on that conclusion regardless of what later facts tell them.
  2. What were the immediate causes of failure? These can include equipment failure, instrument malfunction and operating error. (The phrase operating error is used in preference to operator error in order to minimize the tendency to blame the supervisors and front-line technicians; the event probably was caused by a series of failures along the way — the front-line personnel were simply the last people on the bus.)
  3. How did the management systems fail? For example, in the case of DWH, what led to the failure of the Blowout Preventer (BOP)? Specifically:
    1. Were the proper standards for the design of BOPs followed, and are those standards good enough for current and future conditions given that we are working in ever more challenging environments?
    2. Was the procedure for selecting the BOP for this service properly followed?
    3. Was the BOP properly manufactured and installed?
    4. Were the technicians and supervisors properly trained to operate and maintain the BOP?
    5. Were management and supervisors trained in what to do should the BOP or any other equipment fail to operate properly?
  4. If any of the answers to Question 3 are "no", how should we update our management systems to make sure that accidents such as this do not recur?
  5. Are the government regulations sufficiently stringent and up to date, and are the regulatory agencies doing their work properly?

With regard to DWH/Macondo the answers to the first two questions can be found in various reports that analyze the incident in detail. Before discussing the response to Questions 3 through 5 it is first useful to consider the question of risk and risk analysis in an industrial context.

The Nature of Risk

Occupational Safety

Risk has three components: a hazard, the consequences of that hazard, and the frequency with which it is expected to occur. The relationship between these three elements is shown in Equation (1).

Risk(Hazard)  =  Consequence  ×  Predicted Frequency        (1)

This equation can be illustrated with a simple domestic example. For those who live in a two-storey home, the hazard of falling down the stairs is always present. The consequences of such a fall range from minor bumps to serious injury, and even death. The frequency of such an event might be, say, once in five years, and there will generally be a negative correlation between consequence and frequency, i.e., the more serious the consequence, the less likely it is to happen.

With regard to offshore drilling a major hazard (probably the major hazard) is the blowout of the well. The consequences can be very serious — as we saw with DWH/Macondo — but the frequency is low, say once every ten to twenty years.

An important conclusion to be drawn from Equation (1) is that risk can never be zero; hazards always exist, those hazards have consequences, and the likelihood of their occurrence is greater than zero. This means that those who apply phrases such as "risk-free" to industrial activities such as the production of oil from subsea wells have not really grasped the true nature of risk. The only way of eliminating risk entirely is to remove the hazard. In the case of falling down the stairs, the risk can be eliminated by building a single-storey house; with no stairs, no one can fall down them. With regard to offshore oil production, the only way to totally eliminate risk is to stop drilling and production. While this conclusion may appeal to many in the Peak Oil community, it is not likely to find broader acceptance from society in general, particularly with gasoline prices pushing $4 per gallon.

Although Equation (1) provides a useful start to understanding risk, it does not take into account the subjective nature of risk perception; its linearity gives equal weight to the consequence and frequency terms. For example, according to Equation (1), a hazard resulting in one fatality every hundred years has the same risk value as a hazard resulting in ten fatalities every thousand years. In both cases the fatality rate is one death per hundred years, or 0.01 fatalities per year.
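The equivalence that Equation (1) produces for these two hazards can be checked with a few lines of arithmetic. This is a minimal sketch; the function name `linear_risk` is illustrative only and does not come from the book:

```python
def linear_risk(consequence, frequency_per_year):
    """Equation (1): risk as the simple product of consequence and frequency."""
    return consequence * frequency_per_year

# Hazard A: one fatality every hundred years.
risk_a = linear_risk(consequence=1, frequency_per_year=1 / 100)

# Hazard B: ten fatalities every thousand years.
risk_b = linear_risk(consequence=10, frequency_per_year=1 / 1000)

# Under the linear model both hazards work out to about 0.01 fatalities per year.
print(risk_a, risk_b)
```

Because the model is linear, multiplying the consequence by ten while dividing the frequency by ten leaves the computed risk value unchanged.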

But the two risks are not perceived to be the same. In general, people feel that rare, high-consequence events are less acceptable than more frequent, low-consequence accidents. Hence the second option, ten fatalities every thousand years, is considered to be worse. This point can be illustrated as follows.

In a typical large American city, around 500 people die each year in road accidents. Although many efforts are made to reduce this fatality rate, the fact remains that this loss of life is generally accepted as a necessary component of modern life, and hence there is little public outrage. Yet were an airplane carrying 500 people to crash at that same city's airport every year, there would be an outcry. The fatality rate is the same in each case: 500 deaths per city per year. The difference in perception is fundamentally subjective. (Other subjective factors come into play. For example, many people would consider the life of a child to be worth more than that of an old person, and someone who happily goes bungee jumping at weekends may still refuse to tolerate the risk of having a coal-fired power plant in his neighborhood.)

Given that high consequence events have a higher level of perceived risk, Equation (1) should therefore be modified as shown in Equation (2).

Risk(Hazard)  =  Consequence^n  ×  Predicted Frequency        (2)

where n > 1

It can be seen that the consequence term has been raised to the power n, where n > 1. Since the variable 'n' represents subjective feelings, it is impossible to assign it an objective value.

If a value of, say, 1.5 is arbitrarily assigned to 'n', then Equation (2) for the two scenarios just discussed (the airplane crash and the highway fatalities) becomes Equations (3) and (4) respectively.

Risk(airplane)  =  500^1.5  ×  1  ≈  11,180        (3)

Risk(auto)  =  1^1.5  ×  500  =  500        (4)

Although both scenarios involve 500 deaths per year, the single airplane crash carries a perceived risk of over 11,000, more than 22 times that of the 500 separate automobile fatalities.
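Equation (2)'s adjustment for perceived risk can be reproduced in the same way. Again a sketch: the exponent value of 1.5 is the arbitrary choice made above, and the function name `perceived_risk` is illustrative only:

```python
def perceived_risk(consequence, frequency_per_year, n=1.5):
    """Equation (2): the consequence term raised to an exponent n > 1
    to reflect aversion to high-consequence events."""
    return consequence ** n * frequency_per_year

# One crash of a 500-seat airplane per year...
risk_airplane = perceived_risk(consequence=500, frequency_per_year=1)
# ...versus 500 separate single-fatality road accidents per year.
risk_auto = perceived_risk(consequence=1, frequency_per_year=500)

ratio = risk_airplane / risk_auto
print(round(risk_airplane), round(risk_auto), round(ratio, 1))  # → 11180 500 22.4
```

Note that with n = 1 the two scenarios would score identically; the entire disparity comes from the exponent applied to the consequence term.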

The above discussion may seem abstract, rather along the lines of "How many angels can dance on the head of a pin?" But it explains, for example, why the nuclear power industry faces such bitter opposition. The consequences of the worst-case event, core meltdown, are so bad that the perceived risk goes off the charts. For forty years the nuclear power industry has largely focused on reducing the likelihood of a major event through measures such as the use of sophisticated instrumentation. But, based on Equation (2), nuclear power will never be fully accepted by the general public until the consequences of the worst-case scenario are themselves drastically reduced.


Safety and Environmental Management Program (SEMP)

Offshore drilling rigs and production platforms are extremely sophisticated and involve the use of the most advanced technology, something the general public got a taste of during the drilling of the Macondo relief well. For example, in 1996 the Shell Oil Company started production from its "Mars" platform in the Gulf of Mexico. Three years later NASA sent its Mars Polar Lander to the planet Mars. Anecdotally, many people in the offshore oil business believe that it was the platform that embodied the higher level of technology. And that was in the "old days" of 1996, when platforms were operating in depths of "only" 3,000 ft. Now we are drilling and producing at four times that depth.

The use of such sophisticated technology, combined with the high consequences of a major event, means that managers in the offshore energy industry need to develop Safety Management Systems (SMS) that are equally sophisticated. Generally such systems have three components, as shown in the simple Venn diagram below.

Types of Safety

The sketch shows three types of safety (with a large amount of overlap between them). A very brief overview of these types of safety is provided below, recognizing that each of them could be the subject of a lengthy blog post in and of itself.

Occupational Safety is what most people think of when they hear the word “safety”.  It covers items such as trips, falls and vehicle collisions. Occupational safety incidents generally do not involve more than one or two people, and the consequences are generally not too serious (as shown above with the example of falling down the stairs).

The process industries have made enormous progress in occupational safety over the last twenty years or so, both onshore and offshore. Incident rates have fallen by factors as great as ten in that time period. The DWH event did not directly involve occupational safety issues — although there is a concern that a company that has a good record in this area may not recognize deficiencies in the other two types of safety.

Fukushima Daiichi

Technical Safety addresses design issues. Just as the best way to reduce domestic energy costs is to build a well-insulated home, so the best way to ensure that events such as DWH do not occur is to design the rigs and platforms to be inherently safe and to ensure that any events that do occur are properly controlled without harm to people or the environment. (It would appear that the Fukushima-Daiichi incident was primarily a matter of technical safety: a protective seawall designed for waves of under six meters did not stop a tsunami roughly twice that height.)

Process Safety is concerned with the management of the equipment and the persons operating that equipment. It is the area of safety that received the most attention following the DWH/Macondo event.

Companies working offshore generally base their Safety Management Systems on the American Petroleum Institute’s Recommended Practice 75, introduced in the early 1990s. RP 75 states, “The objective of this recommended practice is to form the basis for a Safety and Environmental Management Program (SEMP)”. Many of the larger oil companies have their own SMS, but they tend to be similar to RP 75’s SEMP — they are like dialects of the same language.

At the heart of a SEMP, and of most other SMS, lie the following twelve management and technical elements.

  1. Safety and Environmental Information
  2. Hazards Analysis
  3. Operating Procedures
  4. Training
  5. Pre-Startup Review
  6. Assurance of Quality and Mechanical Integrity of Equipment
  7. Safe Work Practices
  8. Management of Change
  9. Investigation of Incidents
  10. Emergency Response and Control
  11. Audit of Safety and Environmental Management Program Elements
  12. Records and Documentation

In a blog post such as this there is not enough space to discuss these twelve elements; whole books have been written about them (including two by this author). But the importance of each should be self-evident, and even at first glance their relationship to DWH can be seen. For example, the failure of the BOP most likely involved Element 6: Assurance of Quality and Mechanical Integrity of Equipment.

Not only is each of these elements important in its own right, but they are also part of an integrated system. To take a simple example, it is necessary to provide the technicians with Operating Procedures (Element 3), but just having procedures is not enough; the technicians have to be Trained (Element 4) in the use of those procedures. Procedures and training are two sides of the same coin.


RP 75 and its associated SEMP had been in use for nearly two decades at the time of the DWH explosion and fire. However, RP 75 is a recommended practice: companies were not required by law to implement its requirements (although certain sections of the standard had been incorporated into regulations).

Offshore safety on the Outer Continental Shelf (basically federally controlled waters) had, prior to DWH, been under the jurisdiction of the Minerals Management Service (MMS). For some years this agency had been developing a SEMS (Safety and Environmental Management System) based on SEMP. However, at the time of the DWH event they had not finalized a rule. This approach changed in a hurry in the second and third quarters of 2010. The following events took place in this time period:

  • The MMS renamed itself. Having gone through various iterations, the agency that now has authority over offshore safety is the Bureau of Safety and Environmental Enforcement (BSEE, generally pronounced "Bessie").
  • They quickly issued a rule making SEMP a legal requirement, with an implementation date of November 15, 2011.
  • They drafted a new rule that is informally known as SEMS II. This proposed rule, which is still under review, adds many new features to the old SEMP/SEMS.
  • They have stated that they intend to beef up their audit capabilities and enforcement actions.

With these changes the BSEE can claim to have responded vigorously and thoroughly to the DWH / Macondo incident. By moving from SEMP to SEMS and then adding SEMS II they now have a regulatory standard that addresses the world of offshore safety, particularly deepwater drilling and production.

Regulators and Risk-Based Standards

Bureau of Safety and Environmental Enforcement

It has already been pointed out that the offshore oil and gas industry is very high tech, and that moves to ultra-deepwater operations push technical boundaries even further. These changes present the regulatory agencies with serious challenges, including the following:

  • How does the agency keep up with new technology, then write rules to cover the changed situation? By the time they have figured out how to regulate one level of technology, industry has already moved on. The agency is in a perpetual catch-up mode.
  • How does an agency write rules for, and then audit, abstract management elements such as Management of Change? With the older prescriptive standards this was not a problem. For example, a pressure vessel had to have two independent pressure control devices. Such a requirement is fairly easy to write and then to audit. Modern management systems are much more difficult to regulate.
  • Regulatory agencies often face a manpower problem — they have trouble recruiting highly qualified people in such a competitive industry as offshore oil and gas, not least because their pay scales tend to be quite a bit lower than their industry counterparts (this also appears to be a problem with those charged with regulating the financial industry).

In response to these difficulties regulatory agencies throughout the world have developed a risk-based approach to managing safety. Such an approach works as follows:

  1. The company operating the offshore facility develops a program for managing safety and environmental performance at that facility.
  2. Management presents the program to the regulators for acceptance (which is why it is referred to as a Safety Case in Europe and other parts of the world).
  3. If the program is accepted the company implements the program.
  4. The regulator audits the facility against that specific program.
  5. Success is measured not by conformance to prescriptive standards, but by achieving a high level of safety and environmental performance. The only measure of success is success.

Some of the more skeptical readers of The Oil Drum may have reservations about this approach (comments to do with foxes and hen houses come to mind). All that can be said is that this risk-based approach is used successfully in other parts of the world, and that BSEE themselves believe that SEMS + SEMS II moves the United States toward a risk-based approach.


This essay has attempted to provide some background to the manner in which safety is managed offshore, and what changes have been made following the Deepwater Horizon / Macondo incident. Based on what has been written here two conclusions are reached.

The first is that the safety and environmental issues raised by Deepwater Horizon / Macondo are not going to go away. Indeed, as a consequence of EROEI (Energy Returned on Energy Invested) pressures, the industry will be forced to move into deeper waters and to drill in more challenging subsea formations. Readers of The Oil Drum may wearily point out that we are just postponing the inevitable decline in the world's overall production of oil and gas. That may well be, but these moves are going to happen, so let's make sure they happen safely.

The second conclusion is about people. The discussion in this essay has inevitably been somewhat dry, legalistic, rational and theoretical. But the issues that it addresses are all too human, as can be seen from the following list.

  • Jason Anderson
  • Aaron Dale Burkeen
  • Donald Clark
  • Stephen Curtis
  • Gordon Jones
  • Roy Wyatt Kemp
  • Karl Dale Kleppinger, Jr.
  • Blair Manuel
  • Dewey Revette
  • Shane Roshto
  • Adam Weise

These are the names of the eleven men who died on that fateful day, April 20, 2010. The challenge that we all face is to make sure that we never need to publish such a list again.
