Friday, September 03, 2010

What is the Value of the Near Miss?

The “near miss” is a terribly misunderstood event. This post looks at the near miss from my own view, which is not only purely practical but intentionally stripped of most of the obscuring terminology used in “safety systems.” I abhor it when people disguise desired outcomes as systems to escape the scrutiny of logic.

In my universe a near miss is probably better called a “no-loss event,” but the terms are interchangeable. The near miss has enormous value as a tool to recognise control failures and to modify controls against a given risk, and it is a practical opportunity rather than a threat.

An example of a near miss is easy enough to conjure from the real world. Calgary (Alberta, Canada) is a windy city, and high-rise buildings are bombarded by gusts with a wide range of damage potential. Just this year, in one case, a falling piece of unsecured building material killed a child. Other incidents of falling objects abound, some from that same worksite. Without focusing on any of those events specifically, let us consider just the idea of a falling object: identify a risk, cite a control or two, and then explore the value of the near miss opportunity.

The risk, of course, we can just generically phrase as the “risk of falling objects.” This generic risk covers everything from wrenches to hunks of plywood. One applicable control we will call “secure all objects,” and another we will call “ensure safety netting below the immediate work site.” Again, we have chosen generic and simple controls, since our point is illustration.

Outcomes can range in severity, so let us generalise them on a spectrum (see the previous blog post and its link to understand where these spectrum points come from):

  • Fatality or Catastrophic Loss;
  • Major Injury or Major Property Loss;
  • Minor Injury or Minor Property Loss; and
  • Near Miss or Non-Loss Event.

Let’s take a moment to shorten the terms just for ease of reference: Fatality; Major Loss; Minor Loss; and Near Miss.

Now, let us unwind a scenario to describe each, while recognising that though these are generic and imagined each has actually occurred in Calgary in 2010:

  • Fatality: A tool falls off a high-rise site, striking a child and killing them.
  • Major Loss: A tool falls off a high-rise site and strikes a vehicle, damaging it so badly that it cannot be repaired.
  • Minor Loss: A tool falls off a high-rise site, and strikes concrete, destroying the tool.
  • Near Miss: A tool falls off a high-rise site and lands in ploughed, undeveloped ground; because it landed in mud, not even the tool is damaged.

In every case here the “risk of falling objects” was encountered. Regardless of the cause of the encounter, the encounter is fundamentally the same. In every case an object fell, yet the effect of that risk encounter is, in each case, of different severity.

Details are irrelevant to the analysis of the controls that failed (in our example, at least). Either or both of our cited controls failed to some degree. For argument’s sake let us say that there was a failure to secure all objects, and that while netting was in place it did not contain the falling object. Via our investigation of the incidents, we determine for our purposes here that:

  • None of the four tools was secured with a safety line, and all were the same heavy riveting devices, just to keep the example simple.
  • None of the netting in place was sufficiently strong, and it tore through under the weight of the falling tool.

Now, while there is some edge of surrealism to the example, because we are pretending the details were similar or identical, the illustrative value is clear: it is possible to have identical causes, identical control failures, and vastly different outcomes. The only variation across our scenarios is that in one case a human life was lost, in another an expensive asset, in the third the tool was destroyed, and in the final one nothing was lost. (Note that we are pretending no time was lost; in a near miss circumstance time is always lost, meaning near misses still cost us, which is why “non-loss event” is not strictly an accurate name. But the pretence stands, to highlight the basic fact of variant outcomes.)
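
To make the variant-outcome point concrete, here is a minimal sketch in Python (the names and structure are mine, purely illustrative, and not any system of ours): the same risk, the same control failures, and only the landing point deciding where on the spectrum the outcome falls.

```python
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    """The outcome spectrum from this post, worst to best."""
    FATALITY = "Fatality or Catastrophic Loss"
    MAJOR_LOSS = "Major Injury or Major Property Loss"
    MINOR_LOSS = "Minor Injury or Minor Property Loss"
    NEAR_MISS = "Near Miss or Non-Loss Event"


@dataclass
class RiskEncounter:
    """One encounter with the generic 'risk of falling objects'."""
    risk: str
    failed_controls: tuple   # the controls that failed in this encounter
    landing_point: str       # the only detail that varies between scenarios
    outcome: Outcome


# Identical risk, identical control failures -- four very different outcomes.
FAILED = ("secure all objects", "ensure safety netting below the immediate work site")

encounters = [
    RiskEncounter("falling objects", FAILED, "sidewalk where a child stood", Outcome.FATALITY),
    RiskEncounter("falling objects", FAILED, "parked vehicle", Outcome.MAJOR_LOSS),
    RiskEncounter("falling objects", FAILED, "bare concrete", Outcome.MINOR_LOSS),
    RiskEncounter("falling objects", FAILED, "mud on undeveloped ground", Outcome.NEAR_MISS),
]

for e in encounters:
    print(f"{e.outcome.value:<40} <- the tool landed on: {e.landing_point}")
```

The sketch only restates the post’s point: the control-failure analysis is identical for the near miss and the fatality; the severity is an accident of geography.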

The value of a near miss becomes obvious if you pretend the order of occurrence of these events is reversed from our list above: the near miss happened before the other outcomes were seen. And therein lies the value of the near miss as an opportunity, because if you recognise it for what it is and analyse it properly, you become aware that only the specific landing point stood between this near miss and a fatal outcome. Extrapolating to that possibility, you can then focus on the nature of the control failures and harden those controls.

In our example, pretending the near miss came first, and pretending the safety professional isn’t just churning paper, there are two clear recommendations: secure the heavy tool with a separate tether, and double the strength of the netting (or add secondary netting, etc.). Recognising this near miss as an opportunity to harden controls is the difference between next week’s repeat having a more severe outcome or a lesser one. In no way does the control improvement guarantee nothing can go wrong, but in every way it shows due diligence, productive focus, and risk management.

Near misses save lives, ultimately, by improving practical control measures.

The problem with this is that most people know it, but no one uses the opportunity to generate a competitive advantage. It is apparently easier to wait until someone dies than to try to prevent the problem from occurring. And the usual excuse? Because accidents happen. Just like the digging company that repeatedly relies upon line identifiers who miss critical lines, people excuse control failures until the outcome severity forces recognition of the management failure.

One of the key obstacles to getting a near miss recognised as an opportunity is that status quo “safety systems” rely upon statistical outputs and counts that can be negatively impacted by recognising too many opportunities. This negligence is an affront to the idea of operational risk management. Not respecting the near miss as the opportunity to harden controls is the quickest way to ensure that when the luck runs out, the cost is catastrophic.

The Risk Encounter Outcome Model

Today’s post is short, and to the point. In examining ways to visualise risk as a component of operational processes, I’ve formed a model that allows you to visually represent severity of outcome, relative risk exposure, and control coverage. While not for the faint of heart, the paper is posted online at our website here. I would précis it on the blog, but it’s one of those papers best just read in a quiet hour and mulled upon, and a précis won’t give much in the way of insight beyond the paper.

Tuesday, June 22, 2010

The “We’re Not As Bad As…” Syndrome

While chatting with my de facto boss today, we were discussing an often-heard phrase that flies from the mouths of otherwise intelligent people, usually when they are preparing to explain why they don’t need to do risk management, or why their “safety programme” sufficiently protects them. To paraphrase, the rush of words comes out something like, “We’re not as bad as…”, followed by a list of companies much worse than their own. The rationale is, evidently, that as long as you can name someone who is worse than you, you are in no need of cyclic improvement of any kind. And thanks to BP, short of killing half a million people directly, I suppose deniers can enjoy complete indifference about their long-term risk management prospects.

What stuck in my head after our chat was that this thought process is so incredibly common in life, where an astounding number of people are willing to maintain a crumbling state of being just because someone is observably worse off than they are. It strikes me that as the world regresses, I will eventually hear someone who is smoking a cigar say, “Well, I only have tongue cancer, but that’s okay because my Uncle Bill is coughing up his entire lung.” Or, perhaps more depressing, I really do expect some day to hear a pretender to the title “safety professional” actually say something like, “Well, we only killed one person last year; our biggest competitor killed three!” (After 10+ years doing this work I have, actually, read several remarks that come frighteningly close to that, but I have never spoken directly with anyone who has had the gall to say it aloud.)

This idea that maintaining a static state is enviable is problematic from several perspectives, but ignoring all the perceptive aspects of the problem it represents, applying basic logic exposes it for the lie it is. Why? Simply because business is about progressive revenue enhancement, which requires growth, and growth is dynamic by nature. Hence, any business that is truly static cannot grow, and so operationally the idea of not managing change is an impossible one to attain – though, the world knows, folks will try. To grow a business you must be operationally flexible, and to be operationally flexible requires dynamic change management – which eschews the idea of a static state. Anyone who promotes the status quo in any operational domain, then, really represents a pure liability.

Where the thoughts led me today was really more about how helpless people seem to be, and how irrational the acceptance of unnecessary loss is. You see it in politics, where compromise has replaced actual leadership, and in everyday life, where we negotiate the least evil available rather than strive for better. In all these avenues of life, where risk management is a genuine reality, what we are really seeing is the price of an unhealthy misunderstanding about what risk is, and how one must manage it to achieve value. The simplest explanation for the frozen state of thinking is fear, but experience suggests that perhaps the real cause is not fear so much as laziness. We seem, as a collective, too lazy to challenge ourselves.

This hits home right now for us at Pragmatic Solutions Ltd, because we are in an interesting position in terms of growth. We recognise that we need new expertise and new resources in the business, but we have found it almost impossible to open that conversation effectively with partners who could actually enhance the business commercially. There are plenty of talking heads, many of them promoting themselves as experts in one domain or another, but the more they talk the more they tend to expose the depth of ignorance in their thinking. That kind of statement can seem bitter, but it is logic rather than bitterness that implies the truth in it: what we have found is that many of these promoters have a very static, patterned approach to their thinking that is not apt to recognise anything of value outside some narrow range. Specifically, in the contacts we have made to try to jump-start the pursuit of a valid partner to take forward the ideas we espouse, what we run into is a lack of imagination. If the business prospect isn’t comparable to some pre-existing one, it is dismissed intellectually, and we get suggestions to reshape the product concept to be more like some existing product. That would be a suitable response if the product were like another, or immature, but it becomes a complete barrier to communication given that our ideas are intentionally fresh. Exactly why would we mutate a new idea into the old form that has already failed miserably, just to commercialise it? If that were our intent, we would never have spent the enormous effort to develop, prove, mature and integrate the new thinking patterns. How do you communicate the power of an idea to people who are intent on erasing the value it represents, reducing it to a metaphor for past failures?

These people are, of course, really representative of the same attitude this blog observed earlier; though rather than “we’re not as bad as…” they are claiming some dynamic authority through an adherence to status quo thinking: something like, “we are better than the following thinkers, because we project it to be so.” Yet underneath, the same lack of imagination and the same odd adherence to old patterns of business approach and thinking are present. It is no wonder that innovation is so nearly dead, given that the people who should be innovating are so busy trying to repackage newness in some form that buries the newness. This is, of course, risk avoidance by way of dismissal; and it is a distressing thing for a risk manager to see that the judgement of value has fallen into a tired cycle, rather than a cycle of improvement and advancement.

Research and Development is what creates progress, and yet, commercially speaking, almost no entities exist where there is a true appreciation for the pursuit of new value. This, like the attitude that maintaining sameness will change outcomes, is a very disturbing thought process.

Sunday, May 30, 2010

Deploying a Pragmatic Solution

Developing a pragmatic solution to any problem takes time, and deploying one is challenging. The business challenges of being at the forefront of a market are often overwhelming, but for more than fourteen years Pragmatic Solutions Ltd. has been travelling a path to a destination. That destination has changed over time, and the path has changed its course over time, because just as our research into risk-management showed us the requirement for functional dexterity in that domain, we long ago realised the same flexibility is a necessity for being the harbingers of a new paradigm.

Traditional Safety has spent decades wallowing in variants of the same approach to new problems, always achieving mediocre results through luck and generating spectacular failures by way of inevitability. You cannot treat symptoms to cure an illness, unless you consider death a cure. Traditional safety will never create safer workplaces, because it has never grasped the problems, and spends more time playing with paper than solving real problems.

While it took us ages to create what we have as a solution-set for generating safer outcomes, we are heartened that it often takes an intelligent observer only a short exposure to the fruits of our efforts to have their “Eureka!” moment. It is frequently painful to see the realisation that there is a better way, a dynamic process that is not indifferent to the well-being of people, and to see it combined with the realisation that to make the change requires a degree of commitment and capability that seems impossible. The number of times smart people have told us, outright, that their organisation doesn’t appear to have the chops to play the tune we wrote is shocking.

What we have to deploy is not another safety pig with a different shade of lipstick, and there is no magical transition from pig to princess promised. What we developed is a systematic approach to changing how operational risk-management is done: to show how it should be done so that it integrates the scope of operations entirely, and to ensure that the outcomes sought are measured by something that can be quantified. We rejected the same-old-same-old because we knew that industry, and business at large, needed a way to produce outcomes that weren’t luck-based. Repeatable success, cyclically improved, with an ever-increasing productive knowledge base was the goal – and the tools and methods prove it possible.

We realised that safety is the outcome of imparting a culture of operational risk-management; and that rather than being something you can pay lip service to, it is an idea that must be lived. There is something honourable in forging ahead on new ground, even with the incredible challenges presented by the inertia in the domain of safety.

And here on the cusp of leaving our research and development phase behind, following the governing principles of risk-management, we recognise that there also comes a time when the core of a thing – our company – needs to reach out and surrender the lead to someone who can commercialise the opportunity. Who that is remains to be seen, but the search is on, because at the end of it all we still know the product-set is right, the problems we solve are real, and the value-proposition is undeniable.

Saturday, May 29, 2010

The Acceptance of Failure

One of the most discouraging realities of a decade and more of research is that it tends to confirm executive management has accepted failure in the safety realm, and seeks not to resolve problems, but to mask them since they seem intractable. The traditional safety purveyors have made it worse by promising change, then resorting to “tried and true” methods that have never delivered positive change, reinforcing the view that safety is a black hole for productivity, investment, and time. It is no wonder that management rejects safety as a real practical priority when their direct experience has been so negative, and it is no surprise that the idea of a shift in thinking meets resistance, since for years the same-old-same-old has been represented as just that – a shift in thinking.

The problem with accepting failure where safety is concerned is that without change, the cost of doing business increases over time until its weight collapses the reason for doing business. It would be far better, and safer, to embrace a real shift in thinking, but the barriers to that are real, and the challenges when one does shift to the new opportunity are real.

Risk-management is not a standalone solution, a tool that just drops in and solves all the problems, because it depends on so many other systems functioning. It depends on being fed good data in a timely way, and upon the range of expertise in an organisation acting to increase its value. Even for companies that can access their own data effectively (though we have yet to encounter one), accessing employee expertise becomes an almost insurmountable barrier to adoption. Traditional safety practitioners are highly resistant to being exposed by the evidence the system will generate, managers are hesitant to trust another system when so many have betrayed their interests, and workers are suspicious after so many years of being fed lies about something they understand – safety in the workplace.

To reject failure means recognising that the claimed problems are often symptoms of the actual problems, and that treating them means treating the problems that lie beneath those symptoms. It means understanding that operational risk-management requires cultural dexterity, commitment, and a desire to continuously develop new opportunities for productivity. There is no status quo condition for risk-management, and that dynamic of continuous change and continuous cyclic improvement can be daunting to many.

Ultimately, though, one has to wonder whether the failure you know is actually safer than the pursuit of excellence. The answer to that question may be irrelevant, given the evidence of actions, which speak louder by far than any words. And yet, with the trend toward prosecution following the failure to act reasonably, it is clear that something must change, and that at some point smart executives will demand more than raw counts on which to base their perception of workplace safety. The question is not whether this happens, since it inevitably will, but exactly how long it will take before those who cling to antiquated ideas about safety are pushed into the light. How many more bodies need to be piled at the gates of industry before they recognise that far too many of those bodies could have gone home, if only the risks that took them had been managed?

Thursday, May 27, 2010

Closing the Loop

The risk-management method is about loop closure as much as it is about cyclic improvement. One of our more peculiar discoveries over the years has been that the most common single point of failure in all systems, inside the safety domain and out, is best described as a “loop closure failure.” This means, in plain terms, that almost all of these systems fail not through bad intentions, or because of ignorance, but because of a lack of stamina. Closure, in the context of systems, is not a fantastical term but a specific one: any system intended to provide itself a feedback loop must close that loop, or it will fail.

In traditional safety this is visible everywhere you look, but probably most notably in the fact that the same cause-and-effect conditions produce the vast majority of accidents. It shows that practitioners of traditional safety are not learning from their aggregate data, because they have no objective mechanisms to educate them, and because they have no ability to aggregate across entirely disparate data sources. They cannot, for example, warn that a site is at greater risk because the people on that site have training deficits, because they often do not know, until after the fact, what the site’s purpose is, who is there, or what they are doing. This reactive stance is a choice, one that stems from allowing chaos instead of governance, and from failing to manage. Yet this is no surprise, since traditional safety is concerned with raw counts rather than analysis. That some people can actually provide safety via the traditional model is a shocking testament to individual insight.

When risk-management is embraced, the closure of its feedback loop is where the maximum value is generated, because it can educate, inform, and provide the grist for the decision-making mill that renders better decisions. The challenge is that so few people have been trained to close the loop for any system, to actually follow through, that the risk-management model can face a distinct and immediate challenge that has nothing to do with its features or scope, and everything to do with how unprepared people seem to be to manage.
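
As a rough illustration only (the stage names below are hypothetical, not a prescribed model), closing the loop means that every finding entering the cycle must eventually come all the way back around as a verified, fed-back change; anything that stalls part-way is an open loop:

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle stages for a single finding travelling around the loop.
STAGES = ["identified", "analysed", "directive issued",
          "action taken", "verified", "fed back into planning"]


@dataclass
class Finding:
    description: str
    completed_stages: list = field(default_factory=list)

    def loop_closed(self) -> bool:
        # The loop is closed only when every stage has been completed, in order,
        # ending with the result being fed back into the next planning cycle.
        return self.completed_stages == STAGES


findings = [
    Finding("netting tore under a falling tool", STAGES[:3]),  # stalled after the directive
    Finding("heavy tools left untethered", list(STAGES)),      # loop actually closed
]

for f in findings:
    if f.loop_closed():
        status = "closed"
    else:
        status = f"open (stalled after '{f.completed_stages[-1]}')"
    print(f"{f.description}: loop {status}")
```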

Management is almost a lost art, because it has been packaged, and those packages ignore the fact that solid management is an analytical process that helps make decisions. Management is not housekeeping, though housekeeping requires management; nor is management a decision, though it generates decisions. Management is about perceiving opportunity based upon inputs, about steering resources to achieve outputs, and about ensuring that the decisions create more output than the required input.

Part of the real problem with closing the loop for managers today is bad information, which is often driven by a reliance on statistical calculations and other formulae. The often-repeated idea that business management graduates are awful managers exists precisely because too many of them end up in a management position where the only tools they know are those formulaic approaches. They apply the formulae correctly, without recognising that the inputs are skewed by bad communication, misconceptions, and so on. The old rule that garbage in equals garbage out is true, and management by statistic ensures failure.

To succeed requires knowing the inputs, knowing the path they take, and knowing that when the outputs appear they must be returned to the cycle. Closing the loop for risk-management returns value beyond the investment many times over.

Wednesday, May 26, 2010

Directives Matter

If metrics that lie kill people, and proof of due diligence is considered a requirement of modern business, directives are what save lives and prove diligence. And yet, our research taught us that perhaps traditional safety exists in such an operational vacuum as to not require even this basic communication process to function.

In traditional safety, directives are what are commonly called “corrective actions.” Corrective actions provide a fix for a problem where there is risk of loss. One would think that, given the possible consequences, including serious harm and personal liability, this element of a safety program would garner serious attention. However, our research demonstrated – and continues to demonstrate – that corrective actions are often not identified at all as part of an accident investigation; and when they are identified, the corrective action is a mere observation or restatement of the problem, not a call to any action, corrective or otherwise.

Because a corrective action is critical to fixing problems, one generally hopes it meets a few minimum standards of communication: it needs to actually describe the expected action; it needs to indicate who is responsible for completing the action; and it needs to indicate a due date for completion. To engage any level of acceptable proof of due diligence, it also needs to be confirmed as complete at some point, preferably in the same decade it is initiated. Also handy would be to understand the context for the corrective action, which is basically an answer to the question of what triggered the action.
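
Expressed as data – purely as an illustrative sketch with hypothetical field names, not any actual system of record – those minimum standards amount to something like this:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class Directive:
    """One corrective action, reduced to the minimum communication fields."""
    trigger: str                           # context: what prompted the directive
    action: str                            # what is actually expected to be done
    accountable_person: Optional[str]      # who must complete it
    due_date: Optional[date]               # when it must be completed by
    completed_on: Optional[date] = None    # when it was reported complete
    confirmed_on: Optional[date] = None    # when completion was verified

    def communicates(self) -> bool:
        # A directive only communicates if it names an action, an owner and a deadline.
        return bool(self.action.strip()) and self.accountable_person is not None \
            and self.due_date is not None

    def loop_closed(self) -> bool:
        # Proof of due diligence requires the action to be completed and then confirmed.
        return self.completed_on is not None and self.confirmed_on is not None
```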

Sampling real data provides some unfortunate insight about how corrective actions are viewed:

  • Total Directives: 3,868 (100%)
  • Action Indicated: 2,216 (57.3%)
  • Accountable Person Indicated: 2,452 (63.4%)
  • Due Date Indicated: 1,837 (47.5%)
  • Completed: 1,377 (35.6%)
  • Confirmed: 1,048 (27.1%)

Based upon those numbers, drawn from actual client sources, it is clear how relevant traditional safety believes directives to be. When only 57.3% of all “actions” contain an action, or any text at all to describe their purpose; when merely 63.4% have been assigned to an accountable person; and when less than half (47.5%) have a due date indicated, one has to question the purpose of the effort. Even where the effort is made, when only 35.6% are ever indicated as completed, and a mere 27.1% are ever confirmed, where is the proof of due diligence?
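
For the curious, the arithmetic behind those percentages is nothing more than each count divided by the 3,868 total; a quick check of the table’s figures:

```python
total = 3868
counts = {
    "Action Indicated": 2216,
    "Accountable Person Indicated": 2452,
    "Due Date Indicated": 1837,
    "Completed": 1377,
    "Confirmed": 1048,
}

for label, n in counts.items():
    # e.g. 2216 / 3868 -> 57.3%
    print(f"{label:<30} {n:>5,}  {100 * n / total:5.1f}%")
```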

These numbers prove that corrective actions are more paperwork exercises than real actions, and that follow-through is evidently considered unnecessary. They also indicate how completely the communication loop is failing, with no accountability at all.

Whether you call them corrective actions or the more appropriate name of directives, it is fairly certain that an executive who saw numbers this abysmal would have a few questions about why they are even created, and a few serious concerns about whether detected problems are ever corrected at all.