Sunday, May 23, 2010

Implementing Risk-Management

There are many reasons to implement risk-management, but only one sure way to implement it well. Three keys unlock the potential of risk-management in any enterprise, regardless of its scale or the nature of its business, and all three are required to secure the cultural shift that sustains the benefits of such a paradigm change.

Unlike traditional safety, which dictates to workers, risk-management is far more participatory. It requires employee engagement at a level that frightens many, despite being proven to work and to be more cost-effective. This engagement begins at the most basic level: treating employees as risk resources, respecting that they know many of the risks they face, and understanding that communicating their value will lead them to seek additional knowledge, which in turn creates better risk profiles and better control mechanisms. The employee who is engaged in this model invariably responds in the immediate term, with largely positive inputs, and that response is maintained as long as the relationship and the communication are maintained. The biggest risk here is not time-loss, as many fear, or cost increases, but that employee inputs will not be reflected in the actionable choices communicated as the transition occurs. Workers will rapidly divest interest if their input is seen as irrelevant and the process of adoption is unresponsive to their concerns. Contrary to management fears that this distributes too much control, the reality is that doing this well, engaging and maintaining engagement, requires stronger management control; and that control is more readily accepted by workers because it is not dictatorial, but representative of their collective interest in going home at the end of every day. The higher the risk-profile of the employee, the stronger their engagement will be, since their awareness is enhanced by their perception of personal risk.

The engagement of employees is a culture-changing engagement, and by its nature it leads to participatory management. Terminology aside, what we really gain is a clearer separation between practical management activities and productive management activities. As risk-management takes hold, employees will form risk-awareness teams, or control development teams, providing the raw material that managers can shape into productive policies, safe practices, and whatever other tools assist in communicating effectively. The manager's role then becomes more focused on enhancing productive returns than on managing immediate concerns. This participatory model flows upward, creating a stronger hierarchy of control at the management level, easier communication, and a higher rate of acceptance of management decisions flowing downward. It will never ease dissatisfaction with decisions that have a negative impact, but it does dramatically reshape the way decisions are viewed when, for example, the inputs that informed a decision were largely generated by the successive layers of operational employees. If a team of ten welders defines a need for a new control to prevent injuries, and that control is shaped and approved by them, there is a higher likelihood of immediate acceptance. More importantly, because of how those decisions are made, and because the cultural shift has embedded the idea that change is acceptable and even desired, the imposition of controls does not seem arbitrary: they can and will be changed to reflect the balanced needs of the workers who must implement them. Maintaining the participatory management process requires a cultural shift toward significant improvements in communication, with less focus on autocratic deployment of decisions and more on dynamic communication. The stronger the management skill set, the more effective the productive returns, and the more freeing this aspect of risk-management will be. If management sees the change in culture as positive, it will be maintained; if fiefdoms and competitive management are the norm, the risk-management model will fail because of competing and divergent interests.

Perhaps the most vital of the keys, since it makes the decision-model function effectively, relates to the communication of metrics. Traditional models fail ludicrously because, by their nature, they provide faulty performance metrics, making it practically impossible to manage “safety.” Risk-management eschews reliance on raw counts in favour of extending analysis via linkages. If the metrics are communicated effectively, the decisions that come from analysing them will be rational, will hold over the term of their value, and will be easily conveyed downward from the executive level. When managers understand the directives, and workers understand them, the enterprise functions with a focused effort; when communication falters and dictums replace decisions, the resistance to implementation renders the risk-management process defective. The executive branch benefits from risk-management less because of the model itself than because of its associated mechanisms for communication. Good metrics flowing upward, with the ability to ask questions that have achievable answers, provide a better foundation for making complex decisions. This also demonstrates a level of diligence that is unarguable in worst-case scenarios, because it is possible to show that, based upon the best available information, the best possible decisions were made, taking into account the agreement of all operational levels of the employment pool. In other words, decisions were neither arbitrary nor based on known faulty information. Decisions then become decisions of the company for its benefit, rather than decisions made by the executive without context.

Implementation of a risk-management system involves aligning organisational behaviours with operational objectives. This model creates and sustains positive change because it involves a comprehensive communication model; in fact, it embeds directive management mechanisms in operations. It imparts a closed-loop operational system as a side-effect, as certainly as it provides safety as a side-effect. But it all falters if employees disengage, if mid-level management fails the communication requirement, or if the metrics that guide executive decisions are communicated poorly.

Friday, May 21, 2010

Risk-Management is Cyclic Improvement in Action

Part of the mystery of why traditional safety fails so miserably to generate improvements is less a real mystery than a basic misunderstanding about how to achieve an outcome. It has been observed that safety is an outcome, not a process, and it is obvious that to get to a destination you must travel some path. Taking that analogy into the safety domain, you could say the problem of traditional safety is that it consistently takes the same path and hopes to find itself at a different destination. Risk-management differs in that it not only takes the best path to a destination, but at its core recognises that over time the destination might change, and so too might the best path to any given destination. This idea of cyclic improvement over time, usually just referred to as cyclic improvement, is at the heart of the value proposition of risk-management.

Rejecting traditional models and focusing on risk-management is the only sensible approach to this topic, but one last meaningful lesson can be taken from traditional methods. That lesson is that a pig remains a pig whether you put lipstick on it or not, a fact confirmed by traditional safety at every turn. Semantics do not define value, they only ever describe it, and no amount of twisted terminology can produce value that is not there.

Risk-management is about focusing on how risk interacts in an operational system, whether the system be a workplace, a society, or any other definable system.

The risk-focused view of the model steps beyond the traditional definition of hazard (though it retains the scope to address all traditional hazards). We still have a trio of risk groups: Physical (Unsafe Condition); Worker Behaviour (Unsafe Act); and Organisational Behaviour (Unsafe Behaviour). We still identify and qualify the risks, and we still provide a critical scoring system to rank the risks comparatively. Where the model differs significantly is that it breaks from the hierarchical model of most hazard-centric systems and creates controls as separate profiles that can apply to and address any number of risks, reducing the management burden of controls significantly and installing the concept of the many-to-many relationship into the model.
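To make that separation concrete, a control can be modelled as its own profile holding links to any number of risks, rather than as a child of a single hazard. Here is a minimal sketch in Python; the class names and the 1-to-5 scales are assumptions for illustration, not the actual schema of any product.

```python
from dataclasses import dataclass, field
from typing import List

# The three risk groups named above.
RISK_GROUPS = ("Physical", "Worker Behaviour", "Organisational Behaviour")

@dataclass
class Risk:
    name: str
    group: str        # one of RISK_GROUPS
    likelihood: int   # shared 1-5 scale, 5 = constantly encountered
    impact: int       # shared 1-5 scale, 5 = fatal or catastrophic

    def score(self) -> int:
        # Critical score on a shared scale, so all risks rank comparatively.
        return self.likelihood * self.impact

@dataclass
class Control:
    name: str
    risks: List[Risk] = field(default_factory=list)  # many risks per control

# One control profile addressing risks from two different groups.
guard = Control("machine guard", [
    Risk("fragment strikes operator", "Physical", 3, 5),
    Risk("operator bypasses guard", "Worker Behaviour", 2, 5),
])
```

Because each control carries its own list of linked risks, retiring or enhancing a control touches one profile instead of every hazard record that mentions it.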

In “Covenants of the Rose” (2004), Larry L. Hansen wrote, “Accidents are patterned and predictable performance symptoms, the final visible evidences of systemic failings and organizational deficiencies.” This is a core recognition of the risk-centric model, which embraces the idea by defining an array of potential linkages (relationships) in which a risk can participate across an integrated system of profiles, activities, and reactive events. These linkages are the foundation of the analysis potential presented, and this analysis is what leads to avenues for cyclic improvement.

The idea behind cyclic improvement in risk-management is twofold: it recognises that imperfections exist in every iteration of any risk-management process and can be improved upon; and it recognises that changing contexts for encounters with risks will demand modification of controls over time, regardless of their current efficacy. This boils down to the basic approach of risk-management: manage based upon present knowledge, in the most efficient way, without entrenching the process. Or, harking back to our analogy, risk-management is about travelling a path that can change both to accommodate a new destination and to accommodate the discovery of new mechanisms that make the path more efficient.

One of the lynchpin elements of the risk-management system is its linkage model, which is the heart of its performance benefits. Where most business groups operate in information silos that tend to be highly partitioned, operational risk-management requires a high degree of accessible integration to deliver the benefits of linkage analysis.

An example of how powerful this model is can be stated in a straightforward way. In a risk-management model, it is possible (assuming all data exists and is of reasonable quality) to analyse the risks an employee faces daily and measure the frequency of exposure, producing a matrix that shows the stress status of controls (which controls are relied upon most to prevent risk encounters and harm). Knowing the control spread, and which controls are most critical to preventing the most dire outcomes, one can generate specific inspection routines and focus meetings and planning documents on the risks encompassed. Testing employee awareness comes down to generating a document showing the control requirements and the risks, and comparing that to knowledge gained by way of training controls, meetings attended, inspections done, and so on. When that employee changes to a new occupation, linkages that run through both occupation job-task definitions and training controls make it possible to focus additional training on only the risks not previously known, reducing budgetary costs while ensuring focused awareness. More interestingly, because risk-management links to all integrated modules of the broad data-set, you can even generate a best fit for open occupations, identifying existing resources that might require little or no additional training and so might be underutilised, improving productive capacity by maximising the return on investment.
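Here is a sketch of two of the analyses just described: a stress matrix showing which controls are leaned upon most, and a training gap derived from linkages when an employee changes occupation. The data shapes and names are assumptions for illustration only.

```python
from collections import Counter

def control_stress(exposures, risk_controls):
    """Weight each control by how often the risks it mitigates are encountered.

    exposures: {risk: encounters per day for one employee}
    risk_controls: {risk: [controls relied upon to prevent harm]}
    """
    stress = Counter()
    for risk, frequency in exposures.items():
        for control in risk_controls.get(risk, []):
            stress[control] += frequency
    return stress

def training_gap(new_role_risks, known_risks):
    """Risks in the new occupation not already covered by prior training."""
    return set(new_role_risks) - set(known_risks)

stress = control_stress(
    {"fragment strike": 20, "noise exposure": 8},
    {"fragment strike": ["machine guard", "face shield"],
     "noise exposure": ["ear protection"]},
)
print(stress.most_common(1))  # the most stress-bearing control
print(training_gap({"confined space", "noise exposure"}, {"noise exposure"}))
```

The same linkage data, run in reverse, gives the best-fit search mentioned above: rank candidates for an open occupation by how small their training gap is.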

Cyclic improvements are not only made to control and risk profiles over time; the linkages also assist in creating a self-evolving system in which the potential for systemic failure is reduced. This occurs because the risk-management model recognises that all operational activities are integrated by the desire to reduce risk exposure, improving productive returns.

A system that will self-improve always beats a static record-keeping system.

Thursday, May 20, 2010

Reporting the Right Events in the Right Way

Reporting the right events, whether they be incidents or activities, is only a small part of getting better data on which to base decisions. Reporting them in the right way is critical to developing a responsive improvement cycle.

It is fairly simple to recognise a three-stage reporting model for “events” that centre around risk encounters. In practical terms, the life-cycle of a risk is the essential mechanic that governs this model. We identify the risk, we record when it was encountered without harm, and we record when it was encountered and harm was incurred.
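Sketched as code, the three stages amount to a classification attached to every recorded event; the names here are illustrative, not a prescribed vocabulary.

```python
from enum import Enum

class RiskEventStage(Enum):
    IDENTIFIED = "risk profiled before any encounter"
    NEAR_MISS = "risk encountered, no harm incurred"
    INCIDENT = "risk encountered, harm incurred"
```

Because every record carries one of these stages, later analysis can span the whole life-cycle rather than only its harmful tail.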

Risk identification occurs at any point in time when a set of circumstances (an unsafe condition, an unsafe act, or an organisational behaviour) creates or has the potential to create a situation in which harm might occur. This process results in profiling the risk, which consists not only of generically describing it, but of using some scale to identify its potential impact. Done well, this is a powerful first step to prevent the manifestation of impacts that harm the company or workers; but identification is not worthwhile unless the process is adhered to consistently, which is what makes risks capable of comparative analysis. In a simple example, the risk of a paper cut is likely to have a fairly low harm factor, whereas the risk of being crushed by a dump truck will present serious harm to whoever encounters it. If your risk profiles are not comparable (ranked on the same scales), there is no way to direct resources by priority based upon critical impact.

Recording risk identifications is about ensuring that we know about a risk factor before it has manifested, preventing any impact on the company or a worker. Whether we detect the risk encounter via an inspection or an event investigation, recording it gives us a broader base for analysis. Recognising the risk encounter builds a record of its occurrence rates and their context, and that allows preventative enhancement. It focuses personnel on the common risks and helps them ask two important questions: why is this risk so commonly encountered, and what will happen if it manifests harmfully? Knowing the answers to those questions means resources are applied to manage threat conditions rather than arbitrarily. Extending the example of paper cuts and dump trucks crushing folks, we might find that paper-cut risks occur with significantly greater frequency. In a purely traditional model, assuming near misses are recorded dutifully, that count would eventually reach some number that triggers resources being poured into developing awareness, while the two near misses with the dump truck are ignored by raw count. But comparing the impact, the potential for harm, it is immediately evident that the first near miss with the dump truck should garner immediate attention, and controls will be enhanced to avoid that risk developing into a full-blown fatal incident. The defensible ranking this provides is part of what makes near-miss recording so valuable.
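The paper-cut and dump-truck comparison can be made concrete. A raw-count trigger waits for volume, while an impact-ranked trigger reacts to the first severe near miss; the numbers below are invented for illustration.

```python
from collections import Counter

# (risk, impact on a shared 1-5 scale): 40 paper cuts, 2 dump-truck near misses
near_misses = [("paper cut", 1)] * 40 + [("crushed by dump truck", 5)] * 2

# Raw-count model: the trivial risk dominates attention.
print(Counter(name for name, _ in near_misses).most_common(1))  # paper cut

# Impact-ranked model: any encounter at or above the threshold escalates now.
urgent = {name for name, impact in near_misses if impact >= 4}
print(urgent)  # {'crushed by dump truck'}, after the very first near miss
```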

High-impact incidents, where harm is incurred, are always reported, and the process is fairly straightforward, with some variation for context within an organisation. The majority of reports are generated in the field and flow upward to safety personnel and beyond. At this stage the classification model comes into play heavily, with a special focus on outcome potential. At a human level, we know that what can kill us is treated with more serious regard than what can make us sneeze a single time. What is vital in this process model is that the process doesn't end where legislation allows it to, because the feedback cycle is where the actual constructive value comes from. The analysis of the controls that failed becomes a foundation element for the analysis of systemic failures, and it is the mechanism whereby control enhancement is triggered. If the process ends when the report is filed, the process fails.

The primary goal of reporting is to record, and the primary goal of risk-management is to analyse the recorded data. Form-filling tools produce paper; risk-management systems produce improvements by way of the improvement cycle. Only cyclic improvement creates positive change, opens cost-suppression opportunities, and drives productive gains.

Wednesday, May 19, 2010

Scope of Discovery

The mercenary of our group has been heard to say, quite often, “One of the problems with idiotic traditional systems is that the scope of discovery is wrong.” Pressed to explain, he is apt to add, “When you analyse something, anything, you can only do an appropriate analysis if your underlying data was discovered on a broad applicable spectrum.” Luckily, translation of the idea is available for mere mortals who speak something like plain English.

The scope of discovery in traditional safety systems is wrong because the statistical metrics that govern “whether you are safe” depend upon avoiding recording conditions that skew them. Consequently, traditional safety counts what benefits it rather than what will harm it, since it is almost exclusively measured by post-event metrics. The problem, what makes this wrong, is that when you make subjective choices about what to record, you create a pool of data that, when analysed, ignores what is often the largest part of the data-set that should be analysed.

A case in point is the classification of incidents. The metrics used to declare a company safe will degrade its rating significantly if near-miss recording exceeds a certain ratio. While ignoring near-miss recording is then almost a matter of commercial survival in some sectors, doing so trades the development of a safer workplace for the perception of safety (by way of statistics). You cannot fix what you have never seen.

This might not matter if the difference between a near miss and a fatality were not often a matter of centimetres or seconds. The intention of recording a near miss in a risk-managed environment is to identify the cause of the incident, basically to identify a control failure. Those failures define how to apply resources to better controls, and without the ability to analyse them, we cannot do that. By not recording them for fear of statistical self-destruction, we have no process to stop these failures creating unsafe conditions that increase risk until an encounter becomes a serious one, perhaps even a fatal one.

Good safety needs to be a side-effect of intentional management, not luck. This requires more data of higher quality to perform better aggregate analysis, and if the scope of discovery is being suppressed, the result is a skewed database. You will be analysing risk based not upon risk reality, but upon the encounters with risk where distinct failures created negative outcomes. There is no way to control preventatively with any effectiveness, since the best you will manage is a reactive control modification. What is required is a scope of discovery that provides a massive aggregate pool that can be used to execute predictive analysis.

Workers routinely identify risks in the workplace, and if you capture those identified risks you can control for them. The control mechanisms may or may not be efficacious, but the only way to determine that is to monitor their effectiveness. Post-incident control failure analysis is important, but if the only incident types you ever analyse are those with negative impacts (injury or fatality), then you are placing faith in the controls rather than assessing them. If you also analyse controls via inspection processes, and include a wide range of near misses and, better still, “risk identification” events in your analysis pool, you are creating a method for objectively making proactive, preventative control improvements.

Scope of discovery is the key to better analysis and the provision of better safety.

In an asset inspection, if you check that a guard is being properly maintained, you are confirming the control. If over a year the inspections indicate the control is not being maintained, you have an opportunity to analyse that data effectively. If that control is failing in 50% of asset inspections, you have a serious risk pocket.

Now, if you have recorded a dozen near misses where that same control failed, you have a pool of data that can be analysed. Yes, there has been no loss of property or personnel, but the reality is that if in a specific organisational location (the welding shop floor, for example) half your near misses indicate the same control failure, you can project with fair accuracy that at some point a near miss will become something more. Maybe the metal fragment the guard is intended to deflect down to a safe pan will hit another machine and damage it, incurring property loss, or maybe that same fragment will cut an employee, blind one, or kill one.

The problem when that incident becomes an injury or fatality is that, unless you have this other data, you will be looking at a failed control in total isolation. The guard was down, the employee was negligent, case closed. Except, if half the inspections done show the guard out of place, and a dozen near misses led up to this event, the context is probably different. Why is that guard such a problem? Is the control ineffective? Could you have implemented a modification that avoided the costly injury or fatality?
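A minimal sketch of the analysis this example calls for, joining inspection results with near-miss reports against the same control; the data and threshold values are invented.

```python
def failure_rate(results):
    """results: booleans, True where an inspection found the control failing."""
    return sum(results) / len(results)

# A year of guard inspections, plus near misses naming the guard as the failure.
guard_inspections = [True, False, True, True, False, True, True, False]
guard_near_misses = 12

if failure_rate(guard_inspections) >= 0.5 and guard_near_misses >= 10:
    print("risk pocket: modify the guard control before a near miss becomes harm")
```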

By limiting the scope of discovery you create a false sense of safety, and you ensure your eventual control enhancements are made in a vacuum. Preventative measures are only possible if you have more data of better quality, and if your analysis crosses the limitation of cost-only incidents. The dozen near misses in our example would have alerted you to the likelihood that this control would eventually be in a failure state; but if the only time you hear about a control is after it has failed, you will never be capable of taking preventative measures.

Risk-management is about embracing the range of your available data to ensure risk-awareness is real, and that objective metrics exist to assist in focusing resources preventatively.

Monday, May 17, 2010

Breathing Deeply

When we created the solution-set as it stands today, we were distinctly aware that it wasn’t going to be easy to commercialise. The problem wasn’t the product, or even the vast market for it, but the educational curve attached to using the systems well.

When you have the scale of an IBM, Microsoft, or Google, you have the resources to infuse the market with educational context, the manpower opportunity to develop the expert knowledge bases to attack multiple avenues of revenue at once, and the momentum to deliver the product concept as part of broader integrated offerings. When you consist of four bodies, you often consider it pure bliss to have enough resources to last to the next quarter and pay your individual bills. Worse than that, you can’t really hire the expertise to identify the best growth avenue, and even if you do, you usually can’t call upon a cash reserve to execute the plan.

As we researched our solution concepts, and developed the tools to deliver them, we often stopped to take some deep breaths. We asked ourselves, regularly, whether the fight was worthwhile. That the answer was consistently that it was worth the fight was surprising, given we have a spectrum in the four bodies that varies from a true altruist (he actually thinks saving people’s lives is worthwhile) to a true mercenary (who thinks life is cheap and it’s all about cash).

The problem with breathing deeply is that sometimes the air stinks.

In 2008, when we took a deep breath, we discovered the investment community had about as much interest in an actual product concept as it had in anything deemed hard. Of course, we saw how that worked out when the markets crumbled, losing billions that were invested in emptiness; and we saw how that played out in 2009, when desperate corporate bailouts were done to prevent the people who caused the problem from suffering and dragging everyone else down with them. Throughout that cycle, we noticed a distinct odour of misdirected fear in the air.

In 2010, of course, there isn't as much air around as there was in the past, apparently. Now, taking a deep breath requires the kind of faith it takes to leap off a cliff because everyone else thought it was a good idea. The lemming effect that led the investment bankers to abortive doom (abortive, since they were largely bailed out by the small guys) seems still to be in effect.

Then again, part of the problem with our product-set is that it is a concrete product-set. It can be explained, and the explanation is scary to people who like double-digit returns on the quarter but can't conceive of an emerging market that really is one. They can't think beyond what exists, making explaining the opportunity to the standard investment group complex. The real killer, of course, is that accelerating adoption requires exposure, education, and massive deployment to support revenues quickly. The crux of the problem is that with most investment groups, education kills the interest, since it has no direct revenue stream according to the standard wisdom.

What is odd is that in 2009 we became distinctly aware that the real champion of our product-set would eventually be in the technology domain. Those are the only companies that seem to have any grasp of emerging markets, and because of their tool elements they have the component parts to engage the core on levels that would drive revenues indirectly until the direct market matured.

Microsoft, for example, is large enough to meld this kind of risk-management tool into their back-end services, expose its interface through SharePoint, and even link it to their web strategies. Google not only has the delivery infrastructure (the product has been a cloud application for longer than the idea of the cloud has even existed), but the broad reach to actually push this to enterprises on volume levels that would reduce the cost of entry to almost nothing.

Being the size of a flea, though, the best idea in the world has no fast uptake opportunity; and that deflects investment about the same way flea powder deflects fleas.

Amusement aside, the real challenge of 2010 is no longer the product-set, which, while dynamic and cyclically improving, is a fixed value point, but how to develop a mechanism to accelerate its introduction, such as functional pre-screening and integrated vendor management. Is it pursuing partners crazy enough to recognise the potential value of this opportunity, seeking a partner or buyer with resources, or reshaping the product and directing the knowledge we have gained over the years into providing something more traditional?

Sunday, May 16, 2010

Data Quality is Critical

One of the revelations of our research was that the quality of data in traditional safety programs is abysmal. When we asked safety personnel for their investigation documents, we would get documentation so incomplete as to make it impossible to identify human involvement at all. In the legacy data we handled, an average of 15% of serious injury incidents had an injured party who could not be identified. In other cases, multiple reports on the same accident would contradict each other on basics such as classification, with no additional documentation available to ascertain which was correct (both were probably incorrect). The incredible data quality deficit was made worse, by an order of magnitude, by the massive amount of static documentation that was often available.

The data quantity should probably have been less surprising, given that traditional safety mostly operates on counts alone, but it was amazing to encounter piles of documents about safety meetings, some with identified hazards listed, and find no associated documentation to indicate whether the assigned corrective actions were ever completed. It was usually impossible even to ascertain who was at a meeting, or who was assigned to ensure the corrective action. Even when follow-up might have been done, the scale of the paper made locating the proof impossible.

The problem with generating paper is that paper is impossible to track effectively. You cannot ensure it is kept, often cannot relate it to anything else, and its fundamental fault is that every page is discrete, connected in no way to the next. Even conscientious feedback gets lost when the paper is never collated, reviewed, classified, or audited.

Traditional safety is less sensitive to bad data than risk-management because it is about counts. Showing fifty inspections is fifty activities counted, and no one ever really asks you to prove they were worthwhile. In risk-management, fifty inspections with no connective substance will expose themselves as irrelevant instantly.

One of the greatest barriers so far to deploying risk-management, for even the best-intentioned companies, has been that when they can get their data (not always as easy as it should be), they can almost never get quality data. The employee list will contain names no one has ever heard of, have people missing, have multiple spellings of the same name, and so on. The occupations list will be a third the size of the employee list, with obvious spelling mistakes, and no real relationship between people and occupations (on average, about 60% of a client's listed occupations have no one apparently employed to do them). This low quality is fatal to any system that relies upon that data to deliver its value propositions.
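The symptoms described (duplicate spellings, phantom names, orphan occupations) are all mechanically detectable. A sketch with assumed data shapes:

```python
import re
from collections import Counter

def quality_report(employees, occupations, assignments):
    """employees: names; occupations: titles; assignments: {employee: title}."""
    def canon(name):
        return re.sub(r"\s+", " ", name).strip().lower()

    # Likely duplicate spellings: names that collide after normalisation.
    duplicates = [n for n, c in Counter(map(canon, employees)).items() if c > 1]

    # Orphan occupations: titles no employee is apparently assigned to.
    assigned = set(assignments.values())
    orphans = [o for o in occupations if o not in assigned]

    # Phantom entries: assignments naming people absent from the employee list.
    phantoms = [e for e in assignments if e not in set(employees)]

    return {"duplicates": duplicates, "orphans": orphans, "phantoms": phantoms}

print(quality_report(
    ["Jane Smith", "jane  smith", "Bob Ray"],
    ["Welder", "Clerk", "Rigger"],
    {"Bob Ray": "Welder", "George Hall": "Clerk"},
))
```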

Of course, in the grander scheme, the low-quality data exposed by trying to transition to risk-management should raise flags. Management quickly becomes disillusioned, if they care at all, when they discover things like Human Resources being unable to produce a sensible list of employees at the click of a button, or that all training records are on paper and George down the hall might have a spreadsheet covering the folks he knows about, which contradicts the one Bob keeps.

One of the saddest experiences we have had is having to say to companies they have no reliable data sources, because as soon as those words are used, they have one of two reactions: they bury their heads in the sand and pretend this isn’t a problem in their normal operations; or, they see it and think they cannot afford to fix the problem.

Data quality is critical for risk-management to function operationally; but it is vital to recognise that it is critical to day-to-day operations too. If your information systems cannot provide reliable basic data about operations, you need to rectify that whether you want to transition to risk-management or not. The end result is that the decisions you make are based on real, quality information rather than obscure guesses.

Saturday, May 15, 2010

Your Company Can’t Do Risk-Management

What our research has shown is that most companies cannot do risk-management, whether by their own design or using our solution-set. In fact, most companies cannot ever achieve the risk-management paradigm shift. This proclamation would be depressing but for the explanation of why it is the case, which exposes that the actual problem isn't the risk-management solution-set but the choices being made. So, here are the top ten reasons your company can't do risk-management, presented as statements. If management says even one of these, they are never going to successfully transition to real management, and will remain in chaos mode forever.

  1. Our workers don’t know safety.
  2. Our workers are safe only when we’re watching them.
  3. Our safety program is world-class.
  4. Our safety personnel are experts.
  5. Our accident rates are below the industry average.
  6. We believe anything more than zero-incident rates is unacceptable.
  7. We are constantly doing more safety activities.
  8. We are always training.
  9. We are receiving safety awards for our excellent workplace safety.
  10. We are a safe company.

While harsh to say, the instant any one of those statements is made with anything other than a sarcastic edge, management is choosing to maintain status quo. Real effort is not likely to be expended, or maintained, to make any changes, because things are “good enough.”

The problems with those statements help expose why they are dangerous:

  1. If a manager says workers don’t know safety, then what they are saying is that the people who provide all operational productivity do not care about their own wellbeing. Saying that masks the real statement, which is usually that “workers are doing things that negatively impact our productivity.” This attitude that workers are ignorant is not only false, but makes the entire provision of safety a farcical effort: without engagement there is no uptake. Workers need to be viewed as resources, assessed on a per-worker basis over time, and brought collectively to a standard of safe operations that imparts higher overall productivity by way of risk-aversion and effective control compliance. Not believing workers can provide value to the safety effort undermines their contribution.
  2. If a manager says workers are safe only when we’re watching them, then what they are actually saying is that their workers are poorly trained, poorly deployed, and poorly managed. Well-trained workers doing jobs for which they are qualified are not inherently safer when watched. Yes, people become complacent, but that is about communication, not monitoring. Monitoring is the data driver that makes for good feedback and communication, not an answer to worker complacency. Believing that watching is managing displays ignorance about how people function, and says a great deal about management but nothing about the frontline workers.
  3. If a manager says the safety program is world-class, what they mean is that it is good enough for their needs. The reality is even risk-management doesn’t allow resting on laurels. Processes that impart safety are never wisely measured as world-class, because the sad fact is the world isn’t a safe place. The manager who says, “good, yes; great, never” is always going to be world-class by default, because they are always forcing change management into operations, seeking efficiencies, and maximising the risk-aversion of their workers. Just like safety is an outcome of process, so too is world-class a default outcome of pursuit of better process.
  4. If a manager says safety personnel are experts, what they may as well be saying is, “I don’t want to know; tell the person I hired.” There is no such animal as a safety expert, because individual people are incapable of a broad enough view to be mechanically objective. We see what our experience allows. This doesn’t degrade the value of safety expertise, but it does qualify that the safety expert is not valuable beyond the scope of their knowledge and experience. Far better to have a safety professional who actually manages the processes that underlie and produce safe conditions. The expertise there is the same expertise that a good human resources manager has, because so many processes that affect safety are about human relationships to processes.
  5. If a manager says accident rates are below the industry average, they have basically shrugged. It is fairly easy to be above average in most aspects of life, given that mediocre is the modern standard for performance for most purposes. Does that satisfy anyone? Does it engender growth, revenue generation, or profitability? Does being better than the other losers really imply a pursuit of excellence? Sadly, it seems to for many; but it is observable that the companies that really excel never express such attitudes. They are far more likely to proclaim they are not yet good enough.
  6. If a manager says we don’t accept anything more than zero-incident rates, they are being wilfully ignorant. Business requires operations of some sort, and operational activities require risks to be undertaken. To believe, even for an instant, that it is possible to exist in a zero-incident state forever is foolish. Worse, focusing on the zero in that declaration will often create conditions where the suppression of incident reports is commonplace, increasing risk until the exposure inevitably exceeds the chances of avoidance, after which the massive impact of the accident can devastate the entire organisation. For an example, look no farther than Bear Stearns, a company that assumed enormous risks while consistently expressing how risk-aware and risk-averse it was, bilking investors out of billions. Lip service does not create risk-aversion; it enhances risk.
  7. If a manager says we are constantly doing more safety activities, they are really saying, “I get lovely reports with many numbers that mean nothing, but damn they look important.” Corner such a manager and ask them, “How does doubling the number of safety meetings impart more safe outcomes?” Their answer, if they are conscientious, will rightly observe that better communication can actually do that. Now say to them, “Prove it has.” Suddenly, those obscure counts, unrelated to anything (or, worse, often showing no impact on accident rates), mean very little. Smart managers see through the numbers to the realities, and question efficacy because they realise scarce resources are being applied to activities that may return no value.
  8. If a manager says we are always training, a commendable idea, ask them, “Why?” Better than 90% of the managers we have asked that question achieve glassy-stare status in seconds, and the best ones admit they haven’t a clue. They seldom even know, at the executive level, what people are being trained to do. When they find out, they often sit in stunned silence, observing that traditional safety, taking the easy path as it so often does, frequently applies pointless, easy training without ever checking its value, while lagging far behind on more complex, critical training needs. Training is an ongoing process, but it also has to be explicable. It had better matter, since it is one of the highest-cost aspects of control imposition faced by any company.
  9. If a manager says we are receiving safety awards for our excellent workplace safety, unless they are smirking, they really need to educate themselves about what it takes to get a safety award. If you are in an industry that kills a handful of people every few months, you might end up at the top of that heap, awarded for only killing Bill in shipping. There is nothing enviable in being the best of the worst, or in a certificate awarded on the basis of statistical tricks that have no connection to any reality. Try getting an award if you report every actual recordable in an industrial setting. Your statistics will betray you and make you look awful, and you can easily be beaten by a company that has killed a handful of people and not bothered to diligently report its recordable events. The fact that the near misses you recorded meant you killed no one matters not a whit to the statistical formula.
  10. If a manager says we are a safe company, what they mean is, “We haven’t killed or injured anyone recently.” Safe is a purely subjective term. A quality management team at the executive level will always balk at making such a statement, because an easier and more truthful one sounds so much better: “We try to be a safe company, and it takes a lot of effort to maintain that.”

What it takes to do risk-management, producing real safety from the process, is recognising that attitude is almost everything. You have to want to become safer, stay safer, and reduce your costs to do that. You need to commit to the long-term value of being able to measure your progress, focusing your resource applications to produce improvements, and understand that at the end of the day you never end the effort. Risk-management becomes a profit-driver because it is a productivity-enhancer, which just has the odd side-effect of producing safer workplaces.