Benchmarking comes up in virtually every conversation I have with clients. Companies rightly want to judge where they stand after a period of change, whether against industry averages, the competition or their own sites. They like to measure progress and see scores showing who’s in the lead and who has yet to begin the culture change journey.
Yet to benchmark your culture accurately you need to investigate what’s actually happening every day on your sites, rather than what you think should be happening, and understand how effective your efforts are at getting people to think about safety. This is what most people overlook in the rush to compare their organisation.
For example, holding 10 toolbox talks a week may look good on a spreadsheet, but it won’t make any difference if audiences sleep through them out of sheer boredom! So you need to check that your processes and policies are working in practice if you want to understand your culture seriously. Without this level of understanding, your initiatives and training may be a complete waste of time because they don’t address the real underlying issues and perceptions.
Culture assessments are one way to gain this holistic understanding of people’s behaviour on the frontline. Done as part of your benchmarking process, they give you detailed diagnostic feedback about your organisation’s cultural maturity.
Whichever benchmarking process you use, when it comes to examining the results make sure you avoid these common pitfalls.
When companies get the results from their assessments, they want to know how each site compares (even if this wasn’t the main reason for the assessment), so it very quickly turns into a numbers game. But sites are often not alike: they differ in size, staffing levels and education, and may not even do the same job. Yet everyone gets caught up in the numbers.
So sites at the bottom of the chart become defensive, while those at the top grow complacent, unwilling to join in with new initiatives because they think they don’t need to bother.
Industry-led safety benchmarking tools often look only at lost-time injury (LTI) rates and other injury data, but these aren’t the best indicators of safety culture.
For example, it’s well documented that in immature safety cultures managers sometimes hide incidents to protect an elusive zero-injury target, and sometimes staff don’t even realise they should be reporting certain types of injuries or near-misses. World-class companies can go a long time with few LTIs, then suddenly have a spate of them and lose their ranking, even though they haven’t changed a thing.
So comparing yourself externally may lull you into a false sense of security: all seems well with a low LTI rate, until something happens that managers can’t hide and the real picture emerges.
Various cultural maturity measures are available on the market, many with five-step formats, but not all of them take into account the wider attitudes, values and beliefs that really matter. That’s why many benchmarking exercises end with a score telling you where you stand against X number of companies, but give very little explanation of how that value was arrived at. In fact, at JOMC we deliberately set out to avoid this problem and designed a benchmarking process that uses the behaviours shown at every level of management as indicators, rather than numbers alone.
Effective benchmarking should give you a clear baseline to progress from and a clear indication of where you’re trying to get to. And understanding attitudes, values and beliefs at every level of your organisation is the most effective way to involve people in developing the interventions that’ll help get you there.
If your benchmarking process doesn’t take account of these important factors, perhaps it’s time to choose a new one.