
Using free AI to solve health and safety problems

Generative AI has changed the way many professionals work. While there are pitfalls to be aware of, there are plenty of opportunities for those with health and safety responsibilities to use AI to improve productivity.

Many expensive health and safety products, such as incident reporting and risk assessment systems, include some elements of AI. But you can start to get the benefits of AI using tools already included in existing subscriptions, or available for free. 

Bridget Leathley, Lead Consultant at Tribe Culture Change, shows you how to harness AI safely - solving problems and getting reliable results. 

What is generative AI? 

There are many varieties of artificial intelligence (AI). Speech recognition translates the spoken word into text, while natural language processing interprets that text. Computer vision interprets still or video images. Data analytics finds patterns in numbers, such as your KPIs. Machine learning can be used to improve how any of these techniques work. But generative AI is now one of the best-known forms of AI, thanks to the availability of free tools such as ChatGPT and tools bundled with existing subscriptions, such as Microsoft Copilot. I’ll use ‘genAI’ when talking generally, and the specific product name where relevant.

GenAI uses a large language model (LLM) to predict which words should follow other words in response to the questions you ask. Most free genAI tools draw on a very large LLM built from information that hasn’t been sanitised or checked. Paid AI tools allow organisations to make sure that only verified information sits behind the responses the AI gives, and they also prevent your company information from leaking outside. With free tools you must ask questions in a way that doesn’t identify your organisation and doesn’t share any commercially sensitive information. So, with that limitation, what can you do with free genAI? Here are some of the ways I’ve been using it.

Identifying relevant H&S legislation

If I ask an open question like “what law applies to safety?” or even “what law applies to safety in utilities?”, the answer will be too broad to be helpful and dominated by US law. Take time to write a brief, and be prepared to make a few attempts to get it right.

With the example above, I would need to be more specific about the type of utility (for example, electrical distribution), give examples of the types of task (digging up roads, climbing poles) and, where relevant, state some exclusions (such as power stations and generation).
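
Putting those elements together, a refined brief might read something like: “Provide a comprehensive list of UK health and safety legislation relevant to workers in electrical distribution who dig up roads and climb poles. Exclude law that applies only to power stations and generation.” Treat wording like this as a starting point and adjust it until the answer reflects the work your people actually do.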

Prompting tips

  1. Be clear about the format you need for the answer. In my first version of a prompt to both ChatGPT and Microsoft Copilot, I started with the request to “Summarise the law on…”. ChatGPT suggested nine pieces of regulation and Copilot only six (with the notable omission of the Work at Height Regulations, even though I’d mentioned climbing). I revised the request to “Provide a comprehensive list of...” and both systems listed ten pieces of law.
  2. Be careful what you ask for. When I’ve asked for “a summary” without boundaries, I’ve been presented with almost as much to read as the original document. Instead, I can ask “Which clauses in each regulation are relevant to electrical utility workers operating on public streets and customer premises?”
  3. Ask for references. ChatGPT doesn’t provide references unless you ask it to, and while Copilot does, they are often to secondary sources (such as someone’s blog). GenAI tools don’t appear to give a .gov domain a higher value than a .wordpress or .reddit domain. As a result, if many blog writers mistakenly refer to the “working at height regs”, genAI will offer that over the more correct “Work at Height regs”. GenAI also commonly gives the wrong year for legislation that has been updated in the last couple of years: there could be millions of references online to the old legislation and only thousands to the current version - numbers win in generative AI, not authenticity. You must therefore check the name and date of the legislation at source, and check the specific clauses of interest. To do that, you’ll need the URLs, so include a statement in the brief to “Provide the URL in full.”

Writing risk assessments 

Please, never ask genAI to write a risk assessment. You can, however, get it to help you work through the steps. For example, describe the job being carried out and ask the AI to suggest hazards to look for. Use these to prompt a conversation between the people doing the job and those managing it, to confirm which hazards are relevant and how they are being controlled.

Then go back to the genAI and ask what controls might be relevant for a specific hazard. Review these to decide whether each is something you’re already doing or something that might be practical in the future. If you copy and paste all the hazards and all the controls suggested by genAI, you will have an unwieldy risk assessment full of controls that are not in place. If you stay in charge of what is included, you can improve the quality of your risk assessments.
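
As an illustration, the first prompt might be along the lines of: “Here is a description of a job: [describe the task, location and equipment, without naming your organisation]. List the hazards a risk assessment for this job should consider.” Once the team has confirmed which hazards apply, a follow-up such as “What controls might be relevant for [one specific hazard]?” keeps the conversation focused on one hazard at a time, and keeps you in charge of what goes into the final assessment.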

Topics for an H&S training course

The step-by-step approach also works well for training courses. If you have a standard for writing learning objectives, give the genAI an example or a rule as part of the brief, and describe the audience to be trained. For one course ChatGPT suggested 48 different learning objectives, under 12 topics. I asked it to categorise these by whether they could be met through e-learning or in a classroom, or whether they required on-the-job training. From this, I could select relevant objectives for the type of course I was developing.
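
A brief of this kind might, for example, include a rule such as: “Write each learning objective as ‘By the end of this course, participants will be able to…’ followed by a measurable verb”, together with a line describing the audience, such as “The audience is warehouse supervisors with no previous formal safety training.” The exact wording will depend on your own standard for learning objectives.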

Hypothetical case studies for training courses 

Generative AI can hallucinate. When I first used ChatGPT in 2023, I asked it for evidence of the effectiveness of a particular approach. Rather than disappoint me (there was no evidence), it invented realistic-sounding references to real journals. The dates and volume numbers lined up. The authors were recognised in the field. But an article in that journal from that author on that date did not exist. A less well-known free AI tool called ‘Perplexity’ is a better choice if you’re looking for peer-reviewed evidence. It will still offer links to commercial websites, but it does make it easier to find peer-reviewed references. Alternatively, a simple text search in Google Scholar might be sufficient.

While this put me off asking ChatGPT about evidence, I learned to take advantage of its ‘imagination’. I wanted an example to explain how to deal with low-probability, high-consequence events in the context of warehouse workers, and my imagination failed me. I remembered the days when I could chat with colleagues over a coffee to get an idea. Instead, I chatted for a few minutes with ChatGPT, and it ‘hallucinated’ or imagined a few different options, leaving me to select and develop the most credible one.

Analysing accident reports 

Given the concerns about security, I would never cut and paste accident information into free genAI and ask for an analysis. However, I have used ChatGPT to tidy up rough notes and turn them into sentences. Code any personal names first, for example as A, B and C. I’m not ready to let AI analyse the causes of accidents, as it will do this based on all the biased judgements it has read in previous accident descriptions. ChatGPT offered me a ‘five whys’ analysis of my one-line description “A worker used a drill to break into pavement without checking for underground services”. Within seconds, it concluded that “the organisation doesn’t have a formal process for regularly reviewing and updating training content for high-risk tasks” without suggesting there could be any alternative explanations.
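
For the tidy-up use, a prompt might look something like: “Rewrite the following notes as complete sentences in British English: A reported slip near storage area, B witnessed, floor wet, no signage out.” Nothing in the notes should identify your organisation, the site or the people involved, and the analysis of causes stays with you.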

Conversations 

Most genAI tools save old conversations, so I reuse these when I have a similar task. Whether the conversation is hours, months or even years old, ChatGPT ‘remembers’ it as if it had just happened. This avoids the need to keep reminding it to use British English spelling, and allows you to use previous conversations to provide context. For example, I can reuse a conversation where I’ve already taught ChatGPT how to structure a learning objective for another course.

Summary rule set 

  1. Give it some context and boundaries: the better the prompts, the more useful the results.
  2. Ask for references and check them: not all sources are correct, and genAI doesn’t have a perfect understanding.
  3. Stay in control by working step by step and asking for multiple options. You are the best judge of what is relevant, so review the suggestions and pick the best.

Treat genAI as a junior colleague. If you brief it well, supervise it and take responsibility for its work, it can be a really helpful assistant. 

 
