Opinion
The impact of AI on health and safety prosecutions and sentencing
From undertaking hazardous activities, to identifying and predicting risk, to continuous monitoring, the use of artificial intelligence (AI) has the potential to bring about significant change in workplace health and safety, but not without associated implications for prosecution and sentencing.
One of the key opportunities for AI in the health and safety space concerns risk management. Employers have a duty to mitigate risk and AI-driven tools can assist with this by gathering data on circumstances surrounding incidents and near misses. For instance:
- A business’ CCTV infrastructure can be linked to computer vision software to process data captured on-site in real time, identifying near misses between workers and vehicles that management might otherwise not be aware of.
- Workers can wear devices incorporated into their clothing which emit an audible warning if the wearer gets too close to dangerous machinery.
- Sensors can detect abnormalities in variables such as temperature, noise and air pressure, predicting incoming failures and alerting management to the need for maintenance before an incident occurs.
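For readers curious what the sensor-based approach in the last bullet looks like in practice, the logic can be as simple as comparing readings against normal operating ranges. The following is a minimal, purely illustrative sketch; the sensor names and threshold values are hypothetical and not drawn from any particular product.

```python
# Hypothetical sketch of threshold-based sensor monitoring.
# Sensor names and acceptable ranges below are illustrative only.
NORMAL_RANGES = {
    "temperature_c": (5.0, 60.0),
    "noise_db": (0.0, 85.0),
    "air_pressure_kpa": (95.0, 105.0),
}

def check_readings(readings):
    """Return alert messages for any reading outside its normal range."""
    alerts = []
    for sensor, value in readings.items():
        low, high = NORMAL_RANGES[sensor]
        if not (low <= value <= high):
            alerts.append(
                f"{sensor} reading {value} outside normal range "
                f"{low}-{high}: schedule maintenance check"
            )
    return alerts

# Example: an overheating reading triggers an alert for management.
print(check_readings(
    {"temperature_c": 72.0, "noise_db": 80.0, "air_pressure_kpa": 101.3}
))
```

Real systems add statistical or machine-learning anomaly detection on top of fixed thresholds, but the principle is the same: the system flags a deviation, and it is then for management to act on it.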
The knowledge and data obtained from incident-prevention technologies (such as the above) can then be harvested to improve understanding and avoidance of workplace risks, in turn informing the training of workers to prevent future health and safety incidents and tackle under-reporting.
The use of AI is not, however, without risks; these risks must themselves be recognised, considered and mitigated if the benefits are to be realised.
Health and safety prosecutions
Employers must take all reasonably practicable measures to eliminate or mitigate foreseeable risks associated with their work activities, whether identified by AI tools or not.
As data-gathering technology becomes more embedded in safety management, the use of AI systems may also affect prosecutions for health and safety breaches. Once health and safety data has been collected, it is the company's responsibility to manage that data within its health and safety management systems. If the information is not properly managed, there is a risk that important safety data is ignored. For instance, if the data indicates that a certain type of incident is likely to occur but no action is taken, it will be harder for the company to argue that the risk was not foreseeable, or that it had no prior knowledge of it, than if no data had been collected in the first place.
Another potential scenario is one in which an AI system identifies a risk which then absorbs a disproportionate share of time and resources, and a different type of health and safety incident occurs. Could a company argue in its defence that the data generated had directed its time and resources elsewhere? If the risk was foreseeable, the fact that an AI system did not identify or prioritise it is unlikely to amount to a defence. It might, however, help to justify why a particular safety management approach was taken, and could form part of mitigation on sentencing.
It is vital, therefore, that both employers and employees understand the particular AI systems chosen and the volume and nature of the data they generate. There should be adequate training on their use, covering limitations, potential biases and the importance of maintaining human oversight. Organisations will also need to build procedures for managing safety data into their health and safety management arrangements. In the event of enforcement action, courts will expect to see this and are likely to take a dim view where it cannot be demonstrated.
Sentencing
A further enforcement aspect of AI systems usage concerns sentencing for health and safety breaches. Under the health and safety sentencing guidelines, a company's ignoring of health and safety concerns raised by employees and others is an aggravating factor which can increase the level of culpability on sentencing (and, consequently, the size of any fine imposed). Ignoring health and safety data generated by AI systems is arguably comparable to ignoring employee concerns: in both cases, the company is put ‘on notice’ as to the relevant risks. Ignoring AI-generated data could therefore also be regarded as an aggravating factor on sentencing.
It is exciting to think of a future where AI systems have the potential to improve workplace health and safety through enhanced hazard monitoring, equipment control and introduction of techniques that minimise human error. However, it is also important to bear in mind the potential legal implications in relation to enforcement and sentencing.
Laura White is a senior associate and Sasha Jackson a trainee solicitor at Pinsent Masons law firm. See: pinsentmasons.com/people/laura-white
Email: [email protected]