Opinion

AI in the workplace – health and safety friend or foe?

By Hannah Burton, Pinsent Masons

With the use of AI in the workplace predicted to become widespread, it is vital that employers carefully manage any associated health and safety risks, such as negative health impacts from using AI to monitor people’s work rates.


Artificial Intelligence (AI) has been described as “the science of making machines smart”. There is no getting away from the fact that it is appearing in more and more aspects of life, including the workplace. Whether this excites you or you are a terrified technophobe, it is important for health and safety professionals to have an awareness of AI in the workplace, including both the scope for positive change and the possible risks, as we move into this ‘Brave New World’.

On one hand, AI can improve workplace safety: it can monitor hazards and control equipment; introduce techniques which minimise human error; power virtual reality practice drills; and support increasingly sophisticated surveillance to assist with crime detection and prevention.
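By way of illustration only, here is a minimal sketch (in Python) of the simplest form such hazard monitoring might take: readings from a hypothetical gas sensor averaged over a short window and compared against an assumed exposure limit. The sensor, figures and limit are invented for the example; real systems are far more sophisticated, but the principle of continuous measurement against a defined threshold is the same.

# Illustrative sketch only: a toy threshold-based hazard monitor of the kind
# an AI-assisted safety system might build on. Readings and limits are
# hypothetical, not drawn from any real product or standard.
from statistics import mean

readings_ppm = [12.1, 12.4, 18.0, 22.8, 25.6]  # hypothetical gas readings (ppm)

ALERT_LIMIT_PPM = 20.0   # assumed exposure limit for this example
WINDOW = 3               # number of recent readings to average

def hazard_alert(readings: list[float]) -> bool:
    """Flag a hazard when the recent average exceeds the assumed limit."""
    return mean(readings[-WINDOW:]) > ALERT_LIMIT_PPM

if hazard_alert(readings_ppm):
    print("Hazard: recent readings exceed the assumed exposure limit")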

Hannah Burton: "It seems everyone wants to join the AI bandwagon, for fear of being left behind."

However, hazards and equipment are not the only things subject to monitoring in this new ‘Digital Age’. Employees themselves are often being monitored, and this is likely to increase significantly in the coming years. Such personal monitoring could have a negative effect on employee health and wellbeing, as well as raising important issues about data protection and privacy.

There may also be concerns about job losses and data bias or discrimination. In certain sectors, rather than replacing jobs, AI could instead intensify work: where humans compete with computers, or workers’ completion times are tracked, the result can be a focus on speed at the expense of wellbeing, health and safety.

It seems everyone wants to join the AI bandwagon, for fear of being left behind. That rush to embrace the technology means we risk not building in adequate safeguards, which could leave organisations (and wider society) facing both anticipated and unanticipated consequences, including legal risks.

HSE – ahead of its time

While it feels as though the AI train is hurtling down the track at full speed, it has in fact been on this track for some time, perhaps without being given the attention it deserved. Back in 2000, the Health and Safety Executive (HSE) published an article, Industrial use of safety-related expert systems. This identified potential safety gains and hazards associated with “expert systems” (essentially a precursor to what we now call AI), and provided a helpful blueprint for where we have ended up some 20 years later.

By 2016, the content of HSE’s Foresight Centre annual report, The digital revolution and changing face of work, had become more dystopian. It considered possible implications for occupational safety and health, such as the use of autonomous vehicles at work, wearable tech (including smart contact lenses and temporary tattoos), and immersive technologies like virtual reality, which may distort one’s perception of reality and lead to musculoskeletal complaints and motion sickness.

There is currently no regulator or body of law in the UK governing the development, deployment or use of AI. Photograph: iStock

What does ‘safety’ in an AI context really mean and what are we doing as a society to achieve this?

AI safety is largely concerned with preventing accidents, misuse or other harmful consequences that could result from AI systems. This could include things like machine ethics and AI alignment (which aims to steer AI systems to humanity’s intended goals).

There is currently no overarching regulator or body of law in the UK governing the development, deployment or use of AI. In a survey of more than 1,000 people by the Prospect trade union, referred to by the Guardian newspaper in May 2023, almost 60 per cent of people in the UK said they would like to see regulation of AI in the workplace. The UK government favours “a pro-innovation approach to AI regulation”, as set out in its recent White Paper on its proposed framework for regulating AI. The aim is for any ultimate AI framework to be underpinned by five principles:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress.

There is currently no intention to set up a new regulator. The UK government has recently confirmed that it has established a central AI risk function within the Department for Science, Innovation and Technology (DSIT) to monitor AI risks. DSIT is expected to work with other government departments to develop the UK regulatory approach.

There is likely to be overlap with the role of existing regulators, such as the Information Commissioner’s Office in respect of data protection matters, and the Equality and Human Rights Commission in respect of the use of AI by public bodies.

Separately, the Artificial Intelligence (Regulation and Workers’ Rights) Bill (a Private Member’s Bill that sought to regulate the use of AI technologies in the workplace) was put forward for its first reading in the House of Commons in May of this year, but failed to progress through Parliament. Despite developments such as the Bletchley Declaration, signed by the UK and other countries attending the AI Safety Summit in November 2023, it remains doubtful there will be tangible regulatory changes in the UK in the near future.

In the EU, the European Parliament has recently approved a new AI Act, which was first introduced in 2021 but is not likely to come into force until 2025. It seems that the only thing moving quickly is the technology itself.

What can organisations do to mitigate the health and safety risks associated with AI?

The general principles underpinning the Health and Safety at Work Act 1974 will extend to the use of AI in the workplace without the need for any legislative changes (although these may follow in the coming years). The same considerations that usually apply when assessing health and safety risks should therefore apply here too. That said, employers, businesses, legal experts and health and safety practitioners responsible for managing the risks from AI are likely to have to be creative and reactive as new technology and trends present themselves, and health and safety professionals may find they have an expanding remit.

Many organisations are appointing a ‘Chief AI Officer’, who will likely work closely with IT and legal colleagues to navigate matters in the coming months and years. The Chief AI Officer and wider team would ideally work together to put in place clear and robust policies and procedures for using AI in the workplace – for example, conditions of use, a code of conduct and an ethical framework. While we will likely be waiting some time for new AI-specific legislation to be enacted in the UK, these policies could be guided by the five principles referred to in the White Paper, as noted above.
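To make that concrete, the sketch below shows one way a policy register might be laid out, assuming an organisation chooses to structure its internal controls around the White Paper’s five principles. The controls listed are hypothetical illustrations, not prescribed requirements.

# A minimal sketch: mapping the White Paper's five principles to
# hypothetical internal controls. Nothing here is prescribed by regulation.
AI_POLICY = {
    "Safety, security and robustness": [
        "Test AI tools before deployment",
        "Log and review system failures",
    ],
    "Appropriate transparency and explainability": [
        "Tell staff when AI is used to monitor or assess them",
    ],
    "Fairness": [
        "Audit outputs for bias and discrimination",
    ],
    "Accountability and governance": [
        "Name a Chief AI Officer as policy owner",
    ],
    "Contestability and redress": [
        "Provide a route to challenge AI-assisted decisions",
    ],
}

for principle, controls in AI_POLICY.items():
    print(principle)
    for control in controls:
        print(f"  - {control}")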


Other steps may include conducting data protection impact assessments where a new process or technology will be introduced, and considering other measures to increase data security, such as encryption and testing.
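On encryption specifically, the sketch below shows roughly what encrypting monitoring records at rest can look like, using the third-party Python ‘cryptography’ package. The record content is invented for illustration, and key management is deliberately oversimplified; this is a sketch of the idea, not a security design.

# Minimal sketch of encrypting a (hypothetical) monitoring record at rest,
# using the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, store and rotate keys securely
fernet = Fernet(key)

record = b"employee_id=1042;shift_completion_minutes=47"  # invented example
token = fernet.encrypt(record)   # ciphertext safe to store

assert fernet.decrypt(token) == record  # original recoverable with the key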

The testing of AI technology is critical. A case in point is the government pausing the rollout of new all-lane running smart motorway schemes (which employ technology to track incidents, measure traffic levels and change speed limits in real time) until five years of safety data is available.

Will reliance on AI stand up to the ‘reasonably practicable’ defence in prosecutions under the Health and Safety at Work Act 1974?

The reasonably practicable defence is a key concept within the Health and Safety at Work Act 1974 (HSWA): an employer must take all reasonably practicable measures to eliminate or mitigate the risks associated with its work activities, including the use of AI systems. If a prima facie breach of the legislation is established, the burden of proof passes to the defendant to prove, on the balance of probabilities, that it did all that was reasonably practicable in the circumstances to prevent the risk.

Therefore, while the law recognises that elimination of all risks by an employer is often impractical, it is anticipated that the courts will expect organisations to take steps to mitigate risks associated with AI.

How might an organisation show that reasonably practicable steps have been taken to mitigate AI risks? They might seek to show that:

  • A robust AI-specific risk assessment was undertaken
  • Any safety measures identified in the risk assessment were properly and promptly implemented
  • Steps were taken (and a relevant framework agreed) to ensure the organisation is up-to-date with best practice and relevant persons received comprehensive training
  • The position was monitored and reviewed – an internal AI team was established with a clear remit and division of responsibilities, AI risk assessments and policies were kept under regular review, feedback was obtained from colleagues and changes were implemented to address any concerns (both actual and anticipated); a simple record-keeping sketch follows below.
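As a minimal sketch of what such record-keeping might look like in practice (the field names, review interval and example entry are all hypothetical):

# A minimal sketch of a dated register of AI risk assessments, so that
# monitoring and review can be evidenced later. All details are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AIRiskAssessment:
    system: str                     # the AI tool or system assessed
    hazards: list[str]              # hazards identified
    controls: list[str]             # mitigations implemented
    assessed_on: date
    review_interval_days: int = 90  # assumed review cycle

    def review_due(self) -> date:
        return self.assessed_on + timedelta(days=self.review_interval_days)

register = [
    AIRiskAssessment(
        system="Work-rate monitoring tool",
        hazards=["Stress from pace tracking"],
        controls=["Staff consultation", "Human review of flagged cases"],
        assessed_on=date(2024, 1, 15),
    ),
]

for entry in register:
    print(f"{entry.system}: review due {entry.review_due()}")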

Where do we go from here?

It may be most helpful to treat AI risks simply as an extension of existing health and safety risks, while recognising that specialist input (particularly on technical matters) may be required to better understand AI products and the risks which may flow from their use.

If your organisation is already utilising AI, or is likely to in the future (it is perhaps difficult to think of a sector which will not succumb), then appointing an internal AI contact or team is likely to be a prudent first step. The challenge of putting in place AI-specific policies and procedures will then follow. By getting on top of this early, you are much more likely to have AI as your friend than as an out-of-control enemy on the rampage.

Hannah Burton is an associate solicitor at Pinsent Masons LLP. Contact her at:
pinsentmasons.com
