Insights

Nighat Sahi

Published 6 August 2025

AI and the Workplace Update

Government Approach and Legal Framework

This time last year we wrote an article about AI and the workplace, acknowledging its potential whilst urging caution in certain areas. In the year since, whilst AI has continued to embed itself in our lives and become ever more mainstream at work, the legal difficulties that surround its use remain largely unresolved.

These issues have led to increasing concern across many different sectors that greater clarity and a more robust and consistent framework for regulation and compliance enforcement are required. At present, there is no general statutory regulation of AI, although as we’ve set out below, there is some existing legislation (such as the General Data Protection Regulation (GDPR) and the Equality Act 2010) that is relevant to how AI is used.

AI – So where are we now?

In the past 18 months, there has, of course, also been a change of government, along with a number of parliamentary bills and advisory reports. However, for employers and business owners, it remains a quagmire as you try to navigate and understand the many white papers, different regulatory guidance and industry-specific standards applicable to your business and to AI in the workplace.

In the following articles, we’re going to take a look at the current AI legal framework and the key areas which need to be carefully considered by any business using AI.  We’ll also be providing our own advice on next steps and proactive measures that businesses should be taking if they are procuring or using AI in the workplace.  In this first article, we’re going to set out the government’s policy position, the main developments in the last year, the legislation that is currently applicable and the available guidance for businesses.

A light touch approach and the governing principles 

In March 2023, the UK government published an AI White Paper titled A pro-innovation approach to AI regulation, in which it proposed a flexible and risk-based framework. The proposals included establishing a central authority to coordinate regulatory work across all sectors and provide guidance to businesses on best practices and compliance. The then government’s approach has been described as a light touch “pro-innovation approach”. This meant making use of existing legislation rather than enacting new laws, with the burden falling on individual regulators to “develop or update guidance to take account of these principles and provide clarity to business.”

The principles referred to are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; contestability and redress.

In February 2024, the government published its consultation outcome report, A pro-innovation approach to AI regulation, in which it reiterated its commitment to “a pro-innovation regulatory framework for AI” as outlined above and endorsed a “regulatory framework [that] builds on the existing strengths of both our thriving AI industry and expert regulatory ecosystem.”

In April 2024, the Trades Union Congress (TUC) published the draft Artificial Intelligence (Employment and Regulation) Bill. It set out a potential UK legislative framework for regulating the use of AI in the workplace. The proposed framework was more aligned with the codified EU AI Act (see below) and focused on ‘high-risk’ AI in employment situations.

The Bill proposed protections for workers, employees and jobseekers and addressed concerns about transparency, discrimination and the impact of AI-driven decision-making on employment rights. However, following the General Election, it has not progressed through the legislative process.

On 30 April 2024, the Equality and Human Rights Commission published An update on our approach to regulating artificial intelligence. The Equality and Human Rights Commission is the independent equality regulator for England, Scotland and Wales, and its report highlighted its concern about a wide range of AI risks, including risks of bias and discrimination as well as risks to human rights. For 2024/25, its priorities are focused on “reducing and preventing digital exclusion, particularly for older and disabled people… the use of AI in Recruitment Practices, developing solutions to address bias and discrimination in AI systems”.

On 1 May 2024, the Information Commissioner’s Office (ICO) published Regulating AI: The ICO’s strategic approach, setting out the ICO’s approach to AI regulation. The ICO is a regulatory body responsible for enforcing data protection law and regulations. The report provides guidance on AI and data protection, automated decision-making and profiling.

Post General Election 2024

Following the election, the King’s Speech stated that it was the intention of the (new) Labour government “to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.” This was consistent with pre-election manifesto pledges.

On 1 August 2024, the European Artificial Intelligence Act (EU AI Act) came into force. It introduced a “uniform framework across all EU countries, based on a forward-looking definition of AI and a risk-based approach”.

On 5 September 2024, the UK signed a new legally-binding international treaty (The framework convention on artificial intelligence) governing the safe use of AI. It provides an international legal standard of obligations and principles to be followed by states across the world and includes important safeguards against AI’s risks, such as the use of biased data which may prejudice decisions. The treaty provides that AI systems must comply with a set of principles including protecting personal data; non-discrimination; safe development; and human dignity. However, whether the treaty is to be incorporated into UK law remains to be seen.

On 8 October 2024, the Government launched the Regulatory Innovation Office (RIO). Its primary purpose is “to streamline regulatory processes, eliminate barriers, and coordinate cross-sector innovation challenges”. Its focus is on fast-growing areas like AI in healthcare, engineering biology, space and other sectors.

On 26 March 2025, the Equality and Human Rights Commission published their Strategic plan 2025 to 2028. This acknowledged that “significant new opportunities and threats are emerging, including from the advance of artificial intelligence (AI) and other technologies.” The plan promised to “deliver a core programme of regulation to support compliance with equality and human rights law” and to assess “serious threats to equality and human rights, for example ensuring AI does not lead to discrimination on the basis of race…”

In March 2025, the Artificial Intelligence (Regulation) Bill was re-introduced to the House of Lords, having first been introduced in November 2023 but having come adrift as a result of the dissolution of Parliament ahead of the General Election. The Bill proposes establishing an AI Authority, a dedicated regulatory body responsible for overseeing AI development and ensuring compliance with new legal requirements, along with codified principles. This would mark a departure from the current “light touch” approach and would bring the UK more in line with the EU approach. Whether this Bill comes into force in its current or revised form remains to be seen.

The EU and the US approach

As outlined above, the EU AI Act came into force on 1 August 2024 and will be fully implemented by 2 August 2026, with some exceptions. It adopts a risk-based rules model, with AI applications categorised according to their potential harm and with corresponding compliance obligations. There will be a general-purpose AI (GPAI) Code of Practice, which is currently being developed and although not legally binding, this will be a crucial tool for ensuring compliance.

Similar to the UK, the US has a voluntary AI standards approach which is industry-led rather than legislative, although some US states have enacted legislation addressing discrimination in high-risk AI systems. The US Federal Government has been directed to follow a set of key principles for the responsible use of AI.

The UK regulatory landscape

If all of the above has left you bewildered, you’re not alone. Against that background, below is a list of the legislation that currently applies in an employment context.

The Equality Act 2010

This prohibits discrimination based on protected characteristics and is key in terms of protecting employees.  AI tools used in recruitment, promotions or dismissals must not result in direct or indirect discrimination. Issues that can arise include AI systems that perpetuate historical biases or exclude certain groups unfairly. Employers also have a duty to make reasonable adjustments for disabled people, including in digital systems.

Employment Rights Act 1996

Section 98 outlines grounds for fair dismissal and is relevant where an AI tool is used without transparency or human oversight in disciplinary or dismissal decisions.

UK GDPR / Data Protection Act 2018

This important legislation governs how personal data is processed. In respect of AI, employees have the right not to be subject to a decision based solely on automated processing (Article 22 UK GDPR), unless certain conditions are met. Issues may also arise where AI is used to monitor performance or make employment decisions.

There are provisions about transparency, accountability and the rights of data subjects, and organisations must demonstrate their compliance with data protection principles.

Human Rights Act 1998

There may be circumstances in which AI tools infringe on workers’ privacy (e.g. via constant surveillance or biometric monitoring) and thereby breach Article 8 (right to respect for private life).

Regulation and guidance

Sector-Specific Regulations

Certain sectors have their own regulatory frameworks such as the Care Quality Commission (CQC) in healthcare settings and the Financial Conduct Authority (FCA) in the financial services sector.

The Institute of Electrical and Electronic Engineers (IEEE), a body which sets industrial global technical standards, is developing its IEEE P7000 series of standards relating to the ethical design of AI systems.

Government guidance includes:

In February 2024, the then government published an Introduction to AI assurance guide, which explains “the concepts and terms related to ensuring the responsible development and deployment of AI systems. The guide focuses on the underlying principles of AI assurance rather than technical details, offering a foundational understanding for those new to the subject.”

In March 2024, the government published its Responsible AI in Recruitment guidance. This “focuses on assurance good practice for the procurement and deployment of AI systems for HR and recruitment. It specifically focuses on technologies used in the hiring process, such as sourcing, screening, interview and selection.”

The guidance highlighted the risk of unfair bias or discrimination and the risk of digital exclusion for applicants who may not be proficient in, or have access to, technology.  

Written in non-technical language, the guidance set out processes and assurance measures and mechanisms for employers to consider and put in place, including:

  • Impact assessments, monitoring, performance testing and due diligence
  • Training staff
  • Principles and policies and reasonable adjustments, and
  • Procedures for contesting AI based decisions.

In November 2024, the new government published its Guidance for using the AI Management Essentials tool. This is a self-assessment tool designed to help businesses establish robust management practices for the development and use of AI systems. It is primarily intended for small to medium-sized enterprises and start-ups.

ICO guidance as mentioned above provides principles for using AI in a way that complies with data protection law with the emphasis on transparency, fairness, accountability, and explainability. It also recommends Impact Assessments for high-risk applications.

The EHRC Guidance as mentioned above recommends regular audits and human oversight.

Summary

It’s clear that while there is a wealth of information about AI and the workplace, those on the ground are left trying to piece together the different guidance that may influence how they should procure and deploy AI. In the next two articles, we’ll take a look at specific issues that can arise in the workplace when using AI, the principles that should underlie any use of AI in the workplace and the measures a business should now have in place to ensure they operate in a way that respects employee rights and is compliant with the government’s approach and existing legislation.

However, in the meantime, if you would like to discuss the use of AI and the workplace, please get in touch. 

The legal content provided by RSW Law Limited is for information purposes only and should not be relied on in any specific case without legal or other professional advice.

Copyright is owned by RSW Law Limited and all rights in such copyright are reserved. Material is not to be reproduced in whole or in part without prior written consent.