Artificial Intelligence and Employment Law

Since the release of ChatGPT at the end of 2022, artificial intelligence (AI) has raised important issues for HR and employment law.

Generative AI works by processing vast amounts of text and other material from the internet, together with the data its users share with it, and identifying the complex patterns that make up human communication. So, for example, if it is asked for a flexible working policy for a UK-based company, it can draw on the many similar documents it has learned from and come up with a fairly sensible suggestion.

The advantages of AI for employers are that it can take over time-consuming and repetitive tasks, generate ideas to inspire employees working on creative projects, and save time by producing in a matter of minutes a document that might take a human many hours. Employers are experimenting with uses such as interviewing job candidates and outsourcing helpdesk functions to automated chatbots. It is not without risk, however: employees have already been disciplined for using AI to write lengthy documents quickly and then passing the output off as their own work.

In the absence of specific legislation governing AI in the workplace, and pending possible UK government guidance, it is important that employers understand how existing legal risks and obligations may affect their use of AI. These include:

Discrimination: AI relies on information originally input by humans, so human biases are inevitably reflected in its results, unhelpfully reinforcing stereotypes and even putting employers at risk of discrimination claims. AI cannot yet reliably detect sarcasm or distasteful humour, so it cannot filter these out of its results. Amazon famously had to scrap an AI recruiting tool which taught itself that male candidates were preferable to female candidates. Existing protections from discrimination under the Equality Act 2010 apply to all forms of AI used in employment, and employers should ensure that the AI they use does not breach that Act. ACAS’s article My boss is an algorithm takes a more detailed look at the ethics of algorithms in the workplace.

Data protection: Generative AI, such as ChatGPT, uses the data it is given to identify patterns and create new and original content. Employers using data in this way must ensure that they comply with the Data Protection Act 2018 and the UK GDPR. See the ICO’s Guidance on AI and data protection for more information.

Inaccuracies: AI has also been known to invent plausible-sounding information when it cannot find an answer, a phenomenon known as ‘hallucination’. AI-generated output should therefore be checked for accuracy before it is relied upon.

Monitoring and surveillance: Reports suggest that a third of workers are being digitally monitored at work, for example via remotely controlled webcams or tracking software. Royal Mail, for instance, recently admitted to using tracking technology to monitor the speed of postal workers. As above, employers should ensure compliance with data protection legislation in any monitoring of their workforce, as well as ensuring that it does not breach the right to privacy under the Human Rights Act 1998.

Unfair dismissal: Recent reports predict that AI could replace the equivalent of 300 million full-time jobs, raising concerns about the treatment of workers and the erosion of workers’ rights (as highlighted, for example, by the TUC at its latest conference). Under the Employment Rights Act 1996, employees with over two years’ service have the right not to be unfairly dismissed. If the use of AI reduces the need for employees to carry out a particular type of work, employers should ensure that an appropriate procedure is followed before making any decisions about those staff members. Where dismissal is contemplated, there must be a fair reason for it. Care should also be taken to ensure that the way AI is used does not breach the implied term of trust and confidence between employer and employee, since doing so could give employees the right to bring a constructive unfair dismissal claim.

What can employers do about AI?

Employers may want to consider the following:

  • Develop a strategy for the use of AI in the workplace, with consideration as to when its use is and is not acceptable.
  • Introduce a policy (or update existing policies) regarding the appropriate use of AI by staff.
  • Use AI impact assessments to identify and mitigate any risks when introducing AI into the workplace.
  • Retain a human element in decision-making, so that managers have final responsibility for decisions.
  • Ensure full transparency over when and how AI is used, especially when it impacts employees or potential employees.
  • Deliver training on the use of AI, ensuring it covers issues such as appropriate use of data, accuracy and bias.

If you would like to discuss any of the topics in this article, please contact us by emailing enquiries@perspectivehr.co.uk or by phoning 01392 247436.
