Artificial Intelligence (AI) is the talk of the town right now, if the town is run by chatbots, automated emails, and a fridge that knows you're out of milk before you do! (ChatGPT's attempt at a funny opening line.)
But beyond its comic abilities, AI has serious and far-reaching real-world consequences, including in employment relationships.
AI refers to the development of computer systems that can perform tasks typically requiring human intelligence. It is transforming business operations and the way we work at pace, often without us even recognising it.
AI encompasses a broad range of technologies. Commonly, these include machine learning (where systems improve automatically through data), natural language processing (enabling AI to understand and generate human language), and computer vision (allowing AI to analyse images and videos).
While many employers and employees are increasingly using AI to automate business systems and processes to create efficiencies or shortcuts, this comes with legal risks and implications that must be managed. Employers (and employees) need to think carefully about the role AI does or could play in their business.
The first significant impact of AI on employment relationships is likely to occur before the relationship even begins. AI is increasingly being used in recruitment processes to screen resumes and assess candidates.
However, if not carefully managed, AI systems can perpetuate existing biases present in their training data, leading to discriminatory hiring practices. Employers should ensure that AI tools used in recruitment are designed and tested to mitigate biases, promoting fair and equitable hiring practices.
AI systems often rely on vast amounts of data, some of which may be personal or sensitive. Employers must ensure that the use of AI applications complies with privacy principles, including obtaining informed consent, ensuring data accuracy, and implementing robust security measures to protect against data breaches.
Transparency about how AI systems use employee data is crucial to maintaining trust and complying with legal obligations.
These obligations extend to the employee monitoring capabilities of AI, which can track productivity and analyse communications. While these tools can enhance efficiency, they raise issues relating to employee privacy and autonomy. Employers need to balance the benefits of AI-driven monitoring with respect for employees' privacy rights, ensuring that any surveillance is proportionate, transparent, and compliant with legal standards.
As employers are required to provide a safe working environment, they must also assess and manage any health and safety risks arising from AI implementation. Workplace safety could be impacted in various ways, from automating hazardous tasks to introducing new risks associated with human-AI interactions. Employers need to ensure staff are adequately trained to work alongside AI systems and that any potential hazards are mitigated.
At the end of the employment life cycle, there may be further legal implications arising from the increased use of AI to perform human tasks. In particular, there is widespread concern about machines taking over jobs and employees being laid off as a result. This is likely to result in legal challenges as the line between a genuine redundancy and replacement by AI becomes blurred.
While the progression of AI is inevitable and will bring significant benefits to workplaces, there are also potential pitfalls and risks to employers, which must be actively managed.
This is important because, when an AI system (or its use) creates an issue, determining who is responsible for errors or misconduct can be complex.
The recent case of two US lawyers who used generative AI to prepare legal submissions provides a clear example of how it can all go wrong.
In this case, two US lawyers and their law firm were ordered to pay US$5,000 for submitting fake court cases generated by ChatGPT. Lawyer Steven Schwartz used ChatGPT to find supporting cases for his client's injury claim, unaware that the information was fabricated. His colleague, Peter LoDuca, did not review Schwartz's citations, trusting his long-time colleague's work.
The judge noted that, while using AI was not inherently improper, lawyers have a responsibility to verify the accuracy of their work. The judge found that the lawyers had failed in that duty by submitting non-existent judicial opinions and then persisting in defending them even after their authenticity was questioned.
AI is set to transform how work is performed. As these changes become more prevalent, employers will need to consider whether limitations should be placed on AI usage and, if so, what those limitations might be.
It will also be important for employers to create clear accountability frameworks, including establishing protocols to address any issues that may arise from AI usage and ensuring that employees understand their responsibilities when interacting with AI systems. This is going to be an interesting ride.
Originally published in The Post