With the rapid integration of Artificial Intelligence (AI) into all parts of our daily lives, it was only a matter of time before the New Zealand courts were required to deal with the issue of hallucinated/fictitious cases created by generative AI.
Generative AI is a type of artificial intelligence that can create new content, such as text, images and videos, by learning patterns from existing data. It uses machine learning models to generate outputs similar to the data it was trained on.
In a recent Employment Court decision, LMN v STC, a self-represented litigant cited a supposed precedent to support their argument for procedural leniency due to financial hardship.
However, the Court quickly dismissed the proposition, finding that in fact “no such case exists.” The judgment noted that the case citation appeared to have been generated by AI and gently reminded the litigant that information produced by generative tools “ought to be checked” before being relied upon in court proceedings.
Given the litigant was self-represented, this appears to have been regarded simply as an oversight. Had a lawyer done the same, however, it could have amounted to a serious breach of their professional duties and obligations as an officer of the court, including the duty not to mislead the court, with significant repercussions.
Whilst the Court did not dwell on the error or penalise the individual in this case, the incident raises a pressing question: how do we manage AI-generated content in professional settings, especially when the stakes are high?
This question is becoming more significant because of the risks associated with AI-generated content. Such risks largely stem from the design of these tools, which operate by analysing and reflecting existing datasets and filling in gaps where information is not available.
As a result, these tools may perpetuate societal biases present in those datasets and “hallucinate” facts to fill the gaps, generating false or misleading information that can be harmful.
So, when these tools do make errors, how do we determine fault? Is it the user’s responsibility to verify the accuracy of the information? Should AI developers be held accountable for inaccurate or fictional content produced by their algorithms? Or should fault lie with the organisation or business that allows an individual to use such tools as part of their work?
Determining responsibility can be complex, and there is an inherent paradox with AI decision-making: is someone at fault, or is everyone at fault?
For example, where should fault lie when a financial services firm uses AI to evaluate credit scores and issue loans, but the assessment is inadequate and the borrower defaults? Or when AI is used in the health sector to address the problem of wait times, where an inadequate assessment could be fatal?
Beyond “hallucinations” and inaccurate content, AI generated material also raises issues of plagiarism. When users prompt these tools to generate content, the output may inadvertently replicate existing works without proper credit.
AI plagiarism raises a plethora of both ethical and legal concerns, but ultimately the same question arises: where should fault lie if plagiarism causes harm?
Currently, we do not have strict regulations or liability frameworks to answer this question. For now, we will need to continue relying on ad hoc decisions from institutions and courts as issues arise.
But perhaps we should be thinking more proactively. Nobody seriously doubts that AI will transform how work is performed, yet we seem to be meandering along under the impression that we have it under control. Which we probably do, until we don’t.
So, whilst the LMN case is not a scandal as such, it is a cautionary tale. It highlights the growing pains of integrating powerful new technologies into traditional systems.
The Court’s response was measured and appropriate, offering guidance for how to address AI-related missteps with both firmness and fairness. This case was relatively low risk, but as AI continues to evolve, so too must our frameworks for using it responsibly.
Whether in the courtroom, the classroom, or the boardroom, the message is clear: AI is a tool, not a truth-teller. And like any tool, it must be used with care.
Originally published in The Post