The use of artificial intelligence (AI) in legal proceedings has become increasingly prevalent, but it has also produced a concerning trend: AI-generated hallucinations appearing in court submissions. These hallucinations, in which AI systems produce false or misleading information that appears plausible, have caused significant problems in courtrooms around the world, including in Australia. This article explores the growing problem of AI hallucinations in legal cases, examining a number of examples and their consequences.
The phenomenon of AI hallucinations in legal proceedings gained widespread attention with the 2023 Mata v Avianca case in the United States District Court for the Southern District of New York. In that case, lawyers representing a plaintiff suing an airline for personal injury used ChatGPT to prepare their submissions. The AI system cited six non-existent cases and provided fake quotes from the fabricated judgments, and the lawyers failed to verify the authenticity of the citations before filing them with the court. The court found that the lawyers had acted in bad faith, sanctioned them and fined both the lawyers and their firm, while the plaintiff's underlying claim was dismissed. The episode served as a wake-up call for the legal profession about the risks of relying on AI output without proper verification.
Following Mata v Avianca, similar incidents began to surface in other jurisdictions. In Canada, the Supreme Court of British Columbia dealt with a matter in which counsel mistakenly filed a notice of application referring to legal authorities fabricated by ChatGPT. In Zhang v Chen, 2024 BCSC 285, the court ordered the lawyer personally to pay the costs occasioned by the fabricated citations, including the other party's work in researching and exposing them. The decision highlighted the financial consequences that can follow when lawyers fail to verify AI-generated information.
Australia has not been immune to this issue. In a recent case before the Federal Circuit and Family Court of Australia, a lawyer filed a list of case authorities, generated with ChatGPT, that turned out to be entirely fabricated. The presiding judge became suspicious when several of the cited cases could not be located in legal databases. Upon inquiry, it emerged that the lawyer had used ChatGPT to generate the list of authorities without independently verifying that the cases existed. The immediate consequences included the striking out of the submissions containing the fabricated citations and a delay in the proceedings.
The problem has even reached the highest levels of the Australian judiciary. In a recent appeal to the Full Court of the Federal Court of Australia, an appellant sought the recusal of two appeal judges. The submissions supporting the application in respect of one of the judges referred to a case that did not exist, leading the Court to suspect that they had been prepared by a large language model that had hallucinated. Notably, the Court demonstrated its awareness of the issue by redacting the hallucinated reference from the judgment to prevent it from being "propagated further by artificial intelligence systems having access to these reasons".
This incident is particularly significant as it involves a litigant who has filed over 50 unsuccessful applications to the Federal Court since 2000. The Court's decision to hold a separate hearing to determine whether to declare the appellant a vexatious litigant underscores the potential for AI hallucinations to exacerbate existing issues with problematic litigants.
The problem of AI hallucinations in legal proceedings is not limited to common law jurisdictions. In Germany, a lawyer faced disciplinary action after submitting a brief to the Federal Constitutional Court that contained AI-generated case citations. The German Bar Association launched an investigation into the matter, emphasising the importance of maintaining the integrity of legal submissions.
In India, the Delhi High Court encountered a case where a lawyer presented arguments based on non-existent precedents generated by an AI tool. The court reprimanded the lawyer and issued guidelines for the use of AI in legal research, requiring lawyers to disclose the use of AI tools and verify all AI-generated information before submission.
The issue has also emerged in international arbitration. In a commercial arbitration case seated in Singapore, one party's submissions included references to arbitral awards that were later discovered to be AI-generated fabrications. The arbitral tribunal issued a procedural order addressing the use of AI in the proceedings and emphasising the parties' obligation to verify all sources.
These incidents have prompted legal professionals and institutions worldwide to reconsider their approach to AI in legal practice. The American Bar Association has issued guidelines on the ethical use of AI in legal practice, emphasising the need for human oversight and verification. Similarly, the Law Society of England and Wales has published a report on AI and the legal profession, highlighting the risks of AI hallucinations and recommending best practices for lawyers using AI tools.
In Australia, the Law Council of Australia has established a task force to develop guidelines for the responsible use of AI in legal practice. The New South Wales Bar Association has called for mandatory AI ethics training for lawyers, while the Law Society of New South Wales is considering implementing similar requirements.
The judiciary has also taken steps to address the issue. The Federal Court of Australia is reviewing its procedures and exploring the implementation of AI detection tools to identify potentially fabricated submissions. Some courts have begun requiring lawyers to certify that their submissions do not contain AI-generated content or, if they do, that all such content has been independently verified.
The consequences of relying on AI-generated hallucinations in legal proceedings can be severe. In addition to the sanctions and fines imposed in cases like Mata v Avianca, lawyers may face professional disciplinary action. Bar associations and law societies worldwide are grappling with how to address ethical violations related to the use of AI in legal practice.
Moreover, the use of AI hallucinations can have significant impacts on the outcomes of cases. In a family law case in Australia, a judge gave little weight to character evidence that was suspected to be AI-generated, potentially affecting the final decision. In a criminal case in the United States, a defendant's appeal was jeopardised when it was discovered that their lawyer had relied on AI-generated case law in their submissions.
The problem of AI hallucinations in legal proceedings raises important questions about the nature of legal research and the role of technology in the practice of law. While AI tools can significantly enhance efficiency and provide valuable insights, they also introduce new risks that must be carefully managed.
Legal professionals must develop new skills to effectively use AI tools while maintaining the integrity of their work. This includes understanding the limitations of AI systems, developing strategies for verifying AI-generated information, and knowing when to rely on traditional legal research methods.
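By way of illustration only, the short Python sketch below shows one simple verification aid: it extracts Australian-style medium neutral citations (for example, [2023] FCA 1234) from a draft submission and prints them as a checklist for manual confirmation against official sources such as AustLII. The court abbreviations, case names and sample draft are illustrative assumptions rather than a complete or authoritative list, and the script does not itself confirm that any cited case is genuine.

```python
import re

# Illustrative, non-exhaustive list of Australian court abbreviations used in
# medium neutral citations; a real tool would need a much more complete list.
COURTS = r"(?:HCA|FCAFC|FCA|FedCFamC1A|FedCFamC2F|NSWCA|NSWSC|VSCA|VSC|QCA|QSC)"

# Matches citations of the form "[2023] FCA 1234".
CITATION_PATTERN = re.compile(r"\[(?:19|20)\d{2}\]\s+" + COURTS + r"\s+\d{1,4}")

def extract_citations(draft_text: str) -> list[str]:
    """Return the unique medium neutral citations found in a draft submission."""
    return sorted({match.group(0) for match in CITATION_PATTERN.finditer(draft_text)})

if __name__ == "__main__":
    # Hypothetical draft text used purely to demonstrate the extraction step.
    draft = (
        "The principle was confirmed in Smith v Jones [2023] FCA 1234 and "
        "applied in Re Example Pty Ltd [2021] NSWSC 56."
    )
    print("Citations to verify manually against AustLII or another official source:")
    for citation in extract_citations(draft):
        print(" -", citation)
```

A tool of this kind can only surface candidates for checking; the verification itself still depends on a human locating and reading each authority in an official database, which is precisely the step that was skipped in the cases discussed above.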
Law schools are beginning to incorporate AI ethics and skills into their curricula. Some institutions have introduced courses on AI and the law, teaching students how to use AI tools responsibly and how to identify potential AI hallucinations in legal research.
The issue of AI hallucinations in legal proceedings also highlights broader concerns about the impact of AI on the legal system. As AI systems become more sophisticated, there are concerns about the potential for deliberate misuse, such as the creation of convincing fake precedents or the manipulation of legal arguments.
Some legal scholars have called for the development of AI-specific rules of evidence and procedure to address these challenges. Others argue for the creation of specialised courts or tribunals to handle cases involving AI-generated content.
The problem of AI hallucinations in legal proceedings is not limited to written submissions. There have been instances of lawyers using AI-generated images or videos as evidence, raising questions about the admissibility and reliability of such material. In a recent case in the United Kingdom, a lawyer attempted to introduce AI-generated recreations of a crime scene, prompting a debate about the appropriate use of such technology in court.
As AI systems continue to evolve, the legal profession must remain vigilant and adaptive. The incidents of AI hallucinations in legal proceedings serve as a reminder of the importance of human judgment and ethical responsibility in the practice of law. While AI tools can be valuable assets, they cannot replace the critical thinking, ethical decision-making, and professional judgment that are at the core of legal practice.
The challenge for the legal profession is to harness the benefits of AI while mitigating its risks. This will require ongoing education, the development of clear ethical guidelines, and a commitment to maintaining the highest standards of professional conduct. As the recent case in the Full Court of the Federal Court of Australia demonstrates, even the highest levels of the judiciary are grappling with these issues.
The problem of AI hallucinations in legal proceedings is likely to persist and evolve as AI technology continues to advance. Legal professionals, courts, and regulatory bodies must work together to develop effective strategies for addressing this challenge. This may include the development of AI-specific legal research tools that are less prone to hallucinations, the implementation of AI detection software in court filing systems, and the creation of clear protocols for the use and verification of AI-generated content in legal proceedings.
Ultimately, the responsibility for ensuring the accuracy and integrity of legal submissions rests with the lawyers themselves. As the cases discussed in this article demonstrate, the consequences of relying on AI-generated hallucinations can be severe, both for individual lawyers and for the administration of justice as a whole. Legal professionals must remain vigilant, critically evaluate all sources of information, and uphold the highest standards of ethical practice in the age of artificial intelligence.
As the legal profession continues to navigate the challenges posed by AI hallucinations, it is clear that this issue will remain a significant concern for years to come. The recent case in the Full Court of the Federal Court of Australia, where the Court took the proactive step of redacting a hallucinated reference to prevent its propagation, demonstrates a growing awareness of the problem and a willingness to take action to address it. However, as the postscript to that case indicates, the intersection of AI hallucinations and vexatious litigation presents additional challenges that the courts will need to address.
The legal profession stands at a crossroads, balancing the potential benefits of AI with the risks it presents. How the profession responds to the challenge of AI hallucinations will shape the future of legal practice and the administration of justice in the digital age.