
Attorneys, beware 'AI hallucinations': the real consequences of fabricated citations

As the use of artificial intelligence (AI) becomes part of daily life, from academic to legal research, a recent High Court judgment has once again shown that blind trust in the information provided by AI can have dire consequences.

Image source: Freepik

A law firm has been penalised and is facing a referral to the Legal Practice Council after non-existent cases, likely generated by AI, were cited during an application for leave to appeal. According to Retha Beerman and Safee-Naaz Siddiqi, Knowledge Management specialists at Cliffe Dekker Hofmeyr (CDH), this case serves as a grave warning and an urgent summons for the legal profession to employ stringent safeguards against professional negligence in the age of AI.

“Ultimately, legal practice in South Africa is at a crossroads,” say Beerman and Siddiqi. “We can embrace AI’s potential to improve efficiency and access to justice, but only if we remain vigilant, using reliable databases and cultivating a culture where verifying citations is second nature.”

The case in a nutshell

In Mavundla v MEC Department of Co-Operative Government and Traditional Affairs and Others, the applicant’s legal team sought leave to appeal against a prior High Court ruling but relied on seven non-existent cases. Despite multiple opportunities, the team failed to verify these references, raising suspicions that a generative AI tool had been used without oversight.

The presiding judge criticised the team’s negligence and lack of accountability, especially as the candidate legal practitioner denied using AI and the firm’s senior principal offered little reassurance. The judge ultimately dismissed the application, penalising the attorneys by ordering them to pay certain costs from their own pockets and referring the matter to the Legal Practice Council for possible professional misconduct proceedings.

The broader issue: 'AI hallucinations'

The incident highlights an unsettling flaw, sometimes referred to as "AI hallucinations", in which an AI engine confidently produces plausible-sounding but ultimately fictional references. Beerman and Siddiqi warn that these bogus authorities can appear deceptively legitimate, even to the trained eye, often complete with case numbers, year citations and made-up judicial remarks. “In fast-paced legal practice, practitioners under time pressure may mistakenly accept these results as genuine unless they diligently confirm them against trustworthy sources.”

The real harm arises because legal argument depends on accurate precedent. When false citations slip through, legal practitioners risk embarrassment, costs orders and damage to the court’s trust in counsel’s integrity. Beerman and Siddiqi explain, “In South Africa, which is grounded in constitutional values and a strong tradition of precedent, any contamination of the record by fake cases undermines the credibility of the entire legal system.”

Ethical duties and the need for vigilance

South African legal practitioners owe a fundamental duty of candour to the court, as enshrined in the Code of Conduct for Legal Practitioners. The judge in Mavundla underscored that courts rely on counsel to cite real and relevant authorities. Whether caused by negligence, over-reliance on AI or supervision lapses, presenting fictitious precedents to a court is the direct opposite of that duty.
“Candidate and junior legal practitioners, in particular, may be tempted to rely on AI for efficiency,” say Beerman and Siddiqi. “However, this does not absolve them – or their supervising principals – of the ethical obligation to ensure all submissions are accurate.”

Many ethical and hard-working legal professionals may feel uneasy about their ability to navigate a safe path in a world where generative AI poses problems they do not understand and demands skills they do not have. Beerman and Siddiqi argue that ignoring this skills gap, and failing to gain a deeper comprehension of emerging technologies, could be considered an ethical lapse in itself.

Vigilance is non-negotiable in legal practice, and the cornerstone of AI-assisted legal research is meticulous verification. Beerman and Siddiqi explain, “No matter how convincingly an AI tool presents a source, legal practitioners must always confirm its authenticity and relevance, reading the original judgments to avoid citing non-existent cases or misrepresenting the law.”

This judgment should serve as a catalyst for conversations about how best to integrate AI into a legal environment founded on precision. “While the technology undeniably streamlines research, it must never replace a lawyer’s critical judgement. Indeed, AI is most beneficial when used in concert with human expertise: legal practitioners must do the heavy lifting to confirm, interpret and apply the law,” Beerman and Siddiqi conclude.