
- Recent judgments have signalled a growing need for clearer ethical guidance, practitioner safeguards and judicial consistency in handling AI-generated materials.
- The article challenges emerging judicial tendencies to discount or discredit AI-generated content without evidentiary justification, warning of the risks of procedural unfairness and anti-innovation bias.
- It proposes practical steps to help legal professionals adapt responsibly.
Generative artificial intelligence (AI) tools such as ChatGPT are increasingly being used to assist with legal drafting, research and summary writing. As access to such tools has widened, so too has the potential for unintended misuse, particularly where lawyers, litigants or tribunal users submit AI-generated content containing inaccuracies, fabricated case law or stylistic features that attract suspicion.
Recent decisions in England and Wales, and in Ireland, reveal how courts and tribunals are beginning to respond to this development. This article explores seven illustrative cases, drawing attention to outright misuse but also to a