It is also calling for clearer guidance on AI use in preparing court documents—pleadings, skeleton arguments, witness statements, expert reports and other documents such as chronologies, lists of issues and neutral case summaries. It has asked the Solicitors Regulation Authority (SRA) to review its code of conduct and issue ‘clear’ guidance on AI use in the preparation of court documents, and recommended that HM Courts and Tribunals Service produce ‘simple and accessible’ guidance on AI use in court.
Potential risks of AI include breaches of confidentiality and data protection, as well as inherent bias, unfair outcomes, and inaccuracies or ‘hallucinations’ of entirely fictitious cases.
Ian Jeffery, Law Society chief executive, said: ‘There have to be safeguards for accuracy and fairness that build public trust in the system.
‘A range of measures, including training and good governance of AI systems, must work alongside new rules on transparency. Clear guidance is needed to help legal professionals and the public navigate this new AI era.
‘Other countries are acting to create the right conditions for responsible AI use in the courtroom and there is no reason why we shouldn’t too.’
The Civil Justice Council set up a working group on AI, led by Lord Justice Birss, in January last year. Its interim report and consultation, ‘Use of AI for preparing court documents’, which closed last month, noted that AI had transformed the way legal professionals worked but still carried ‘significant risk’. It proposed that legal representatives be required to make a declaration where AI ‘has been used to generate evidence on which the court is being asked to rely’, but not where it is used for transcription or spellchecking.