These are some of the main risks of large language models (LLMs) highlighted in updated Bar Council ethics and practice guidance. The guidance concludes that barristers must remember they are ultimately responsible for any legal work they produce.
Lawyers acting in a number of cases this year have inadvertently included AI-hallucinated fake cases and citations in their filings.
The updated ethics guidance notes that LLMs have no conscience and lack social and emotional intelligence. It refers to recent case law on the subject, as well as academic research into the reliability of AI-assisted legal research.
Barbara Mills KC, chair of the Bar Council, said: ‘As the guidance explains, the best-placed barristers will be those who make the efforts to understand these systems so that they can be used with control and integrity.’