The guidance, issued this week, highlights that any information put into public AI chatbot tools 'should be seen as being published to all the world', and that no private or confidential information should therefore be entered into them.
It advises judges to check the accuracy of any information provided by an AI tool, as such information 'may be inaccurate, incomplete, misleading or out of date', and AI tools may 'make up fictitious cases, citations or quotes, or refer to legislation, articles or legal texts that do not exist'.
The guidance also notes that 'AI chatbots are now being used by unrepresented litigants. They may be the only source of advice or assistance some litigants receive'.
It suggests AI tools can be used for administrative tasks, such as writing emails or presentations, but not for legal research or analysis. The guidance can be viewed here.