
- Explains what AI hallucinations are, what risks they present, and how the US, EU and UK are responding.
Many of us will doubtless have read about the New York lawyer who used ChatGPT to help write a legal brief. Six of the cases cited in the brief were fictitious; ChatGPT invented them and even reassured the lawyer concerned, Steven Schwartz, that they were real, saying they could be found on LexisNexis and Westlaw. Schwartz was subsequently fined $5,000 for his actions.
What are AI hallucinations?
This case is an example of AI ‘hallucination’, where an AI chatbot, like ChatGPT, invents something, such as a legal case, in an effort to answer its user’s questions. Given the supposed ‘super intelligence’ of AI, one might assume that these hallucinations are rare. This, however, is far from true. A Stanford University study found that general-purpose AI chatbots hallucinate between 58% and 82% of the time when answering legal queries.