
Analysing seven recent cases, including Ayinde, Al-Haroun and Harber, Ma warns that judges are beginning to discount submissions purely because they ‘look like’ they came from ChatGPT. Such suspicion, he says, risks procedural unfairness and chills innovation. While fabricated citations have rightly drawn sanctions, many litigants, especially those acting in person, are penalised for honest, disclosed use of AI tools.
Ma urges regulators and the judiciary to embrace AI literacy, not reflex scepticism, through verification protocols, training, and consistent guidance. His call: regulate and educate, don’t alienate. ‘AI is here to stay,’ he writes, ‘and the courts must learn to live with it.’