German journalist Martin Bernklau made a shocking discovery earlier this year when he typed his name into Microsoft's AI tool, Copilot.
"I read there that I was a 54-year-old child molester," he tells ABC Radio National's Law Report.
The AI-generated text claimed that Bernklau had confessed to the crime and was remorseful.
But that's not all.
Microsoft's AI tool also described him as an escapee from a psychiatric institution, a con-man who preyed on widowers, a drug dealer and a violent criminal.
"They were all court cases I wrote about," Bernklau says.
The tool had conflated Bernklau's news reporting with his personal history, presenting him as the perpetrator of the crimes he'd reported on.
It also published his real address and phone number, and a route planner to reach his home from any location.
When AI tools produce false results, it's known as an "AI hallucination".
Bernklau isn't the first to experience one. But his story is at the forefront of how the law and AI intersect.
And right now, it's all pretty messy.
To take Copilot to court or not
When Bernklau found the hallucinations about him, he wrote to the prosecutor in Tübingen, the German city where he's based, as well as the region's data protection officer. For weeks, neither responded, so he decided to go public with his case.
TV news stations and the local newspaper ran the story, and Bernklau hired a lawyer who wrote a cease-and-desist demand.
"But there was no reaction by Microsoft," he says.
He's now unsure of what to do next.
His lawyer has advised that if he takes legal action, it could take years for the case to get to court and the process would be very expensive, with potentially no positive result for him.
In the meantime, he says his name is now completely blocked and unsearchable on Copilot, as well as other AI tools such as ChatGPT.
Bernklau believes the platforms have taken that action because they're not able to extract the false information from the AI model.
AI sued for defamation
In Australia, another AI hallucination affected Brian Hood, the mayor of Hepburn Shire Council in regional Victoria, who was wrongly described by ChatGPT as a convicted criminal.
Councillor Hood is in fact a highly respected whistleblower who discovered criminal wrongdoing at a subsidiary of the Reserve Bank of Australia.
He launched legal action against OpenAI, the maker of ChatGPT, which he later dropped because of the enormous cost involved.
If he'd gone through with suing OpenAI for defamation, Councillor Hood may have been the first person in the world to do so.
'Not an issue that can be easily corrected'
In the US, a similar action is currently proceeding.
It involves a US radio host, Mark Walters, who ChatGPT incorrectly claimed was being sued by a former workplace for embezzlement and fraud. Walters is now suing OpenAI in response.
"He was not involved in the case … in any way," says Simon Thorne, a senior lecturer in computer science at Cardiff School of Technologies, who has been following the embezzlement case.
Mr Walters' legal case is now up and running, and Dr Thorne is very interested to see how it plays out — and what liability OpenAI is found to have.
"It could be a landmark case, because one imagines that there are many, many examples of this," he says.
"I think they're just waiting to be discovered."
But when they are, there may not be a satisfying resolution.
"[Hallucinations are] not an issue that can be easily corrected," Dr Thorne says.
"It's essentially baked into how the whole system works.
"There's this opaqueness to it ... We can't work out exactly how that conclusion was reached by ChatGPT. All we can do is notice the outcome."
Could AI be used in court?
AI doesn't only feature in complaints. It's also used by lawyers.
AI is increasingly used to generate legal documents like witness or character statements.
Lawyer Catherine Terry is heading a Victorian Law Reform Commission inquiry into the use of AI in the state's courts and tribunals.
But Ms Terry says there's a risk of undermining "the voice of the person", an important element of court evidence.
"It may not always be clear to courts that AI has been used … and that's another thing courts will need to grapple with as they start seeing AI being used in statements before the court," she says.
Queensland and Victorian state courts have issued guidelines requiring that courts be informed when lawyers have relied on AI in any material they present in a case.
But in future, courts could be using AI, too.
"AI could be used for efficiency in case management [or] translation," Ms Terry says.
"In India, for example, the Supreme Court translates the hearings into nine different local languages."
AI could also be used in alternative dispute resolution online.
It's further fuel for clear legal regulation of AI, both for those who use it and for those affected by its mistakes.
"AI is really complicated and multi-layered, and even experts can struggle to understand and explain how it's used," Ms Terry says.
Ms Terry welcomes submissions to the commission's AI inquiry, to help increase clarity and safety around AI in the legal system.