Lawyer caught using AI in court... then gets caught using AI while denying he used AI

Text: Óscar Ontañón Docal
Published 2025-10-23

New York. If you thought using AI in court documents was risky, imagine getting caught and then using AI again to explain that you hadn't used it in the first place. And then getting caught again. Well, that's exactly what happened to Michael Fourte, a New York lawyer.

The saga (first reported by 404 Media) began when Fourte was caught submitting court filings riddled with hallucinations, all generated by... AI. Rather than admitting his mistake, Fourte tried to explain himself. Only problem? The explanation was also drafted by AI.


Judge Joel Cohen wasn't impressed. In his ruling, Cohen pointed out the obvious: Fourte relied upon unvetted AI to defend his use of unvetted AI. Translation: he tried to use a robot to explain why the robot messed up in the first place.

The drama didn't stop there. After the plaintiffs flagged the errors, Fourte insisted AI wasn't even involved, demanded proof, and claimed the cited cases weren't fabricated. "The cases are not fabricated at all," he said, despite the fact that they were very much fabricated.

Eventually, he admitted to using AI, claimed "full responsibility," and simultaneously tried to shift the blame to his team. Ultimately, the court granted sanctions against Fourte, proving that there is indeed a limit to how far AI excuses can stretch in court. A cautionary tale for lawyers...
