What happened
The US Tax Court is developing guardrails against artificial intelligence misuse, particularly concerning self-represented litigants and sensitive taxpayer data. Judge Mark V. Holmes recently criticised an attorney in Clinco v. Commissioner for citing AI-generated, non-existent precedents, signalling the court's intent to confront the problem. The court notes that over three-quarters of its cases involve pro se parties, which complicates disciplinary action. Existing tools such as IRC Section 6673 already allow penalties of up to $25,000 for frivolous arguments. This follows sanctions in other courts for AI-generated hallucinations, including a $1,000 penalty imposed by the Tenth Circuit and attorney disqualifications in Johnson v. Dunn.
Why it matters
Formalising AI misuse rules in the US Tax Court introduces new compliance requirements for legal professionals and tax preparation services. Procurement teams must scrutinise AI tool capabilities and vendor liability, as reliance on unverified AI output risks penalties of up to $25,000 and career-ending sanctions. Security architects face heightened demands to protect sensitive taxpayer information, including Social Security and bank account numbers, from potential leakage through AI systems.