• Colorado Wants to Regulate Your AI — and You Are the Deployer

    Colorado's AI Act takes effect on June 30, and its deployer obligations apply to anyone who uses an AI system as a substantial factor in making consequential decisions — including law firms. "Legal services" is one of the statute's eight enumerated categories. Most of the legal profession has not yet grappled with the fact that it sits on the regulated side of this law.

  • The Errors Are More Interesting Than the Apology

    Sullivan & Cromwell’s AI-contaminated bankruptcy filing has drawn coverage for the firm’s apology. The three-page errata is more revealing: errors suggesting that AI corrupted correct citations during editing, a rigorous compliance program that failed anyway, and a supervision obligation the firm’s letter concedes without naming.

  • The Trained Volunteer Lost. The Chatbot Should Worry.

    A federal court dismissed Upsolve's challenge to New York's unauthorized-practice-of-law rules, holding that trained non-lawyers cannot give individualized legal advice — even for free, even with safeguards, even with disclaimers. The opinion never mentions AI. But it describes AI legal tools more precisely than any opinion that has.

  • New York Wants to Ban Your Chatbot From Answering Questions

    New York Senate Bill S7263 would impose civil liability on chatbot proprietors whose systems provide "substantive" responses in areas reserved for licensed professionals — and it declares that disclosing the chatbot's non-human status is no defense. The bill's impulse is understandable, but its mechanism confuses information with advice and would suppress exactly the kind of public legal education that existing law permits.

  • The Model Will Not Push Back

    Hallucination gets the headlines, but sycophancy may be the more dangerous failure mode for lawyers. An LLM that systematically validates your reasoning instead of challenging it functions as a mirror, not counsel. And mirrors make poor advisors.

  • You Probably Have a Duty to Warn Your Clients About ChatGPT

    Heppner established that consumer AI conversations are not privileged. But the case also raises an uncomfortable question for practicing lawyers: if a known hazard to the privilege now exists, do you have a duty to warn your clients about it? The answer, under existing ethics rules, is almost certainly yes.

  • Your AI Conversations Are Not Confidential — And a Federal Court Just Said So

    A comparison of Anthropic's data-handling policies across Claude's consumer and commercial tiers — and why the distinction now carries real legal consequences after the SDNY's decision in United States v. Heppner.
