It’s one of the biggest legal fights of our era – and most people are only catching fragments of it through headlines. Can an AI company train its models on your book, your article, your music, without your permission? And if it does, is that theft – or is it just… how technology works?
US courts are trying to answer those questions right now. In 2026, the stakes couldn’t be higher.
The core question: is AI training "fair use"?
Let’s start with the basics, because this matters.
Courts are now confronting a critical question: does training AI on copyrighted material qualify as fair use, or does it require compensation to creators? Following major lawsuits from publishers, authors, and entertainment giants – and a historic $1.5 billion settlement – judges have issued divided rulings. Some see AI training as transformative innovation, while others warn it could overwhelm creative markets and undermine the very purpose of copyright law. (Barsiklaw)
That tension – innovation vs. creator rights – is at the heart of everything happening right now. And it’s not an easy one to resolve. Reasonable people, including reasonable judges, are landing in very different places.
The landmark cases shaping US copyright law in 2026
Bartz v. Anthropic: the $1.5 billion settlement
A group of authors sued Anthropic in a class action lawsuit, alleging that Anthropic illegally copied their books. After significant briefing, the court ruled that AI training on copyrighted books constitutes fair use, but storing pirated copies does not. After that ruling, the case settled for $1.5 billion, with an estimated payout of approximately $3,000 per work. (Norton Rose Fulbright)
The ruling itself was nuanced – the training was arguably legal, but the method of obtaining the data was not. That distinction is crucial, and it’s something other AI companies are watching very closely.
Kadrey v. Meta: a parallel story, a different emphasis
In Meta’s case, district judge Vince Chhabria also sided with the technology company, but he focused his ruling on the issue of whether Meta had harmed the market for the authors’ work. Chhabria argued that the key question in virtually any case where a defendant has copied someone’s original work without permission is whether allowing such conduct would substantially diminish the market for the original. (MIT Technology Review)
Same outcome as the Anthropic case. Two very different legal roads to get there. And that divergence – two judges, two theories, both ruling for the AI companies – leaves a lot of room for future courts to go a different direction entirely.
Just two days after the Anthropic ruling, Judge Chhabria sided with Meta but cautioned that AI training "in many circumstances" would not qualify as fair use. He raised concerns that generative AI could flood the market with new content and erode incentives for human creators. (PYMNTS)
That warning is easy to miss when you just see "Meta wins." But it matters.
In re OpenAI: 20 million logs and counting
The OpenAI litigation is the most sprawling of the bunch. US District Judge Sidney Stein affirmed an order compelling OpenAI to produce the entire 20 million-log sample of ChatGPT conversations, not just the subset implicating plaintiffs’ specific works. The ruling marked a significant discovery victory for the news organizations and authors suing OpenAI. (Jones Walker LLP)
And then – as if that weren’t enough – on March 9, 2026, the court ordered OpenAI to produce additional reservoirs of 78 million and 10 million logs, on top of the 20 million already ordered. Discovery is ongoing. (Norton Rose Fulbright)
Those logs could prove to be the most important evidence in the entire AI copyright debate. If they show that ChatGPT routinely produces outputs that substitute for or compete with copyrighted content, OpenAI’s fair use defense becomes substantially harder to sustain.
The Supreme Court draws a line: no copyright for purely AI-generated work
Beyond the training data debate, there’s another major copyright question: can an AI itself hold a copyright?
On March 2, 2026, the US Supreme Court denied certiorari in Thaler v. Perlmutter, leaving in place the lower court’s decision confirming that material must have human authorship to be copyrightable. (Norton Rose Fulbright)
The Copyright Office emphasized that hundreds of works incorporating AI have been registered when a human author is present and exercises creative input. For now, works made solely by autonomous AI are not eligible for copyright protection in the United States. (Morgan Lewis)
So: AI as a tool? Fine, potentially protectable. AI as the sole creator, with no human in the loop? No protection, full stop. That’s the law as it stands today.
What’s coming next – and what it means for everyone
More hearings are scheduled in 2026 in disputes involving Anthropic and music publishers, Google and visual artists, Stability AI, and AI music generator companies. Upcoming rulings could either bring clarity to how fair use applies to AI or deepen the legal uncertainty, shaping whether the industry operates under broad fair use protections or a licensing framework that alters its economic model. (PYMNTS)
Meanwhile, the Copyright Office itself is making moves. In March 2026, it published a notice of proposed rulemaking requesting public comment on proposed changes to its fee schedule – the first adjustment since 2020 – citing significantly increased costs due to inflation and other factors. (U.S. Copyright Office)
The industry is watching. Creators are watching. And the outcomes of these cases will ripple far beyond the US – because what American courts decide about AI and copyright tends to set the tone globally.
What should creators and businesses do right now?
A few practical takeaways:
- If you’re using AI to create content, ensure a human is meaningfully involved in the creative process and is named as author – otherwise, you may not be able to copyright the result.
- If you’re an AI company, the "we used public data" defense is no longer sufficient on its own. The method of acquiring that data, and what your outputs look like, are both under scrutiny.
- If you’re a rights holder, document your works carefully. The cases moving toward trial in 2026 will test how well plaintiffs can prove market harm.
- And if any of this feels overwhelming – that’s probably because it is. US copyright law in the age of AI is genuinely complex, and it’s evolving faster than most legal frameworks can keep up.
FAQ – AI and copyright in the US
Can AI companies legally train on copyrighted works? US courts have issued divided rulings. Some found AI training to be fair use; others warned it may not qualify. No definitive Supreme Court ruling exists yet, and the question remains actively litigated.
Is content generated by AI protected by copyright in the US? Not if it’s created purely by AI with no human creative involvement. The Supreme Court declined to extend copyright protection to fully autonomous AI-generated works in March 2026.
What was the Anthropic copyright settlement? Anthropic settled a class action lawsuit brought by a group of authors for $1.5 billion, after a court found that while AI training itself may be fair use, storing pirated copies of books is not.
When will US courts issue clearer rulings on AI and fair use? Major fair use decisions in ongoing cases are expected no earlier than summer 2026. Multiple circuit appeals are also pending.
Found this article useful? Share it with someone navigating the AI and copyright landscape – there’s a lot of confusion out there, and good information is genuinely hard to find.
Also read: What entered the US public domain in 2026 – and what it means for creators