Plaintiffs Fail to Provide Specific Evidence of Damages
"Unauthorized Data Collection and Training Destroy the Creative Market"
Legal Interpretation of Fair Use Still Pending

A U.S. court has ruled in favor of Meta in a copyright infringement lawsuit over artificial intelligence (AI). The court stated that the plaintiffs failed to provide specific evidence of damages, but clarified that the ruling does not mean Meta’s actions were deemed legal. With a separate precedent already holding that data collection for AI training does not qualify as fair use, the decision is expected to significantly influence the outcomes of similar lawsuits in the future.
Outcome Hinged on ‘Litigation Strategy Failure,’ Not Legality
On June 26 (local time), Reuters reported that the previous day, the U.S. District Court in San Francisco, California (Judge Vince Chhabria presiding) dismissed the AI copyright infringement lawsuit filed against Meta Platforms and ruled in favor of the defendant. The court stated, “The plaintiffs failed to demonstrate that Meta’s actions encroached on the market for their works” and added, “The plaintiffs pursued the wrong legal approach and did not provide sufficient supporting records.”
The lawsuit, filed in 2023, involved a group of American authors who claimed that Meta’s AI language model “LLaMA” was trained on their books without authorization. The plaintiffs argued that their copyrighted works had been used for LLaMA’s training after being illegally shared online. They also contended that the AI-generated content included sentences similar to their original works or imitated their writing styles, thereby infringing on their economic rights as creators. Given that the AI was used for commercial purposes, they asserted that fair use should not apply.
However, the court found the plaintiffs’ arguments legally insufficient, citing a lack of specific evidence demonstrating actual harm and the mechanisms by which it occurred. The ruling explicitly stated, “This decision does not mean that Meta’s AI training methods are lawful. It merely highlights that the plaintiffs failed to meet their burden of proof.” Legal experts widely view this verdict not as an exemption for AI companies from copyright liability but as a case where the plaintiffs lost due to a flawed litigation strategy and insufficient evidence.

Creative Industry Voices Growing Criticism: “AI Undermines Copyright Revenues”
While Meta has gained temporary relief with this ruling, its legal challenges are far from over. The company remains entangled in multiple AI copyright infringement lawsuits across the United States, and this recent case was merely one of the more high-profile examples. Notably, the plaintiffs in this case have already stated their intent to amend and refile their complaint following the loss. This raises the strong possibility that the same legal issues will surface again in future proceedings.
In response, Meta continues to lean heavily on the “fair use” argument. This principle, enshrined in U.S. copyright law, allows for the use of copyrighted works without permission under certain public-interest conditions. Meta has consistently stated, “We’ve developed innovative open-source AI models for individuals and businesses,” emphasizing that fair use of source material is critical to that process. The company also maintains that the AI-generated text outputs are not direct copies of original works but rather “statistically generated language results,” which it argues should not constitute copyright infringement.
Amid these developments, creators and authors are increasingly calling for systemic reforms beyond mere damage claims. Many are advocating for a formalized “prior consent system” for AI training data and greater “transparency in data sourcing.” They argue that voluntary self-regulation by corporations is insufficient to protect intellectual property rights. Legal experts and advocacy groups are pushing for regulatory updates or even revisions to existing copyright laws at the government level. These debates are expected to impact not only Meta but the broader AI industry, with this Meta-author lawsuit shaping up to become a key milestone in the ongoing legal discourse.
AI Training Not Considered Fair Use, Says Prior Court Ruling
The legal interpretation of whether using copyrighted works for AI model training falls under fair use has already entered a full-fledged judicial review stage in the United States, independent of the Meta case. In February, the U.S. District Court in Delaware ruled that ROSS Intelligence, a former competitor of Thomson Reuters in the legal research market, violated U.S. copyright law by copying existing content to build its AI-based legal platform.
Presiding Judge Stephanos Bibas pointed out that ROSS’s data usage was commercial and non-transformative, stating that it appeared aimed at directly competing with Reuters. The court rejected ROSS’s fair use defense and highlighted the potential market impact of using copyrighted works for AI training. In the ruling, the judge noted, “ROSS’s active, unauthorized use and transformation of the copyrighted material could harm Reuters’ potential AI training data market.”
This ruling is significantly influencing how AI companies worldwide approach the use of copyrighted materials. Until now, generative AI models have typically been trained on tens of millions of crawled documents, images, and audio files, operating in a legal gray area that was often treated as de facto lawful. With this court-imposed limitation, however, the copyright status of AI training datasets is expected to face increased scrutiny and legal debate going forward.