A US judge has upheld the core copyright claims made by a group of visual artists in a class action lawsuit against four AI companies, including Stability AI.
The artists have also made a number of other legal claims against the AI companies, including for unjust enrichment, breach of contract and the breaking of rules relating to copyright management information. Those claims have now all been dismissed, in some cases ‘with prejudice’, meaning that the artists will not be able to include them in any possible future claims.
As with other copyright cases involving AI companies, the core copyright claim is that Stability – and the other AI companies involved – used copyright protected content to train a generative AI model without permission.
While this particular case is about visual works, all of the lawsuits brought against AI companies by creators have similar allegations of copyright infringement at their heart.
This means that, while the specific medium – visual art, music or literature – may differ, the underlying claims in the various lawsuits are similar enough that the outcome of one could have a significant impact on all the other cases. As a result, it is important for the wider creative industries that those core copyright claims are fully explored in the courts.
Despite the ‘non-core’ elements of the claim being dismissed, lawyers working for the artists see this latest decision by Judge William H Orrick as a positive step forward. The AI companies have “failed in their efforts to get the case dismissed”, attorney Joseph Saveri told Law360. “After extensive law and motion practice, Judge Orrick has sustained our core claims. We are looking forward to completing discovery, class certification and preparing the case for trial”.
The defendants in the case are Stability AI, Midjourney, DeviantArt and Runway AI. It centres on the training of Stable Diffusion, the text-to-image generative AI model developed by Stability. The artists claim that the other three companies have helped train, worked with and/or distributed the AI model.
The artists claim that their artworks were used to train Stable Diffusion without permission, which leads to two claims of copyright infringement. First, that copies of the works were made during the training process. And second, that outputs generated by the model – or at least some of those outputs – directly infringe the earlier works.
Many AI companies argue that the ingestion of existing copyright protected works as part of the training process of AI models constitutes ‘fair use’ under American copyright law, which means permission from copyright owners is not required.
Additionally, AI companies insist that claims made by creators and copyright owners that the outputs from a generative AI are derivative works of content used in training are based on a fundamental misunderstanding of how the technology works. And where the outputs from a generative AI do closely resemble a work or works contained within the training dataset, that is an isolated bug rather than an intentional design feature of the generative AI.
Summarising the claims related to Stable Diffusion’s outputs, Orrick says the artists “allege that Stable Diffusion is built to a significant extent on copyrighted works and that the way the product operates necessarily invokes copies or protected elements of those works”.
Declining to dismiss this claim at this stage, he writes, “the plausible inferences at this juncture are that Stable Diffusion by operation by end users creates copyright infringement and was created to facilitate that infringement by design”.
In other words, when someone uses Stable Diffusion to create an image, the resulting image inherently infringes copyright, and that infringement is not an accident, but rather something intended in the way the Stable Diffusion model operates.
The judge goes on to add that third party research shows that “training images can sometimes be reproduced as outputs from the AI products”, but cautions that this needs to be explored more fully, saying, “Whether true and whether the result of a glitch (as Stability contends) or by design will be tested at a later date”.
To date, most of the copyright cases brought against AI companies include both copyright infringement claims and claims that other laws have been breached, such as breach of contract or unjust enrichment. Orrick previously dismissed the other claims in this case but gave the artists the opportunity to submit an amended complaint.
In the amended complaint that followed, the specific list of other claims was different for each of the four defendants. Nevertheless, Orrick was not convinced by any of the claims that are not related to copyright law, and so he dismissed them.
The dismissal of other legal claims – like breach of contract and unjust enrichment – has been a feature in many of the AI lawsuits that have been filed by the media and entertainment industries. But, in the main, the core copyright claims, especially around the ingestion of content, remain.
We are still waiting for the first big test case to get to trial where those copyright claims can be properly debated and considered in court. The outcome of the first case to get to trial will likely have a big impact on all the other copyright and AI cases, including those filed by the music industry against Anthropic, Suno and Udio.
Earlier this week, a court filing revealed that the Anthropic lawsuit filed by a group of music publishers is unlikely to see a courtroom until 2026. If Anthropic succeeds in its dismissal motion due to be filed tomorrow, then that case may not come to trial at all – though that, currently, seems fairly unlikely.