Artists suing Stability AI, DeviantArt, and Midjourney hit a roadblock this week in their quest to prove allegations that AI image generators illegally use copyrighted works to mimic unique artistic styles without compensation or consent.
On Monday, US District Judge William H. Orrick dismissed many of the artists' claims after finding that the proposed class-action complaint "is defective in numerous respects." Perhaps most notably, two of the three named plaintiffs (independent artist Kelly McKernan and concept artist/professional illustrator Karla Ortiz) had apparently never registered any of their disputed works with the Copyright Office. Orrick dismissed their claims with prejudice, dropping them from the suit.
But while McKernan and Ortiz can no longer advance their claims, the lawsuit is far from over. The lead plaintiff, cartoonist and illustrator Sarah Andersen, will have the next 30 days to amend her complaint and keep the copyright dispute alive.
Matthew Butterick and Joseph Saveri, the lawyers representing the artists, confirmed in a statement to Ars that the artists will file an amended complaint next month and noted that discovery in the case is proceeding in the meantime. They also told Ars that nothing in Monday's order was surprising because it was "consistent with the views" expressed by Orrick during an earlier hearing.
"Judge Orrick sustained the plaintiffs' core claim pertaining to direct copyright infringement by Stability AI, so that claim is now on a path to trial," the lawyers' statement said. "As is common in a complex case, Judge Orrick granted the plaintiffs permission to amend most of their other claims. We're confident that we can address the court's concerns."
Stability AI, DeviantArt, and Midjourney did not immediately respond to Ars' requests for comment.
Judge “largely” grants motion to dismiss
The artists have alleged that the companies behind popular AI image generators are liable for direct and vicarious copyright infringement, as well as violations of the Digital Millennium Copyright Act (DMCA) and California laws on unfair competition and rights of publicity. They argued that because text prompts can generate images "in the style of" their works, every generated image should be considered a "derivative work," based on various artists' copyrighted works, that could potentially be "misconstrued as fakes."

Orrick found the complaint flawed on this front, agreeing with the defendants that the artists seemed somewhat confused about how image generators actually work. Their complaint alleged that Stable Diffusion runs off "compressed copies" of images, which the defendants said "contradicted" the plaintiffs' own description of the diffusion process as "an alternative way of storing a copy of those images" that uses "statistical and mathematical methods to store these images in an even more efficient and compressed manner." In his order, Orrick demanded clarity on this point, writing:

Plaintiffs will be required to amend to clarify their theory with respect to compressed copies of Training Images and to state facts in support of how Stable Diffusion—a program that is open source, at least in part—operates with respect to the Training Images. If plaintiffs contend Stable Diffusion contains "compressed copies" of the Training Images, they need to define "compressed copies" and explain plausible facts in support. And if plaintiffs' compressed copies theory is based on a contention that Stable Diffusion contains mathematical or statistical methods that can be carried out through algorithms or instructions in order to reconstruct the Training Images in whole or in part to create the new Output Images, they need to clarify that and provide plausible facts in support.

Andersen's core claim of direct copyright infringement will be allowed to proceed against Stability AI, the maker of the open source image synthesis model Stable Diffusion, but not against DeviantArt and Midjourney, which built tools using that model but had nothing to do with training it. (DeviantArt and Midjourney remain on the hook for other claims that could be amended, however.)

Stability AI tried and failed to argue that Andersen cannot "proceed with her copyright infringement allegations unless she identifies with specificity each of her registered works that she believes were used as training images" for Stable Diffusion. Orrick wrote that Andersen had sufficiently pleaded her case at this stage and suggested that if she could "plausibly plead that defendants' AI products allow users to create new works by expressly referencing Andersen's works by name, the inferences about how and how much of Andersen's protected content remains in Stable Diffusion or is used by the AI end-products might be stronger."

But Orrick seemed less convinced by the artists' "vicarious infringement theory," under which they alleged that the defendants' products can be used by "imposters" to produce "fakes." Because the artists admitted that "none of the Output Images are likely 'to be a close match for any specific image in the training data,'" he said, Andersen's complaint is "devoid of any allegation" that any of her "specific works (or any other class member) were used to create 'fakes of their works.'" She will need to fix that to advance that particular claim.
To succeed in her core copyright claim against Stability AI, Andersen will likely need to show that specific copyrighted works were copied without permission rather than relying on results from haveibeentrained.com, a site that lets artists search for their works in AI training data. Those search results provided "a sufficient basis to allow her copyright claims to proceed," Orrick wrote in denying Stability AI's motion to dismiss, but they are "insufficient" to prove direct copyright infringement because the site's result "pages show many hundreds of works that are not identified by specific artists," not just Andersen's images.

To uphold the DMCA claim, Andersen will also need to be more specific, Orrick wrote, identifying for each defendant exactly which copyright management information was stripped or altered. All the other claims will need to be amended, Orrick wrote, "to address the many issues identified."

But the artists succeeded on one major front: Midjourney's motion to strike the class-action allegations was denied. Orrick wrote that it was "premature" to preclude the "possibility" of class-action certification in this case.

Since Andersen is the only remaining artist in the proposed class, the fate of any artist hoping to shield copyrighted works from fueling an AI machine churning out knock-offs may rest on how she amends her complaint. It's the broad scope of the class action that Orrick seems to be scrutinizing most at this juncture. He wrote that one of the artists' basic claims, that every image generated by Stable Diffusion's model is a derivative work, "is simply not plausible" because there is no way to show that every image in the training set is a copyrighted work. It likely doesn't help that this point seems illustrated by the two other named plaintiffs dropped from the complaint this week, who alleged that their works were included in Stable Diffusion's training data despite never having been registered with the Copyright Office.

"Plaintiffs are given leave to amend to provide clarity regarding their theories of how each defendant separately violated their copyrights, removed or altered their copyright management information, or violated their rights of publicity and plausible facts in support," Orrick wrote.