AI company Anthropic’s bid to “build an $18 billion business” through the “massive copying” of lyrics published by Universal, Concord and ABKCO “bears no resemblance” to the fair uses that were originally contemplated by the US Copyright Act. That’s what the three music publishers say in their latest legal filing, submitted to the court this week. 

“In Anthropic’s preferred future, songwriters will be supplanted by AI models built on the creativity of the authors they displace”, the music publishers write. “Instead of stimulating creativity … Anthropic’s copying propagates uncopyrightable, synthetic imitations of human expression, subverting the purposes of fair use”. 

The three music companies were responding to arguments made by Anthropic in its formal response to the lawsuit they filed against the AI company last October. 

Anthropic argues that training AI models on existing content constitutes fair use under US law, meaning it can make use of copyright-protected works in its training processes without getting permission from the relevant copyright owners. The music industry and other copyright industries strongly oppose that viewpoint. 

Although this week’s filing clarified the music companies’ position on fair use, the latest documents specifically deal with objections made by Anthropic to the publishers’ request for a preliminary injunction. The publishers want the court to force the AI company to stop using their lyrics when training AI models – and to ensure that Anthropic’s Claude chatbot doesn’t regurgitate those lyrics in response to user prompts. 

While relying on the fair use defence for the use of lyrics in AI training, Anthropic makes a different argument when it comes to Claude spitting out lyrics owned by the publishers. The AI company says Claude was not designed to output existing copyright-protected lyrics, and that if it does so, that is a bug in the system which it has sought to fix. 

In their original lawsuit the publishers claimed that Claude would output lyrics from their songs with relatively simple prompting – including lyrics from Don McLean’s ‘American Pie’ – something CMU was actually able to replicate. In the new filing, they dispute Anthropic’s claim that this is the result of a bug, insisting that the regurgitation of existing lyrics was part of the design all along. 

“Anthropic’s own training data makes clear that it expected its AI models to respond to requests for publishers’ lyrics”, they say. “In fact, Anthropic trained its models on prompts such as ‘What are the lyrics to ‘American Pie’ by Don McLean?’ Given this, it is astonishing that Anthropic represents that its models were not intended to respond to such requests”. 

Anthropic also argues that no preliminary injunction is required because it has already put in place measures to fix the bug that causes Claude to output the publishers’ lyrics. The music companies disagree with that too. 

“Anthropic baselessly claims that its new guardrails – installed after this suit was filed – moot publishers’ motion and cure their injuries”, they write. “But the new guardrails, like those before them, are porous, allowing all forms of infringing outputs. Moreover, unless enjoined, Anthropic remains free to abandon guardrails it adopted only as a litigation strategy”. 

With that in mind, they conclude, the court should issue the preliminary injunction while the wider dispute goes through the motions.