Over the past twelve months – and, in fact, for considerably longer than that – CMU has been following the evolution of AI and its intersection with the music industry very closely.
In 2018 we presented a full day of content about artificial intelligence and music at The Great Escape – the Brighton-based UK music business conference.
Five years on, we've spent the past twelve months working with a range of industry organisations in the UK and Europe to help explain what AI is, and to help people in the music industry understand the challenges, opportunities and threats of AI – and what they need to be aware of as the technology develops.
We’ve delivered bespoke training and workshops to companies, organisations and boards; we’ve spoken at conferences, and offered insight, consultancy and support to companies and organisations trying to get their head around what AI means to their business, members, or the wider industry.
And, of course, we've written – what feels like almost every day – about the rapid advance of artificial intelligence and its impact on the music industry.
There was no shortage of news about AI in CMU in 2023.
As with society at large, artificial intelligence became a much bigger talking point within the music industry, as music creators and music companies explored how AI can enhance their work and enable new creative and commercial opportunities.
With AI technologies getting ever more sophisticated, with more tools to employ and opportunities to pursue, and with many of the important legal questions yet to be answered, it’s important to understand what happened with AI and music in 2023 to be fully prepared for 2024.
Music + AI: what happened in 2023?
2023 was the year when debates around artificial intelligence started to top the agenda in society at large, and also in the music industry.
Of course, the recent attention is simply the latest chapter in the ongoing story of AI. Artificial intelligence technology has been in development for decades and the impact of AI on the music industry has been discussed in some quarters for years.
Nevertheless, AI is now much more in the public consciousness, partly because the sophistication of AI is increasing fast, and partly because lawmakers are now giving much more serious consideration to how they should regulate it.
However, the watershed moment that really catapulted AI into the mainstream was the launch – in late 2022 – of OpenAI's ChatGPT which, for the first time, made powerful AI available to almost anyone, offering an intuitive chat interface for interacting with the underlying technology.
ChatGPT is an example of what's known as 'generative AI'. It is built on a 'large language model' or LLM that uses a 'generative pre-trained transformer' – the GPT of its name – to create its output.
Generative AI models can ‘create’ content – text, images, audio, video and even music. The types of output generative AI can produce will depend on the underlying model used by the technology, and the way those models have been developed and trained.
Generative AI and its impact on the music and content industries
Generative AI – along with its various outputs and their impacts – is, unsurprisingly, of particular interest to the music business and other content-driven industries.
People within the music industry are increasingly exploring the application of generative AI – and especially those models that assist in creating music or which can generate music from scratch.
It's still early days for models that can truly generate original music – though 2023 has seen a number of significant developments, including the launch of MusicGen by Meta, Lyria and MusicLM by Google DeepMind/Alphabet, MuseNet from OpenAI, and Stable Audio by Stability AI.
There are also a number of platforms that use AI to stitch together existing musical segments or stems based on prompts from a user, the most notable of which – in terms of attention – is probably Boomy.
And there are other AI tools that can assist in the music-making process, for example generating songwriting ideas, transforming vocals into different voices and helping with mastering.
With increased interest in AI – and increased access to AI platforms and products – 2023 saw many music-makers and music companies explore how generative AI can be used as part of the music creation process, what new products and experiences AI can facilitate, and to what extent the AI models that generate music from scratch are a threat or an opportunity for the music industry.
There was a lot of discussion in 2023 within the music community about how AI will impact on music creation, music marketing, and the music business more generally.
There are clearly opportunities created by AI, and many ways that AI technologies will enhance the business.
An increasing number of music creators and music companies are exploring and identifying ways to capitalise on those opportunities, and figuring out which AI products and services may offer ways to enhance their work.
In this article CMU takes a look at the deals, disputes and debates, lawsuits and lobbying, and innovation and exploration that informed the conversation.
Music + AI: Innovation, Deals, Partnerships
An increasing number of music-makers are using AI tools as part of their creative workflow.
In April, distributor Ditto published the results of a survey of nearly 1300 artists. 59.6% said they were using AI: 11% to support their songwriting, 20.3% for music production and 30.6% for mastering.
In July another distributor, Believe-owned TuneCore, published the results of a survey of nearly 1600 artists. Half of those surveyed said that they had a positive perception of AI and around a third said they were interested in using it as part of their music-making process, with a similar number saying that they thought AI could help them to market and promote their music.
Music + AI Collaborations: artists, labels and music libraries
Some artists and labels have also partnered with AI companies to develop new products and experiences for fans.
Universal Music and Spinnin’ Records both announced partnerships with Endel, a company that uses AI to create “personalised soundscapes to help you focus, relax and sleep”.
Grimes partnered with CreateSafe to build a platform where people can create vocals that imitate her voice.
Warner Music announced a project that will use AI to recreate the voice and image of Edith Piaf in a new animated biopic.
And both Universal Music and Warner Music began participating in a YouTube Music AI Incubator, which YouTube says aims to bring together "some of today's most innovative artists, songwriters and producers" to help inform the platform's "approach to generative AI in music".
Some music companies, though primarily production music libraries to date, have also collaborated with various AI companies that are training models to generate music from scratch.
These are licensing deals where the production music library explicitly makes content it controls available for training. For example, Meta licensed music from Shutterstock and Pond5 to train MusicGen, while Stability AI did a deal with AudioSparx.
Whether contributors of music to these libraries were explicitly aware that their music could be used in this way is unclear – and, even if they were, those music-makers may not be fully aware of the long term implications of this move.
When it comes to negotiating licensing deals with AI companies around the training of their models, the wider opportunity for the music industry depends to an extent on the outcome of one of the biggest disputes.
Music + AI: Copyright Disputes
For the music industry – and the wider copyright industries – generative AI presents a number of challenges which have placed copyright owners in opposition to AI companies. A key dispute relates to the copyright obligations of AI companies and how they have developed their technology.
Generative AI models are trained by being exposed to content.
There are two ways to provide training content: commission and create content specifically for training the model, or use existing content. If existing content is used, the big question is: does the AI company need permission from whoever owns or controls the copyright in that content?
The music industry is adamant that permission must be sought to use any copyright-protected material to train AI.
This is based on at least two separate arguments. The first is that, at the very least, in order to ingest content to train AI, an AI company will need to make a copy of that content – and making a copy requires the permission of the copyright owner.
Therefore, goes this argument, the AI company must negotiate a licensing deal with the relevant copyright owners before making use of any existing recordings and songs to train generative AI models.
The second argument is that, if an AI model has, for example, been trained with a catalogue of existing songs and then generates a new song based on what it learned, then that new song is basically a derivative work of the original songs.
Copyright owners also have control over the adaptation of their work, meaning the AI company would need to negotiate a deal for that too, the terms of which would also identify who owns any rights in the new material.
AI + Copyright: training, fair use, and data mining
While some AI developers are already collaborating with music creators and music businesses, most technology companies building generative AI products disagree with the music industry when it comes to copyright.
They say that the idea that a piece of music generated by AI is a derivative work of the material used for training is based on a misunderstanding of how generative AI works – and would involve a radical expansion of what is meant by the term ‘derivative work’.
They also insist that the initial copying of music as part of the training process is covered by existing copyright exceptions.
It is true that copyright law identifies certain scenarios where people can make use of copyright-protected works without getting permission from the copyright owner. These commonly include things like parody, making copies for private use and critical analysis.
Whether copyright exceptions extend to training for AI is currently open to debate.
When it comes to making copies in order to train an AI model, AI companies would likely rely on exceptions that relate to text and data mining, which do exist in some countries. For example, there is a data mining exception in European law.
“Sacem intends to restore the exclusive rights of creators by making data-mining operations subject to prior authorisation.”
However, that data mining exception provides an opt-out for copyright owners, meaning digital platforms often exclude any content stored on their servers from the exception. And in October, French collecting society Sacem announced it was exercising the opt-out for its entire repertoire across the board.
In 2022, the UK government proposed introducing a new text and data mining exception specifically to benefit the AI sector.
“These proposals would give the green light to music laundering – if the government truly wants the UK creative industries to be world leading, they must urgently rethink these plans.”
Unsurprisingly, this resulted in a major push-back from all the copyright industries, including the music industry. UK Music said that the proposed new exception would be “dangerous and damaging” and would allow AI companies to “launder” music in order to generate new content.
In January 2023, a committee in the House Of Lords criticised the proposal and in the following month the UK’s (then) Intellectual Property Minister George Freeman announced that the plan to introduce the new exception had been dropped. Instead “deeper conversations” would take place with the creative industries and the tech sector to consider the copyright obligations of AI companies.
Nevertheless, there are some countries where AI companies probably can rely on data mining exceptions in law. In a recent submission to WIPO (the World Intellectual Property Organisation) Universal‘s VP Of Global Content Protection, Graeme Grant, raised concerns about “text and data mining exceptions to copyright law as enacted in 2021 by Singapore”.
He also added that “legislation in Japan, introduced in 2009 and amended in 2018, also includes too broad an exception which, while it is not unlimited and includes some protections for rightsholders, has the potential to cause confusion”.
Meanwhile, in the US, AI companies are relying on the related and more ambiguous concept of fair use.
AI + Music: Fair Use Disputes
In the context of the US, we know that many AI companies are of the opinion that training a generative AI model with existing content constitutes ‘fair use’ under American copyright law.
The US Copyright Office defines fair use as “a legal doctrine that promotes freedom of expression by permitting the unlicensed use of copyright-protected works in certain circumstances”.
Section 107 of the US Copyright Act provides the statutory framework for determining whether something is a fair use, stating that four factors should be considered: the purpose and character of the use; the nature of the copyrighted work; the amount and substantiality of the portion used; and the effect of the use upon the potential market for or value of the copyrighted work.
The dispute over fair use in the context of AI was set out in black and white when the Copyright Office undertook a consultation which received thousands of submissions, including some from technology companies working on AI.
Various AI companies submitted statements to the Copyright Office:
Stability AI: “We believe that training AI models is an acceptable, transformative and socially beneficial use of existing content that is protected by the fair use doctrine and furthers the objectives of copyright law, including to ‘promote the progress of science and useful arts’”.
OpenAI: “[We believe] that the training of AI models qualifies as a fair use, falling squarely in line with established precedents recognising that the use of copyrighted materials by technology innovators in transformative ways is entirely consistent with copyright law”.
Google: “The doctrine of fair use provides that copying for a new and different purpose is permitted without authorisation where – as with training AI systems – the secondary use is transformative and does not substitute for the copyrighted work”.
The music industry made its own submissions insisting that the training of an AI model cannot be considered to be fair use.
That included US record industry trade bodies the RIAA and A2IM, which were even more resolute on this point when submitting a ‘reply comment’ to the Copyright Office that responded to the submissions made by the tech companies.
The RIAA and A2IM observed that the first principle of copyright is “to promote human creative endeavours” and “that purpose is served by protecting human creators from having their works used to develop generative AI models that threaten to displace human creators by producing outputs that do not embody human creativity while supplanting works of human creativity in the marketplace. Such uses will rarely, if ever, be fair uses”.
Following the publication of the submissions made to the US Copyright Office, the VP Of Audio at Stability AI, Ed Newton-Rex, resigned over his employer's position on fair use.
In a statement explaining his decision to resign he wrote: “One of the factors affecting whether the act of copying is fair use, according to Congress, is ‘the effect of the use upon the potential market for or value of the copyrighted work’. Today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use”.
Newton-Rex is a UK musician and entrepreneur who previously founded music AI start-up Jukedeck, which was acquired by TikTok. He oversaw the development of Stability's music AI product Stable Audio, which launched in September, trained with licensed music.
AI + Music: The Transparency Debate
Another big debate around AI relates to transparency.
There are two elements to this. First, the music industry wants AI companies to be transparent about what datasets they have used to train any one model. And second, the music industry wants content generated by AI to be clearly labelled.
Even if the music industry wins the argument over the copyright obligations of AI companies, it will remain tricky for record labels and music publishers to enforce their rights if they don’t know what music has been used to train each generative AI model.
However, if AI companies are forced to publish a list of all their datasets – identifying what data and content, and therefore what recordings and songs, have been used – then it will be easy for music companies to identify whether their rights have been exploited. The music companies could then force the AI companies to get licences; otherwise the AI companies would be liable for copyright infringement.
The labelling requirement would mean that AI-generated music could be easily identified.
There is still some debate to be had about how this requirement might work. In particular, would the requirement only apply to music that is entirely generated by AI or might it also apply where a human creator uses AI as part of the music-making process?
Forcing labelling on AI-assisted music might prove controversial within the music community, because it seems likely that at least some music-makers won’t want to declare their use of AI as part of the music-making process.
In October, the results of another survey were published by Pirate, which operates a network of studios and rehearsal spaces.
Of the 1000 musicians surveyed, 25% said they had already experimented with AI tools and 46% said they’d consider using them in the future, but only 48% said they’d admit to using AI when making music. 53% of those surveyed also said they were concerned about how fans might perceive the use of AI in the music-making process.
AI + Music: Lobbying + Regulation
With all the ambiguities and disputes around the copyright obligations of AI companies, the music industry obviously wants some legal clarity.
Fortunately lawmakers in most countries are now giving serious consideration to how AI in general should be regulated. That provides an opportunity for the music industry to seek the clarity it needs and, if necessary, changes to the law.
A number of US music industry organisations sought to coordinate that activity – both across the music community and across the world – by launching the Human Artistry Campaign at South By Southwest in March.
Industry organisations including the RIAA, A2IM, the NMPA, the Music Artists Coalition and the Recording Academy were among those involved in the launch of the Human Artistry Campaign.
Other organisations from across the world subsequently signed up, including some representing other artforms, and the creators and copyright owners in those disciplines.
The Human Artistry Campaign set a template for how many music companies and music industry organisations speak about the topic of AI and creation.
A statement setting out the position of the Human Artistry Campaign began with the positives. It noted how AI “assists the creative process” and “has many valuable uses outside of the creative process”, including powering fan connections, music recommendations, track identification and payment systems. “We embrace these technological advances”, it stressed.
The Human Artistry Campaign set out the challenges created by AI, urging technology companies and lawmakers to ensure that human creativity is protected. It also listed the demands of the music industry: that AI companies must respect copyright and seek consent before making any use of existing materials, and that there must be full transparency about datasets and clear labelling of AI-generated content.
AI + Regulation: The EU AI Act
Lawmakers in multiple countries began consultations about the regulation of AI in 2023, providing an opportunity for the music industry to lobby on this issue. Meanwhile, in the European Union, final negotiations over an AI Act – first proposed in 2021 – rumbled on throughout much of 2023.
The EU’s AI Act seeks to regulate many different uses of AI and initially the regulation of generative AI specifically was not a priority. However, thanks to the ‘ChatGPT effect’, generative AI models became a bigger area of focus in the latter phase of negotiations. Of particular interest to the music industry is the section that sets out transparency obligations for AI companies.
By the end of 2023, the act was at what is called the trilogue stage, where the European Commission, European Parliament and EU Council come together and try to agree a final draft.
There were fears in the music community that the transparency obligations would be greatly watered down because of the lobbying of technology companies. Ahead of talks between the Commission, Parliament and Council regarding the final draft in December, various music industry organisations called on lawmakers to ensure strong transparency obligations remained.
That seemed to work. The IFPI subsequently said that an agreement reached by the three EU institutions at those talks “makes clear that essential principles – such as meaningful transparency obligations – must be fully reflected in the final legislation”.
AI + Copyright: Lawsuits
At the same time that lawmakers considered new regulation for AI, a number of lawsuits were filed by copyright owners, primarily in the US.
In these lawsuits the copyright owners argue that AI companies have not met their obligations under existing copyright laws.
If these cases get to trial, the courts will need to decide whose interpretation of copyright law is correct. Is the music industry right to argue that a tech company needs to secure a licence to train a generative AI model with existing recordings and songs? Or are the AI companies right that such training is covered by a copyright exception or – in the context of American law – constitutes fair use?
Most of the test cases, although relevant to the music industry, were not filed by music companies.
With text and image generative AI models more advanced than models relating to music, it is mainly authors, photographers, newspaper publishers and image libraries pursuing the big test cases in this domain.
However, there was one specific music case – though it was focused on lyrics, rather than compositions or recordings. In October, a group of music publishers sued Anthropic, an AI company that has received investment from both Amazon and Google. The music companies said that Anthropic had used their lyrics without a licence when training its chatbot Claude.
Backing up that claim, their lawsuit alleged that: “As a result of Anthropic’s mass copying and ingestion of publishers’ song lyrics, Anthropic’s AI models generate identical or nearly identical copies of those lyrics, in clear violation of publishers’ copyrights”.
It added: “When a user prompts Anthropic’s Claude AI chatbot to provide the lyrics to songs such as ‘A Change Is Gonna Come’, ‘God Only Knows’, ‘What A Wonderful World’, ‘Gimme Shelter’, ‘American Pie’, ‘Sweet Home Alabama’, ‘Every Breath You Take’, ‘Life Is A Highway’, ‘Somewhere Only We Know’, ‘Halo’, ‘Moves Like Jagger’, ‘Uptown Funk’ or any other number of publishers’ musical compositions, the chatbot will provide responses that contain all or significant portions of those lyrics”.
The lawsuit filed by music publishers cited legal precedent – from English law – all the way back to 1710.
“A defendant cannot reproduce, distribute and display someone else’s copyrighted works … unless it secures permission from the rightsholder”, it stated. “This foundational rule of copyright law dates all the way back to the Statute Of Anne in 1710, and it has been applied time and time again to numerous infringing technological developments in the centuries since”.
Most of the cases testing the copyright obligations of AI companies accuse the defendants of various violations of copyright law. Initial responses from the tech firms then seek to get most of the claims dismissed, accusing plaintiffs of misunderstanding how generative AI works or misrepresenting what copyright law says. In many cases, judges have accepted these arguments and cut the lawsuits back.
In the music publishers v Anthropic case, the AI company's initial response was to try to get the lawsuit dismissed on jurisdiction grounds. The music companies filed their lawsuit in Tennessee, but Anthropic argues it should be fought in the Californian courts – the state where the AI company is based and where most of the other test cases are being pursued.
However, even once judges have trimmed down these lawsuits, generally the core complaint from the copyright owners still stands.
That core complaint is that if an AI model has been trained with existing content without a licence, then the company that undertook that training is liable for copyright infringement. Which means the lawsuits can proceed. And if and when they get to trial, the fair use defence will be properly tested.
AI + Music: Vocal Clones
One use of generative AI that grabbed plenty of headlines in 2023 relates to the AI models that can create vocal clones – and more specifically, vocal clones that sound like a specific artist.
One track featuring AI-generated cloned vocals that got a lot of attention was ‘Heart On My Sleeve’, created by a producer called Ghostwriter and which imitated the vocals of both Drake and The Weeknd.
After going viral on social media the track popped up on streaming services, before being quickly removed following the intervention of Universal Music, which works with both Drake and The Weeknd.
For the music industry, this use of AI is one of the biggest concerns, but also possibly one of the biggest opportunities.
Legally speaking, AI-powered vocal clones pose an interesting question. Can an artist protect their voice and unique vocal style from cloning – and if so, how?
For an AI model to generate vocals in the style of Drake, it will almost certainly need to be trained on samples of Drake’s voice and vocal style – which means that this training will almost certainly use Drake’s existing recordings. And as far as the music industry is concerned, that use would need permission from the copyright owner, who could then negotiate a licensing deal that sets out how the Drake-vocal-generating AI could be used.
But an AI company might argue that training a model in that way to make a vocal clone is fair use and therefore no licensing deal needs to be negotiated.
Meanwhile, for artists there is an additional concern.
They may not own the copyright in their recordings, because it is common in the music industry for record labels to own the copyright in the tracks that they release. Therefore, even if an AI company did negotiate a licence, it would likely do so with the artist's label rather than the artist directly. The label may choose to involve the artist in the deal negotiations, but may not be obliged to under copyright law.
Both these factors have posed another important question: are there any other legal protections beyond copyright that an artist can rely on to protect the use of their voice?
That question has put the spotlight on publicity or personality rights, which exist in most countries and allow people to stop the exploitation or commercialisation of their likeness, image or personality. Trademark and data protection law may also be relevant in these circumstances.
In his submission to WIPO in December, Universal Music‘s VP Of Global Content Protection Graeme Grant explained how the major had gone about getting ‘Heart On My Sleeve’ removed from the streaming services, initially relying on copyright law, but subsequently trademark and publicity rights.
The original version of the Ghostwriter track, he said, “contained a sample from a UMG-controlled track called ‘No Complaints’ by Metro Boomin, [so] was removed on the basis of copyright infringement. A new version … was then uploaded to [streaming services] with the Metro Boomin sample removed, which was reported on the basis of trademark and name, image and likeness violations”.
In both the US and Europe, publicity and personality rights have been successfully enforced in the past.
This has generally been against brands who hired a singer to perform sound-a-like vocals in the style of a famous artist for use in an advert. Famous cases include Bette Midler vs the Ford Motor Co, and Tom Waits against snacks company Frito-Lay.
It is thought these rights could be enforced more generally against people or companies generating sound-a-like vocals – including by using AI – for release rather than use in advertising.
We actually had a potential test case relating to publicity rights and soundalikes working its way through the courts in 2023.
Rick Astley sued Yung Gravy and his label Universal Music over the track ‘Betty (Get Money)’. It heavily interpolated Astley’s ‘Never Gonna Give You Up’, but rather than sampling – and licensing – the original recording, the rapper recreated the vocals from the 1980s hit in a way that sounded very like the original – albeit by using a human vocalist rather than AI.
Yung Gravy and his label had negotiated a licensing deal with the publishers of ‘Never Gonna Give You Up’ that allowed the interpolation of the original song into the new work. By not sampling the original recording, no licensing deal was required with Astley’s label. However, Astley argued that – because the new vocals sounded so similar to his in the original track – his publicity rights had been violated.
In the end the lawsuit was settled before it got to court.
Had it got to trial, even though AI was not employed in this scenario, the action would have been an important test case about publicity rights and soundalikes.
Specifically, whether publicity rights under Californian law can protect a voice more generally, and not just in an advert where brand endorsement is implied.
Publicity and personality rights are complex, and work differently from country to country, and in the US from state to state.
In some countries – including the UK – there is not currently any publicity or personality right in law. Given these ambiguities and inconsistencies – and the important role these rights may play when it comes to using AI to create voice clones – the music industry has called on lawmakers to refine, extend and/or strengthen these rights.
In the US, the music industry has called for a publicity right to be introduced at a US-wide federal level. In the UK the music industry has called for a publicity right to be introduced into law for the first time.
Representatives for all three major record companies have made statements about publicity and personality rights in the last year.
Universal Music‘s VP Of Global Content Protection, Graeme Grant, in his submission to WIPO, discussed what refinements to the law are required to protect the interests of the industry in the context of AI.
He said that, providing AI companies cannot use the fair use defence or rely on data mining exceptions, current copyright law is probably sufficient to regulate the use of music in the AI space. However, he also added that “additional protection of personal rights (ie voice and likeness) may be necessary”.
Meanwhile, back in July, Universal‘s General Counsel Jeff Harleston told a US Congressional hearing on AI: “We urge you to enact a federal right of publicity statute”.
And on an investor call in November, Warner Music CEO Robert Kyncl said it is important that “name, image, likeness and voice is afforded the same protection as copyright”.
Also in July, Sony Music‘s President of Global Digital Business Dennis Kooker, speaking at another Congressional hearing, said: “Existing state ‘right of publicity’ laws are inconsistent and many are not sufficient to protect Americans against AI clones. Creators and consumers need a clear unified right that sets a floor across all fifty states”.
In the UK, there have been various calls for a publicity right to be introduced into law.
In July, UK Music published a position paper on AI which, among other things, said: “A new personality right should be created to protect the personality / image of songwriters and artists”.
While clarity is sought on voice clones and publicity rights, the music industry has started to add unofficial voice cloning sites to its piracy watch list, citing both copyright and publicity right concerns.
In October, the RIAA asked the US government to also add voice cloning sites to its Notorious Markets list of piracy platforms.
AI + Music: Opportunities from vocal cloning
While the music industry sees voice cloning AI as a potential threat when it is used to exploit an artist’s voice without permission, it’s also clearly an opportunity. Artists could choose to allow their voices to be cloned, by fans or other musicians, in a way that creates new revenue streams.
Grimes was one of the first artists to capitalise on this opportunity.
In May, she partnered with a company called CreateSafe to launch a website – Elf Tech – that allows anyone to generate tracks featuring vocals in her voice, on the condition they share with the musician any royalties paid for the streaming of those tracks. She then announced a partnership with Believe’s Tunecore to distribute and monetise tracks generated using Elf Tech that feature her vocal clone.
In July, UK drum and bass producer – and machine learning engineer – DJ Fresh teamed up with a Montreal-based software engineer to launch a service called Voice-Swap.
The company says it has developed an AI-powered tool “to help producers, artists and writers who don’t want to use their voice on songs use AI to transform their voice to sound like one of our featured artists”. Those featured artists, the company’s website notes, “are partners who benefit from the use of their AI model”.
In November, Universal Music and Warner Music announced that they were working with YouTube and Google DeepMind on a voice cloning tool called Dream Track.
Currently piloting in the US, the tool allows an initially limited pool of creators on the platform to generate short music clips for use in their videos, featuring AI-generated vocals that imitate specific musicians.
Artists participating in the pilot include Charlie Puth, John Legend, Sia, T-Pain, Demi Lovato, Troye Sivan, Charli XCX, Alec Benjamin and Papoose.
AI + Music debate in the UK
In the UK, like many other countries, there has been a lot of debate within the political community about AI.
Indeed, the UK government has been keen to be seen as a leader in the AI debate, with Prime Minister Rishi Sunak staging a global summit on the topic at Bletchley Park, a venue picked because of its significant role in the history of computing.
Copyright issues weren’t really on the agenda at that summit, but UK Music still used the event to restate its position – which essentially echoes the core objectives of the Human Artistry Campaign: that AI companies must get permission before using existing content, that there should be full transparency of what data is used to train each AI model, and clear labelling of AI-generated works.
The UK government did also provide some opportunities for copyright issues to be specifically discussed.
The Intellectual Property Office convened a working group involving technology companies and organisations representing different creative and copyright industries, including the music industry. And, in November, the Department For Culture, Media & Sport organised its own meeting of the creative industries to discuss AI concerns.
The DCMS meeting caused some controversy within the music community.
UK bosses of all three major record companies were invited to attend, but there were no representatives speaking for artists and songwriters.
In fact, most of the participants in that meeting represented corporate copyright owners – like record companies, book publishers and image libraries – with only one attendee speaking for individual creators, that being the boss of the Creators’ Rights Alliance.
In response, the UK’s Council Of Music Makers – an umbrella organisation bringing together organisations representing the interests of artists, musicians, songwriters, studio producers and managers – said: “We are hugely concerned that the government is forming a roundtable which only gives one single seat to a representative of all creatives across all media (including film, theatre, literature and music), but has three seats for executives from major record companies. This is profoundly unbalanced and tone-deaf”.
Much of the wider music community believes that AI companies should seek permission before using existing songs and recordings, and that there should be full transparency about how music is being used.
Should labels and publishers be able to grant permission to AI companies to use content, without seeking consent of artist and songwriters involved in the creation of that content?
There is disagreement, within the UK music community and more broadly, over whether record labels and music publishers which own the copyright in recordings and songs can unilaterally grant permission to AI companies to use those recordings and songs for training AI models, without seeking the consent of the artists and songwriters involved.
The CMM argues that music-maker consent should be sought.
Whether or not there is a legal obligation to get that consent may depend on each individual music-maker’s record and publishing contracts. However, many music-makers would argue that, given the nature and potential impact of generative AI, there is a moral obligation to seek consent as well.
The CMM also argues that labels and publishers should be fully transparent about any deals they have negotiated with AI companies, especially if those deals cover entire catalogues of music. This would allow music-makers to identify if their music has been used to train an AI model.
In September, the CMM published five fundamentals for music and AI setting out these arguments and making demands of the music industry as well as the AI sector.
Some independent labels and publishers have indicated that they agree with the CMM, both that music-maker consent should be sought and that there should be transparency about any AI deals they enter into.
However, the majors have so far resisted calls to provide a guarantee that they will seek consent from each music-maker before licensing any music to any AI companies, and that they will be fully transparent about their AI deals.
This is just the start of the conversation when it comes to AI and the music industry.
2023 has seen a lot of demands being made of both AI companies and music companies – most of which are yet to be met. Meanwhile, disputes continue over the copyright and other obligations of AI companies making use of music, and many of the legal questions posed are yet to be conclusively answered.
One thing is certain – and that is that the debate around AI and music will continue and intensify in 2024, and beyond. Get ready for that with our Music + AI Masterclass.