On Sept. 4, the public learned of the first-ever U.S. criminal case addressing streaming fraud. In the indictment, federal prosecutors claim that a North Carolina-based musician named Michael “Mike” Smith stole $10 million from streaming services by using bots to artificially inflate the streaming numbers for hundreds of thousands of mostly AI-generated songs. A day later, Billboard reported a link between Smith and the popular generative AI music company Boomy; Boomy’s CEO Alex Mitchell and Smith were listed as co-writers on hundreds of tracks.

(The AI company and its CEO that supplied songs to Smith were not charged with any crime and were left unnamed in the indictment. Mitchell replied to Billboard’s request for comment, saying, “We were shocked by the details in the recently filed indictment of Michael Smith, which we are reviewing. Michael Smith consistently represented himself as legitimate.”) 

This case marks the end of generative AI music’s honeymoon phase (or “hype” phase) with the music industry establishment. Though there have always been naysayers about AI in the music business, the industry’s top leaders have been largely optimistic about it, provided AI tools were used ethically and responsibly. “If we strike the right balance, I believe AI will amplify human imagination and enrich musical creativity in extraordinary new ways,” said Lucian Grainge, Universal Music Group’s chairman/CEO, in a statement about UMG’s partnership with YouTube for its AI Music Incubator. “You have to embrace technology [like AI], because it’s not like you can put technology in a bottle,” WMG CEO Robert Kyncl said during an onstage interview at the Code Conference last September.

Each major music label group has established its own partnerships to get in on the AI gold rush since late 2022. UMG partnered with YouTube on an AI incubator program and with SoundLabs on “responsible” AI plug-ins. Sony Music began collaborating with Vermillio on an AI remix project around David Gilmour and The Orb’s latest album. Warner Music Group’s ADA struck a deal with Boomy, which had previously distributed its tracks through Downtown, and invested in dynamic music company Lifescore.

Artists and producers jumped in, too — from Lauv’s collaboration with Hooky to create an AI-assisted Korean-language translation of his newest single to 3LAU’s investment in Suno. Songwriters reportedly used AI voices on pitch records. Artists like Drake and Timbaland used unauthorized AI voices to resurrect stars like Tupac Shakur and Notorious B.I.G. in songs they posted to social media. Metro Boomin sampled an AI song from Udio to create his viral “BBL Drizzy” remix. (Drake later sampled “BBL Drizzy” himself in his feature on the song “U My Everything” by Sexyy Red.) The estate of “La Vie En Rose” singer Edith Piaf, in partnership with WMG, developed an animated documentary of her life, using AI voices and images. The list goes on. 

While these industry leaders haven’t spoken publicly about the overall state of AI music in a few months, I can’t imagine their tone is now as sunny as it once was, given the events of the summer. It all started in May, when Sony Music released a statement warning over 700 AI companies not to scrape the label group’s copyrighted data. Then, in June, Billboard broke the news that the majors were filing a sweeping copyright infringement lawsuit against Suno and Udio. In July, WMG issued a warning to AI companies similar to Sony’s. In August, Billboard reported that AI music adoption has been much slower than anticipated, the NO FAKES Act was introduced in the Senate, and Donald Trump posted a deepfaked, false Taylor Swift endorsement of his presidential run on Truth Social — an event that Swift herself cited as a driving factor in her social media post endorsing Kamala Harris for president.

And finally, the AI music streaming fraud case dropped. It proved what many had feared: AI music flooding onto streaming services is diverting significant sums of royalties away from human artists, while also making streaming fraud harder to detect. I imagine Grainge is particularly interested in this case, given that its findings support his recent crusade to change the way streaming services pay out royalties to benefit “professional artists” over hobbyists, white noise makers and AI content generators.

When I posted my follow-up reporting on LinkedIn, Declan McGlynn, director of communications for Voice-Swap, an “ethical” AI voice company, summed up people’s feelings well in his comment: “Can yall stop stealing shit for like, five seconds[?] Makes it so much harder for the rest of us.”

One AI music executive told me that the majors have said they would take a “carrot and stick” approach to this growing field, providing opportunities to the good actors and meting out punishment to the bad ones. Some of those carrots were handed out early, while the hype around AI was still fresh, because music companies wanted to appear innovative — and because they were desperate to prove to shareholders and artists that they had learned from the mistakes of Napster, iTunes, early YouTube and TikTok. Now that they’ve made their point and the initial shock of these models has worn off, the majors have started using those sticks.

This summer, then, has represented a serious vibe shift, to borrow New York magazine’s memeable term. All this recent bad press for generative AI music, including the reports about slow adoption, seems destined to result in far fewer new partnerships between generative AI music companies and the music business establishment, at least for the time being. Investment could be harder to come by, too. Some players who benefited from early hype but never amassed an audience or built a strong business will start to fall.

This doesn’t mean that generative AI music-related companies won’t find their place in the industry eventually — some certainly will. This is just a common phase in the life cycle of new tech. Investors will probably increasingly turn their attention to other AI music companies, those largely not of the generative variety, that promise to solve the problems created by generative AI music. Metadata management and attribution, fingerprinting, AI music detection, music discovery — it’s a lot less sexy than a consumer-facing product making songs at the click of a button, but it’s a lot safer, and is solving real problems in practical ways. 

There’s still time to set guardrails for generative AI music before it is adopted en masse. The music business has already started working to protect artists’ names, images, likenesses and voices, and has fought back against unauthorized AI training on its copyrights. Now it’s time for the streaming services to join in and finally set some rules for how AI-generated music is treated on their platforms.

This story was published as part of Billboard’s new music technology newsletter ‘Machine Learnings.’ Sign up for ‘Machine Learnings,’ and Billboard’s other newsletters, here.

If you have any tips about the AI music streaming fraud case, Billboard is continuing to report on it. Please reach out to krobinson@billboard.com