To some music fans, a collaboration between Drake and The Weeknd might be a welcome surprise. However, a viral track seemingly featuring vocals from both artists was recently pulled from streaming services by Universal Music Group (UMG) for copyright infringement. While the song might not sound out of place in either artist’s discography, it was in fact produced using AI tools trained on their pre-existing commercial releases. UMG has since asked major streaming services to make it harder for users to train AI systems on their music.
Discussion of the creative value of the song itself aside, this story seems a long time coming. Vocal cloning (and deepfaking more broadly) has until now been an emerging, niche trend. Readers who consume a lot of short-form content online might be aware of comedic deepfake videos of US Presidents Biden, Trump, and Obama playing video games together, which have garnered millions of views. While this is fairly innocuous, it’s indicative of AI’s impressive capabilities: gone are the days of inhuman text-to-speech. These tools are becoming increasingly powerful, with ‘natural’ human speech now replicated to an uncanny degree; virtually anyone can be cloned with sufficient training data.
Importantly, there remains a significant human element in these tracks. Currently available systems aren’t advanced enough to simply pump out a fully produced song, from instrumentals to contemporary lyrics and vocals, without a substantial level of human oversight.
This naturally raises some questions:
- Is this not a wholly new composition?
- Who owns the song and what aspects of it?
- Can an AI be credited?
- If the artists themselves didn’t actually sing it, can they claim copyright?
The answer is that it depends on the many variables of the situation and differs between jurisdictions. For example, the US has laws protecting an individual’s ‘personality’, which in this situation could be invoked to control the commercial use of the artists’ likenesses. The UK lacks equivalent laws, so a similar claim would likely need to be argued on an ex-ante basis – in other words, that infringement took place before the final generated vocals were ever created.
This would mean infringement occurred at the point the AI algorithm was trained on the necessary data (in this case copyrighted vocals), as this would constitute an unlawful copy. As for authorship, in the UK Section 9(3) of the Copyright, Designs and Patents Act 1988 addresses computer-generated works and deems the author to be the person “by whom the arrangements necessary for the creation of the work are undertaken”. As AI becomes less reliant on human input, this definition may become redundant. As an overseas example, US authorities have already rejected claims of ownership over certain AI works, citing a lack of human involvement.
This is yet another case of technology significantly outpacing regulation. It leaves the use of generative AI in something of a legal grey area, with enforcement happening on a somewhat piecemeal basis within existing copyright and data protection frameworks. Domestic and international regulators (e.g. the US Copyright Office, the UK’s Information Commissioner’s Office, and the EU) are continually updating their guidelines for AI use as it relates to intellectual property and data protection. Future regulation will also need to consider broader social concerns, including preventing the malicious use of deepfakes and voice cloning in politics and broadcasting.
Whatever your own view on these issues, it’s unlikely AI will disappear from the music industry. Given past record label experiments with partly or fully artificial artists, it seems probable that some companies are seriously weighing the commercial utility and viability of generative AI in music. After all, can an artificial artist take a cut?