The UK yesterday joined the US and the European Union in signing the first legally binding treaty on artificial intelligence, which sets out basic safeguards for the development and use of AI that all signatory countries commit to delivering through their national laws. On the same day, the Australian government published proposals for “introducing mandatory guardrails for AI in high-risk settings”.
Lawmakers all over the world are currently considering how to regulate AI, with the music industry particularly keen for legal protections in the context of generative AI. However, with AI constantly used across borders, some joined-up thinking between countries is required. The new treaty signed by the UK, US and EU was coordinated by the Council Of Europe, though all countries are invited to participate.
Confirming it was a signatory, the UK government said that the treaty “commits parties to collective action to manage AI products and protect the public from potential misuse”. UK Justice Secretary Shabana Mahmood added that the agreement is “a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law”.
The treaty has pretty big-picture objectives, with three overarching safeguards as follows:
Protecting human rights, including ensuring people’s data is used appropriately, their privacy is respected and AI does not discriminate against them.
Protecting democracy, by ensuring countries take steps to prevent public institutions and processes being undermined.
Protecting the rule of law, by putting the onus on signatory countries to regulate AI-specific risks, protect their citizens from potential harms and ensure AI is used safely.
Alongside the UK, a number of other Council Of Europe members that are not part of the EU have signed the treaty, including Andorra, Georgia, Iceland, Norway, Moldova and San Marino. Beyond Europe, Argentina, Australia, Canada, Costa Rica, Japan, Mexico, Peru and Uruguay are involved in negotiations on it.
When the music industry lobbies on AI, it is generally looking for more specific protections in law: clarifying the copyright obligations of generative AI companies, ensuring AI cannot be used to replicate a performer’s voice or likeness without consent, and forcing AI companies to label AI-generated content and be fully transparent about what data has been used to train their models.
The Australian government has proposed ten mandatory guardrails that would apply to what it calls “high risk AI”. Those guardrails don’t specifically deal with copyright matters, though they do include measures around data and transparency.
Welcoming the transparency measures, Dean Ormston, CEO of Australian collecting society APRA AMCOS, said, “If implemented, these measures have the potential to compel AI platforms and developers to disclose the origin and composition of datasets used to train their systems. With this level of disclosure, artists, creators and rightsholders can level the playing field and negotiate appropriate agreements and ensure that their intellectual property is only used with the appropriate levels of consent, credit and remuneration”.
In a document setting out the proposed guardrails, the Australian government notes that copyright issues are being separately considered, adding, “The Attorney General’s Department is leading Australia’s approach on copyright and AI. This work includes considering the intersection of the proposed mandatory guardrails and copyright laws”.
The proposed guardrails are now subject to a one-month consultation.