UK IPO launches consultation on AI and copyright
In late December 2024 the UK IPO launched a consultation on AI and copyright, closing on 25 February 2025, which could have a significant impact on rights holders and AI developers.
It is widely accepted that UK law is not ready for the coming expansion of AI and its impact on the creative industries.
Key issues include:
- AI developers require access to large amounts of data to train their AI. However, creators of copyright works also have a right to control their work and be compensated for it.
- Who owns works generated by AI. UK copyright law does cover ownership of some computer-generated works, but it is not clear whether this is sufficient.
- The problems posed by deepfakes.
Both the AI sector and the creative industries are seen as key to the UK economy, and so the government is attempting to strike a balance that keeps both groups happy.
Highlights of what the government is asking about include:
1. A new exception to copyright law for AI training
Currently, the law is not clear as to whether the use of a copyright work in training an AI model constitutes copyright infringement. Test cases on this question are ongoing in courts in several countries.
Under the UK government's proposed solution, the use of copyright works for text and data mining (TDM), including AI training, would generally not be considered copyright infringement, so long as the AI developer has lawful access to the work. Several key features would support this exception:
Opting out
Rights holders would be able to opt their work out of the exception (also referred to as reserving their rights), meaning an AI developer would still need to acquire a licence to use the work. The exact mechanism by which opt-outs could be made is not yet clear. Opting out would not completely prevent a work being used in AI training, but it would allow rights holders to negotiate licences and obtain remuneration.
Opt-out standards
The EU already has a TDM exception in its copyright law. Experience in EU countries has shown that there needs to be clarity on what constitutes an opt-out/reservation. In the consultation, the government suggests that standards may need to be developed. Suggestions include:
- Existing standards such as the Robots Exclusion Protocol (often referred to as robots.txt) or the ai.txt standard. Many news publishers already use robots.txt to prevent their work being captured by AI web crawlers (a sketch of this approach follows this list).
- Flags in metadata.
Each of these approaches has associated advantages and disadvantages. Some larger AI developers (such as OpenAI) also allow rights holders to notify them that they do not want their work to be used.
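As an illustration of the first option, a minimal robots.txt file under the Robots Exclusion Protocol might look like the sketch below. The crawler names shown (GPTBot for OpenAI and CCBot for Common Crawl) are examples of published AI-related user agents; which crawlers to block, and whether a given crawler honours the file, should be checked against each developer's own documentation.

```
# Illustrative robots.txt placed at the root of a website (e.g. https://example.com/robots.txt).
# Crawler names are examples only; check each AI developer's documentation for the
# user agent tokens it actually publishes and honours.

# Block OpenAI's published training crawler
User-agent: GPTBot
Disallow: /

# Block the Common Crawl crawler, whose dataset is widely used for AI training
User-agent: CCBot
Disallow: /

# All other crawlers (e.g. ordinary search engines) remain unrestricted
User-agent: *
Disallow:
```

It should be borne in mind that robots.txt is a request to well-behaved crawlers rather than a legal reservation of rights in itself, which is one reason the consultation asks whether formal opt-out standards need to be developed.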
Transparency
AI developers would be required to disclose what data their models are trained on, allowing rights holders to check whether their works have been used.
Licensing agreements
The consultation also seeks views on whether there should be any new legislation or guidance on good practice for licensing, and on the possibility of collective licensing for text and data mining purposes. This would be particularly helpful for individual and small rights holders.
2. Ownership of AI-generated works
Current UK law does provide for copyright in computer-generated works. However, this is currently limited to literary, dramatic, musical and artistic works, and does not extend to sound recordings, films and other types of copyright works. Furthermore, the law covers 'AI-assisted' works where there is still human input, but may not cover works without a human author. The consultation seeks views on how to clarify this (and whether clarification is needed).
The consultation also seeks input on whether the AI developer and/or the end user should be held responsible if an AI-generated work is found to infringe copyright in an existing work.
The consultation also touches on a number of issues outside the immediate field of copyright, including:
3. Labelling of AI outputs
The consultation asks whether respondents agree that outputs from AI should be labelled as such. The EU AI Act establishes a requirement for such labelling, and the EU AI Office is currently working on guidelines and a code of practice to ensure this obligation is met.
4. Deepfakes
Any protection or control of training data may help to prevent deepfakes of individuals. For example, if a recording artist were to opt their recordings out of the TDM exception, using those recordings to train an AI model without a licence would be copyright infringement.
The laws of passing off, data protection and rights in performances may also be used against deepfakes. One further suggestion is to introduce a 'personality right' to give individuals control over how their likeness or voice is used. Such a law has been proposed in the US but has not yet been implemented.
The government notes that personality rights would extend beyond intellectual property, so they are not the subject of the current consultation, but it has asked for views nonetheless.
What action should you take now?
Anyone wishing to respond to the consultation can find more information on the UK IPO website.
A TDM exception is already in force in the EU, and it is likely that a similar exception will come into force in the UK. Even though a standard for opt-outs has not yet been settled on, there are various steps rights holders can take now if they want to make sure their IP is opted out:
- Consider using one of the existing standards for indicating a reservation of rights on their website;
- Add opt-out/reservation statements to metadata (see the illustrative sketch below);
- Review contracts, website terms and conditions and other agreements to ensure they highlight where copyright material is reserved;
- Consider notifying AI developers.
None of these on its own may amount to a definitive opt-out, so until the EU and the UK have put a more formal position in place, rights holders should consider using multiple methods where possible.
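By way of illustration of the metadata route, the sketch below shows page-level HTML meta tags that some publishers use to signal a reservation of rights. The tag names shown (tdm-reservation and tdm-policy from the draft TDM Reservation Protocol, and the informal noai/noimageai directives) are examples only; none of them is yet a settled or legally recognised standard, and the policy URL is a placeholder.

```
<!-- Illustrative only: page-level metadata signalling a reservation of rights. -->
<!-- Tag names are examples; no single opt-out standard has yet been settled on. -->
<head>
  <!-- Draft TDM Reservation Protocol: "1" indicates that TDM rights are reserved -->
  <meta name="tdm-reservation" content="1">
  <!-- Optional pointer to the rights holder's licensing terms for TDM/AI training -->
  <meta name="tdm-policy" content="https://example.com/ai-licensing-policy">
  <!-- Informal directives used by some platforms to signal "do not use for AI training" -->
  <meta name="robots" content="noai, noimageai">
</head>
```

Equivalent reservation statements can also be placed in image or document metadata and repeated in website terms and conditions, in line with the steps listed above.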
Barker Brettell will be keeping a close eye on developments in this area of AI and copyright law. If you would like to discuss this matter further, please do not hesitate to contact the author or your usual Barker Brettell patent attorney.