The UK Intellectual Property Office (UKIPO) has told Out-Law that it still intends to conclude its work on a new AI copyright code by the end of the year – despite recent ministerial comments suggesting that the timeline could drift into 2024.
A working group established by the UKIPO, comprising representatives from the technology, creative and research sectors, has been in talks for months over a new voluntary AI copyright code. The government hopes the code will provide a non-legislative solution to the question of how best to balance the need of AI developers to access quality data on which to train their AI systems with the need of content creators to be adequately compensated for the use of their copyright works.
Out-Law reported in August that the working group was working towards an autumn agreement, but those timelines slipped, with the UKIPO confirming last month that it intended to conclude its work “by the end of the year”. However, that revised timeline was called into question last week by comments made by AI and IP minister Jonathan Berry before the Communications and Digital Committee in the House of Lords.
Berry said the government “had hoped” for agreement on the code by the end of 2023 but added that representatives from both the technology and creative industries had “made strong representations” asking government not to rush a conclusion.
In the aftermath of Berry’s comments, Out-Law contacted the UKIPO seeking clarity on the timelines for the AI copyright code project. A spokesperson for the UKIPO reiterated the office’s intention to conclude its work before the end of the year.
Gill Dennis, an expert in IP law at Pinsent Masons, said: “It was always going to be difficult to find a satisfactory middle ground that genuinely balances the interests of both sides of the debate on access to copyright works. It is questionable if it is even possible to do so without one side being favoured over the other.”
“The code of practice is urgently needed, however. Until we have a decision in the Getty Images litigation, there is legal uncertainty around the extent to which AI developers may use copyright works as training data and that will impede AI development in this country,” she said.
In his evidence to the committee, Berry confirmed that while the government’s aim is for industry to agree a voluntary AI copyright code, legislating remains an option if an agreement cannot be reached. The government previously stepped back from extending the existing text and data mining exception in UK copyright law to support AI developers, following pushback from the creative industries.
“There are two particular problems that we need to solve: the first is finding the appropriate balance – the landing zone – between two hotly competing sides in this debate,” Berry said. “The second problem is making whatever solution we come up with internationally operable, because what we can’t have is a set of strict rules over here that then allow people to go and train their models elsewhere – I don’t think that would help anyone.”
“Overall, the best outcome – and we are pushing very hard for this outcome – is a voluntary agreement between both sides that recognises the needs of both sides… Our current focus is on developing a set of principles around which we may or may not be able to operate and then turning that into a code of conduct. Ideally, that code of conduct would operate on a voluntary basis because legislation on this basis runs the risk of sending people to operate overseas in jurisdictions over which we have no control,” he said.
“We had hoped that by the end of this year we would have that code. Participants on both sides have made strong representations to us to say, ‘please do not go ahead until we have properly argued this out and that speed is a secondary consideration to getting it right’. That said, we are not going to get into an endless talking shop about this. Should it sadly emerge that there is no landing zone that all parties are going to agree to then we are going to have to agree to other means which may include legislation, but I hope very much that we won’t have to go there,” Berry added.
Berry said he thought it would be “very difficult” for the government to design a piece of legislation to balance the interests of AI developers and rightsholders, but while he acknowledged that pending court rulings in cases including Getty v Stability AI may provide some clarity on how the balance should be struck, he “didn’t want to give the impression” that the government is “waiting on the outcome of those cases” before taking action.
Asked by one peer if the government would endorse the creation of new transparency requirements for AI developers, which would allow rightsholders to check whether their copyrighted data is used in training datasets, potentially via a third-party auditor, Berry said he’d “certainly consider looking at” such proposals. However, he said there are also potential technological measures – like “automated watermarking … that may be invisible to the human eye but visible to an AI” – that could form part of an overall solution.
Berry said: “If you create work of the mind, you should continue to have an expectation of reasonable reward for doing so. It is really important that that continues to be the case.”
“There are a number of avenues that we might go down… I don’t particularly feel comfortable with the idea that we would totally rely on a technology solution that would solve all our problems, but I think the end game is a mixture of voluntary agreement around a landing zone, technology, and any legislation that we absolutely have to put in, but most of all goodwill. I don’t believe that a necessary precondition of developing successful AI globally is infringing the rights of copyright holders,” he said.
Source: Pinsent Masons