For a still nascent technology, generative artificial intelligence already has an impressive resume. It can compose music, summarise wads of legal documents in seconds and generate television adverts based on minimal descriptive input. To become even cleverer, weed out errors and broaden its uses, AI models will need to continuously ingest human-generated content to train on. But the legal framework required to facilitate this symbiosis between man and machine has fallen woefully behind. That puts the long-term development of the technology, and the individuals and companies who feed it with unique data and insights, in harm’s way.
Generative AI models owe their capabilities, so far, to the reams of text, sounds, images and videos posted online. Much of this has been scraped without the consent of the original creators. A lack of clarity over how copyright laws apply to gen-AI training has also fomented protests and litigation battles around the world. Model developers tend to argue that “fair use” exemptions, which allow the use of copyrighted material under specific conditions, for instance by researchers using short, cited excerpts, are applicable. Artists, musicians and the media strongly disagree. They allege that AI companies are breaching their intellectual property rights, since the models go far beyond merely excerpting creators’ work.
With legal cases proceeding across America and disagreements in Europe over how the EU’s AI Act applies, Britain has taken a welcome initiative to end the ambiguity. Last week it closed a consultation on plans for the future of copyright and AI. But the UK government is caught between wanting to attract AI companies to scale up and drive economic growth, and wanting to protect the country’s world-class creative industries.