
Mon 10 Nov 2025 10.00

Photo: Author Thomas Keneally appearing before a Senate inquiry into the effects of artificial intelligence (AI) on Australian artists, at Parliament House in Canberra, Tuesday, September 30, 2025. (AAP Image/Mick Tsikas)
Intuitively, from the perspective of writers and other creators, AI is built on theft. As Thomas Keneally has said about AI companies’ wholesale taking of literary works: “It’s not copy-charity…it’s copyright.”
When The Atlantic published the leaked LibGen, which it described as “the pirated books database that Meta used to train AI”, I checked and, sure enough, all three of my own published books were in there. Along with most other works ever published.
What we all by now understand about AI’s “large language models” is that they begin with a dredging operation: collecting from the internet unimaginable volumes of existing material, reading and absorbing it, then magicking it at light speed into the ability to create whole new worlds of words and pictures.
Because much of what these models have read is protected by copyright, the whole operation sounds a lot like wholesale theft – in copyright law terms, infringement.
Lawsuits have ensued, as copyright owners try to protect their intellectual property from this predation. That cause has been dealt a heavy blow in a UK court decision: the case of Getty Images v Stability AI.
Getty, the owner of millions of photographs on the internet, sued Stability for infringing its copyright by scraping the images and co-opting them for its AI model. Getty lost.
I’ll get slightly technical here: copyright infringement hinges on the unauthorised making of a copy of the copyright work, or a substantial part of it. The way AI works is that it doesn’t store copies of the material it scrapes; it runs that material through an “embedding model” that converts it into “model weights” – digital code that is, in lay terms, gibberish. That’s what’s imported into the AI model.
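For the technically minded, here is a minimal toy sketch in Python (my own illustrative example, not Stability AI’s actual pipeline) of the point being made: after “training” on some text, what the model retains is just an array of numbers – the weights – from which the original words cannot simply be read back out.

```python
import numpy as np

# Toy "training" text and vocabulary (purely illustrative).
text = "the quick brown fox jumps over the lazy dog"
words = text.split()
vocab = {w: i for i, w in enumerate(sorted(set(words)))}

# One small random vector of weights per word.
rng = np.random.default_rng(0)
weights = rng.normal(size=(len(vocab), 8))

# Crude training loop: nudge the vectors of neighbouring words towards
# each other, so the weights come to reflect patterns in the text.
for _ in range(100):
    for a, b in zip(words, words[1:]):
        ia, ib = vocab[a], vocab[b]
        weights[ia] += 0.01 * (weights[ib] - weights[ia])
        weights[ib] += 0.01 * (weights[ia] - weights[ib])

# What is stored afterwards is only this grid of floats ("model weights");
# the sentence itself is not kept anywhere in it.
print(weights.round(2))
```

Whether those numbers amount to a “copy” of the training text is exactly the question the Getty judgment answered in the negative.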
The judge said: “While it is true that the model weights are altered during training by exposure to Copyright Works, by the end of that process the Model itself does not store any of those Copyright Works; the model weights are not themselves an infringing copy and they do not store an infringing copy.”
Legally correct, perhaps (Australia’s copyright law is very similar to the UK’s, so the decision is a good precedent here too), but rationally it makes no sense at all. It raises a question: is copyright dead?
Many of our laws are derived from universal human values, biblical strictures or common sense. Copyright law is not one of these. It is a construct of commercial compromise, invented to solve a particular problem, in a particular time.
The first copyright law was made in Great Britain: the Statute of Anne, 1710. The long title of the parliamentary bill explains the problem it was seeking to address: “A Bill for the Encouragement of Learning and for Securing the Property of Copies of Books to the rightful Owners thereof”.
The printing press had enabled books to be reproduced, creating the publishing industry and the potential for authors to earn a living from the mass copying of their works. But of course, once anyone with a press had the text, they could make and sell copies without the author’s knowledge or consent. How could the author get a share of the proceeds?
That was the point of copyright law: a statutory monopoly for an author over all copies of their books, for a fixed period long enough to get a decent payday, after which the work would pass into the public domain. That law is essentially unchanged today.
The world, however, has completely changed. Copyright was made for physical books. It has had to adapt repeatedly: to photography, radio, moving images, the internet, social media and now AI.
Unsurprisingly, that adaptation has been clunky at best. Years ago, I ran a couple of leading copyright infringement cases, one dealing with DVD technology and one with the anti-piracy technology in gaming devices. Each involved trying to retrofit an 18th-century legal concept to 20th-century technology, and it was not a good fit at all.
What the Getty case illustrates perfectly is that copyright law, with its brutally simple conceptualisation that the essence of infringement is the making of a physical copy, is completely and utterly obsolete. It cannot cope with the reality or speed of technological development; it is seeking to answer the wrong questions.
I come back to the creator’s perspective: I write a book, or this article. I feel that I own it. I do not want it to be stolen, repurposed and used forever more as part of a database from which someone else makes money, with nothing coming back to me.
The law says that my copyright hasn’t been infringed by the AI process. Maybe so, but my property has been stolen all the same. Copyright law is the wrong law, and the need to recognise this is now existential.
Michael Bradley is the managing partner of Sydney law firm Marque Lawyers.