Wed 15 Apr 2026 01.00

Photo: AAP Image/Bianca De Marchi
The head of artificial intelligence (AI) firm Anthropic said last week that a tax on AI is “inevitable”.
Ironically, his comment makes it less likely that such a tax will ever happen.
In politics, calling something “rational” or “evidence-based” is the kiss of death. “Inevitable” is even worse because it implies that nobody needs to fight to make it happen. Why waste scarce resources on a done deal?
When the “inevitable” never arrives, experts wring their hands and complain that reform in Australia is too hard. Commentators often blame three-year terms, as if the only thing stopping Prime Minister Anthony Albanese from banning gambling advertising or introducing truth in political advertising is that an election cut short a fourth year in power.
The assumption that sensible policies are inevitable reveals a stark difference in political thinking: what commentator Scott Alexander describes as mistake theory versus conflict theory.
Put simply, can we find win–win solutions to the evils of the world (“mistake theory”) or do these problems exist because powerful people benefit from them (“conflict theory”)?
Anthropic CEO Dario Amodei was speaking at an exclusive Parliament House event when he made the “inevitable” AI tax comment. He expects governments to develop “sophisticated” taxes so that all people enjoy the benefits of AI, but “it’s going to be the work of years to figure out what the structure of that tax should be and getting everyone behind it.”
This is textbook “mistake theory”.
AI promises better healthcare, greater productivity and better access to information, and its problems – like unemployment and social disruption – will be addressed by a well-structured tax, one that has taken “years” to get just right.
How do we know that the tax will happen? Well, because it solves a problem. It’s a great idea, and people will have worked really hard on it.
You know, like the mining super-profits tax that Australia doesn’t have. Or the price on carbon that got repealed. Or the petroleum profits tax that doesn’t work.
The graveyard of policy is littered with great ideas that people worked on for years.
Consider how a “conflict theorist” would approach the same question: how to ensure that the benefits of AI compensate those disadvantaged by the technology?
The industry is already powerful and by no means at its zenith, so a conflict theorist would not spend years developing something “sophisticated”. By that time, AI companies would be so influential and tightly integrated that they could damage, possibly fatally, a government that tried to regulate them.
Governments still haven’t found satisfactory ways to regulate Facebook, 22 years after it was created, or make Google contribute to the common resources it exploits, 30 years after it began crawling the web. Who, apart from the CEO of a major AI firm, thinks that regulating AI is going to get easier the longer countries delay?
Quick and flexible is better, like a tax on AI company revenues. If the AI bubble bursts, governments can always lower the tax again.
A tax can be effective and fair without being “sophisticated”. Indeed, sophisticated policy can be hard for policymakers to resist but easy for its targets to avoid. Take the Petroleum Resource Rent Tax, where the very nuances of the scheme give companies flexibility to minimise the taxable component of the gas they extract. Santos can sell almost $50 billion worth of gas and not pay a cent in tax.
Nor do academics need to invent a mechanism to compensate for job losses. It already exists: the Jobseeker unemployment benefit. We know where it falls short: it is well below the poverty line and comes with wasteful and harmful ‘Work for the Dole’ requirements. We know how to fix it: the Coalition Government raised the dole during COVID, and there is nothing stopping a Labor Government from raising it now and ending work obligations.
With these reforms, the country would be better prepared for mass unemployment. It remains to be seen whether the layoffs already seen in tech will spread to other industries. If they do not, more dignity for unemployed people is good in its own right.
I do not doubt that, in the years to come, the mistake theorists will come up with better and more suitable policies. If competition keeps AI company fees low even as the technology transforms the economy, then broader taxes on wealth or super profits will work better than a tax targeting AI companies. If AI destroys the very concept of work, a Universal Basic Income should replace unemployment benefits.
Great. Bring in the conflict-based system now, and when the mistake theorists have lined up something better, Australia can switch over.
Mistake theorists are understandably offended when they are accused of being stooges for powerful interests. They work diligently for years trying to find the best, most sophisticated answers to the problems facing humanity, and if only people would listen to them, the world would be a better place.
But when mistake theorists pretend that reform will happen just because it is sensible and evidence-based, they aid and abet the enemies of reform.
Last week, Mr Amodei gave a textbook demonstration of how mistake theory plays into the hands of the powerful: make the problem complicated, say the solution will take years and promise that, when the day comes, good things will happen without a fight.
That’s how to make the “inevitable” impossible.
Bill Browne is the director of the Australia Institute’s democracy & accountability program.