The ease of creating deepfakes, paired with their potentially dangerous consequences, underscores the need for more widespread truth in political advertising laws.
Thu 16 Apr 2026 01.00

Artificial intelligence (AI) models can quickly and cheaply generate images, audio, and video, including photorealistic simulations of real people. They can also clone a person's voice and make them "say" anything. These simulations, known as "deepfakes", are the most dangerous form of AI-generated content because they can mislead and defame. Sometimes they are obviously fake and intended to be satirical, humorous, or upsetting.
Even these more obviously unrealistic simulations can be unsettling and damaging. This week, Donald Trump posted an AI-generated painting of himself as a Christ-like figure miraculously healing a sick man. President Trump later deleted the social media post, saying he had thought it depicted him as a doctor.
President Trump has been an enthusiastic user of AI. Last year, he posted a deepfake video of himself wearing a crown and piloting a fighter jet that dumped diarrhea on protesters, set to the (unlicensed) song "Danger Zone".
Australian politics has yet to see anything this extreme, but the use of AI in politics is already controversial, raising questions about how best to regulate the technology.
Political deepfakes have been used for at least six years now. Some of the most prominent examples were not intended to deceive the viewer, although they still used people’s likenesses in ways that could be distressing.
In the 2020 Queensland state election, right-wing campaign group Advance created the satirical "Pannastacia Alaszczuk" video, in which an imitation of then-Queensland Premier Annastacia Palaszczuk says: "if you want to get rid of us, I completely understand." It is visibly less sophisticated than later examples.
During the 2024 ACT election campaign, the Liberals used AI to create an entire election ad, including footage of a person called "Bob" who never existed and a computer-generated narrator. It also included a fake video of Labor Chief Minister Andrew Barr cackling.
Ahead of the 2024 Queensland election, the Liberal National Party mocked then Labor Premier Steven Miles with an unconvincing TikTok deepfake of him dancing. Mr Miles complained – quite reasonably, except that Labor had done the same thing to the then Liberal federal opposition leader Peter Dutton.
The same year, independent Senator David Pocock created deepfakes of Prime Minister Anthony Albanese and Peter Dutton supporting a ban on gambling ads – to draw attention to Australia’s lack of deepfake regulation.
These examples were all labelled, although not always prominently.
Other deepfakes are clearly intended to deceive at least some viewers.
During the 2025 federal election campaign, a deepfake of Peter Dutton speaking Mandarin and proposing an Aboriginal flag ban circulated on Chinese social media app RedNote.
Even outside election periods, social media accounts use AI-generated "people" to spread conspiracy theories and anti-immigrant messages. Fronting an account with a simulated person lets those behind it avoid being personally associated with the sometimes hateful and demonstrably false things it says.
The mere existence of deepfakes can also lead people to question everything else they see and hear. When a video appeared to show South Australian MP David Speirs snorting a white powder, he claimed it was a "deepfake". That claim was never substantiated.
It has always been possible to distort sound and images.
In the early 2000s, Democratic presidential candidate John Kerry was "photoshopped" into a photo alongside North Vietnam sympathiser Jane Fonda. In 2019, then US House Speaker Nancy Pelosi was made to sound drunk simply by slowing down genuine recordings of her speaking.
Sometimes it is enough to make a false claim, accompanied by a real (but out of context) photograph, video, or sound clip. Well before these new AI video programs, mining magnate Andrew Forrest criticised Facebook for allowing the use of his photograph in cryptocurrency scams.
AI-generated material nonetheless poses a special risk: it can be highly sophisticated and realistic, and it is cheap and fast to create without any specialist skills.
Since misleading deepfakes are just one form of political deception, it makes sense to address them through broader laws targeting lies and deception.
Truth in political advertising laws have worked in South Australia for forty years and were recently adopted by the ACT. More recently, SA has also specifically banned electoral ads that feature AI-generated depictions of people without their consent, and required that political ads containing AI-generated material be clearly labelled.
Two years ago, the Albanese Government proposed truth in political advertising laws modelled on the successful South Australian laws. As well as prohibiting misleading and inaccurate electoral material, the laws would have covered some visual deceptions (including deepfakes of political candidates).
Unfortunately, Labor let its own proposal lapse without putting it to a vote – but with deepfakes becoming more convincing and easier to make, the case for truth in political advertising laws is stronger than ever.