Thu 12 Mar 2026 00.00

Photo: AAP Image/Glenn Hunt
AI deepfakes used in online fraud are a global problem. In Australia, Hong Kong, Taiwan, the United States, the EU, and Southeast Asia, scammers often use deepfake images of public figures, such as actors, athletes, musicians, and wealthy individuals, to trick people into handing over money.
Australia has been at the forefront of many of these discussions, in part because of Andrew Forrest’s lawsuit against Meta in the state of California over scam crypto ads.
Governments in Australia, Singapore, and Canada are exploring similar platform-liability or “duty of care” models for online fraud, and are considering voluntary guidelines on the disclosure of such adverts. Estimated losses to online scams run into the billions of dollars each year.
A new report we published with Data & Society maps the regulation of AI deepfakes and online scams around the world.
Attempts to educate the public about online financial scams are important but should not place the burden of responsibility on individuals to avoid scams. AI deepfakes are explicitly designed to deceive. Holding individuals entirely responsible for their losses is unrealistic in a world where deepfakes and AI are increasingly sophisticated, where it is more difficult than ever to tell the difference between a scam and a legitimate business opportunity, and where an entire global apparatus exists to trick people into parting with their life savings.
We argue that it is far more efficient for gatekeepers — such as digital platforms like Meta, where scam advertisements circulate — to take measures to prevent scams than to expect individual users to recognise and avoid deception. Reporting by Reuters found that Meta serves more than 15 billion “high risk” advertisements every day, accounting for more than US$7 billion in advertising revenue per year. Meta even charges scam advertisers more than regular advertisers.
Governments do not expect consumers to identify poisoned aspirin or unsafe toys on their own. The foundation of consumer protection law is the principle that it is the government’s responsibility to prevent manufacturers and retailers from producing and selling harmful products in the first place.
Manufacturers, wholesalers, and retailers have often argued that liability should be governed by the old adage caveat emptor (“let the buyer beware”), which would leave them largely off the hook. For good reason, that approach has long been rejected.
The distinguished Yale legal scholar, later federal judge, Guido Calabresi articulated an alternative framework that is now widely accepted: responsibility and liability should rest with the actors best positioned to prevent harm. In this case, that means those best able to detect and stop scams before they cause damage.
We believe that Calabresi’s principle of the “cheapest cost avoider” (CCA), a foundational concept in modern tort and liability law, should be extended to cover digital platforms.
In his seminal 1970 book The Costs of Accidents, Calabresi argued that liability should be assigned to the party best positioned to prevent harm at the lowest cost. Calabresi maintained that allocating responsibility in this way would lead to the most socially efficient and, ultimately, fairest outcomes.
By tying liability to the capacity to prevent harm, the CCA principle also shapes behavior. Actors who know they will be held responsible when they are the least-cost avoiders have strong incentives to take precautions; when they fail to do so, holding them accountable is justified.
Because common law evolves through precedent, it adapts only gradually to changing social and economic conditions. In a simpler commercial environment, where goods were sold directly by merchants to large numbers of consumers, it made sense to place responsibility for product safety on the seller. It was far cheaper and more effective for a single merchant to ensure safety than for each individual consumer to investigate risks independently, even if consumers theoretically had the capacity to do so. The CCA principle thus helped displace the already-eroding doctrine of caveat emptor, shifting responsibility away from buyers and toward sellers.
As New York University law professor Catherine Sharkey has argued, the digital economy requires a further evolution in how tort liability is understood; that shift is already underway.
In today’s marketplace, brick-and-mortar presence is no longer a prerequisite for being considered the cheapest cost avoider. Instead, online distributors and platforms may be “the party best able to ensure product safety.” Sharkey points to court rulings involving Amazon, as well as proposed state legislation, that increasingly treat electronic marketplaces as potentially liable actors.
Traditional criteria — such as whether an intermediary takes legal ownership of goods — are becoming less relevant, since platforms now perform nearly all the functions of selling, including product description, promotion, ratings, and customer interaction, without formally owning the products.
An electronic marketplace like Amazon has a unique capacity to identify safety issues regardless of whether a product is sold directly or by a third-party vendor. Consumer complaints flow through the platform, and Amazon has strong incentives to position itself as the primary interface, given the profits it derives from that role.
Although scams are not typically conceptualized as product sales, they function in much the same way. The “product” may be a fraudulent financial service promising high returns or a fabricated romantic relationship, but the transaction nonetheless involves a chain of intermediaries.
In the contemporary digital environment, multiple actors — or “agents” — participate in the sale of a scam: the originating scammer; firms that assist in producing scam infrastructure, such as deepfake technologies; digital platforms like Meta that connect scammers to victims; and financial institutions that facilitate the transfer of funds from victims to perpetrators.
The majority of AI-enabled scams circulate on Meta’s platforms, and the company possesses both extensive information and substantial technical capacity to prevent their spread. In many cases, Meta is the actor best positioned to avoid harm: it maintains complaint registries, observes behavioral patterns at scale, and controls the infrastructure through which scams propagate. As discussed below, there are concrete procedural steps the company could take to discourage scams.
From this perspective, the United States’ broad exemption from platform liability under Section 230 represented a step in the wrong direction. At the time of its passage, policymakers faced genuine uncertainty about how digital platforms would evolve, and industry lobbying framed liability as a threat to a fragile, emerging sector. Today, the technology industry is neither fragile nor nascent. It is extraordinarily wealthy and powerful, and its ability to target different individuals with highly tailored messages makes it far more dangerous than advertising in legacy media. The rise of deepfakes has only intensified these risks.
There also needs to be regulation of cryptocurrency, particularly the conversion of cryptocurrency into fiat currency, a crucial stage in money laundering. Financial institutions have a role to play too: while banks may not be able to prevent every small transfer from a victim to a scammer, they play a central role in detecting, freezing, and tracing the illicit gains generated by scam operations.
Arguments that the sheer volume of content and transactions makes such oversight impractical are unpersuasive. Platform profits are directly proportional to scale, meaning that even substantial investments in scam prevention would likely reduce profits only marginally. Moreover, those profits must be weighed against the significant social harms platforms generate, including the losses borne by scam victims and the broader societal costs of remediation.
Dr. Anya Schiffrin is co-director of the Technology, Policy and Innovation program at Columbia University’s School of International and Public Affairs.
Joseph E. Stiglitz is University Professor at Columbia University.
Schiffrin co-authored the report with Dr. Alice E. Marwick, Navya Sinha, Anusha Wangnoo, Kaylee Williams, Elnara Huseynova, and Audrey Hatfield.