Government postpones big stick for big tech until after election amid Trump tariff fears

2025-02-03 15:08:00

Abstract: Govt stalls online safety rules for big tech. Expert suggests huge fines for Meta, Apple, Google, but no timetable for action. UK/EU have similar laws.

The federal government's plan to strengthen online safety regulations for large tech platforms has stalled after its chosen expert recommended billions of dollars in fines for Meta, Apple, and Google. Communications Minister Michelle Rowland has withheld top public servant Delia Rickard’s advice since last November and will release it on Tuesday without making the government’s position clear.

The proposed fines would apply to tech platforms that breach a new “duty of care” enforced by the eSafety Commissioner. This duty would require platforms to act proactively to prevent child exploitation, online hate, and the promotion of drug abuse or eating disorders. The government has publicly committed to a duty of care and brought forward Ms. Rickard’s review to expedite the process, indicating that legislation would be introduced during this parliamentary term.

However, the government currently has no timetable for legislation before the federal election, nor has it said whether it plans to adopt Ms. Rickard’s fine regime. That regime would impose penalties of 5% of a non-compliant platform’s global annual turnover or $50 million, whichever is higher. The UK and EU have already taken similar approaches, and the government itself proposed comparable penalties in its shelved disinformation and misinformation bill.
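As an illustrative sketch only, the “whichever is higher” rule reported in the article can be expressed as a simple calculation; the figures are from the report, while the function name and example turnover below are hypothetical:

# Illustrative sketch of the proposed penalty rule: 5% of a non-compliant
# platform's global annual turnover, or $50 million, whichever is higher.
# How regulators would assess turnover is not specified; this simply applies
# the stated maximum-of-two-figures rule.

FLAT_PENALTY = 50_000_000  # $50 million floor
TURNOVER_RATE = 0.05       # 5% of global annual turnover

def proposed_penalty(global_annual_turnover: float) -> float:
    """Return the higher of 5% of turnover or the $50 million flat penalty."""
    return max(TURNOVER_RATE * global_annual_turnover, FLAT_PENALTY)

# Example: a platform with $100 billion in global annual turnover would face
# a maximum penalty of $5 billion rather than $50 million.
print(proposed_penalty(100_000_000_000))  # 5000000000.0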

But promising fines now could antagonize the Trump administration and its allies in the tech sector at a delicate moment, when the government is trying to avoid hefty tariffs. The fines would be a last resort, imposed only after lengthy court processes. Smaller breaches would attract civil penalties of $10 million, and the hope is that platforms would voluntarily fulfill their obligations, as at least some already do.

Ms. Rickard’s proposal would also broaden the scope of online safety regulation, establishing new codes of conduct to cover the “dehumanization” of groups based on “protected characteristics” such as sexual orientation, age, or religion. It would also cover child sexual exploitation and grooming, threats to “national security and social cohesion,” and “promotion of harmful behavior” such as suicide, eating disorders, or dangerous challenges.

The most stringent requirements, including annual monitoring reports, would apply to “high-risk” platforms designated by the eSafety Commissioner, with any platform or online service provider used by at least 10% of Australians automatically included. For the online hate provisions, Ms. Rickard recommended broad exemptions for any “ideas, concepts or institutions,” as well as art, science, journalism, and reasonable political communication, to avoid the free speech concerns raised by the disinformation and misinformation bill.

Under the proposal, the eSafety Commissioner would have broader powers to demand the removal of content when it sees a safety threat. For adult cyber abuse and child cyberbullying, it could order content removal within a day of receiving a complaint, rather than the current two days. However, the possibility of platforms failing to comply with such notices, as Elon Musk’s X did when a video of the Wakeley stabbing circulated last year, raises questions about the enforceability of any code of conduct without strict penalties, especially if big tech continues its recent shift away from content moderation.

Last month, Meta’s Mark Zuckerberg followed Musk’s lead, announcing a “new era” for his platforms, including Facebook, Instagram, and WhatsApp. The CEO accused governments around the world of “institutionalizing censorship,” described regulating content on the grounds of “potential harms caused by online content” as a “blatant political act,” and directly named European legislators.