Sydney, December 3, 2025 – Australia stands on the cusp of a global experiment in digital parenting: a nationwide ban barring children under 16 from major social media platforms, set to take effect on December 10, 2025 (just one week away). Platforms including Instagram, TikTok, Snapchat, YouTube, X, Reddit, Facebook, Threads, Kick, and Twitch must take "reasonable steps" to block underage accounts or face fines of up to A$50 million (about US$32 million) for systemic breaches.

The legislation, the Online Safety Amendment (Social Media Minimum Age) Act 2024, targets the "dopamine drip" of addictive algorithms that fuel prolonged screen time and expose kids to harmful content. Meta has already begun notifying suspected underage users on Instagram, Facebook, and Threads to download their data before account deactivation starts on December 4, with restoration possible at age 16 via ID or video-selfie verification. Other platforms, such as Snap, offer temporary deactivation of up to three years, ending streaks and daily interactions just as Australia's summer holidays kick off.

Minister Wells Takes the Fight to New York

Communications Minister Anika Wells doubled down on the policy today in New York, calling out YouTube as "outright weird" for claiming its platform isn't inherently unsafe for kids. Speaking to U.S. tech leaders and policymakers, Wells positioned Australia's model as a blueprint for combating persuasive design in apps, demanding that platforms prove their compliance through transparent reporting on underage account removals. "We will not be intimidated by Big Tech," she told Parliament earlier, amid a High Court challenge from teens and libertarians arguing the ban curtails free speech and parental rights.

“… the co-creator of the infinite scrolling feature, Aza Raskin, described his design as behavioral cocaine.”

Anika Wells

Full Minister’s speech in NYC today: https://www.youtube.com/live/9DCZAZy9jO4

The ban enjoys broad parental support but faces backlash over the privacy risks of age-assurance technology and doubts about enforcement feasibility; platforms cannot rely on self-reported ages or parental consent alone. About 96% of Australian kids aged 10-15 already use social media, with 350,000 on Instagram alone in the 13-15 bracket, per eSafety Commissioner data. Gaming sites like Roblox and Discord are preemptively adding verification, hinting at broader ripple effects.


Global Eyes on Canberra's Test Case

As Malaysia gears up for a similar under-16 ban in 2026, and the EU and UK mull age limits, Australia's move tests whether governments can force Silicon Valley to prioritize child safety over growth. Critics warn of government overreach and of workarounds such as fake IDs and VPNs, while proponents see the ban curbing cyberbullying, scams, and mental health woes. Platforms now race to comply, but the real proof will emerge in 2026: fewer harms, or just savvy kids slipping through?

Opportunities for Tech Founders and Investors

For investors and founders, a new market is opening around age‑assurance tech, wellbeing‑first platforms, and tools that help parents and teens navigate a world where social media is no longer default infrastructure for under‑16s. If you work in product, trust & safety, or growth, Australia is quietly becoming a high‑stakes testbed for age verification, safety‑by‑design, and transparency reporting. What "reasonable steps" looks like in practice will likely shape standards in Europe, the US, and beyond.
