
With Total Ban and Mandatory Verification, Australia Bans Social Media for Under-16s with Million-Dollar Fines, Facial Recognition, Account Removals, and Global Pressure for Strict Rules Against Children’s Digital Risks

Published on 23/11/2025 at 12:44
Australia bans social media for under-16s with mandatory age verification, million-dollar fines, and strict rules. Understand the impact on children and on the digital world.

New Australian Law Bans Social Media for Under-16s, Requires Age Verification with Advanced Technology, Imposes Million-Dollar Fines on Platforms, Raises Questions About Privacy, Effectiveness in Tackling Digital Risks for Children, and May Pressure Other Countries to Adopt Much Stricter Rules Aimed at Protecting Young People Online.

Australia has passed unprecedented legislation that bans social media for under-16s and shifts the entire burden of enforcement onto digital platforms. Starting December 10, companies like Facebook, Instagram, TikTok, X, and others will have to remove existing accounts and block new registrations by teenagers, under threat of multimillion-dollar fines.

The government presents the measure as a direct response to data showing widespread use of social media by children and growing exposure to violent and misogynistic content, as well as material that encourages eating disorders, self-harm, and even suicide. At the same time, experts, technology companies, and digital rights advocates warn of technical failures, privacy risks, and potential side effects on teenagers' social lives.

How the Law That Bans Social Media for Under-16s Works in Australia

Under the new rule, tech companies will have to adopt “reasonable measures” to ensure that those under 16 do not maintain active accounts. In practice, the law bans social media for this audience by requiring that profiles of teenagers be deactivated or removed and that new registrations be blocked.

The government’s stated aim is to reduce the “pressures and risks” associated with the digital environment, especially mechanisms that keep children glued to screens while displaying content potentially harmful to their mental health, well-being, and safety.

Research commissioned by Australian authorities indicates that 96% of children aged 10 to 15 use social media, that 7 in 10 reported contact with harmful content or behaviors, and that a significant portion has already experienced cyberbullying, adult harassment, or exposure to online violence. These numbers are used as a political and moral basis to justify intervention.

Which Platforms Are Targeted and What Changes in Practice

The law initially lists ten services directly affected: Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, Kick, and Twitch.

All these platforms fall under the official definition: a focus on social interaction among users, messaging, and content publication.

Children will still be able to access some open content, such as videos on platforms that do not require login, but will not be able to maintain their own profiles on the banned platforms.

Services like YouTube Kids, Google Classroom, and WhatsApp were excluded for not fully meeting the established criteria.

The government has already indicated that the list may be expanded over time, including to cover online games that function, in practice, as disguised social networks, allowing conversations, groups, and content sharing among strangers.

This regulatory threat has already prompted companies like Roblox and Discord to rush to strengthen age verification in some features.

Million-Dollar Fines and Total Responsibility of the Platforms

A central point of the legislation is that parents and children will not be fined. The responsibility falls entirely on the technology companies.

In cases of serious or repeated non-compliance, platforms may face fines of up to 49.5 million Australian dollars, equivalent to around R$ 170 million.

By requiring the private sector to bear the cost of enforcement, the government reinforces the message that those who ban social media for minors must also prove they are taking this ban seriously by investing in technology, staff, and monitoring systems.

Critics, however, point out that giants like Meta earn similar amounts in a matter of hours, casting doubt on the real deterrent power of the penalties.

Nevertheless, the risk of reputational damage and additional lawsuits may push companies to enhance control, even with limited impact on their finances.

Facial Recognition, Age Verification, and Fear of Mass Surveillance

The law does not specify a single mandatory method but requires companies to adopt multiple forms of age verification, considered more robust than simply declaring a date of birth. Among the mechanisms mentioned are:

  • submission of official documents
  • facial or voice recognition systems with age inference
  • analysis of behavior and interactions to estimate whether the user is under or over 16

According to the government, relying solely on the user’s word or parental permission has proven insufficient. The logic is to tighten the filter and prevent the automated registration of children’s profiles en masse.
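As an illustration only — the law does not prescribe any algorithm, and every name and threshold here is hypothetical — a multi-signal check of the kind described above might refuse to trust a self-declared birth date unless an independently inferred age estimate agrees with it:

```python
from datetime import date

MIN_AGE = 16  # threshold set by the Australian law


def declared_age(birth_date: date, today: date) -> int:
    """Age from a self-declared birth date -- trivially falsified by the user."""
    years = today.year - birth_date.year
    # Subtract one if the birthday has not yet occurred this year.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years


def passes_age_gate(birth_date: date, inferred_age: float, today: date) -> bool:
    """Hypothetical multi-signal gate: the declared date alone is not trusted;
    an inferred estimate (e.g. from facial-age estimation or behavioral
    signals) must also clear the threshold."""
    return declared_age(birth_date, today) >= MIN_AGE and inferred_age >= MIN_AGE
```

Under this sketch, a 13-year-old who declares a 1990 birth date would still be blocked if the platform's behavioral model infers an age below 16 — which is the gap the government says a declaration-only check leaves open.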

On the other hand, privacy experts warn that storing faces, documents, and sensitive data of millions of people creates a gigantic risk surface.

Australia’s recent experience with high-profile data breaches fuels fears that information collected to protect children may end up exposed or used for other purposes.

The government asserts that the law provides “strong protections”: data could only be used for age verification and must be destroyed after the process, with “severe penalties” for violations.

Still, digital rights organizations doubt the real capacity to ensure this level of security on an industrial scale.

Meta Anticipates, Other Platforms Hesitate and Question the Measure

Meta, owner of Facebook, Instagram, and Threads, announced that it will close accounts of teenagers even before the official date, starting December 4.

Users mistakenly removed will be able to attempt to reactivate access by sending an official document or a selfie video to prove their age.

Other affected companies, like TikTok, Snap, X, Reddit, and streaming services, maintain a more critical stance.

They argue that the law bans social media in a way that may push young people to less safe spaces on the internet, without moderation or reporting tools, in addition to imposing heavy bureaucracy on legitimate users.

Some platforms even question the very classification of “social network.” Snap and YouTube have publicly argued that they are something different, opening the door to legal disputes and attempts to redesign rules through court actions.

Even so, companies stated in parliamentary hearings that, despite disagreeing with the measure, they will comply to avoid sanctions and open conflicts with regulators.

Will It Work? Doubts About Effectiveness and Impact on Youth Mental Health

Theoretically, the law bans social media for under-16s and promises to reduce exposure to toxic content.

In practice, however, experts point to a set of challenges:

  • Flawed Technology: facial recognition systems tend to be less accurate for children and adolescents, which can lead to wrongful blocks or leave gaps open.
  • Obvious Loopholes: teenagers have already reported that they are creating accounts with false birth dates before the law takes effect, organizing shared profiles with parents, or using VPNs to circumvent geographic blocking.
  • Boomerang Effect: by closing doors on more regulated networks, the policy may encourage migration to less moderated sites, forums, and apps, where hate speech, abuses, and exploitation face even fewer barriers.

There is also an important debate about social isolation. For many teenagers, social media is the primary contact channel with friends, interest groups, and support communities.

Educators and psychologists fear that a strict ban, without complementary strategies for digital education, could trade one risk for another by cutting important social ties in delicate phases of life.

Data, Privacy, and the Ghost of a Super Registration of Children

To enforce a law that bans social media for a massive group of users, it is inevitable to deal with increased data collection. This raises uncomfortable questions: who holds this information, for how long, with what security safeguards, and under what auditing rules?

Australia has already witnessed major leaks involving banks, health companies, and telecommunications, with personal data sold, exposed, or used in scams.

Critics fear that a massive age verification system could create something akin to a “super registration” of young people and adults, with high value in the digital underworld.

The government assures that the legislation requires platforms to:

  • use the data solely for age verification
  • destroy the information after the process
  • offer alternatives to using official documents

Still, public trust cannot be rebuilt merely with the letter of the law, especially when the companies involved have already been the subject of privacy scandals and leaks in other countries.

The Australian Law in the Global Context and What Brazil Is Doing

Although the Australian measure is the first to completely ban social media for under-16s on specific platforms, other countries are moving in similar directions, combining restrictions, fines, and enhanced protection requirements.

  • In the United Kingdom, new rules allow heavy fines and even imprisonment of executives if companies do not protect young people from illegal or harmful content.
  • European countries are discussing minimum ages and parental consent requirements for using social networks, as well as “digital curfews” for older teenagers.
  • France, Denmark, Norway, and Spain are evaluating or have already proposed age limits and formal requirements for parental authorization.
  • In the United States, state initiatives to restrict social media for minors have faced judicial resistance.

In Brazil, the movement is following a different path: the Digital Statute for Children and Adolescents, known as “ECA Digital,” holds companies responsible for protecting minors under 18 from harmful content and requires that accounts of those under 16 be linked to a legal guardian, instead of completely banning their use.

Additionally, recent decisions from the Supreme Federal Court have reinforced the obligation of platforms to remove criminal content, such as child pornography, encouragement of suicide, and attacks on democracy, under penalty of liability.

The detailed regulation of ECA Digital will be under the responsibility of the National Data Protection Authority (ANPD), with implementation expected in 2026.

The Australian experience will be closely observed by governments, companies, and experts worldwide.

If the law can reduce fraud, abuse, and exposure to extreme content without causing major side effects, it is likely to inspire new international initiatives.

Conversely, if the result is an increase in digital clandestineness, more risks to privacy, and a teenage rush to less safe platforms, the measure may be seen as a warning about the limits of state control in a global, decentralized, and easily circumvented environment.

The fact is that, by betting on a law that bans social media for under-16s, Australia is ushering in a new phase of the clash between child protection, digital freedom, the economic interests of big techs, and the right of families to decide how children and teenagers will navigate the online world.

Do you think banning social media for under-16s is the right path, or should the focus be on digital education and parental supervision?

1 Comment
L Anna
25/11/2025 14:02

Censorship and surveillance at the maximum level because incompetent parents don’t have 10 minutes to see what their child does on the internet.

Congratulations to the incompetent.

Source
Maria Heloisa Barbosa Borges

I cover construction, mining, Brazilian mines, oil, and major railway and civil engineering projects. I write daily about curiosities of the Brazilian market.
