Toni Hassan is the author of ‘Families in the Digital Age: Every Parent’s Guide’ (Hybrid Press, 2019) and an advisor to Children and Media Australia.
Enforcing the new social media law limiting minors’ access: what can we expect?
Introducing Australia’s world-leading Online Safety Amendment (Social Media Minimum Age) Act was relatively easy. What now matters is what comes next.
After the Act became law in November 2024, the Albanese government gave itself (or its successor) 12 months to prepare guidelines on the “reasonable steps” social media companies will be required to take to prevent children under the age of 16 from using their platforms.
Platforms that fail to take those reasonable steps after that time will be liable for fines of up to $50 million. That sounds like a lot until you consider the profits these companies actually make, and it is nowhere near as much of a threat to them as suspending their licences would be.
Who needs to comply? The Act doesn’t exactly say. Instead, it refers to “age-restricted social media platforms” to be defined by legislative rules drawn up by the Minister for Communications on the advice of the regulator, the eSafety Commissioner.
Those decisions will be “disallowable instruments”, meaning the parliament can reject them if it doesn’t approve. Facebook, Instagram, Snapchat, TikTok, Reddit, and X (formerly Twitter) are likely to be among those targeted.
The platforms expected to be excluded are messaging apps (where children can nonetheless be harassed), online gaming services (where children are increasingly groomed and gamble compulsively) and YouTube with its short-form videos (thought of as an educational platform, but one where binge-watching begins among young children and where usage has been associated with an increased risk of anxiety and poor inhibitory control).
It will be up to platforms to decide how they estimate or verify the age of their users and restrict access. The Act prevents them from using the information and data collected for age assurance for any other purpose unless the user has given voluntary, informed, current, specific and unambiguous consent. Once the information has been used for age assurance or any other agreed purpose, it must be destroyed by the platform or any third party contracted by the platform.
The government is trialling age assurance technologies. Among the options being considered are biometric markers and digital usage patterns.
Much of what ends up happening will be the responsibility of the eSafety Commissioner, Julie Inman Grant, who, ahead of the new law, expressed doubts and compared banning children younger than 16 from social media to banning them from the ocean.
“We don’t fence the ocean or keep children entirely out of the water,” she told a parliamentary inquiry, “but we do create protected swimming environments.” Inman Grant has previously worked for Microsoft, Twitter and Adobe. Given that, can the regulator be expected to be the strong cop parents want on this beat?
The re-election of President Donald Trump will make standing up to social media chiefs (some of the richest men in the world, and some of Trump’s newest friends) a good deal harder.
Getting and keeping kids off screens has been the abiding advocacy passion of my parenting life. As I see it, much of the value of the new law is the cover it will give parents to set stronger boundaries. It’s about changing social norms. Parents will be able to say they have the law on their side. It will also put companies that profit from children, rather than protect them, on notice.
But because the Act captures only some platforms, it risks pushing young people onto services deemed outside it (those without a ‘primary purpose of social interaction’), different online environments that may become more focused on social interaction and potentially just as damaging.
The next steps will be the legislative rules drawn up by the Minister and a legislated “Digital Duty of Care” that will place broad obligations on digital platforms to “keep users safe and help prevent online harms” as technologies and services evolve. Online harm should include blatant misinformation. Meta and other digital platforms must be compelled to adopt a clear and firm accountability regime: actively reviewing risks, removing illegal content and reversing the trend of dropping fact-checking.
I, and many parents, would like a lot more action. Childhood is no longer ‘play-based’. It has become increasingly screen- and phone-based and commercially driven, damaging social and emotional development and healthy sexual development, especially given how easily a child can access and become hooked on violent pornography.
I would like to see devices out of all schools, and most certainly out of early learning centres and primary schools. Australia has more devices in schools than most other countries, along with some of the worst academic results and adolescent mental health problems in the OECD.
The government’s measures are a good start. How they play out will depend on how they are enforced.