The bill represents the world's first ban on social media for children under 16, marking a significant step in the debate on child safety online
The ban on social media for children under 16 delivers on a promise to better protect minors online
On 21 November 2024, the Australian Government introduced the Online Safety Amendment (Social Media Minimum Age) Bill 2024 in Parliament, advancing its promise to prohibit access to social media for children under the age of 16. The bill, which would amend the Online Safety Act 2021, proposes adding a new category of “age-restricted social media platforms” to the law, capturing services with the “sole or significant purpose of enabling online social interactions between 2 or more users”. When announcing the bill, Prime Minister Anthony Albanese acknowledged that some underage users will inevitably find ways to access restricted services, but said he “wants parents to know the Government is in their corner”. The bill, which has widespread political support, is pending before the Senate Environment and Communications Legislation Committee, which will release a report on the matter on 26 November 2024. If passed, the bill would give platforms one year to comply with its provisions.
The Government acknowledged that it is unreasonable to expect platforms to prevent every attempt to work around restrictions
The bill would require captured platforms to take “reasonable steps” to prevent underage users from holding accounts for their services. While the Government adopted a slightly expanded version of the definition of “social media services” set out in the Online Safety Act, it also proposed exempting platforms that are valuable or important to children’s lives, including:
Messaging apps;
Online gaming services; and
Services with the primary purpose of supporting the health and education of end-users.
The Government has repeatedly acknowledged that, as with cigarettes or alcohol, it is not reasonable to expect platforms to prevent every underage user from accessing services through workarounds or attempts to undermine age restrictions. The bill therefore only introduces penalties for firms with a “systemic failure to take action to limit such circumventions”, not for individual instances of underage users gaining access. The Government also clarified that underage users and their parents cannot be penalised under the bill, noting that responsibility for protecting minors through this measure lies entirely with platforms.
The bill leaves the question of how to conduct age assurance open but promises more guidance following the Government’s trial
Acknowledging that some age assurance mechanism would be required to implement the age restriction on platforms, the bill also proposes strengthening the privacy protections for users required to verify their age. Platforms would be banned from using personal data collected to complete age assurance checks for any other purpose, unless a user gives their voluntary, informed, current, specific and unambiguous consent. Any data collected for age assurance (and other agreed purposes) would also be required to be destroyed by the platform and any third parties with access following the completion of those purposes. While the Government did not specify a preferred method of age assurance in the text of the bill, it did make reference to its ongoing Age Assurance Trial as a source of information for the Government and eSafety Commissioner. The trial, run in coordination with the Age Check Certification Scheme, will include a technical test of presently available age assurance methods, a survey of consumer attitudes towards these methods and a consultation with key stakeholders on community, expert and industry attitudes. Beyond restricting social media for under-16s, the Government is also working towards more effective restrictions on access to pornographic content for under-18s through the trial.
Maximum fines for breaches of the Online Safety Act would also be increased to mirror the situation in the EU and UK
The bill would also raise the maximum penalty for breaches of all provisions of the Online Safety Act, including those related to the new age restriction, from A$9.9m (£5.1m) to A$49.5m (£25.4m). The Government notes that this increase in maximum fines would bring Australia more closely into alignment with the EU, Ireland and the UK, where the maximum fines for breaches of online safety laws are:
EU’s Digital Services Act: 6% of worldwide annual turnover;
Ireland’s Online Safety and Media Regulation Act 2022: 10% of turnover or €20m (£16.7m); and
UK’s Online Safety Act: 10% of worldwide revenue or £18m.
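The EU, Irish and UK regimes above share a common structure: the applicable maximum fine is the greater of a percentage of turnover and (where one exists) a fixed floor, whereas the Australian bill sets a flat cap. A minimal sketch of how that "greater of" structure plays out at different revenue levels (a hypothetical helper for illustration only, not anything defined in the legislation):

```python
def max_fine(turnover: float, pct: float, floor: float = 0.0) -> float:
    """Greater of a percentage of turnover or a fixed floor,
    the structure used by the Irish and UK online safety regimes
    (the EU's DSA uses the percentage alone)."""
    return max(pct * turnover, floor)

# UK Online Safety Act: greater of 10% of worldwide revenue or £18m.
# For a platform with £1bn revenue, the percentage dominates:
print(max_fine(1_000_000_000, 0.10, 18_000_000))  # 100000000.0 (£100m)

# For a smaller platform with £100m revenue, the £18m floor applies:
print(max_fine(100_000_000, 0.10, 18_000_000))    # 18000000.0 (£18m)
```

The percentage component means the ceiling scales with the size of the firm, while Australia's flat A$49.5m cap applies regardless of platform revenue.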
Together with the Government’s recently announced plans to introduce a Digital Duty of Care for platforms, this increase in fining powers would amend one of the world’s earliest online safety regimes to more closely mirror the regulatory consensus reached between the UK and EU in recent years.