Age verification requirements under the Online Safety Act come into effect…
July 27, 2025
From 27 December 2025, search engines used in Australia, such as Google, will be subject to mandatory age verification. Failure to comply will result in fines of almost $50 million per breach. For users under the age of 18, search engines will be required to filter out pornography, high-impact violence and other harmful content. How successful this will be remains to be seen. Filters have a dismal history on the internet: they have been too light-touch, too heavy-handed, or have applied a poor interpretation of what constitutes pornography or high-impact violence. The amendments have largely been implemented without much notice. The Government has also legislated age restrictions on the use of social media through Part 4A of the Online Safety Act 2021.
In the UK, effective 25 July 2025, sites and apps must implement “age-gating” methods to protect children from accessing harmful content. The regulator is, with no doubt unintended Orwellian undertones, the Office of Communications (Ofcom). The age-gating methods are required to identify which users are children and then prevent them from accessing pornography, as well as content relating to self-harm, suicide and eating disorders, among others.
The age verification requirements are provided under the Ofcom Codes of Practice developed pursuant to the UK’s Online Safety Act 2023. Ofcom highlighted that the biggest and most popular adult service providers, plus thousands of smaller sites, have committed to deploying age checks across their services. According to Ofcom, other online platforms have now announced that they will deploy age assurance.
Ofcom explained that, from 25 July 2025, it would actively check compliance and take enforcement action against any company that allows pornographic content and does not comply with the age-check requirements by the deadline.
Additionally, Ofcom stated that it had launched a monitoring and impact program, primarily focused on the biggest platforms where children spend most of their time. Ofcom noted that the program will include:
- a review of the platforms’ efforts to assess risks to children, which must be submitted to Ofcom by 7 August 2025;
- tracking children’s online experiences to judge whether safety is improving;
- scrutinizing the platforms’ practical actions to keep children safe, including:
  - whether they have effective means of knowing who their child users are;
  - how effectively they have configured their algorithms so that the most harmful material is blocked in children’s feeds; and
  - how their content moderation tools identify types of content harmful to children; and
- swift enforcement action if evidence suggests that platforms are failing to comply with their child safety duties.
While the UK regulator will focus on UK companies and compliance with UK legislation, there are lessons and guidance Australian practitioners can take from how the legislation is enforced in the UK. Similar restrictions will apply to Australian companies in the near future. Being prepared is a good thing.