Online Safety
As online interactions expand and digital services become ever more embedded in daily life, online safety has moved to the forefront of regulatory agendas. Policymakers are concentrating on how platforms safeguard children, address harmful content, and deploy age assurance in ways that respect privacy.
Through 2026, this focus is translating into more assertive oversight. In the EU, guidance under the Digital Services Act is tightening expectations around age assurance and protections for minors. Australia is setting new benchmarks through strengthened platform duties and under-16 account restrictions, while the UK is preparing for an escalation of safety requirements under the Online Safety Act.
For digital services, the direction is clear: online safety is no longer merely a compliance task but a core operational responsibility. Platforms should be reviewing their systems, strengthening safeguards, and preparing for a regulatory environment where expectations will only continue to rise.
Online safety
Australia’s online safety regulations continue to evolve and affect a wide range of online platforms, with the eSafety Commissioner proactively monitoring compliance.
The Social Media Minimum Age (SMMA) legislation under Part 4A of the Online Safety Act 2021 (OSA), which came into force on 10 December 2025, requires social media platforms to take reasonable steps to prevent children under 16 years old from holding accounts. Phase 2 industry codes (Phase 2 Codes) registered under the OSA, regulating legal but potentially harmful content, will also be implemented in December 2025 and March 2026.
With SMMA legislation in force from 10 December 2025, online businesses must assess their services to determine if they are subject to the SMMA obligations, and if so, implement reasonable, privacy-preserving age verification measures. Although the eSafety Commissioner has initially focused compliance efforts on platforms with a high number of Australian users, we expect this focus will expand to those with smaller Australian user bases throughout 2026.
The Phase 2 Codes apply to a wider range of online services, with precise obligations depending on the particular service. Businesses might have to undertake risk assessments, update end-user terms, implement age assurance measures for access to 18+ content, provide online safety tools and a complaints process, and comply with reporting and continuous improvement requirements.
Online safety is becoming a central focus of EU regulatory initiatives, driven by growing political momentum toward stronger protections for children against online risks and harms, alongside increased scrutiny of platforms’ compliance with these obligations.
In July 2025, the European Commission published guidelines under Article 28 of the DSA on ensuring privacy, safety and security for minors. Subsequently, in October the Commission requested information from a number of large platforms about their child safety measures, while also indicating that the European Board for Digital Services’ Working Group for the protection of minors will assess compliance by smaller platforms in coordination with other authorities.
These actions build on the European Commission President's State of the Union speech in September 2025, which stated that the Commission is watching the implementation of Australia's under-16 social media ban closely to see what next steps can be taken in the EU. In parallel, several Member States are advocating for further measures, such as an EU-wide 'age of digital adulthood'. Meanwhile, national-level measures on online safety have also been implemented across the EU/EEA under the Audiovisual Media Services Directive (AVMSD).
In November 2025, the Commission unveiled its 2030 consumer protection agenda, which includes the previously announced Digital Fairness Act (DFA). One of the focus areas of the DFA is strengthening online protections for children to reduce their exposure to harmful practices and features in digital products. The DFA is expected to be proposed by the end of 2026. The European Data Protection Board (EDPB) is also set to release guidelines on the processing of children’s data shortly.
Now is the time for online platforms to put in place appropriate safeguards in accordance with Article 28 DSA requirements. With the increasing focus on children’s issues and online safety at both EU and Member State level, digital services should begin reviewing their approach to online safety (particularly the protection of minors using their services), regardless of whether they fall within the scope of Article 28. We expect regulatory scrutiny in this area will only continue to intensify in 2026.
2025 was the year safety duties under the UK’s Online Safety Act (“UK OSA”) first started to bite for service providers; 2026 will be the year those duties escalate for the largest and riskiest (“Categorised”) services, and Ofcom zeroes in on its key enforcement priorities.
On in-scope providers’ 2026 “watch” lists should be:
- Categorisation - although delayed by Wikimedia’s Judicial Review, Ofcom is finally set to publish its register of Categorised Services in July 2026;
- Safety measure areas of focus - Ofcom has highlighted minor safety, age checks, CSAM/grooming, terror/illegal hate, safety of women and girls, and risk oversight as the main focus areas of its industry scrutiny and enforcement work in 2026;
- New proactive tech safety measures from Autumn? - Ofcom is due to publish an update on its additional safety measures consultation in Autumn 2026, which could recommend services implement further proactive moderation tools;
- Risk assessments - Ofcom has highlighted areas for improvement in its Report on Year 1 risk assessments. Services should factor these into their 2026 risk assessment review (and note that Ofcom has flagged it will request copies of certain providers’ 2026 risk assessments between 1 May and 31 July 2026); and
- Fee notification - providers meeting the “Qualifying Worldwide Revenue” threshold of £250m+ will need to notify Ofcom through its online fees portal by 11 April 2026.
Providers launching new user-to-user or search services must also factor online safety scoping, risk assessment and safety measures into compliance workstreams.
Age assurance
The importance of age assurance for protecting children from online risks and harms is gaining significant traction across the EU, with a push towards harmonised solutions and increasing regulatory attention under the Digital Services Act (DSA) in particular.
During the past year, the European Commission issued guidelines under Article 28 DSA, which amongst other things, address how age assurance obligations might arise for online platforms. Separately, from a data protection compliance perspective, the European Data Protection Board (EDPB) adopted Statement 1/2025, clarifying how age assurance measures should align with GDPR principles.
The Commission is also working towards an EU-harmonised approach to age assurance through the development of a unified age verification solution. Designed to support compliance with Article 28 DSA and interoperability with the framework underpinning the future EU Digital Identity Wallets, the technology is currently in its pilot phase and is expected to be rolled out in 2026.
From a regulatory perspective, in October 2025, the European Commission launched its first investigative steps towards enforcement of Article 28 DSA, requesting information from a number of large platforms about their age verification systems, amongst other things.
During 2026, we expect EU regulators to intensify their scrutiny of age assurance measures across a range of regulated areas, as part of the increasing EU policy and regulatory emphasis on the protection of children online. Companies operating online platforms should therefore prepare for increased regulatory scrutiny of their age assurance systems, ensuring alignment with GDPR and DSA requirements, amongst others. They should also monitor the pilot outcomes of the EU age verification solution to inform their approach to age assurance.
Age verification is transitioning from voluntary best practice to a mandatory obligation under online safety, content and privacy regulation.
Across the APAC region, more jurisdictions are introducing age assurance requirements, embedding them within online safety frameworks, platform and content regulations, as well as privacy and data protection laws.
For example:
- Australia’s social media minimum age requirements under the Online Safety Act require social media providers to take “reasonable steps” to prevent users under the age of 16 from holding an account.
- In Singapore, major app stores will be required to implement mandatory age verification under the Code of Practice for Online Safety for App Distribution Services. The government is now turning its attention to social media platforms, studying how these services should implement age assurance measures.
- In India, the new Digital Personal Data Protection Act will require the “verifiable consent” of a parent to process children’s personal data, and will prohibit tracking or behavioural monitoring of children and targeted advertising directed at them.
Expect stricter, mandatory age assurance rules that may apply not just to app stores and social media platforms, but to any organisation processing children’s data or offering products or services to children. Now is the time for organisations to identify their obligations and put in place proportionate, privacy-preserving age assurance measures that reflect current technology and industry best practice.