This year Ofcom begins its enforcement of the Online Safety Act – legislation that is unique in both its depth and flexibility. These two characteristics will also make the success of the framework ultimately dependent on the capability and capacity of the regulator charged with its implementation.
With implementation well underway, Ofcom appears committed to its plan to finalise regulation quickly while leaving room for changes and additions. This approach is noteworthy in light of criticism the regulator received last year over the perceived slowness of its implementation.
The Online Safety Act places additional obligations on a larger group of regulated firms than the EU’s Digital Services Act (DSA) does, with upwards of 40 platforms expected to be identified as categorised services. Ofcom’s recommendation to include a service’s functionality in the designation decision takes a more nuanced approach to identifying and mitigating online risk than other regimes.
Ofcom’s recommendation on how it intends to set and collect fees has already received criticism from big tech but appears broadly aligned with the EU’s regime, if not better set up to distribute the financial burden across more firms.
The Online Safety Act adopts a deeper but narrower approach than the DSA and serves as a more flexible framework that is highly dependent on the capacity of the regulator to enforce it. The EU’s wider definition of regulated services and prohibited practices may be a better reflection of a systemic approach to online safety, even if it fails to capture harm among smaller platforms.
Ofcom must grapple with its place in the world as one of many regulators attempting to address safety problems that are global in nature, and the potential for incoherence that brings. The election of President Donald Trump in the US and the emboldenment of tech firms to push back on regulation have set up the potential for conflict rather than cooperation with industry.
Ofcom is balancing timeliness and responsiveness in implementing the law
Since the Online Safety Act received Royal Assent in October 2023, Ofcom has been tasked with implementing and enforcing the UK’s new online safety regime. Now more than one year into that work, the regulator is about to begin enforcing the binding set of regulations it has written to implement the law. With digital services set to face new obligations aimed at making the UK the safest place in the world to go online, we detail Ofcom’s progress to date as well as the questions that remain, beginning with what we know for certain about the timeline on which implementation is progressing (see Figure 1).
In December 2024, Ofcom published the first of three core codes of practice, the code of practice on illegal content, that will define the regulator’s work of enforcing the Online Safety Act. The code has been approved by Parliament and became enforceable as of 17 March 2025. In anticipation of enforcement, regulated platforms were expected to conduct risk assessments for illegal content on their services, which were due to Ofcom by the end of Q1 2025. The same process will unfold for the code of practice on protecting children and, eventually, for the code of practice for categorised services facing added obligations. By the end of 2025, we expect the codes of practice on illegal harms and child protection to be in force alongside the existing guidance on age assurance for platforms hosting pornographic content. Though the timeline for the framework that will govern categorised services is more extended and has already been delayed, we expect to learn which platforms will be designated and to see a draft code of practice by the end of the year (pending secondary legislation from Parliament on designation thresholds).
True to its intentions (as set out during a series of events in 2024), Ofcom has stated its plans to move ahead with finalising the initial three codes of practice while concurrently developing additional binding measures for public consultation. Having previously signalled that these codes would be iterative and responsive to changes in the market, the regulator plans to release a consultation on added measures related to illegal content and protecting minors in Q2 2025. In its most recent update on implementing the Online Safety Act, Ofcom referenced the acts of rioting and violence across the UK in the summer of 2024 – widely understood as having been fuelled by online content – as evidence of the need to move quickly in implementing the law. The regulator received added scrutiny and some criticism during this period for a perceived delay in enforcing the new law, even though implementation largely proceeded as scheduled at that time. We expect that Ofcom will continue to consider and reconsider measures within these codes of practice and issue periodic updates through public consultation, as provided for in the law, in response to developments in the market and in the broader online ecosystem.
Upwards of 40 firms are likely to be subject to additional duties as categorised services, significantly more than under the DSA
Though Ofcom’s outputs range in how close they are to being finalised, we can approximate the proposed administrative framework underpinning online safety regulation based on its publications to date. Ofcom has suggested that as many as 100,000 services could be captured by the Online Safety Act. To support these firms, the regulator has launched a suite of tools, including an interactive compliance tool and a record-keeping template, to help them navigate the new regulatory burden.
Among regulated platforms, only a small proportion are likely to be subject to the regulator’s new fee scheme for online safety, and a smaller proportion still are likely to be captured by the categorised services regime. In relation to the categorised services framework, Ofcom estimates that between 12 and 16 services would qualify as Category 1 firms, two services as Category 2A and between 25 and 40 as Category 2B, based on its recommended thresholds to the Secretary of State (see Table 1). For comparison, 25 services have been designated for additional obligations under the EU’s Digital Services Act (DSA). While we’ve offered our best predictions in Table 1 on which platforms are likely to be categorised based on that advice to the Government, Ofcom’s future publication of a register of services will offer important clarity, particularly on how the regulator defines some of the functionality standards it has set and how firms are expected to segment their user numbers across the different functionalities of their services.
Despite some criticism, Ofcom’s fee structure appears to align with other approaches around the world
On fees, Ofcom estimates that at least 20 and possibly as many as 40 firms might be required to pay towards its online safety work. The regulator is currently developing a recommendation to the Government and has preliminarily suggested that firms making at least £250m in qualifying worldwide revenue and £10m in UK revenue from regulated services should be required to contribute. A size-based fee scheme is common among other legislative frameworks and has been implemented by the EC and Ireland’s Coimisiún na Meán under the DSA – see Table 2.
Ofcom’s proposed fee structure has drawn criticism for its basis in worldwide revenue, despite the UK revenue threshold, from firms including Google, Meta and Uber, which suggest that such a fee scheme would limit their plans for investment in the UK. Specifically, the firms argue that basing fees on worldwide revenue from regulated services, as opposed to revenue attributable to the UK, is not justified as a simpler measure for fee payers and would disincentivise the launch of new services in the UK for fear of capturing additional revenue in the fee calculation. In contrast, the implementation of the DSA is funded through a levy paid by platforms designated as Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), determined first by dividing the expected costs to the regulator by the number of fee-paying firms. While Ofcom proposes a 0.02% levy on worldwide revenue, the EC caps the fees paid by any platform at 0.05% of worldwide revenue and notes that fees are to be adjusted to ensure they are proportionate to the size of each platform. Though the UK is a distinctly smaller market than the EU, the EC is also supported in its DSA efforts by a competent authority in each Member State, while Ofcom is the sole authority responsible for a work programme, and a related budget, that is similarly labour-intensive. While firms are right to note the misalignment between the definition of categorised services and that of fee-paying platforms, especially in comparison to the EU, Ofcom’s approach to fees appears well aligned with EU standards, if not better placed to spread the burden more broadly across industry, as tech firms themselves have advocated.
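To make the comparison concrete, the sketch below works through the two fee bases as described above. It is a minimal illustration only: the revenue figures, the cost estimate and the count of fee-paying firms are hypothetical, and both functions are simplified readings of the two approaches rather than either regulator’s actual methodology.

```python
# Minimal sketch comparing the two fee bases described above.
# All figures are hypothetical; neither function reproduces the regulators'
# actual methodologies.

def uk_osa_fee(worldwide_revenue_gbp: float, uk_revenue_gbp: float,
               levy_rate: float = 0.0002) -> float:
    """Ofcom's proposed approach: a 0.02% levy on qualifying worldwide
    revenue, payable only above the proposed £250m worldwide and
    £10m UK revenue thresholds."""
    if worldwide_revenue_gbp < 250e6 or uk_revenue_gbp < 10e6:
        return 0.0
    return levy_rate * worldwide_revenue_gbp

def eu_dsa_fee(regulator_costs_eur: float, n_paying_firms: int,
               worldwide_revenue_eur: float, cap_rate: float = 0.0005) -> float:
    """DSA approach as described above: the EC's expected supervisory costs
    spread across designated VLOPs/VLOSEs, capped at 0.05% of each
    platform's worldwide revenue."""
    equal_share = regulator_costs_eur / n_paying_firms
    return min(equal_share, cap_rate * worldwide_revenue_eur)

# A hypothetical platform with £50bn worldwide and £2bn UK revenue
print(f"UK OSA levy: £{uk_osa_fee(50e9, 2e9):,.0f}")       # £10,000,000
print(f"EU DSA levy: €{eu_dsa_fee(45e6, 25, 55e9):,.0f}")  # €1,800,000 (cap not binding)
```

On these assumptions, the UK levy scales directly with a firm’s worldwide revenue, whereas the EU levy is driven primarily by the regulator’s expected costs and the number of designated platforms, with the revenue cap binding only for the smaller designated firms.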
The UK law engages more deeply with perhaps a narrower scope of online safety than the EU’s Digital Services Act
Through both its administrative structure and the content of its obligations, the Online Safety Act presents a more thorough and adaptable, though perhaps narrower, project in online safety than the EU’s DSA. In structuring its thresholds for designation around service functionality, Ofcom is presenting a refined position on online risk that widens the net of platforms, sites and services presumed to present an increased risk of direct and personal harm to users. As we’ve said previously, the design of the law also acknowledges the harms of legal forms of content and attempts to engage more deeply with the greater negative impact those harms may have on certain user groups, including children but also women and girls (although that engagement is limited to guidance, as opposed to a binding code, in the case of women and girls). Beyond Ofcom’s ability to impose larger fines for non-compliance, UK authorities are also able to prosecute individuals criminally, both where they personally engage in certain illegal acts online and where they are implicated professionally in a regulated firm’s non-compliance. Read alongside the regulator’s commitment to ongoing consultation on emerging harms, the Online Safety Act appears to diverge from the DSA in electing more nuanced (though more complicated) thresholds for regulation. This approach, valuing flexibility and affording regulators the ability to evolve enforcement to meet market forces, is increasingly emblematic of the UK’s broader approach to platform regulation. Such deference to regulatory expertise is also written into the Digital Markets, Competition and Consumers Act, which similarly hangs the success of the whole legislative project on the capability and capacity of the regulator in charge of it.
As an apparent trade-off of this deeper and more adaptive approach, however, the Online Safety Act largely fails to address a number of online harms taken on by the DSA. The EU regulation captures a broader range of firms, including intermediary and hosting services with an arguably more distant relation to user-generated content. The DSA also includes provisions on harms related to e-commerce, such as the use of dark patterns, and on disinformation, and has already been extended to encompass greater regulation of election-related harms. These risks, though less likely than some forms of user-generated content to cause immediate and severe harm to an individual, contribute to a broader undermining of trust and safety in shared online spaces. To an extent, the broader scope of the DSA reflects its function in systemic mitigation, reinforcing that regulation aims to secure an entire ecosystem while limiting the potential for costly intervention at the level of the individual user that could also violate fundamental rights. However, the breadth of the DSA is made possible through succinct and simplified thresholds and prohibitions for regulated platforms. While the law also offers a mechanism to incorporate new obligations and codifies them in a way that is likely to make regulation more durable in the face of political backlash, the DSA nonetheless maintains a blind spot for the serious harms that take place on smaller platforms.
Ofcom will have to contend with its place in the world as one of many regulators operating within a rapidly shifting political environment
Given the proliferation of online safety regulation around the world (and the differences between frameworks), the question of global cooperation in regulating a fundamentally global sector remains open. As the regulator chairing the Global Online Safety Regulators Network (GOSRN), Ofcom has already directed the forum to adopt thematic priorities on regulatory coherence as well as coordination to promote compliance. The regulators leading GOSRN and the global effort to regulate online safety have found a number of broadly compatible principles within their frameworks, including the ability to issue fines, a focus on mitigating harms from illegal content such as incitement to violence and a duty to enforce safeguards preventing children from accessing age-inappropriate content. However, differences between legislative frameworks do result in differences in which platforms are regulated, and how and where. These differences could be exacerbated as the market continues to develop, especially in the context of regulating generative AI or enforcing specific age assurance standards. While the UK remains a large and important market, Ofcom will nonetheless need to grapple with its place as one of many regulators working towards a safer online world, especially as a number of other jurisdictions consider adopting new regulation or amending existing frameworks, as is the case in Australia.
That aim of collaboration in service of coherence is likely to be complicated by the politics now surrounding the regulation of big tech. Following the election of President Donald Trump in the US and the near-immediate reaction from US-based big tech platforms, platform regulation of all kinds, including online safety regimes, has come under more vocal criticism from the firms it captures. Beyond this criticism, some platforms have announced changes in policy or plans for compliance that openly defy the standards set out by regulators. Meta’s January 2025 announcement that it would scale back its content moderation terms and discontinue fact-checking, though limited to the US at present, garnered significant attention, including a response from the French Government expressing concern for the future of information integrity and the importance of online safety laws. More directly, Google alerted the EC in January 2025 that it would not comply with recommendations about implementing fact-checking in a forthcoming (and voluntary) DSA Code of Conduct on Disinformation for both its search and YouTube services. With the prospect of Westminster- or Brussels-inspired online safety regulation in the US likely dead for the foreseeable future, the increased intensity of debate around these frameworks will underscore a year of implementation work and colour the first enforcement actions taken in the name of online safety. In this context, the political support offered for Ofcom’s work by members of the Government, including DSIT Secretary Peter Kyle, will also weigh on the balance between cooperation and conflict between regulator and industry in the future.