The UK Online Safety Act marks a turning point in how digital platforms are expected to manage user safety. No longer limited to social media giants, the act applies broadly to any online service that hosts user-generated content or enables user interaction, including messaging apps, forums, marketplaces, gaming platforms, and even niche community features embedded in larger products.
At its core, this act requires platforms to proactively identify, prevent, and respond to illegal and harmful content, especially content that puts children or vulnerable users at risk. These are not best-practice recommendations; they are legal obligations, enforceable by Ofcom, with severe penalties for non-compliance.
For many platforms, especially those that have never considered themselves “social” or high-risk, compliance can feel complex and overwhelming. But understanding your responsibilities under the Act and putting the right systems, processes, and safeguards in place is now essential to doing business in the UK.
This blog breaks down how to comply with the Online Safety Act: what the law requires, how to assess your risk, and the practical steps you can take to meet your obligations and build a safer platform for your users.
Understanding the scope of the Online Safety Act
The key objectives of the Act are to:
1. Prevent the spread of illegal content such as child sexual abuse material (CSAM), terrorist content, hate speech, and fraud.
2. Reduce exposure to harmful content, especially for children.
3. Hold platforms accountable through risk assessments, transparency reports, and clearly defined safety duties.
To manage enforcement and tailor obligations, the Act introduces a tiered system of regulated services:
| Category | Description |
|---|---|
| Category 1 | High-reach, high-risk services, typically large social media networks. These platforms face the most stringent requirements, including duties to protect users from both illegal and certain legal-but-harmful content, especially for children. |
| Category 2A | Search services, such as search engines, which must implement measures to prevent users from being exposed to harmful search results. |
| Category 2B | Smaller or lower-risk services. These are still subject to core safety duties, though with fewer requirements than Category 1 services. |
Core obligations under the Online Safety Act
1. Take reasonable steps to prevent exposure to harmful content
Platforms must implement proportionate systems and processes to reduce users’ risk of encountering harmful or illegal content. This includes using content moderation tools, keyword filtering, proactive detection technologies, and human moderation where appropriate. What counts as "reasonable" will depend on the nature of the platform, its audience, and its risk level, but inaction is no longer acceptable.
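As an illustration, here is a minimal sketch of a keyword-based pre-screening step that blocks clearly prohibited material and holds borderline posts for human review. The term lists and the `screenContent` function are hypothetical placeholders; real deployments typically layer machine-learning classifiers and trained moderators on top of simple matching like this.

```typescript
// Minimal sketch of a keyword-based pre-screening step.
// Term lists and thresholds are illustrative placeholders, not a real policy.

type ScreeningResult = {
  allowed: boolean;
  matchedTerms: string[];
  requiresHumanReview: boolean;
};

const blockedTerms = ["example-slur", "example-scam-phrase"];   // placeholder list
const reviewTerms = ["example-borderline-phrase"];              // placeholder list

function screenContent(text: string): ScreeningResult {
  const lower = text.toLowerCase();
  const matchedBlocked = blockedTerms.filter((t) => lower.includes(t));
  const matchedReview = reviewTerms.filter((t) => lower.includes(t));

  return {
    allowed: matchedBlocked.length === 0,
    matchedTerms: [...matchedBlocked, ...matchedReview],
    // Anything that matches a "review" term is held for a human moderator
    // rather than being silently published or silently removed.
    requiresHumanReview: matchedReview.length > 0,
  };
}

// Usage: screen a post before it is published.
const result = screenContent("User-submitted post text goes here");
if (!result.allowed) {
  console.log("Blocked before publication:", result.matchedTerms);
} else if (result.requiresHumanReview) {
  console.log("Queued for human moderation:", result.matchedTerms);
}
```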
2. Duty to remove harmful and illegal content
Services are required to swiftly remove content that constitutes a criminal offence or breaches the platform’s safety duties. This includes:
Cyberbullying and online abuse
Child sexual abuse material (CSAM)
Terrorist content
Hate speech
Fraud and scams
The Act imposes mandatory takedown timeframes for certain types of content, particularly when flagged by users or authorities. Failing to act promptly can result in serious penalties.
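One practical way to operationalise prompt removal is to attach a review deadline to every flagged item based on its category. The categories and SLA hours in the sketch below are illustrative assumptions, not timeframes taken from the Act or Ofcom's codes of practice.

```typescript
// Sketch of a takedown queue that assigns a review deadline per content category.
// The SLA hours are illustrative placeholders, not statutory timeframes.

type FlagCategory = "csam" | "terrorism" | "hate_speech" | "fraud" | "abuse";

const reviewSlaHours: Record<FlagCategory, number> = {
  csam: 1,        // placeholder: treated as most urgent
  terrorism: 1,
  hate_speech: 24,
  fraud: 24,
  abuse: 48,
};

interface FlaggedItem {
  contentId: string;
  category: FlagCategory;
  flaggedAt: Date;
  reviewDeadline: Date;
}

function enqueueFlag(contentId: string, category: FlagCategory): FlaggedItem {
  const flaggedAt = new Date();
  const reviewDeadline = new Date(
    flaggedAt.getTime() + reviewSlaHours[category] * 60 * 60 * 1000
  );
  return { contentId, category, flaggedAt, reviewDeadline };
}

function isOverdue(item: FlaggedItem, now: Date = new Date()): boolean {
  return now > item.reviewDeadline;
}

// Usage: flag a piece of content reported as fraud and check its deadline.
const item = enqueueFlag("post-123", "fraud");
console.log(`Review ${item.contentId} by ${item.reviewDeadline.toISOString()}`);
console.log("Overdue?", isOverdue(item));
```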
3. Reporting obligations to Ofcom
Platforms must report serious safety incidents and submit periodic updates to Ofcom, the designated regulator. These reports should detail what measures have been taken to detect, mitigate, and respond to harmful content, and whether any changes are being made to improve safety practices.
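Those reports are much easier to assemble if incidents are logged in a structured way from the start. The record below is a sketch of fields a platform might track internally; it is an assumption about what is useful, not a schema defined by Ofcom.

```typescript
// Sketch of an internal incident record that could feed periodic regulator reports.
// Field names are assumptions for illustration, not an Ofcom-defined format.

interface SafetyIncident {
  incidentId: string;
  detectedAt: Date;
  harmType: string;            // e.g. "CSAM", "terrorist content", "fraud"
  detectionMethod: "automated" | "user_report" | "trusted_flagger" | "authority_referral";
  actionTaken: string;         // e.g. "content removed", "account suspended"
  resolvedAt?: Date;
  followUpChanges?: string;    // process or product changes made as a result
}

// Summarise incidents for a reporting period.
function summariseIncidents(incidents: SafetyIncident[]) {
  const byHarmType = new Map<string, number>();
  for (const incident of incidents) {
    byHarmType.set(incident.harmType, (byHarmType.get(incident.harmType) ?? 0) + 1);
  }
  return {
    total: incidents.length,
    resolved: incidents.filter((i) => i.resolvedAt !== undefined).length,
    byHarmType: Object.fromEntries(byHarmType),
  };
}
```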
4. Age verification and parental controls
If your platform is likely to be accessed by children, you must take robust steps to protect underage users. This may include:
1. Age verification measures
2. Age-appropriate content restrictions
3. Parental controls
4. Child-specific risk assessments
Platforms must also consider how algorithms, autoplay features, and content recommendation systems could impact child safety.
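A common pattern is to gate risky features on a verified age band, defaulting unverified users to the most restrictive experience. The bands and feature flags below are illustrative assumptions; the right thresholds should come out of your own child-specific risk assessment.

```typescript
// Sketch of gating risky features by verified age band.
// Bands and flag choices are placeholders, not recommended policy.

type AgeBand = "under_13" | "13_to_17" | "adult" | "unverified";

interface FeatureFlags {
  autoplay: boolean;
  personalisedRecommendations: boolean;
  directMessagesFromStrangers: boolean;
  adultContent: boolean;
}

function featureFlagsFor(ageBand: AgeBand): FeatureFlags {
  if (ageBand === "adult") {
    return {
      autoplay: true,
      personalisedRecommendations: true,
      directMessagesFromStrangers: true,
      adultContent: true,
    };
  }
  if (ageBand === "13_to_17") {
    // Placeholder choices: older teens keep autoplay, but not personalised
    // recommendations, stranger DMs, or adult content.
    return {
      autoplay: true,
      personalisedRecommendations: false,
      directMessagesFromStrangers: false,
      adultContent: false,
    };
  }
  // Default-deny: under-13 and unverified users get the most restrictive experience.
  return {
    autoplay: false,
    personalisedRecommendations: false,
    directMessagesFromStrangers: false,
    adultContent: false,
  };
}

console.log(featureFlagsFor("unverified"));
```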
5. Risk assessments and safety-by-design
Before launching new features or as part of ongoing operations, platforms must conduct regular risk assessments. These assessments should cover:
The types of harm users may encounter
Which user groups are most at risk
How platform design may contribute to that risk
Based on these findings, services are expected to take “safety-by-design” steps, integrating protective measures into the structure and functionality of their product.
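In practice, this can be as simple as requiring a structured risk assessment record for every feature before it ships. The fields and the likelihood-times-severity score below are assumptions for illustration, not a format prescribed by the Act or Ofcom.

```typescript
// Sketch of a per-feature risk assessment record, so new features cannot ship
// without a documented assessment. Fields and scoring are illustrative.

interface FeatureRiskAssessment {
  feature: string;              // e.g. "public group chat", "live streaming"
  harms: string[];              // types of harm users may encounter
  atRiskGroups: string[];       // e.g. "children", "vulnerable adults"
  designFactors: string[];      // how the feature's design contributes to risk
  likelihood: 1 | 2 | 3 | 4 | 5;
  severity: 1 | 2 | 3 | 4 | 5;
  mitigations: string[];        // safety-by-design measures adopted
}

function riskScore(a: FeatureRiskAssessment): number {
  return a.likelihood * a.severity; // simple matrix-style score
}

const assessment: FeatureRiskAssessment = {
  feature: "public group chat",
  harms: ["grooming", "hate speech", "scams"],
  atRiskGroups: ["children"],
  designFactors: ["open discoverability", "no default member vetting"],
  likelihood: 3,
  severity: 4,
  mitigations: ["default-private groups for minors", "keyword screening", "report button"],
};

console.log(`${assessment.feature}: risk score ${riskScore(assessment)} / 25`);
```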
6. Transparency and public reporting
High-risk services (especially Category 1) are subject to transparency reporting obligations, including:
1. Publishing annual transparency reports
2. Disclosing takedown volumes, moderation practices, and enforcement data
3. Demonstrating how user complaints are handled
These reports must be made publicly available and submitted to Ofcom, ensuring accountability and industry-wide comparability.
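Much of this reporting can be generated from moderation logs if the right data is captured as actions happen. The log shape and categories below are illustrative assumptions about what such a log might contain.

```typescript
// Sketch of aggregating moderation logs into the kind of figures a
// transparency report discloses. The log shape is an illustrative assumption.

interface ModerationAction {
  contentId: string;
  reason: string;                 // e.g. "hate speech", "fraud"
  source: "user_report" | "automated" | "moderator";
  actionedAt: Date;
  userComplaintUpheld?: boolean;  // set when the action began as a user complaint
}

function buildTransparencySummary(actions: ModerationAction[], year: number) {
  const inYear = actions.filter((a) => a.actionedAt.getFullYear() === year);

  const takedownsByReason: Record<string, number> = {};
  for (const a of inYear) {
    takedownsByReason[a.reason] = (takedownsByReason[a.reason] ?? 0) + 1;
  }

  const complaints = inYear.filter((a) => a.source === "user_report");
  return {
    year,
    totalTakedowns: inYear.length,
    takedownsByReason,
    complaintsReceived: complaints.length,
    complaintsUpheld: complaints.filter((a) => a.userComplaintUpheld === true).length,
  };
}
```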
Online Safety Act compliance checklist
Navigating the requirements of the Online Safety Act can feel complex, but breaking it down into actionable steps helps.
1. Conduct a risk assessment of your platform
Identify how users might be exposed to illegal or harmful content, which user groups are most at risk (e.g., children), and how your platform’s features may contribute to that risk.
2. Update or implement clear content moderation policies
Ensure your policies explicitly cover prohibited content (e.g., abuse, CSAM, terrorism), enforcement procedures, and escalation pathways. These should be visible to users and enforced consistently.
3. Set up internal processes for complaint handling and content removal
Have workflows in place to receive, triage, and act on user reports. Define and adhere to takedown timeframes, especially for illegal content.
4. Enable user tools for reporting, blocking, and muting
Give users the ability to easily report harmful content or behavior, and tools to protect themselves, like muting or blocking other users.
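At a minimum, that usually means a reporting endpoint and a blocking endpoint exposed to every user. The sketch below uses Express with hypothetical route names and in-memory stores, purely to show the shape of such tools; a real service would persist this data and feed reports into the triage queue described earlier.

```typescript
// Minimal sketch of user safety endpoints using Express. Route names, request
// shapes, and the in-memory stores are hypothetical, for illustration only.
import express from "express";

const app = express();
app.use(express.json());

const reports: { reporterId: string; contentId: string; reason: string }[] = [];
const blocks = new Map<string, Set<string>>(); // userId -> blocked user ids

// Report harmful content or behaviour.
app.post("/reports", (req, res) => {
  const { reporterId, contentId, reason } = req.body;
  reports.push({ reporterId, contentId, reason });
  // In a real system this would feed the moderation triage/takedown queue.
  res.status(202).json({ status: "received" });
});

// Block another user so their content and messages are hidden.
app.post("/users/:userId/blocks", (req, res) => {
  const { userId } = req.params;
  const { blockedUserId } = req.body;
  if (!blocks.has(userId)) blocks.set(userId, new Set());
  blocks.get(userId)!.add(blockedUserId);
  res.status(204).end();
});

app.listen(3000);
```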
5. Design with safety-by-design principles
Proactively build in safeguards, such as content filters, algorithmic controls, and restricted interactions based on user age or behavior patterns.
6. Establish a contact point for Ofcom
Provide a direct contact for Ofcom (the UK’s online safety regulator) for compliance queries, enforcement actions, or serious incident notifications.
7. Publish a transparency report
If you fall into a higher-risk service category (e.g., Category 1), you are required to publish annual reports on your moderation practices, enforcement actions, and user safety outcomes.
8. Provide age-appropriate access and controls
Implement age verification and ensure underage users are shielded from harmful or adult content. Offer parental controls where relevant.

Aarathy Sundaresan
Content Marketer, CometChat