Expert comment: How the Online Safety Act aims to clean up social media

The UK’s Online Safety Act 2023 has recently entered its first active phase, bringing in tough new legal requirements for social media companies. Platforms like Facebook, Instagram, and TikTok are now obliged to remove illegal content—such as hate speech, terrorism-related material, and image-based abuse—or face substantial fines.
This marks a major shift in how online platforms are held accountable for harmful content. But how will the law work in practice—and will it achieve its ambitious goals?
Chara Bakalis, Reader in Law at Oxford Brookes University and a leading expert on online hate speech, shares her perspective on what this new era of digital regulation means for tech companies, users, and the future of free expression online.
What are the key responsibilities for social media companies under the Online Safety Act, and how will the Act be enforced?
Chara says: “The Act takes a phased approach to implementation, starting with an initial obligation to remove illegal material. Before this, social media companies often promised to remove illegal material from their platforms, but they were under no legal obligation to do so and removal was patchy. There is now a legal duty to remove illegal material, and social media companies face consequences if they do not comply. Future phases will broaden the scope of responsibilities for social media companies, to include extra protection for children, and new responsibilities to give adults greater choice of what sort of material they encounter online.
“At its core, the Act holds social media companies to account in three key ways:
- They must remove illegal material from their platforms.
- They are required to protect children not only from illegal content but also from content that is harmful but not necessarily illegal.
- They must enable adults to protect themselves from harmful but legal content through customisable tools.
“Failure to comply with these obligations could result in serious consequences for social media platforms. Ofcom, the UK’s communications regulator, has been tasked with overseeing the Act and has the power to impose hefty fines of up to £18 million or 10% of a company’s global revenue, whichever is greater.”
Is there a real risk that the Act could harm people’s right to freedom of speech?
Chara says: “In exercising their first duty towards illegal material, social media companies could be overly cautious and remove material that ‘might’ be illegal, but which turns out not to be, thus removing perfectly legal speech.
“Interestingly, when the EU introduced voluntary codes of conduct a few years ago, and when Germany passed a law in 2017 requiring social media companies to remove illegal hate speech, there was no evidence to suggest that companies were being overly cautious; in fact, they were relatively moderate in their approach. This does make sense: social media companies were restrained in removing material because being overly censorious affects their business model. It will be interesting to see if the same happens with the implementation of the Online Safety Act.
“However, we need to remember that the concept of free speech on the internet is complex. Unbridled free speech for some can often silence the free speech of others, as many people are reluctant to speak out for fear of being attacked, particularly those who have protected characteristics that make them more vulnerable. We also know that the algorithms used by social media companies are not ‘neutral’ and can be used to push particular ideologies or material without us seeking it out, so it is questionable to suggest that everyone using social media has equal access to free speech.
“The real risk to free speech is the lack of regulation rather than regulation itself. This might seem surprising, especially for those of us who are used to thinking that too much government control could lead to censorship. But the way the internet has evolved has challenged that way of thinking. The harms emanating from social media have become much more apparent in the last 5 to 10 years. Some examples include increased mental health problems in young people, the spreading of hate and violence against minority groups, misinformation and disinformation, cyberbullying, ‘revenge porn’ and other image-based sexual abuse, deepfakes, gambling and porn addiction, and even the influencing of elections.
“We also now understand more about the impact of the algorithms used by social media companies, and how these can be used to direct public opinion. It is no surprise that Elon Musk bought Twitter/X: he understood the important role it could play in shaping opinions to fit his own world view, and this is a really good example of why we need regulation - without it, too much power rests with social media companies and their owners. The rise in the power of social media companies means we have to rethink the default position of deregulation and find ways of ensuring that these companies are held to account.”
What are the biggest challenges in making sure the Act is properly enforced?
Chara says: “The sheer quantity of material which appears on platforms will make it very difficult to ensure that all illegal material is picked up, even with the use of AI.
“Trying to determine what is ‘illegal’ is also very nuanced and complex, particularly where the criminal law is concerned. Ordinarily, to find that something is illegal you would need to prove this in court with evidence heard on both sides. In order to comply with the Online Safety Act, moderators will need to make decisions about illegality with very little time - perhaps only a few seconds - and with little by way of legal expertise.
“A particular difficulty will be proving that the perpetrator had a ‘guilty mind’. For instance, to show that someone committed the offence of stirring up hatred against a religion, you would need to prove that the perpetrator ‘intended’ to stir up hatred. Some people worry that it is hard to see how this could be determined accurately without cross-examining the perpetrator. Moderators will be sent material and, with very little evidence, will have to make quick decisions about whether an offence has been committed and whether the content should stay on the platform or be removed.”
How will we know if the Act is actually working?
Chara says: “Ofcom will require social media companies to report on how they are implementing the Act, and it will be monitoring this. If Ofcom starts finding violations and fining companies, we will begin to see whether the Act is having a real impact.
“However, it is likely that interested organisations will also be monitoring social media companies to check that the Act is being implemented. Victim groups such as TellMama - which monitors Islamophobia online - and the CST - which monitors anti-Semitism - will be checking for under-implementation (material that is illegal but which has not been removed), while free speech organisations such as Article 19 will be on the lookout for over-implementation (examples of legal speech being removed).
“Ultimately, we do need to be realistic about what the Act can achieve. It is not going to transform the internet overnight, and it will undoubtedly attract criticism from both sides of the debate.
“For the last three decades, the US approach to free speech has dominated the debate in this area and has led, until now, to a largely unregulated ‘Wild West’ approach to internet governance. This is exemplified by section 230 of the US Communications Decency Act 1996, which gives immunity to social media companies for the material they host. The Online Safety Act 2023, along with the EU’s equivalent, the Digital Services Act, represents a real shift towards a more European approach to regulation, and is about taking back control from the social media platforms. This point must not be forgotten in any debates about the efficacy of the Act.”