New measures include tougher and quicker criminal sanctions for tech bosses and new criminal offences for falsifying and destroying data
Internet users are one step closer to a safer online environment as the government’s new world-leading online safety laws are brought before Parliament today.
The Online Safety Bill marks a milestone in the fight for a new digital age which is safer for users and holds tech giants to account. It will protect children from harmful content such as pornography and limit people’s exposure to illegal content, while protecting freedom of speech.
It will require social media platforms, search engines and other apps and websites allowing people to post their own content to protect children, tackle illegal activity and uphold their stated terms and conditions.
The regulator Ofcom will have the power to fine companies failing to comply with the laws up to ten per cent of their annual global turnover, force them to improve their practices and block non-compliant sites.
Today the government is announcing that executives whose companies fail to cooperate with Ofcom’s information requests could face prosecution or jail time within two months of the Bill becoming law, instead of after two years as previously drafted.
A raft of other new offences has also been added to the Bill to make in-scope companies’ senior managers criminally liable for destroying evidence, for failing to attend interviews with Ofcom or providing false information in them, and for obstructing the regulator when it enters company offices.
Digital Secretary Nadine Dorries said:
The internet has transformed our lives for the better. It’s connected us and empowered us. But on the other side, tech firms haven’t been held to account when harm, abuse and criminal behaviour have run riot on their platforms. Instead they have been left to mark their own homework.
We don’t give it a second’s thought when we buckle our seat belts to protect ourselves when driving. Given all the risks online, it’s only sensible we ensure similar basic protections for the digital age. If we fail to act, we risk sacrificing the wellbeing and innocence of countless generations of children to the power of unchecked algorithms.
Since taking on the job I have listened to people in politics, wider society and industry and strengthened the Bill, so that we can achieve our central aim: to make the UK the safest place to go online.
In the UK, tech industries are blazing a trail in investment and innovation. The Bill is balanced and proportionate with exemptions for low-risk tech and non-tech businesses with an online presence. It aims to increase people’s trust in technology, which will in turn support our ambition for the UK to be the best place for tech firms to grow.
The Bill will strengthen people’s rights to express themselves freely online and ensure social media companies are not removing legal free speech. For the first time, users will have the right to appeal if they feel their post has been taken down unfairly.
It will also put requirements on social media firms to protect journalism and democratic political debate on their platforms. News content will be completely exempt from any regulation under the Bill.
And, in a further boost to freedom of expression online, another major improvement announced today means social media platforms will only be required to tackle ‘legal but harmful’ content, such as content relating to self-harm, harassment and eating disorders, in categories set by the government and approved by Parliament.
Previously they would have had to consider whether additional content on their sites met the definition of legal but harmful material. This change removes any incentive or pressure for platforms to over-remove legal content or controversial comments, and will clear up the grey area around what constitutes ‘legal but harmful’ content.
Ministers will also continue to consider how to ensure platforms do not remove content from recognised media outlets.
Minister of State for Security and Borders Damian Hinds said:
As a society and as individuals, the internet has broadened our horizons and given us new opportunities to connect globally. But alongside this, the most depraved criminals have been given fresh avenues to exploit vulnerable people and ruin lives, whether that be by stealing the innocence of children or destroying finances.
Our utmost priority is to protect children and ensure public safety. The trailblazing Online Safety Bill will ensure social media companies are finally held to account and are taking ownership of the massive effect they have on all of our lives. Fraudsters will have fewer places to hide and abusers will be ardently pursued to feel the full force of the law.
The Bill will be introduced in the Commons today. This is the first step in its passage through Parliament towards becoming law and beginning a new era of accountability online. It follows a period in which the government has significantly strengthened the Bill since it was first published in draft in May 2021; the changes since the draft Bill are set out below.
Dame Melanie Dawes, Ofcom Chief Executive, said:
Today marks an important step towards creating a safer life online for the UK’s children and adults. Our research shows the need for rules that protect users from serious harm, but which also value the great things about being online, including freedom of expression. We’re looking forward to starting the job.
Ian Russell, Molly Rose Foundation, said:
The Molly Rose Foundation and Molly’s family urge Parliamentarians to deliver a safer internet for all, especially our young. The first reading of the Online Safety Bill in Parliament is another important step towards ending the damaging era of tech self-regulation. Increasingly, we are all reminded of the appalling consequences created by harmful online content.
Even nations and governments can struggle to protect themselves from the damaging use of digital technology, so we must do more to safeguard the lives of our young and vulnerable. It is time for the laws, regulations, and freedoms of our offline democracies to be reflected in the digital domain.
Further improvements to the Bill confirmed today:
Criminal liability for senior managers
The Bill gives Ofcom powers to demand information and data from tech companies, including on the role of their algorithms in selecting and displaying content, so it can assess how they are shielding users from harm.
Ofcom will be able to enter companies’ premises to access data and equipment, request interviews with company employees and require companies to undergo an external assessment of how they’re keeping users safe.
The Bill was originally drafted with a power for senior managers of large online platforms to be held criminally liable for failing to ensure their company complies with Ofcom’s information requests in an accurate and timely manner.
In the draft Bill, this power was deferred and so could not be used by Ofcom for at least two years after it became law. The Bill introduced today reduces the period to two months to strengthen penalties for wrongdoing from the outset.
Additional information-related offences have been added to the Bill to toughen the deterrent against companies and their senior managers providing false or incomplete information. They will apply to every company in scope of the Online Safety Bill. They are offences for:
destroying evidence
failing to attend, or providing false information in, interviews with Ofcom
obstructing the regulator when it enters company offices
Falling foul of these offences could lead to up to two years’ imprisonment or a fine.
Ofcom must treat the information gathered from companies sensitively. For example, it will not be able to share or publish data without consent unless tightly defined exemptions apply, and it will have a responsibility to ensure its powers are used proportionately.
Changes to requirements on ‘legal but harmful’ content
Under the draft Bill, ‘Category 1’ companies – the largest online platforms with the widest reach including the most popular social media platforms – must address content harmful to adults that falls below the threshold of a criminal offence.
Category 1 companies will have a duty to carry out risk assessments on the types of legal harms against adults which could arise on their services. They will have to set out clearly in their terms of service how they will deal with such content and enforce these terms consistently. If companies intend to remove, limit or allow particular types of content, they will have to say so.
The agreed categories of legal but harmful content will be set out in secondary legislation and subject to approval by both Houses of Parliament. Social media platforms will only be required to act on the priority legal harms set out in that secondary legislation, meaning decisions on what types of content are harmful are not delegated to private companies or left to the whim of internet executives.
It will also remove the threat of social media firms being overzealous and removing legal content because it upsets or offends someone, even if it is not prohibited by their terms and conditions. This will end situations such as the incident last year when TalkRadio was forced offline by YouTube for an “unspecified” violation, and it was not clear how it had breached the platform’s terms and conditions.
The move will help uphold freedom of expression and ensure people remain able to have challenging and controversial discussions online.
The DCMS Secretary of State has the power to add more categories of priority legal but harmful content via secondary legislation should they emerge in the future. Companies will be required to report emerging harms to Ofcom.
Proactive technology
Platforms may need to use tools for content moderation, user profiling and behaviour identification to protect their users.
Additional provisions have been added to the Bill to allow Ofcom to set expectations for the use of these proactive technologies in codes of practice and force companies to use better and more effective tools, should this be necessary.
Companies will need to demonstrate that they are using the right tools to address harms, that they are transparent about their use, and that any technologies they develop meet the standards of accuracy and effectiveness required by the regulator. Ofcom will not be able to recommend these tools are applied to private messaging or legal but harmful content.
Reporting child sexual abuse
A new requirement will mean companies must report child sexual exploitation and abuse content they detect on their platforms to the National Crime Agency.
The CSEA reporting requirement will replace the UK’s existing voluntary reporting regime and reflects the Government’s commitment to tackling this horrific crime.
Reports to the National Crime Agency will need to meet a set of clear standards to ensure law enforcement receives the high quality information it needs to safeguard children, pursue offenders and limit lifelong re-victimisation by preventing the ongoing recirculation of illegal content.
In-scope companies will need to demonstrate existing reporting obligations outside of the UK to be exempt from this requirement, which will avoid duplicating companies’ efforts.