
Social Media Under 16s

As Liberal Democrats, we have long been raising the alarm on social media harms as a public health crisis. That’s why we put forward and voted in favour of legislation to raise the digital age of data consent to 16 to stop tech companies profiteering from our children’s attention. We also voted to ban phones in schools and have called for health-style warnings on social media, as well as caps on doom-scrolling.


That is also why we have tabled an amendment to the Children’s Wellbeing and Schools Bill calling for a ban on harmful social media for under-16s. Our amendment would introduce film-style age ratings of up to 18 for the most harmful platforms, and a default rating of 16 for social media.


Our age-rating model would echo the film and video classification system established in the 1980s, adopting a future-proof, harms-based approach with the teeth to back it up. This harms-based approach is also supported by children’s charities like the NSPCC, and it would establish a pioneering age-appropriate standard for the online world, just as we have in the offline world.


The Liberal Democrats’ position is clear: if a platform spreads harmful content or relies on addictive and harmful algorithms, it should not be allowed anywhere near our children.

This means that access to social media platforms would be restricted based on the risk they pose to young people. As they stand, we think platforms like TikTok, Snapchat, Instagram, and Facebook would all be legally classified as unsuitable for under-16s, and those such as X that continue to host extreme content could be rated 18+.


Our age-rating model ensures that children can continue to use Wikipedia to learn and explore for schoolwork - a site that could be banned for children under a blanket under-16 ban. We also think it’s important that children are able to access online crisis services, such as Childline forums, which would be caught up in the Online Safety Act’s definition of user-to-user services.


Our harms-based approach would protect children from the worst of the online world and provide a strong incentive for platforms to remove addictive algorithmic features and harmful content. It would further future-proof online safeguarding, allowing emerging harms, such as chatbots and new social media platforms, to be quickly categorised based on their addictive design and the harmfulness of their content.


This is a pivotal moment to establish new online safety standards like those we have offline and to ensure that tech giants are truly held to account. We must learn from what hasn’t worked in Australia, where new social media platforms have emerged in a game of whack-a-mole.


This is just the beginning. It is also an opportunity to tackle harmful gaming apps and to future-proof against online harms from emerging technologies like AI - something our amendment does.


Tech companies have for far too long treated children as data to be mined rather than young people to be protected. They have built addictive algorithms designed to keep children endlessly doom-scrolling at the expense of their mental and physical health, their sleep and their concentration.


This is why we have been supportive of the Online Safety Act to keep children safe online.


However, while the Online Safety Act is an important step towards ending the wild west of online content, much more needs to be done to address the harms that children are currently exposed to on social media platforms.


I will continue to hold the government accountable for protecting our children online. I want to thank you again for taking the time to write to me about this important issue, and do not hesitate to get in touch if you have any more questions.

Wendy Chamberlain MP for North East Fife

Unit G1, Granary Business Centre

Coal Road

Cupar

KY15 5YQ


Constituency:   01334 656361

Parliament:       0207 219 4409

Privacy Notice: Details about our data protection and privacy policies are in Wendy Chamberlain’s Privacy Policy.
