Professor Sonia Livingstone OBE writes:
To ban (social media) or not to ban – that is the question. The fact that many experts and children’s organisations think it’s the wrong question is no longer the point. Perhaps the fact that the House of Lords has resoundingly voted to ban children under 16 from social media platforms has sent a warning shot to the Government that its plans are insufficient, in the rhetoric of the campaigners, to “give them their childhood back.”
It’s been a tumultuous week for researchers trying to fact check the many competing claims being made – about the pros and cons of children’s social media use, and about whether bans would make a difference. There is lots of evidence for both the pros and the cons, which is why many researchers do not favour a simple ban. There is no evidence that bans work to make children’s lives better, which is why many researchers prefer evidence-based alternatives.
For children’s organisations, too, it’s been challenging – most have come out firmly against a ban, on the grounds that bans are too crude, restricting children from the benefits of online participation – especially for more vulnerable or marginalised children – while potentially pushing children to the darker edges of the internet – again, especially for more vulnerable or marginalised children.
Moreover, a ban leaves the exploitative algorithmic and data practices, and the “move fast and break things” business model of the big platforms, untouched. As politicians and pundits pile into the debate, stimulated by public and parents’ enthusiasm for a ban, little is said about how a ban would curb the predatory practices of platforms which affect us all.
What should the Government do?
On Monday, the Government took the bull by the horns and announced “a national conversation.” This acknowledged the force of public feeling that “enough is enough” – more must be done to protect children from harm on social media platforms.
The Government promises to double down on restricting smartphones at school (though this is not quite what the outcry is about), a revised curriculum (though this is already in Becky Francis’ review), screen time guidance to parents (though parents are exhausted by constantly having the onus of responsibility placed on them, rather than businesses), and a ministerial visit to Australia to learn how its ban is working out (though the results of the Australian ban will soon be published for all to read).
The Government’s key promise – “Restrictions on addictive features, a ban on social media access for children and better age checks among measures to be considered” – is the most significant. Better digital design, enforcement of existing regulation, timely development of new regulation – for generative AI, most notably – all these are vital. But clearly, “considering” these options failed to reassure the Lords.
How did we get here?
In the sphere of child online protection, two developments coincided in the past year, escalating calls for action. The first has been a rising crescendo of concerns from civil society that the regulator, Ofcom, is failing effectively to implement and enforce the Online Safety Act 2023, precisely designed to protect children. Whatever the rights and wrongs of Ofcom’s actions, it is unclear that social media are much safer for children, though there have been improvements. The 82 per cent of 10–12-year-olds (still) using social media under-age, and contra the platforms’ terms and conditions, seem not to have had their accounts removed. Nor is there evidence that teenagers are exposed to less harmful content or contacts.
The second was the decision of the Australian government to ban children under 16 from accessing social media. On 10 December 2025, 4.5 million children’s social media profiles were terminated. We do not yet know if those children are happier or lonelier, more or less supported, nor whether they have found their way to even riskier platforms. The evaluation results will be forthcoming soon, and surely it would be sensible to wait for them before enforcing a similar policy in other countries. Nonetheless, many countries have been impressed by Australia’s bold move, the UK included.
Do we need a(nother) consultation?
To those who have followed the painful ins and outs of the Online Safety Bill’s passage towards an Act and then its implementation, it feels like everything has already been said. As the Online Safety Network and others have clearly set out, if we had already enforced the Act, children’s digital lives would already be in a better place.
To parents, teachers, medics and the wider public, it must feel like we just had the national conversation these past weeks. It’s less clear that we have heard from children – beyond a few polls and vox pops. Since they are the most directly affected, their participation – inclusive, thoughtful, properly heard – is important.
More generally, when loud voices mobilise behind a cause, it’s hard to tell which voices have been silenced. And when a debate gets so polarised, crucial arguments, evidence and proposals are pushed off the table.
Although it is widely agreed that something must be done, there is no agreement on what that should be. And despite the sense that a ban is now inevitable, many important questions remain unresolved. So, despite this conversation not being new, a focused consultation to report by the summer is welcome.
Four pressing and practical questions
Banning what exactly?
Which social media will be banned? Australia got the initial list wrong and has had to revise it several times. It dismayed many to realise the ban only applies when the user is signed in to a profile – they can still watch TikTok and YouTube! Gaming platforms seem to be left out. Do we have a list of platforms where people can engage, scroll, chat, share? Assuming each carries different levels and types of risk, expect pushback if they are all treated the same.
Banning whom – why is 16 the right age?
Under American and UK law, the age of 13 is currently in force – though little complied with. The age of 13 tallies with the UK’s Gillick principle, and with scientific evidence that early teens are at peak risk. Yes, older teens are also at risk, no doubt. But the case for their social, health, civic and participatory rights online is, arguably, stronger. On the one hand, we are debating lowering the voting age to 16. On the other hand, we propose to ban 16-year-olds from today’s space of political debate, where they currently develop their civic knowledge, interests and identities, however unsatisfactory that space is.
Is age assurance fit for purpose?
We already have a de facto ban for under 13s on social media. But we’re not enforcing it – Ofcom reports that 82 per cent of 10–12-year-olds use social media. Last year, we also banned under-18s from pornographic sites, possibly more effectively. What is different now? If the idea is finally to enforce the law, let’s be clear how, given that we have not effectively enforced age-related restrictions online thus far. Australia evaluated 61 methods of age assurance – some are better than others, but none got a clean bill of health. Is this acceptable in the UK?
What positive provision will society offer children?
There’s something very elitist in recent exhortations to children to play outside in nature or stay home and peacefully read a book before having an early night. As policy reports have shown, children’s lives are more constrained than in the past, less resourced, more unequal and, for some, extremely difficult. There are multiple causes of mental health problems, of loneliness and ill-health, and of the many other problems that result in British children having among the lowest wellbeing in the world’s wealthiest countries. This is not all down to social media. So, we do need a national conversation – about what, realistically, effectively and sustainably, Britain is going to provide for its children.
The best interests of children
There is a child rights-respecting way forward. It’s to be used precisely when complex arguments and evidence are pointing in different directions, when there are multiple policy options on the table whose consequences are unclear, and when the stakes for children are high. That is: to act in the best interests of children. This is a fundamental principle of the UN Convention on the Rights of the Child. It sets out a clear process for government, including listening to all arguments, weighing independent evidence, and listening to children. It is increasingly being used for and even by Big Tech. But it should not be left to them. I suggest that acting transparently and in the best interests of children should be an even higher rallying cry than the middle-class nostalgia of giving children their childhood back.
Making our digital world safe for children will be tough. The UN has set out a roadmap in its General comment No. 25 on children’s rights in the digital environment. We should make that the centre of our national conversation. But it will take more than nice words. It will take political will. And that political will is now caught up in an unfolding geopolitical drama of its own. The US has repeatedly criticised the UK’s Online Safety Act for censoring free speech, and the £31bn technology “prosperity deal” between the US and UK has been paused. On the world stage, children’s needs and rights are easily sidelined.