The Children’s Codes under the Online Safety Act officially come into force today, but serious loopholes still endanger young lives.
- Ellen Roome

From today, for the first time in UK law:
✅ Platforms must introduce highly effective age verification to block children from accessing the most harmful content, including pornography and material promoting suicide, self-harm, and eating disorders.
✅ Algorithms must be changed to stop harmful content from being pushed into children’s feeds if the platform knows the user is a child.
✅ Risk assessments must be tailored by specific age groups, not just “13+”.
✅ New types of harm — like body shaming, hopelessness, and compulsive scrolling — must now be recognised and mitigated.
✅ Platforms must name a senior executive who is responsible for child safety.
It sounds like progress, and on paper it is. But here's the truth:
❌ Age Verification — rigorous on paper, weak in practice
Yes, platforms must now apply “highly effective” age assurance for the most harmful content. But what does that really mean?
Self-declared ages (e.g., ticking a box) no longer qualify.
Ofcom now requires reliable methods, such as facial age estimation, official ID matching, mobile provider checks, or digital identity services.
Platforms can be fined up to £18 million or 10% of their global revenue, whichever is greater, with possible criminal penalties for senior executives.
BUT these checks apply only when a user accesses harmful material directly (such as explicit sites or flagged posts), not when a child creates an account or scrolls through an algorithm-driven feed.
And here’s the kicker:
👉 There’s no obligation for platforms to go back and verify existing users.
That means millions of children who signed up using a fake age are outside the protection of the new rules.
❌ Algorithms still addictive, still targeting the wrong eyes
Platforms are supposed to stop harmful content from being shown to under-18s, but only if they know the user is under 18.
So what happens when they don’t check?
The algorithms that addict, isolate, and expose children to graphic material continue to run.
The law targets new uploads, not systemic design.
And existing harmful content? It’s still out there, pushed into feeds based on behaviour, not age.
❌ Accountability — still absent for bereaved families
Even now, if the worst happens, families like mine have no legal right to access their child's account, content, or digital history. We're still forced to beg social media companies for answers, and often we're ignored or stonewalled.
We need more than a starting point. We need real change.
✔️ Real age checks — at sign-up, not just content access.
✔️ Full audit of existing users — not just new ones.
✔️ Platform designs built for safety, not engagement.