UK probes Telegram, teen chat sites over CSAM sharing concerns


Ofcom, the United Kingdom's independent communications regulator, has launched an investigation into Telegram based on evidence suggesting it's being used to share child sexual abuse material (CSAM).
The investigation was launched under the UK's Online Safety Act to examine whether the social media and instant messaging (IM) service is complying with its illegal content safety duties, which require it to prevent the sharing of CSAM.
Ofcom says it received evidence regarding the alleged presence and sharing of CSAM on Telegram from the Canadian Centre for Child Protection, and that it had also conducted its own assessment of the platform.
"In light of this, we have decided to open an investigation to examine whether Telegram has failed, or is failing, to comply with its duties in relation to illegal content," Ofcom said.
However, Telegram denied Ofcom's accusations, saying that it has "virtually eliminated the public spread of CSAM" on its platform since 2018.
"We are surprised by this investigation and concerned that it may be part of a broader attack on online platforms that defend freedom of speech and the right to privacy," Telegram said.
Ofcom has also launched formal investigations into two teen chat sites over concerns that predators are using them to groom children, and to determine whether the two services are taking all required steps to assess and mitigate these risks.
The UK's independent online safety watchdog has also taken action under the Online Safety Act over nonconsensual sexually explicit content generated using the Grok AI chatbot account.
If it identifies compliance failures, Ofcom can impose fines of up to £18 million or 10% of qualifying worldwide revenue (whichever is greater). Additionally, in serious cases of non-compliance, it can request a court order effectively banning the offending platform in the United Kingdom.
"In the most serious cases of non-compliance, and where appropriate given risks of harm to individuals in the UK, we can seek a court order to require third parties to take action to disrupt the business of the provider," Ofcom said.
"This may require third parties (such as providers of payment or advertising services, or Internet Service Providers) to withdraw services from, or block access to, a regulated service in the UK."