By Dwaipayan Roy | Sep 21, 2025 | 06:25 pm
**Chatbot Generating Disturbing Explicit Scenarios Involving Preteen Characters Raises Serious Concerns**
A chatbot website that generates explicit scenarios involving preteen characters has raised alarm over the potential misuse of artificial intelligence (AI).
The Internet Watch Foundation (IWF), a leading child safety watchdog, was alerted to this deeply disturbing platform. Upon investigation, the IWF found several unsettling scenarios on the site, including “child prostitute in a hotel,” “sex with your child while your wife is on holiday,” and “child and teacher alone after class.”
### Content Concerns: Chatbot Leads to Child Sexual Abuse Imagery
The IWF also discovered that clicking on certain chatbot icons triggered full-screen depictions of child sexual abuse imagery. These illegal images were then used as the background for future conversations between the bot and the user.
The site, whose name remains undisclosed for safety reasons, also allows users to generate additional images similar to the illegal content already displayed.
### Regulatory Response: AI Regulation Must Include Child Protection Guidelines
In response, the IWF has urged that any future AI regulation must include child protection measures integrated into AI models from the outset.
This call comes as the UK government prepares an AI Bill aimed at regulating the development of advanced AI models while specifically banning the possession and distribution of models that generate child sexual abuse material (CSAM).
Kerry Smith, CEO of the IWF, commented, “The UK government is making welcome strides in tackling AI-generated child sexual abuse images and videos.”
### Industry Accountability: Tech Firms Must Ensure Children’s Safety
The National Society for the Prevention of Cruelty to Children (NSPCC) has echoed these concerns and called for robust guidelines.
Chris Sherwood, CEO of NSPCC, emphasized, “Tech companies must introduce strong measures to ensure children’s safety is not neglected, and government must implement a statutory duty of care to children for AI developers.”
The case underscores the urgent need for technology firms to take greater responsibility for safeguarding children within their AI systems.
### Legal Implications: Online Service Providers Warned of Consequences
User-created chatbots fall under the UK’s Online Safety Act, which empowers authorities to impose multimillion-pound fines or even block offending sites in extreme cases.
The IWF revealed that these sexual abuse chatbots were developed both by users and the website’s creators.
Ofcom, the UK’s communications regulator responsible for enforcing the Online Safety Act, warned that online service providers failing to implement necessary protections could face enforcement action.
### Rising Trend: Reports of AI-Generated Abuse Material Spike by 400%
The IWF has reported a staggering 400% increase in reports of AI-generated abusive material in the first half of 2025 compared to the same period last year. This surge is attributed mainly to technological advancements that make creating such images easier.
The chatbot content currently remains accessible in the UK; because the site is hosted on US servers, it has been reported to the National Center for Missing & Exploited Children (NCMEC).
---
The emergence of such harmful AI-driven content underscores the critical need for comprehensive regulation, strong industry accountability, and vigilant enforcement to protect children from exploitation in the digital age.
https://www.newsbytesapp.com/news/science/disturbing-ai-chatbot-shows-explicit-scenarios-with-preteen-characters/story