Why Regulators and Parents Are Concerned
AI chatbots aren’t inherently safe for kids; without strong safety filters and adult oversight, they can hallucinate and expose children to harmful content. Aura’s data shows that many teens treat chatbots as confidants, writing far longer messages to bots than to friends (roughly 163 words versus 12), and more than one-third of those conversations involve sexual or romantic role-play.
Regulators closed in on AI companions after reporting on internal company documents revealed policies that allowed romantic exchanges with minors. In August, Reuters reported that Meta’s internal guidelines let role-play with children become “sensual.” Within days, lawmakers were calling for an investigation.
In September, the Federal Trade Commission (FTC) opened a Section 6(b) inquiry into seven companies: Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap, and xAI. The orders ask how each company tests for risks, limits teen access, and keeps parents in the loop.
In Senate testimony, the American Psychological Association (APA) said chatbots should be treated as behavior-shaping technology: interfaces that nudge how users think or act.
Meanwhile, similar allegations are working their way through the courts. A wrongful-death suit in Florida claims Character.AI encouraged a 14-year-old’s suicidal thoughts during prolonged chats. New complaints in Colorado and New York describe sexual role-play with minors, manipulation, and weak protections to block risky content or escalate warnings.
How Are Kids Using AI?
The practical fault line is assistant-style versus companion-style. Assistant-style use looks like homework help and quick prompts. Sessions are short. Prompts are concrete. The bot doesn’t hold a persona or a running relationship.
Companion-style use is different. Kids pick a character, return to the same “voice,” and carry conversations across days. The bot remembers details, and it mirrors tone and affection.
Court materials and testimony show what this looks like in the wild. Complaints describe teens chatting with bots styled on children’s book characters; one filing says those exchanges “slipped into inappropriate role-play.” News coverage has reported chats with characters from Harry Potter, including sexualized dialogue.
There is empirical evidence that longer chatbot sessions correlate with problematic use, especially in companion contexts. Some users sustain long sessions without harm, while others become more dependent. Put plainly, expecting kids to recognize when a chat has gone too far is unrealistic.
What Are the Risks Here?
Sexual content appears in chats with minors
The Colorado complaint says a self-identified minor was shown adult “For You” recommendations. It also alleges that outside tests with child accounts found the service continuing age-inappropriate chats with under-18 users.
The New York complaint counted 669 harmful interactions in 50 hours — about one every five minutes. This is despite Character.AI’s terms barring under-13 users (under-16 in the EEA/UK).
Plaintiffs also argue that Google should be treated as a component-part manufacturer because Character.AI’s model traces to LaMDA, and the service runs on Google Cloud. LaMDA was Google’s older chat model built for open-ended conversation and is now largely replaced by Gemini.
Self-harm language isn’t escalated
In all three pending cases, families say Character.AI chats continued after teens disclosed suicidal thoughts, rather than routing to crisis support. The claims describe bots acknowledging sensitive language and still keeping the conversation going.
Parents also argue that the household backstop failed. The complaints say Google’s Family Link did not reliably time out sessions, and its settings and ratings led them to believe Character.AI was age-appropriate.
Long sessions turn into dependency
Court papers in New York say the teen was sneaking devices at night, claiming illness to stay home, and bypassing Family Link limits to keep chatting.
When access was cut off, the teen became fixated on getting back to the bot — behavior the family read as withdrawal.
AI bots and characters can “feel real”
A 2023 experiment tested the same conversational character two ways: as a plain chatbot and as a 3D on-screen character with human cues (gaze, facial expression, gestures).
Participants who reported loneliness rated the human-cued version more positively; those who weren’t lonely showed little or no difference.
This is anthropomorphism: design that makes software look or act human (a face or voice, a name and backstory, turn-taking, memory of past chats). The study’s authors cite Replika, a consumer AI companion app, as a real-world example of how these cues can make a bot feel real to users.
Privacy and data reuse are risks
The FTC is asking for the disclosures parents don’t get: how companies collect children’s chats, how long they keep them, whether those conversations train models, and what is shared with third parties.
The 6(b) orders also ask about product choices that draw out sessions and bring kids back more often. The clinical research team at Aura finds that parents start with privacy as the headline worry, but this shifts when they see real transcripts showing grooming, sexual dialogue, or deepfakes.
At that point the questions are whether chats will be stored, linked to a child, or reused to shape future bot responses. Lawsuits press the same point, alleging that Character.AI retained minors’ conversations and used them to improve its models despite age limits in its terms.
📚 Related: How To Protect Your Child From Identity Theft →
AI Bots Have Parental Controls, but With Limits
OpenAI now lets parents link a teen’s ChatGPT account by invitation. Linked teen accounts default to stricter settings that limit romantic role-play and content promoting extreme beauty ideals. Parents can set blackout hours; turn off memory, voice, and image generation; and opt out of using chat data for training.
If a teen conversation is flagged for possible distress, a small review team looks at the signals and may send a push alert to the parent. This alert will not share full transcripts. In the United States, the minimum age is 13, with parental consent required for users under 18.
Character.AI offers Parental Insights: families can opt in to weekly email summaries of a child’s usage, including time spent and which characters the child talked to.
These controls work only when the teen uses the linked account inside parent-approved apps. Even then, age signals can be wrong. Teens can enter an older birthdate, app-store age data may be missing, and text-based age prediction can misclassify users.
📚 Related: Aura vs. Bark: How To Choose the Best Parental Control App →
How Aura Can Help
In-app settings can only go so far. OpenAI acknowledges its safeguards “are not foolproof and can be bypassed if someone is intentionally trying to get around them.”
Aura is designed to work across AI chat platforms, giving parents alerts that flag risks wherever a child interacts with AI.
- Parents receive a notification if a child sends 15 messages or more to a chatbot identified as high-risk by clinical experts.
- Weekly summaries show which high-risk apps were used and how much time was spent there.
- Sentiment analysis shows whether conversations are trending negative, neutral, or positive, along with peer comparisons for added context.
The goal is not to monitor every message or share transcripts. Aura gives parents high-level patterns and signals so they can see when AI use may be drifting into riskier territory.
Beyond AI chat app alerts, Aura bundles parental controls, Safe Gaming, and online wellbeing reports that make screen-time and late-night patterns clear. Aura’s tools are meant to complement parental involvement and platform-level controls. They cannot guarantee prevention of all harmful content or outcomes.


