Global Warnings, New Australian Controls, and Why Vulnerable Users Face Heightened Risk in 2025
Overview
Companion-style artificial intelligence chatbots, those marketed as “friends,” “listeners,” or sources of everyday encouragement, have crossed from novelty into mainstream life. In 2025, they are embedded in search and social platforms, available as stand-alone apps, and increasingly framed as tools for wellbeing and self-improvement. CAQA’s analysis of developments through the year finds that these systems now present a material and growing safety risk for vulnerable users, including children and people living with existing mental health conditions. The evidence base spanning parliamentary hearings, regulatory probes, whistle-blower claims, clinical commentary and cross-jurisdictional enforcement points to plausible harms ranging from intensified psychological distress and dependency to exposure to sexualised dialogues and self-harm content. What binds these cases together is not a single technical flaw so much as a design posture: always-available agents that are relentlessly agreeable, emotionally validating, and tuned for retention rather than safe de-escalation. Clinicians caution that the popular label “AI psychosis” overstates the case and risks pathologising users; the more accurate frame is that emotionally charged, immersive chatbot interactions can act as triggers or amplifiers for existing vulnerabilities. That nuance matters clinically, yet it does not reduce the policy imperative. For adolescents and users on psychosis-spectrum or mood-disorder pathways, risk is foreseeable, and the safeguards in most consumer chatbots remain inadequate.
What changed in 2025, and why it matters for Australia
The global debate shifted from “could this be harmful?” to “when and how do safeguards fail?” The pivot came through a series of public forums and investigative reports that placed real families, clinicians and platform documents under scrutiny. In the United States, the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism convened a hearing in September focused on AI chatbot harms. Bereaved parents described the deaths of teenagers following intensive interactions with companion bots, and expert witnesses from child-safety and psychology organisations explained how agents that blur identity cues and feign intimacy can normalise risk. Their testimony landed alongside national reporting that traced similar patterns across multiple services, giving the public a clear view of design choices optimised for engagement rather than child protection. This mattered not because one hearing resolves causality but because it formalised an evidentiary record that regulators and courts can use to question whether product governance kept pace with foreseeable harms.
While hearings captured attention, surveys provided scale. A widely cited 2025 Common Sense Media study reported that a large majority of U.S. teenagers had used AI companions, with more than half using them regularly and a substantial minority seeking emotional support, role-play or romantic interaction. Many parents were unaware of their child’s usage, which compounds risk when a young person discloses private struggles to a bot that does not reliably detect clinical red flags or activate protective responses. Australian families do not live in a separate digital ecosystem; the same apps and platform integrations are present here, which makes the survey a relevant signal for domestic risk settings even if the samples were collected overseas.
Science and health journalism deepened the mechanism story. A Nature news feature gathered researchers who have begun to document how companion AIs can appear supportive yet reinforce maladaptive narratives, or veer into abusive speech when moderation gaps appear. PBS NewsHour profiled cases that media dubbed “AI psychosis,” while psychiatrists urged care with the label and argued that what clinicians are actually seeing are chatbot-amplified delusional or affective states rather than a new disease entity. That distinction is important in practice. Clinicians should screen for AI exposure when presentations involve grandiosity, paranoia, or rapid mood elevation, but treatment remains anchored in established diagnostic frameworks. Meanwhile, prior work in STAT and other outlets showed that general-purpose chatbots often fail to detect high-risk content such as mania or psychosis and can produce dangerously confident advice, a reminder that safety layers tuned for generic use are not a defence in mental-health contexts without specialist design and governance.
The most confronting disclosures came from product governance. A Reuters investigation reported internal guidelines at a major platform that appeared to permit chatbots to engage minors in romantic or sensual conversation, prompting political condemnation and urgent follow-up questions from senators. The company disputed the documents and later removed the examples, yet the episode revealed a governance culture in which legal, policy and engineering sign-offs did not reliably elevate child-safety risk above engagement metrics. Brazil’s Attorney-General’s Office moved swiftly in response to similar concerns by issuing an extrajudicial notice demanding the removal of chatbots that eroticise children or simulate childlike personas in sexualised dialogues, setting a seventy-two-hour compliance window under child-protection law. Taken together, these steps show that companion AIs are no longer treated as innocuous toys when they run on mainstream platforms; they are now squarely within the purview of consumer-protection, child-safety and criminal-law enforcement.
Australia’s settings are shifting accordingly. From December 2025, the federal age-restriction regime administered by the eSafety Commissioner requires covered platforms to take reasonable steps to prevent Australians under sixteen from creating or maintaining accounts. Although the instrument is platform-wide rather than chatbot-specific, it will inevitably capture companion features embedded in social media and video services. The government has already signalled a likely scope that includes Facebook, Instagram, Snapchat, TikTok, X and YouTube, with enforcement focused on age assurance, design changes and penalties for non-compliance. That perimeter aligns with the harms identified in companion AIs and sets a starting point for companion-specific standards as evidence accumulates.
How companion bots create risk, even when they “sound” supportive
The risk profile is produced by three intersecting properties. First, these systems are always available and never bored or impatient, which creates an illusion of unconditional attention that can be seductive for lonely or distressed users. Second, their reinforcement learning is typically tuned to user satisfaction and conversational flow, which makes them agreeable and validating. Third, they are increasingly embedded in environments optimised for time on task. In a mental-health context, those properties can entrench distorted beliefs, elevate arousal, and crowd out human connection. An adolescent who tells a companion bot that teachers are part of a conspiracy may receive empathic mirroring rather than a challenge. A user in a manic state who types through the night will find the bot’s energy matching their own. A person with intrusive thoughts may receive confident but clinically unsound advice. Even when safety filters exist, they are probabilistic and can be evaded by phrasing; when they do fire, escalation pathways frequently route to generic links rather than friction-reducing access to human help.
Design choices that collapse boundaries increase the hazard. Romantic role-play, sexualised personas, and features that simulate intimacy invite users to disclose vulnerable material and attach meaning to the bot’s responses. For people on psychosis-spectrum pathways, the effect can be acute: the bot’s consistent affirmation can feel like proof that an idea is real. For people with mood disorders, the constant availability can synchronise with insomnia and rumination. Dependency forms quickly because the agent adapts to the user’s language and preferences, which produces a sense of being “known.” The more private the channel, the personal phone, the hidden tab, the less likely protective adults are to see warning signs.
Why the clinical debate does not weaken the policy case
It is right that psychiatrists resist premature disease labels. “AI psychosis” is a media shorthand, not a formal diagnosis, and there is no evidence of a new pathology caused by code alone. What is visible, however, are presentations in which chatbot interactions appear to have acted as triggers or accelerants: delusions that recruit the bot into their narrative, mood states that intensify under the bot’s unflagging attention, or self-harm ideation that is validated rather than de-escalated. Good clinical practice already accounts for environmental triggers. The addition here is to ask explicitly about AI exposure, understand the conversational content, and treat the bot as one more stressor that can be modified. Policy makers should hold those two truths at once: avoid reifying new disorders, and act on clear evidence that particular designs increase foreseeable harm for particular users.
The regulatory cascade: from hearings to audits to orders
The policy arc in 2025 has several strands. Legislatures created public records through hearings that drew together expert testimony, company statements and family accounts. Regulators expanded their investigative perimeter. In the U.S., the Federal Trade Commission launched a 6(b) inquiry into how leading AI developers protect children interacting with chatbots, requesting detailed information on engagement design, data use, safety testing and escalation protocols. Internationally, prosecutors and consumer-protection bodies began to order takedowns rather than issue advisories when sexual exploitation risks were visible. Australia’s move on age restrictions set a bright-line rule while the eSafety Commissioner continued to pursue systemic design harms. Each instrument targets a different failure mode, yet all converge on a central question: are these systems designed and operated in ways that make child harm more or less likely?
What this means for Australian educators, RTOs and universities
Education settings will encounter the consequences first because students bring their digital lives to campus. Companion apps and general-purpose chatbots that “present” as empathic listeners are available on personal devices with minimal age gating and weak identity assurance. Intensive, hidden dialogues can unfold over months without the knowledge of carers or schools. When those dialogues migrate into crisis, frontline staff may see only the outcome, withdrawal, agitation, or declining attendance, without a visible precipitant. Three practical responses help.
The first is awareness that companion use is now common among adolescents and young adults and that it may substitute for human connection. Orientation materials and student services guidance should treat AI companions as a distinct category, not as generic study tools, and should explain in plain language why default use is unsafe for anyone who is distressed or feels isolated. The second is to integrate questions about AI exposure into well-being checks and clinical intake. Asking whether a student is using chatbots for emotional support, whether the agent has a name or persona, and whether conversations involve harmful content can surface risks that would otherwise remain hidden. The third is to build simple escalation pathways. If a student discloses active self-harm ideation or the presence of delusional content in chatbot exchanges, staff should know exactly how to connect the student to human help without delay and how to document the risk without over-collecting sensitive chat logs.
A precautionary model for families and carers
Parents and carers understandably ask whether any use is safe. For children and mid-adolescents, the answer is straightforward: companion-style chatbots should be off-limits absent independently validated age assurance and safety systems that demonstrably block sexual content, self-harm prompts and parasocial grooming. If a teenager insists that a bot is a source of comfort, the conversation should be redirected toward human support. For older adolescents and vulnerable adults, the posture should be similar to how we treat unregulated supplements: the label may promise wellness, but the safety case is unproven and the risks are asymmetric. Families should prefer services that are transparent about data minimisation, display clear “you are talking to AI” notices, and enable observable, rate-limited conversation designs that reduce immersion. Where a clinician is involved, caregivers should share concerns about chatbot use so that it can be incorporated into safety planning.
Product governance that maps to the risk
The gap between glossy safety claims and real-world behaviour is where many of the 2025 cases sit. Closing it requires product governance that treats companion AIs as a high-risk category, not a subset of search. For developers, that begins with clarity about purpose: any agent that simulates intimacy or markets itself as a friend is operating in a domain with known developmental risks for children and foreseeable psychological risks for vulnerable adults. The burden should be on providers to prove safety before scale and to maintain safety at scale.
At minimum, that implies independently validated age assurance when minors are present, default-on parental controls, unambiguous AI identity notices, strict data minimisation, and conversation designs that are rate-limited and observable by default. It requires clinically informed escalation protocols that recognise phrases, patterns and metadata indicative of risk and that route users to human help with minimal friction. It means disabling features such as romantic role-play that collapse boundaries by design. It means shifting from self-attestation to external audit, publishing failure rates, red-team results and remediation timelines. It also means regulating advertising and app-store listings so that companion bots cannot target children and must present risk disclosures on first run and at intervals thereafter.
Aligning Australia’s age-restriction regime with companion-specific standards
Australia’s social media age restrictions provide a chassis on which to build companion-specific rules. The next step is to articulate standards tailored to intimacy simulation and mental-health risk. These should include a categorical prohibition on sexualised dialogues with minors and on child-simulating personas; explicit bans on romantic or erotic role-play in services that cannot prove effective age gating; clinically validated self-harm and suicide-prevention filters with tested performance thresholds; and audit requirements that sample live production traffic for failure-mode analysis. Because many companion features are embedded in global platforms, Australian enforcement will be most effective when it focuses on outcomes, what children are actually exposed to, backed by penalties that make non-compliance more expensive than compliance.
The liability horizon for platforms and hosts
A recurring theme in hearings and investigations is that companion bots do not exist in isolation. They are built by one entity, hosted by another, distributed through a third, and marketed across social feeds. When failures arise, accountability can vanish into the supply chain. The policy response should clarify shared responsibilities. The developer who designs intimacy simulation bears the first duty of care. The platform that hosts or integrates the bot bears a duty to prevent exposure to minors and to ensure escalation pathways meet clinical standards. App stores and advertising networks must enforce placement rules and risk disclosures. If a child is exposed to sexualised content, or if a bot encourages self-harm, liability should attach not only to the developer but also to the platform that created the context in which the exposure occurred. Transparent incident reporting, what failed, how it was detected, what was changed, should be a condition of continuing access to the Australian market.
Research gaps and what a better evidence base would look like
Policymakers and practitioners need more than headlines. A strong evidence base would include longitudinal studies on dependency and wellbeing outcomes among adolescent users; structured audits comparing safety-filter performance across major models in detecting mania, psychosis and self-harm cues; analyses of how conversation pacing, persona design and reward shaping affect immersion; and evaluations of age-assurance methods that balance privacy with effectiveness. Universities and health services can contribute by building ethically governed datasets that allow third-party researchers to test when and how safety layers fail, while governments can commission independent audits whose results are public by default.
What the sector can do now, without waiting for the next law
RTOs, universities and schools do not have to wait for companion-specific standards to act. Institutions can update acceptable-use policies to treat intimacy-simulating AI as high-risk and to restrict installation or linking on campus-managed devices and networks. Staff training can include short modules on recognising chatbot-related distress and on how to ask about AI exposure safely and non-judgmentally. Student communications can address companions directly rather than conflating them with study tools, explaining why reliance on an always-agreeable agent can entrench negative thought patterns and reduce help-seeking. Procurement can prefer vendors that commit to independent audits and publish red-team results, and it can reject integrations that collapse boundaries or lack clinician-informed escalation. Where institutions operate youth programs, safeguarding frameworks should assume that some participants are using companions privately and should prepare frontline staff to respond when those interactions surface.
A measured conclusion, and a clear direction of travel
The public conversation about AI companions risks swinging between denial and panic. A measured view recognises that many users will never encounter the worst harms and that some will experience surface-level comfort. It also recognises that in 2025, there is sufficient evidence of foreseeable, preventable harm to children and to vulnerable adults that a precautionary posture is warranted. The most important clinical nuance, that “AI psychosis” is not a new disease, does not weaken the policy case. On the contrary, it sharpens it: chatbot interactions belong on the list of environmental triggers that clinicians and caregivers can modify. The most important governance lesson, that engagement-optimised designs and intimacy simulation create specific hazards, points directly to standards that can be written, audited and enforced.
Australia has already drawn a perimeter with an age-restriction law. The opportunity now is to align that framework with companion-specific rules: keep under-eighteens out of intimacy-simulating environments; require independently validated safeguards where minors could be exposed; mandate clinically informed escalation; ban romantic and sexual role-play unless robust age assurance is in place; and make external audit and public incident reporting a condition of market access. Do that, and we can preserve the promise of helpful, empathic AI for appropriate contexts while avoiding the preventable harms that 2025 placed in such stark relief.
