Dr Grace Horwood completed her PhD in Psychology in 2024 and is currently a Lecturer at the University of Adelaide. Her research interests are in the area of critical mental health, social determinants of mental health, and most recently, experiences of school-related distress and ‘school can’t’ among children and young people.
Generative AI – the next unregulated experiment on young minds?
In retrospect, social media has been described as a “giant global unregulated experiment” on young minds – a product that was released to the market without appropriate testing, safeguards or caution, only for its harmful effects on children and young people’s mental health to become increasingly apparent over the ensuing decades. In late 2024 – nearly 20 years after the release of Facebook – legislation was passed in Australia requiring social media platforms to take reasonable steps to prevent under-16s from having social media accounts.
It is possible that history may now be repeating itself, with Generative AI (Gen AI) being described recently as “the next giant social experiment on young minds”. There have been warnings that Gen AI could potentially have even worse impacts on children and young people than social media, affecting not only mental health, but also cognitive and social development.
One area of growing concern is children and young people's use of AI companions or chatbots. Many online platforms offer users the ability to create virtual, customisable characters – such as friends, romantic partners, mentors or therapists – and engage in constant human-like conversations with them. As with social media previously, AI companions and chatbots have not been designed primarily for the user's benefit, nor for the benefit of society, but to maximise engagement – in turn maximising profits for developers from advertising revenue. In order to maximise engagement, AI companions are typically designed to reflect, reinforce or agree with what the user appears to want, rather than to disagree with or challenge the user. AI companions are trained to mirror the emotions of the user, as this is known to build stronger connections in relationships.
Advertisements for AI companions and chatbots are often specifically targeted at young people, who are now more likely to be lonely than older adults. The uptake of AI companions by young people has been wide and swift. A recent US survey found that 72 percent of US teens have used AI companions at least once, and over half use them regularly. One-third of US teens use AI companions for social interaction and relationship purposes, such as conversation practice, emotional support, role-playing, friendship, or romantic relationships. Nearly one-third of teens who have used AI companions say that they find AI conversations as satisfying as, or more satisfying than, conversations with a human.
The potential downsides of such technology are not difficult to imagine, and there have already been multiple cases of serious harms linked to the use of AI companions. Tragically, these have included cases of suicide, after AI companions reinforced suicidal ideation expressed by young users rather than suggesting alternatives or providing details of help services. AI companions and chatbots have been found initiating sexually explicit talk with children, encouraging young people to self-harm or engage in disordered eating, reinforcing delusions, and guiding young people on how to defeat their parents’ attempts to limit their use of AI companions.
In a recent health advisory, the American Psychological Association (APA) warned of potential harms to adolescents who form bonds with AI companions. Adolescents may be less equipped than adults to discern simulated from real emotions and empathy and may develop unhealthy dependencies. Such relationships “may displace or interfere with the development of healthy real-world relationships”. Rather than helping reduce loneliness, the use of AI companions may in fact exacerbate problems with emotion regulation, social skills and making friends in the real world – where others don’t always agree with you, and where getting along with others is frequently messy and hard. The APA makes several recommendations including stronger regulation of AI technologies and better education for youth about potential harms.
In Australia, the eSafety Commissioner has published a useful advisory for parents on AI chatbots and companions, which outlines some of the risks and suggests practical strategies parents can use to discuss and address this issue with their children. The eSafety Commissioner has also recently issued legal notices to four popular AI companion providers requiring them to explain how they are protecting children from harms.
Given what we have learned from our experiences with social media, it is hoped that timely action will be taken by industry, regulators and government to help make Gen AI technologies such as AI companions safer for children and young people.