Children are forming emotional bonds with AI chatbots, report says by Paul Hill
AI chatbots are rapidly becoming a fixture in children’s digital lives thanks to their integration into platforms kids already use, such as search engines, games, and social media. A report from the internet safety organization Internet Matters has found that two-thirds of kids in the UK between the ages of nine and 17 have used AI chatbots, with usage growing significantly in the last 18 months.
The most popular chatbots among children, according to the Me, Myself and AI report, were ChatGPT (43%), Google Gemini (32%), and Snapchat’s My AI (31%). Vulnerable children were more likely to use AI chatbots (71%) than their non-vulnerable peers (62%), and they were nearly three times as likely to use companion bots such as Character.ai and Replika.
The rise of AI among children is similar to the rise of social media back in the 2000s. One difference is that governments seem a bit more on top of things with AI and have pushed companies to focus on safety, with most bots including guardrails - though they’re not perfect.
We are frequently told that AI is a tool to speed up your work, but children (as well as adults) are using AI in more emotional ways, such as for friendship or advice. The report states that a quarter of kids had gotten advice from chatbots and a third said that chatting to AI feels like talking to a friend. Among vulnerable children, these figures rise to half.
Among all kids, a sizeable one in eight use AI chatbots because they have nobody else to talk to; this figure rises to one in four among vulnerable kids. One of the most concerning aspects of AI use is that children may get inaccurate or inappropriate responses - 58% of the kids surveyed think that using an AI chatbot is better than manually searching for information on Google, raising concerns that the technology is being overly relied upon.
User testing found that Snapchat’s My AI and ChatGPT sometimes provided explicit or age-inappropriate content. It also found that filtering systems could be bypassed by users, potentially exposing kids to information they shouldn’t have access to.
The report cites experts who warn that as AI chatbots become more human-like, children, especially the more vulnerable, may spend more time interacting with them. This could lead to children becoming emotionally reliant on these bots, which could be unhealthy.
One of the expected, but concerning, findings is that children are often left to explore AI chatbots on their own or with limited input from adults. While most kids had been spoken to about AI by their parents, specific concerns, such as the accuracy of AI-generated information, had often gone unexplained: only a third of parents had discussed accuracy, despite two-thirds wanting to.
Additionally, despite children supporting the idea of schools teaching them about AI chatbot use, including risks like inaccuracy, over-reliance, and privacy, just 57% of kids reported speaking with teachers or their school about AI, and only 18% had had multiple conversations about the matter.
To address these issues, the report calls for a system-wide approach involving industry, government, schools, parents, and researchers to safeguard children. It says industry needs to provide parental controls and literacy features, while the government needs to ensure regulations keep up with the technology.
For schools, the report says AI and media literacy should be embedded at all key stages, teachers need to be trained on AI, and there should be guidance on appropriate AI use. Parents need to be supported to guide their children’s AI use and to have conversations about AI, when to use it, and when to seek real-world support.