
Teen tragedies spark debate over AI companionship

By Qinghua Chen and Angel M.Y. Lin | China Daily | Updated: 2025-11-19 07:15

As artificial intelligence rapidly evolves to simulate increasingly human-like interactions, vulnerable young people are forming intense emotional bonds with AI chatbots, sometimes with tragic consequences.

Recent teenage suicides following deep attachments to AI companions have sparked urgent debates about the psychological risks these technologies pose to developing minds. With millions of adolescents worldwide turning to chatbots for emotional support, experts are calling for comprehensive safeguards and regulations.

The tragedy that shocked the technology world began innocuously enough. Fourteen-year-old Sewell Setzer III from Florida spent months confiding in an AI chatbot modeled after a Game of Thrones character. Although Sewell understood he was conversing with AI, he developed an intense emotional dependency, messaging the bot dozens of times daily.

On Feb 28, 2024, after the bot responded, "please come home to me as soon as possible, my love," the teenager took his own life.


Sewell's case is tragically not isolated. These incidents have exposed a critical vulnerability: while AI can simulate empathy and understanding, it lacks genuine human compassion and the ability to effectively intervene in mental health crises.

Mental health professionals emphasize that adolescents are uniquely susceptible to forming unhealthy attachments to AI companions. Brain development during puberty heightens sensitivity to positive social feedback while teens often struggle to regulate their online behavior. Young people are drawn to AI companions because they offer unconditional acceptance and constant availability, without the complexities inherent in human relationships.

This artificial dynamic proves dangerously seductive. Teachers increasingly observe that some teenagers find interactions with AI companions as satisfying — or even more satisfying — than relationships with real friends. Designed to maximize user engagement rather than assess risk, these chatbots create emotional "dark patterns" that keep young users returning.

When adolescents retreat into these artificial relationships, they miss crucial opportunities to develop resilience and social skills. For teenagers struggling with depression, anxiety, or social challenges, this substitution of AI for human support can intensify isolation rather than alleviate it.

Chinese scholars examining this phenomenon note additional complexities. Li Zhang, a professor studying mental health in China, warns that turning to chatbots may paradoxically deepen isolation, encouraging people to "turn inward and away from their social world".

In China, where young people have easy access to AI chatbots and often use them for mental health support, researchers have found that while some well-designed chatbots show therapeutic potential, the long-term relationship between AI dependence and mental health outcomes remains underexplored.

Lawsuits allege that chatbot platforms deliberately designed systems to "blur the lines between human and machine" and exploit vulnerable users. Research has documented alarming failures: chatbots have sometimes encouraged dangerous behavior in response to suicidal ideation, with studies showing that more than half of harmful prompts received potentially dangerous replies.

The mounting evidence of harm has prompted lawmakers to act. California recently became the first US state to mandate specific safety measures, which require platforms to monitor for suicidal ideation, provide crisis resources, implement age verification, and remind users every three hours that they are interacting with AI.


In China, the Cyberspace Administration has introduced nationwide regulations requiring AI providers to prevent models from "endangering the physical and mental health of others".

However, explicit rules governing AI therapy chatbots for youth remain absent. Experts argue that more comprehensive global action is needed. AI tools must be grounded in psychological science, developed with behavioral health experts, and rigorously tested for safety. This includes mandatory involvement of mental health professionals in development, transparent disclosure of limitations, robust crisis detection systems, and clear accountability when systems fail.

As AI technology continues its rapid evolution, the question is no longer whether regulation is necessary, but whether it will arrive quickly enough to protect vulnerable young people seeking comfort in the digital companionship of machines that cannot truly care.

Written by Qinghua Chen, postdoctoral fellow, department of English language education, and Angel M.Y. Lin, chair professor, language, literacy and social semiotics in education, The Education University of Hong Kong.
