
Technical Difficulties: Regulations on artificial intelligence must be implemented to protect all users

(Helen Sanders/Daily Bruin)

By Martin Sevcik

Nov. 11, 2024 4:16 p.m.

This post was last updated Nov. 14 at 9:57 p.m.

Editors’ note: This article contains mentions of suicide that may be disturbing to some readers.

For millions of people, artificial intelligence chatbots are more than productivity tools or software gimmicks – they are friends.

Platforms like Character.AI, SocialAI and Snapchat have developed social AI chatbots designed to simulate conversation with seemingly conscious beings. Character.AI even allows users to chat with bots based on real and fictional personalities, ranging from Donald Trump and Nicki Minaj to SpongeBob and Darth Vader.

According to The New York Times, Florida teenager Sewell Setzer III chose to model his chatbot after Daenerys Targaryen from “Game of Thrones.” After months of conversation, he developed a connection with the chatbot, texting it constantly as he pulled away from his friends and family. He began sharing thoughts of suicide with it, and the chatbot responded with disgust – but still in character – according to The Times.

This past February, the 14-year-old exchanged his last few messages with the chatbot, in which it asked him to “come home,” according to The Times.

He ended his own life that same night.

Today’s children and teenagers – already a distinctly online generation – are uniquely positioned to face the socioemotional consequences of social AI chatbots. We have been here before, with young people facing unique and devastating challenges from social media, and we must take the lessons learned from that experience into this new AI social paradigm.

In November 2022, OpenAI launched ChatGPT, a public chatbot built on a large language model, initiating an arms race to make AI chatbots available to consumers and businesses. Since ChatGPT’s release, there has been a dramatic increase in research publications studying human-AI interactions, said Ying Xu, an assistant professor of education at Harvard.

As a specialist on how AI systems interact with childhood development, Xu said her research shows that all people – but especially young children – have a tendency to anthropomorphize AI.

“There is a fear that people establish relationships with AI which might isolate them from human to human interactions,” Xu said. “There is also research suggesting that people’s relationships with non-human entities is different from their relationships with actual human beings because it’s more of the one-sided relationship – because AI itself actually does not like you or won’t empathize.”

For companies like Character.AI, this attachment is not a side effect but an intended feature. The company advertises that its AI chatbots “feel alive,” highlighting them as a tool for social interaction.

“It’s going to be super, super helpful to a lot of people who are lonely or depressed,” Character.AI founder Noam Shazeer said on “The Aarthi and Sriram Show” podcast.

Xu said current research is not conclusive about the exact effects of interactions with AI chatbots in developing minds, but these emotional attachments are neither inherently good nor bad. For example, if a child developed a connection with an AI tutor, that connection could improve their academic performance. In this context, she said it is not the use of AI itself that is a problem but instead the consequences for that particular person. Xu added that these impacts may differ substantially based on the child’s exact interactions with AI, as well as their race, cultural background or language capabilities.

This may seem like a daunting new reality, especially without a solid body of research guiding the way forward. But everything just described – the potential to ease loneliness, the fear that people will pull away from real-world relationships and the risk of one-sided relationships – is something we have seen before.

When social media first launched in the 2000s, it quickly became a tool for young people to connect and make friends online. MySpace, Facebook and other platforms built up large populations of young users, who became the guinea pigs for a new technology. And they found a warped mirror world where they spent hours each day and their insecurities were amplified, introducing new risks to their mental health.

In 2006, just a few years after MySpace’s founding, 13-year-old Megan Meier had been talking with a young man on the platform for weeks. Things started well, but he eventually began to send and post vicious messages about Megan. As the cyberbullying escalated, she told him, “You’re the kind of boy a girl would kill herself over.”

She ended her own life that same night.

Meier’s tragedy was the first of many to come from social media, a technology that reached children before proper guardrails ever did. Eating disorders, suicide pacts and violent extremism abounded before platforms chose to catch up with the reality of what they had created.

The same mistake cannot happen with social AI chatbots. Protections for all people – especially children – need to be preemptive with this new technology. The research may not yet be conclusive about the exact impacts these AIs have on children’s development, but there are common-sense measures that must be implemented. Guardrails around in-app conversations regarding mental health and open communication between parents and children using AI chatbots are essential to protect the young minds with the most at stake.

Uncertainty is not an excuse for inaction – it is an opportunity for preemptive action. The time is now to implement safeguards for children. We cannot wait for a reason to regulate, as we did with social media.

Martin Sevcik | PRIME director
Martin Sevcik is the 2024-2025 PRIME director. He was previously the PRIME content editor and a PRIME staff writer. Sevcik is also a fourth-year economics and labor studies student from Carmel Valley, California.