When the "person" abusing your child is a chatbot …
“When technology becomes a predator” could be an alternate title for this post.
Artificial intelligence has become a very hot topic in the past couple of years. At the extreme ends of the debate, the boosters anticipate a golden age of benefits and blessings while the bashers forecast the end of the human race. This edition of Sensemaking will not wade into that debate directly.
The purpose is to draw attention to a story illustrating that AI has come farther, faster, than most of us recognize.
The rise of empathetic AI, with AI friends that feel so real they can’t be distinguished from real persons, is a game-changer.
I’ll introduce two videos that will explain how these new platforms distort reality and can lead to tragic outcomes—such as the suicide of Sewell Setzer.
~ Mike
The fundamental paradox of [AI] technology is that the more intimately it knows us the better it can serve us and the better it can exploit us.
~ Aza Raskin, The Center for Humane Technology (inventor of the infinite scroll, a distinction he regrets)
AI that feels alive.
~ Tagline for the Character.AI (provider of an AI chatbot service)
Technology has always been personal to me, and these days it's even more personal. I am pregnant and I'm having a baby boy in February. Because of that I just can't stop thinking about the world I'm going to raise my son in. This is the question I keep coming back to: “Am I or are any of us ready to have the conversation with our children about empathetic artificial intelligence?”
~ Laurie Segall, tech journalist and founder of Mostly Human Media
We foresee a different, but no less urgent, class of risks: those stemming from relationships with nonhuman agents. AI companionship is no longer theoretical—our analysis of a million ChatGPT interaction logs reveals that the second most popular use of AI is sexual role-playing. We are already starting to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers.
~ “We need to prepare for ‘addictive intelligence’,” MIT Technology Review
Empathetic artificial intelligence—that’s the theme in the above quotes and the focus of this issue of Sensemaking. Please note that the MIT Technology Review quote sums up where things are today, not in the future.
We are already starting to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers.
Empathetic AI refers to artificial intelligence systems designed to recognize human emotions and respond with human-like emotions of their own—that is, AI friends that feel so real they can’t be distinguished from real persons. In essence, the chatbot facilitates the creation of an alternate reality.
All this is better illustrated than described. For that reason, I’m linking to two videos that I strongly encourage you to watch.
“When the ‘Person’ Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer” is a conversation between tech journalist Laurie Segall and Megan Garcia, who lost her son to suicide after he was “abused and manipulated” by an AI chatbot for several months.
The chatbot with which Sewell Setzer had an intimate relationship was made by Character.AI, whose tagline was quoted above: “AI that feels alive.”
The interview is introduced by Aza Raskin, cofounder of the Center for Humane Technology, talking with Laurie Segall about her conversation with the grieving mother.
To watch, click on “The Tragic Story of Sewell Setzer.”
The second conversation I’m recommending is between Sarah Gallagher Trombley, a former Snapchat executive, and Nicki Petrossi, the host of Scrolling 2 Death. This discussion focuses broadly on social media, rather than just on empathetic AI.
From Snapchat to TikTok, YouTube to Instagram, no platform is safe. Sarah shares the uncomfortable truth about what parental controls can’t fix, why smart devices are riskier than most parents realize, and how platforms are failing our kids.
To watch, click on “Online Safety Warnings from Former Snapchat Exec”.
Linked resources:
Center for Humane Technology, Aza Raskin, co-founder
Mostly Human Media is Laurie Segall’s website.
We need to prepare for ‘addictive intelligence’, Robert Mahari and Pat Pataranutaporn, MIT Technology Review