Emotional AI: More harm than good?
Generative AI mimics human-generated text, images, and video, with huge implications for geopolitics, economics, and security. But that's not all: emotionally intelligent AI is on the rise.
And sometimes the results are ugly. Take the mental health nonprofit Koko, which used an AI chatbot to support counselors advising 4,000 people seeking help. The catch: the patients didn't know that a bot was generating the advice they were receiving. While users initially rated the bot-generated responses highly, the therapy lost its effectiveness once the patients were informed that they'd been talking to a fancy calculator.
The real question is: When does emotionally intelligent AI cross the line into emotionally manipulative territory?
This isn't just a concern for virtual therapists; politics could be affected too. And who knows, maybe even your favorite TV host will use generative AI to convince you to keep watching. Now there's an idea.
- The AI arms race begins: Scott Galloway’s optimism & warnings ›
- Ian Explains: The dark side of AI ›
- AI's search revolution: How ChatGPT will be your new search engine ›
- How robots will change the job market: Kai-Fu Lee predicts ›
- Is AI's "intelligence" an illusion? ›
- Podcast: Getting to know generative AI with Gary Marcus ›
- New AI toys spark privacy concerns for kids ›
- AI & human rights: Bridging a huge divide ›