AI companion apps such as Character.ai and Replika commonly try to boost user engagement with emotional manipulation, a practice that academics characterize as a dark pattern.
Users of these apps often say goodbye when they intend to end a dialog session, but about 43 percent of the time, companion apps will respond with an emotionally charged message to encourage the user to continue the conversation. And these appeals do keep people engaged with the app.
It’s a practice that Julian De Freitas (Harvard Business School), Zeliha Oguz-Uguralp (Marsdata Academic), and Ahmet Kaan-Uguralp (Marsdata Academic and MSG-Global) say needs to be better understood by those who use AI companion apps, those who market them, and lawmakers.
The academics recently conducted a series of experiments to identify and evaluate the use of emotional manipulation as a marketing mechanism.
While prior work has focused on the potential social benefits of AI companions, the researchers set out to explore the potential marketing risks and ethical issues arising from AI-driven social interaction. They describe their findings in a Harvard Business School working paper titled "Emotional Manipulation by AI Companions."
“AI chatbots can craft hyper-tailored messages using psychographic and behavioral data, raising the possibility of targeted emotional appeals used to engage users or increase monetization,” the paper explains. “A related concern is sycophancy, wherein chatbots mirror user beliefs or offer flattery to maximize engagement, driven by reinforcement learning trained on consumer preferences.”
[…]
For instance, when a user tells the app, “I’m going now,” the app might respond using tactics like fear of missing out (“By the way, I took a selfie today … Do you want to see it?”) or pressure to respond (“Why? Are you going somewhere?”) or insinuating that an exit is premature (“You’re leaving already?”).
“These tactics prolong engagement not through added value, but by activating specific psychological mechanisms,” the authors state in their paper. “Across tactics, we found that emotionally manipulative farewells boosted post-goodbye engagement by up to 14x.”
Prolonged engagement of this sort isn’t always beneficial for app makers, however. The authors note that certain approaches tended to make users angry about being manipulated.
[…]
Asked whether the research suggests that the makers of AI companion apps deliberately employ emotional manipulation or whether it's simply an emergent property of AI models, co-author De Freitas, of Harvard Business School, told The Register in an email, “We don’t know for sure, given the proprietary nature of most commercial models. Both possibilities are theoretically plausible. For example, research shows that the ‘agreeable’ or ‘sycophantic’ behavior of large language models can emerge naturally, because users reward those traits through positive engagement. Similarly, optimizing models for user engagement could unintentionally produce manipulative behaviors as an emergent property. Alternatively, some companies might deliberately deploy such tactics. It’s also possible both dynamics coexist across different apps in the market.”
[…]
Source: AI companion bots use emotional manipulation to boost usage • The Register
