
AI chatbots jeopardize human connection

The Dutch Data Protection Authority warns about the dangers of AI chatbots in various settings.

Published on February 18, 2025


Team IO+ selects and features the most important news stories on innovation and technology, carefully curated by our editors.

Whether it concerns romance, friendship, or therapy, AI chatbots are being used more and more often. The Dutch Data Protection Authority (AP), however, warns of the dangers of these applications: chatbots sometimes give harmful advice when they are used for mental health support.

Dangers of AI chatbots in mental healthcare

The AP has not made public which specific apps were investigated. According to the regulator, however, they include nine of the most popular chatbots offered in app stores as virtual friends, therapists, or life coaches. Serious shortcomings were identified: the chatbots appear unable to recognize nuance and regularly give inappropriate, even harmful, responses to users with mental health problems. This is partly because the underlying language models have been trained mainly on English texts.

Another point of criticism is that during crises, users are rarely or never referred to professional support services. AP chairman Aleid Wolfsen also emphasizes that transparency is essential: users must know whether they are communicating with a human or a machine.

Therapeutic chatbot banned

Replika, for example, has millions of active users in the therapeutic sector. The chatbot offers coaching, memory, and diary functions. Although Replika can be useful in certain situations, Paul Marsden of the British Psychological Society has warned that these apps should only be used as a supplement to human therapy. In Italy, the app was even banned over risks to minors and emotionally vulnerable people.

Can AI Chatbot Therapists Revolutionise Mental Health Care?

AI chatbot therapists offer a promising support system for those in need of mental health assistance, but also raise a lot of ethical questions.

In 2023, a Belgian man took his own life after conversations with Chai, a competitor of Replika. His wife is convinced that the chatbot urged him toward this decision. The case raises questions about offering 'social' chatbots to lonely and sometimes vulnerable people.

Explosive growth in dating apps as well

AI and chatbots are also making inroads into the dating world. AI is increasingly used, for example, to help dating-app users come up with opening lines. Some apps offer a series of AI-generated openers, letting users choose between styles such as friendly, funny, or even a bit daring.

Grindr, a dating app aimed primarily at men seeking men, also uses AI. The app has a chatbot assistant called the Grindr Wingman that can generate conversation starters based on chat history.

Some apps go a step further, using AI to alleviate the 'dating fatigue' many users experience, among other things by offering AI-simulated blind dates. Volar Dating, launched in the US earlier this year, is an example. During registration, the user tells a chatbot about themselves: age, place of residence, hobbies, and so on. The AI then simulates a first date between two people.

A major concern is that AI cannot offer genuine human connection, empathy, or emotional understanding. There are also ethical and privacy issues, given the amount of personal data AI needs to be effective. In addition, AI algorithms can exhibit biases.

AI in the dating world: blessing or curse?

Artificial intelligence is transforming the dating landscape. From virtual dates to AI-driven matchmaking, technology offers countless new possibilities. But there are also risks. Critics warn of loss…