
AI is everywhere. But how far do we want to go?

Has AI gone too far? Insights from two AI experts.

Published on August 20, 2025


Our DATA+ expert, Elcke Vels, explores AI, cyber security, and Dutch innovation. Her "What if..." column imagines bold scenarios beyond the norm.

AI is everywhere and helps society move forward. But as the technology's influence grows, it becomes important to regulate it properly. The AI Act is intended to ensure this, yet the law has drawn criticism from various quarters: critics say it pays insufficient attention to the protection of intellectual property and copyright. How can we use AI responsibly? Has the technology become too far-reaching, and where do we draw the line? We spoke to two AI experts: Patricia Jaspers and Jos van Dongen.

AI Pitch Competition

This story was written in the run-up to the AI Pitch Competition, a contest that showcases the most innovative AI solutions. The competition takes place on September 29; you can register here until August 29.

AI is rapidly changing the world: the way we work, learn, and communicate. Like the US, China is fully committed to accelerating AI innovation; Europe has opted for regulation. The AI Act is a European law that aims to ensure the safe, fair, and transparent use of AI within the EU.

However, such legislation does not protect us completely. Recently, a coalition of more than 40 European creators issued formal criticism of the AI Code of Conduct, which is intended to help companies comply with the AI Act. The code is said to offer insufficient protection for intellectual property and copyright, and to serve primarily the interests of large AI companies.

Much remains to be done worldwide to get a grip on the technology. Meanwhile, startups are springing up like mushrooms in every conceivable domain, including healthcare, defense, energy, and transportation. Can we still control these developments? And how far should we want to go with AI applications?

Autonomy and control

Patricia Jaspers is director of operations at the Eindhoven Artificial Intelligence Systems Institute (EAISI, TU/e). She sees many positive aspects to the use of AI in many sectors, such as healthcare. “For example, I was very impressed when I first heard about how AI is being used to detect new proteins. This will significantly accelerate the development of promising medicines.”

“Nevertheless, I am also concerned about certain applications,” she continues. These concerns relate to data freedom and privacy, among other things. She cites Meta's chatbot as an example. The company is under fire for highly controversial internal chatbot guidelines: according to Reuters, an internal document shows that Meta allowed its AI chatbot to flirt with children.

Jos van Dongen also deals regularly with AI and data freedom. He is the director of the Erasmus Data Collaboratory - House of AI at Erasmus University, where data and AI come together. Under the leadership of scientist João Gonçalves, the university has developed a small language model: the Erasmian Language Model. It was trained on data from within Erasmus University itself: all master's theses and dissertations from the past 50 years. Because the model is fine-tuned on these specific university documents, users can ask it very targeted questions about a particular field of research, such as whether a topic has been researched before and what the results were.

“This model runs on our internal infrastructure and is independent of external cloud services,” he explains. “We want to prevent this data, such as theses and dissertations, from ending up in foreign cloud environments. As soon as you move data to a cloud environment such as Microsoft Azure, there is a risk that you will lose control over your data.”

These concerns are not unfounded. It was recently revealed that the Israeli military intelligence service uses Microsoft Azure to store and process huge volumes of intercepted telephone conversations. The revelation sparked widespread concern worldwide about data sovereignty and the risk of cloud infrastructure being misused for surveillance.

AI makes us smarter and less savvy at the same time

In addition to the issue of data and autonomy, both experts mention the risk of “de-skilling” caused by AI. As AI takes over more of our tasks, we can focus on the things we humans are good at, which is a big plus. At the same time, however, we will lose certain skills.

Jaspers cites a recent study on doctors who use AI as an example. “Doctors who regularly rely on AI become less effective at recognizing diseases themselves. International research shows that they detect far fewer early stages of colon cancer during visual examinations when they assess the images without AI.”

A second example: smartwatches that measure your sleep quality. “Do you still dare to judge for yourself whether you have slept well, or do you rely completely on your watch? If we are not careful, we will lose important skills without even noticing.”

Van Dongen agrees. “A personal example: recently, I was configuring a server environment with colleagues. Everything was going smoothly until the system suddenly crashed. What happened? My colleague had blindly followed the instructions of an AI assistant, which were incorrect.” In short, relying too much on AI can undermine our ability to think critically.

Applying AI responsibly

The AI Act identifies unacceptable-risk systems: applications that go too far according to European regulations. These include AI systems that profile people on a large scale and make discriminatory decisions, such as facial recognition in public places without consent, or AI that assigns social scores and thereby restricts a person's access to essential services.

But caution is also needed with AI applications that do enter the European market. Among other things, we need to look more critically at which applications are really necessary, Van Dongen emphasizes. He refers to a study on healthcare in Europe, which shows that of all the AI applications developed in recent years, only 2% are actually in use. “Billions have been invested, and only a small portion of that is being used effectively.” Do we really need a given application, or are we developing a product or service purely because of the hype? It is not just about saving costs, but also about preventing unnecessary dependence on technology.

And when it comes to data sovereignty, small-scale AI applications are also important, according to Jaspers. “We need to let go of the idea that only AI from big tech giants is valuable. It is by no means always necessary or useful to collect huge amounts of data; it is precisely with targeted, high-quality datasets that we can work more effectively and ethically.”

