Big opportunities, big responsibilities: AI in startup land
AI offers opportunities for startups, but also entails risks and responsibilities. How far does the responsibility of startups extend?
Published on October 22, 2025
As editor-in-chief, Aafke oversees all content and events but loves writing herself. She makes complex topics accessible and tells the stories behind technology.
In November, young entrepreneurs will present their companies during the AI Pitch Competition. In addition to innovative strength and entrepreneurship, another important topic is how startups manage the opportunities and risks associated with artificial intelligence. IO+ interviewed Mickey Beurskens, PhD candidate in the field of Safe Artificial Intelligence at Eindhoven University of Technology (TU/e) and a former AI startup founder. We spoke to him about the role of regulation, social responsibility, and finding the right balance between ambition and safety.
From mechanical engineering to AI entrepreneur
Beurskens (30) studied mechanical engineering at TU/e and entered the world of AI through his fascination with robotics. “I saw one cool application after another popping up online, but little was happening on campus. I missed the presence of AI. Together with a few other students, I started the student team Serpentine AI, with which we participated in programming competitions and other contests.”
After graduating, he worked as an AI engineer but decided to start his own company a year later. With Forge Fire AI Engineering, he developed AI prototypes for companies. “I enjoyed doing that for three years, but during that time I also became increasingly concerned. AI offers enormous opportunities, but the realization that there are also serious risks involved never left me.”
Beurskens sees a lot of discussion about the promises of AI, but at the time, there was only a small group of people with whom he could discuss the existential risks and societal impact of AI. When a PhD position in Safe Reinforcement Learning became available at TU/e, he didn't have to think twice: it offered him the opportunity to fully immerse himself in the development of ethical and responsible AI.
The AI Pitch Competition
On November 13, selected AI startups will pitch their most innovative solutions. The Brabant initiative offers them a platform to present their ideas, connect with market leaders, and accelerate their growth. More information can be found here.
Reinforcement Learning: Does AI get the carrot or the stick?
For his PhD research, he is looking at methods to better guide and control AI models during the learning process itself. “There are roughly two ways to train AI,” Beurskens explains. “With Supervised Learning, you give an algorithm a large number of images or descriptions and train it to make connections based on that information. With Reinforcement Learning, the algorithm tries out actions itself, instead of being presented with everything ready-made. When an action is successful, the algorithm receives a reward.”
In practice, these techniques are usually used in combination. Beurskens focuses specifically on how to give Reinforcement Learning stronger guarantees that the algorithm is trained correctly.
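To make that reward mechanism concrete, here is a minimal, hypothetical sketch in Python. It is not taken from Beurskens' research: an agent repeatedly chooses between three actions, only sees the reward it receives, and gradually learns which action pays off best.

```python
import random

# Toy illustration (not from the interview): a three-armed "bandit" where each
# action pays out a reward with a different hidden probability.
TRUE_PAYOFFS = {"a": 0.2, "b": 0.5, "c": 0.8}

def pull(action: str) -> float:
    """The environment: the agent only sees the reward, never the probabilities."""
    return 1.0 if random.random() < TRUE_PAYOFFS[action] else 0.0

estimates = {a: 0.0 for a in TRUE_PAYOFFS}  # the agent's learned value per action
counts = {a: 0 for a in TRUE_PAYOFFS}
epsilon = 0.1  # fraction of the time the agent explores instead of exploiting

for step in range(5000):
    if random.random() < epsilon:
        action = random.choice(list(TRUE_PAYOFFS))  # explore: try something random
    else:
        action = max(estimates, key=estimates.get)  # exploit: pick the best so far
    reward = pull(action)                           # the "carrot" (or the lack of it)
    counts[action] += 1
    # update the running average of observed rewards for the chosen action
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # action "c" should end up with the highest estimated value
```

Real reinforcement learning systems add states, neural networks, and far more data, but the basic loop of acting, receiving a reward, and updating an estimate is the same idea.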
Enthusiasm and reality check
In hindsight, he believes that, as an engineer, he was blind to the risks for quite some time. “Until I realized: we barely understand human intelligence, so how can we predict how AI will develop? That realization really shocked me at the time. It was a genuine reality check and led me to decide to shut down my company.”
He now thinks differently about this. “For impactful AI, you ultimately need large models – and therefore large companies. These tech giants train, maintain, and offer the models. In my view, the responsibility for keeping control of the AI they offer lies with them, not with small startups.”
Beurskens believes young AI startups must be aware of their responsibility. “It's encouraging that startups are thinking about how we can use AI in the best possible way. As long as they take privacy and data security seriously, there's a good chance we'll end up in a good place.”
A complex reality
According to Beurskens, the current AI world is much more than just smart algorithms. “Training a large language model requires millions in investment, electricity, and computing power. It is a complex operation that startups often cannot afford.” This makes the comparison with software engineering interesting. “You're not buying a black box, you're bringing technology into your organization. Just as with software, you must be critical of your supplier: what data has been used, how the model is maintained, and how you explain this to your customers. Startups are not only users, but also links in the AI supply chain. In this way, they contribute to the responsible implementation of this technology.”
European context and AI regulation
An undeniable factor for every AI startup is regulation. The EU AI Act is a much-discussed topic. The legislation affects all AI applications: from medical applications, where the bar is set extremely high, to more everyday products. Beurskens: “It's a complex piece of legislation, so if you're a startup entrepreneur in the scoping phase, I would definitely advise bringing in a legal expert so that you have a clear understanding of the obligations that apply.”
He also sees a broader challenge for Europe: “We are lagging behind the US and China, especially in terms of infrastructure and ecosystem. At the same time, I appreciate the ethical stance that Europe is taking. However, if we don't play on a global level, that good intention will mean little. So we will have to create an attractive climate for talent and investment, as well as establish clear frameworks for responsible use.”
Tips for AI startups
What would he have liked to know when he was just starting on his entrepreneurial adventure? “As cool as it is to develop your own AI model, it takes a huge amount of time and money. It's smarter to use existing models and build valuable applications on top of them.”
Tip number two: don't get lost in the technology. “It's tempting to keep tinkering, but the key is to create something that delivers value quickly. Validate, refine, and only then think about custom models.”
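As a hedged illustration of that first tip, the sketch below uses the open-source Hugging Face transformers library (one possible choice among many, not something Beurskens specifically recommends) to reuse a pretrained model instead of training one from scratch.

```python
# Illustrative only: one of many ways to build on an existing model rather than
# training your own. Requires the open-source `transformers` library
# (pip install transformers); it downloads a small pretrained model on first use.
from transformers import pipeline

# A ready-made sentiment classifier; the expensive training was done upstream.
classifier = pipeline("sentiment-analysis")

# The startup's value sits in the application around the model, for example
# triaging customer feedback, not in the model itself.
print(classifier("The onboarding flow was confusing, but support fixed it quickly."))
```

Whether a hosted API or an open-source model is the better starting point depends on the budget, the data, and the supplier questions raised earlier in this article.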
The public debate is an opportunity
Beurskens is optimistic about the growing public debate. “Two or three years ago, you were dismissed as an internet nutcase if you talked about the risks of AI. Now there is much more room in the public debate to discuss the risks and ethical issues that AI entails. People are not only blinded by the promise, but also see the potential dangers.”
He believes that continuing to have these kinds of conversations is just as important as technical breakthroughs. “With my work as a researcher, I can only address a small part of the bigger picture. That's why it's so important that everyone involved – startups, researchers, policymakers, larger companies – engage in the discussion together. Only then can we ensure that AI is used not only intelligently, but also safely and in a people-oriented way.”
Sponsored
This story is the result of a collaboration between AI Pitch Competition and our editorial team. IO+ is an independent journalism platform that carefully chooses its partners and only cooperates with companies and institutions that share our mission: spreading the story of innovation. This way we can offer our readers valuable stories that are created according to journalistic guidelines.
Want to know more about how IO+ works with other companies? Click here