TU Delft becomes international advisor on AI in healthcare
The Delft research center is now officially a partner of the WHO for Ethics and Governance of AI in Healthcare.
Published on March 8, 2025
The TU Delft Digital Ethics Centre will advise the World Health Organization (WHO) on the ethical aspects and laws and regulations surrounding AI in healthcare. Last week, the Delft research centre received accreditation, making it an official WHO partner in the field of Ethics and Governance of AI in Healthcare.
Healthcare systems worldwide are under pressure, and AI offers opportunities to relieve it. But integrating AI into healthcare is not without challenges: research shows that only two percent of all AI innovations are actually put to use. Innovations often fit poorly with clinical practice, healthcare staff do not embrace them, ethical dilemmas arise, or the AI contains biases that can lead to discrimination.
Ethical principles
When using AI in healthcare, it is extremely important to uphold ethical principles as well as healthcare standards and values. International guidelines have been drawn up for this purpose, but they have yet to be translated into practice. The TU Delft Digital Ethics Centre will now help with this.
“AI has the transformative power to reshape healthcare and enable people to monitor their own health,” says Alain Labrique, Director of the Department of Digital Health and Innovation at the World Health Organization (WHO). “The technical and academic partnership with the Digital Ethics Centre at TU Delft is crucial to ensure that the benefits of AI reach everyone worldwide through ethical governance, equitable access and collective action.”
Michel van Genderen, internist-intensivist and assistant professor at Erasmus MC, adds: “AI can only improve healthcare if we have a good ethical foundation. We need to get that right. Thanks to the special collaboration between the WHO, TU Delft and Erasmus MC, as well as software company SAS, we can apply AI responsibly and transparently in the clinic. One example is an ongoing project at Erasmus MC in which AI helps determine when a patient can be safely discharged after major oncological surgery. If we meet all the preconditions, such an AI model can not only ensure safer discharge of patients, but also that they can go home on average four days earlier and that readmissions are halved.”
AI Ethics Lab
The TU Delft Digital Ethics Centre is collaborating with Erasmus MC and software company SAS in the AI Ethics Lab (REAiHL), where pilots conceived in Delft can be tested in practice. Stefan Buijsman, associate professor of Responsible AI at TU Delft: “It is important that we can see whether what we come up with also works in the daily practice of the hospital. We can develop the ethical frameworks and come up with technological solutions that fit within them. At Erasmus MC, they can validate these and identify needs in practice.” The AI Ethics Lab was created on the initiative of internist-intensivist Michel van Genderen. Its goal is to develop a general framework for applying AI safely and ethically throughout the hospital.