{"id":482284,"date":"2024-07-04T10:33:17","date_gmt":"2024-07-04T08:33:17","guid":{"rendered":"https:\/\/innovationorigins.com\/?p=482284"},"modified":"2024-07-04T10:33:17","modified_gmt":"2024-07-04T08:33:17","slug":"new-research-project-aims-to-make-ai-explainable-to-humans","status":"publish","type":"post","link":"https:\/\/ioplus.nl\/archive\/en\/new-research-project-aims-to-make-ai-explainable-to-humans\/","title":{"rendered":"New research project aims to make AI explainable to humans"},"content":{"rendered":"\n<p>Researchers at the University of Amsterdam (UvA) are working on a <a href=\"https:\/\/www.uva.nl\/en\/shared-content\/subsites\/responsible-digital-transformations\/en\/news\/2024\/07\/developing-a-method-to-make-ai-explainable-to-humans.html?origin=kUP%2Byx6UTZqvuJiCJKnnEQ\">new method<\/a> to make AI models understandable and explainable to humans. While AI models can solve many tasks, they are also becoming increasingly complex. The field of Explainable AI (XAI) is concerned with unpacking the complex behavior of these models in a way that humans can understand. In a <a href=\"https:\/\/rdt.uva.nl\/research\/research-projects\/hue-bridging-ai-representations-to-human-understandable-explanations\/hue-bridging-ai-representations-to-human-understandable-explanations-article.html\">new project<\/a>, HUE: bridging AI Representations to Human-Understandable Explanations, researchers Giovanni Cin\u00e0 and Sandro Pezzelle are developing a method that will make it possible to \u2018x-ray\u2019 AI models and make them more transparent.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Confirmation bias<\/h2>\n\n\n\n<p>&#8220;Many AI models are&nbsp;black boxes,&#8221; explains Pezzelle. 
&#8220;We can feed them with a lot of data, and they can make a prediction \u2013 which may or may not be correct \u2013 but we do not know what goes on internally.&#8221; This is problematic because we tend to interpret the output according to our own expectations, a tendency known as confirmation bias.<\/p>\n\n\n\n<p>Cin\u00e0: &#8220;We are more likely to believe explanations that match our prior beliefs. We accept more easily what makes sense to us, and that can lead us to trust models that are not really trustworthy. This is a big problem, for instance, when we use AI models to interpret medical data in order to detect disease. Unreliable models may start to influence doctors and lead them to misdiagnose results.&#8221;<\/p>\n\n\n<div class=\"vlp-link-container vlp-layout-basic wp-block-visual-link-preview-link advgb-dyn-713eb838\"><a href=\"https:\/\/ioplus.nl\/archive\/en\/ai-model-outperforms-current-skin-cancer-detection-methods\/\" class=\"vlp-link\" title=\"AI model outperforms current skin cancer detection methods\"><\/a><div class=\"vlp-layout-zone-side\"><div class=\"vlp-block-2 vlp-link-image\"><img loading=\"lazy\" decoding=\"async\" style=\"max-width: 150px;\" width=\"150\" height=\"105\" src=\"https:\/\/ioplus.nl\/archive\/wp-content\/uploads\/2022\/04\/cmzUOyli-skin-gd0f20c581_1920-1.jpg\" class=\"attachment-150x999 size-150x999\" alt=\"\" srcset=\"https:\/\/ioplus.nl\/archive\/wp-content\/uploads\/2022\/04\/cmzUOyli-skin-gd0f20c581_1920-1.jpg 1920w, https:\/\/ioplus.nl\/archive\/wp-content\/uploads\/2022\/04\/cmzUOyli-skin-gd0f20c581_1920-1-300x210.jpg 300w, https:\/\/ioplus.nl\/archive\/wp-content\/uploads\/2022\/04\/cmzUOyli-skin-gd0f20c581_1920-1-1024x717.jpg 1024w, https:\/\/ioplus.nl\/archive\/wp-content\/uploads\/2022\/04\/cmzUOyli-skin-gd0f20c581_1920-1-768x538.jpg 768w, https:\/\/ioplus.nl\/archive\/wp-content\/uploads\/2022\/04\/cmzUOyli-skin-gd0f20c581_1920-1-1536x1076.jpg 1536w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" 
\/><\/div><\/div><div class=\"vlp-layout-zone-main\"><div class=\"vlp-block-0 vlp-link-title\">AI model outperforms current skin cancer detection methods<\/div><div class=\"vlp-block-1 vlp-link-summary\">A group of scientists developed an AI model capable of predicting skin cancer risk based on a face photograph. The model developed by researchers at the Erasmus Medical Center (EMC) in Rotterdam outperforms current assessment methods. <\/div><\/div><\/div>\n\n\n<h2 class=\"wp-block-heading\">Examining explanations<\/h2>\n\n\n\n<p>The researchers are developing a method to mitigate this confirmation bias. &#8220;We want to align what we think the model is doing with what it is doing,&#8221; Cin\u00e0 says. &#8220;To make a model more transparent, we need to examine some explanations for why it came up with a certain prediction.&#8221; To do this, the researchers create a formal framework that allows them to formulate human-understandable hypotheses about what the model has learned and test these more precisely.<\/p>\n\n\n\n<p>Pezzelle: &#8220;Our method can be applied to any machine learning or deep learning model as long as we can inspect it. Therefore, a model like ChatGPT is not a good candidate because we cannot look into it; we only get its final output. The model has to be open source for our method to work.&#8221;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">A more unified approach<\/h2>\n\n\n\n<p>Cin\u00e0 and Pezzelle, who come from different academic backgrounds \u2013 medical AI and natural language processing (NLP), respectively \u2013 have joined forces in order to develop a method that can be applied to various domains. Pezzelle: &#8220;Currently, solutions proposed in one of these disciplines do not necessarily reach the other field. 
So our aim is to create a more unified approach.&#8221;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers at the University of Amsterdam (UvA) are working on a new method to make AI models understandable and explainable to humans. While AI models can solve many tasks, they are also becoming increasingly complex. The field of Explainable AI (XAI) is concerned with unpacking the complex behavior of these models in a way [&hellip;]<\/p>\n","protected":false},"author":2589,"featured_media":494923,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"advgb_blocks_editor_width":"","advgb_blocks_columns_visual_guide":"","footnotes":""},"categories":[84026],"tags":[10373,69735,85326,54528],"location":[6763],"article_type":[36684],"serie":[],"archives":[],"internal_archives":[],"reboot-archive":[82795],"class_list":["post-482284","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-data","tag-ai","tag-ai-nl-3","tag-explainable-ai","tag-university-of-amsterdam","location-netherlands","article_type-news","reboot-archive-data"],"blocksy_meta":[],"acf":{"subtitle":"Researchers at the University of Amsterdam (UvA) are working on a new method to make AI models understandable and explainable to humans. 
","text_display_homepage":false},"author_meta":{"display_name":"Team IO","author_link":"https:\/\/ioplus.nl\/archive\/author\/erikdevries\/"},"featured_img":"https:\/\/ioplus.nl\/archive\/wp-content\/uploads\/2023\/12\/innovationorigins_a_patient_using_an_AI-driven_chatbot_to_repor_4e864485-ceed-4c4b-af9e-60b4aec0d2cd.png","coauthors":[],"tax_additional":{"categories":{"linked":["<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/data\/\" class=\"advgb-post-tax-term\">DATA+<\/a>"],"unlinked":["<span class=\"advgb-post-tax-term\">DATA+<\/span>"]},"tags":{"linked":["<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/data\/\" class=\"advgb-post-tax-term\">AI<\/a>","<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/data\/\" class=\"advgb-post-tax-term\">AI<\/a>","<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/data\/\" class=\"advgb-post-tax-term\">explainable AI<\/a>","<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/data\/\" class=\"advgb-post-tax-term\">University of Amsterdam<\/a>"],"unlinked":["<span class=\"advgb-post-tax-term\">AI<\/span>","<span class=\"advgb-post-tax-term\">AI<\/span>","<span class=\"advgb-post-tax-term\">explainable AI<\/span>","<span class=\"advgb-post-tax-term\">University of Amsterdam<\/span>"]}},"comment_count":"0","relative_dates":{"created":"Posted 2 years ago","modified":"Updated 2 years ago"},"absolute_dates":{"created":"Posted on July 4, 2024","modified":"Updated on July 4, 2024"},"absolute_dates_time":{"created":"Posted on July 4, 2024 10:33 am","modified":"Updated on July 4, 2024 10:33 
am"},"featured_img_caption":"","series_order":"","_links":{"self":[{"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/posts\/482284","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/users\/2589"}],"replies":[{"embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/comments?post=482284"}],"version-history":[{"count":0,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/posts\/482284\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/media\/494923"}],"wp:attachment":[{"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/media?parent=482284"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/categories?post=482284"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/tags?post=482284"},{"taxonomy":"location","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/location?post=482284"},{"taxonomy":"article_type","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/article_type?post=482284"},{"taxonomy":"serie","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/serie?post=482284"},{"taxonomy":"archives","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/archives?post=482284"},{"taxonomy":"internal_archives","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/internal_archives?post=482284"},{"taxonomy":"reboot-archive","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/reboot-archive?post=482284"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}