{"id":464416,"date":"2023-11-25T10:00:00","date_gmt":"2023-11-25T09:00:00","guid":{"rendered":"https:\/\/innovationorigins.com\/?p=464416"},"modified":"2023-11-25T10:00:00","modified_gmt":"2023-11-25T09:00:00","slug":"in-defense-of-black-box-ai-for-healthcare","status":"publish","type":"post","link":"https:\/\/ioplus.nl\/archive\/en\/in-defense-of-black-box-ai-for-healthcare\/","title":{"rendered":"<strong>In defense of black box AI\u2026 for healthcare<\/strong>"},"content":{"rendered":"\n<p>I have scrolled through many LinkedIn posts, read articles and attended a couple of panel discussions about the need for explainability and transparency when working with AI models to ensure their safety and trustworthiness. The generally accepted view is that we do need to be able to explain AI outputs and somehow trace a model\u2019s inner workings. The required level of explainability can vary depending on the stakeholder, but there should be a baseline.<\/p>\n\n\n\n<p>For this reason, I was very intrigued when I stumbled upon <a href=\"https:\/\/podcasters.spotify.com\/pod\/show\/ethical-machines\/episodes\/Ep--10---In-Defense-of-Black-Box-AI-e25nkga\">a podcast episode of Ethical Machines<\/a> by Reid Blackman titled \u201cIn Defense of Black Box AI\u201d. Black box AI, as the name suggests, refers to AI systems that generate outputs or predictions without disclosing the underlying mechanisms behind those outcomes. The guest of this episode was Kristof Horompoly, Head of Responsible AI at JPMorgan, one of the largest banks in the world. The perspective discussed was the opposite of what I had seen until then: <strong>What if we cared more about maximising AI performance, without compromising on complexity for the sake of explainability?<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Explainability vs. performance<\/strong><\/h2>\n\n\n\n<p>From a technical perspective, explainability can come at the cost of performance for certain AI models. 
Beyond performance, explainability is time-consuming, increases time to market, can be expensive, and has an environmental impact. Therefore, optimising for performance might outweigh the need for explainability in certain contexts. Instead, we could accept the technology by mapping its input and output spaces, checking for accuracy and fairness without understanding the model\u2019s inner workings. The episode offers an analogy: we all drive cars without necessarily knowing how their mechanics work. I highly recommend checking out the entire episode, as it also dives into the technicalities in simple terms.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Black box AI for medical use<\/strong><\/h2>\n\n\n\n<p>I was surprised by these insights, which got me thinking about black box AI for medical use. Both speakers acknowledged that explainability is imperative in regulated sectors such as healthcare or finance, to spot biases and mistakes and rectify them. For instance, if an AI-driven conclusion on diagnostic imaging disproportionately impacts a specific ethnicity, explainability becomes pivotal in identifying and addressing such biases. Nonetheless, introducing explainability into the AI workflow does add an extra step, raising concerns about the practicality and efficiency of AI integration. If we have to scrutinise every AI result for biases, it may seem counterintuitive to leverage AI in the first place.<\/p>\n\n\n\n<p>So, playing the devil\u2019s advocate, what would make it possible to eliminate the need for explainability and focus entirely on the performance of AI in healthcare? How could we make sure that the input and output mapping does not introduce unfairness? Thinking of the example of imaging diagnostics, AI has shown promise because of its pattern recognition abilities, spotting details and subtle differences that might be challenging for humans to notice. 
But are we sure the models are being trained on data relevant to the population the device will be used for? Ethicists can and should be involved in the data collection and processing phases, advocating for ethical data practices and addressing issues related to dataset bias. However, for now, it is up to each manufacturer to make these ethical considerations and implement them, as there are currently no dedicated regulations on AI.<\/p>\n\n\n<div class=\"vlp-link-container vlp-layout-basic wp-block-visual-link-preview-link advgb-dyn-8eecc8c0\"><a href=\"https:\/\/ioplus.nl\/archive\/en\/meet-prime-minister-ai-the-digital-leader\/\" class=\"vlp-link\" title=\"Meet Prime Minister AI, the digital leader\"><\/a><div class=\"vlp-layout-zone-side\"><div class=\"vlp-block-2 vlp-link-image\"><\/div><\/div><div class=\"vlp-layout-zone-main\"><div class=\"vlp-block-0 vlp-link-title\">Meet Prime Minister AI, the digital leader<\/div><div class=\"vlp-block-1 vlp-link-summary\">Our cognitive abilities: we&#8217;re quite proud of them. It sets us apart from the rest of the animal kingdom, as I often hear in documentaries.<\/div><\/div><\/div>\n\n\n<h2 class=\"wp-block-heading\"><strong>From overarching guidelines and principles to concrete rules<\/strong><\/h2>\n\n\n\n<p>Looking at the medical device industry specifically, the EU Medical Device Regulation (EU MDR) represents the current regulatory framework, which has significantly transformed the European market since its introduction in 2017. This regulation has prompted all companies to prioritise compliance swiftly. It mandates enhanced monitoring and continuous data collection to demonstrate the safety and efficacy of devices. However, it does not prescribe specific details on what data to collect or how to collect it. 
Instead, it establishes overarching guidelines and principles that manufacturers must follow in gathering and handling medical device-related data.<\/p>\n\n\n\n<p>What if subject matter experts could define thresholds and standards for the amount of data needed, which data to collect, and how to label it, and introduce these into the regulation for medical devices? Could the introduction of a black box AI model in healthcare become possible if we had more control over avoiding unfair and biased outputs? The regulation may have to gradually branch into specific applications, for example the previously mentioned imaging diagnostics.<\/p>\n\n\n\n<p>A group of experts could iterate and develop a concrete set of requirements for the data collection phase that would allow the development of a model that is safer from an ethical perspective from the get-go. In this way, we would not have to fall back on explainability methods and less performant algorithms. The effort of continuously monitoring the outputs would then follow, but that is less costly, faster, and already done in compliance with the EU MDR.<\/p>\n\n\n\n<p>With this opinion piece, I want to challenge the perspective of concentrating most efforts for ethical AI applications at the implementation phase, in favour of taking the time to build safe AI models starting from the data collection and handling phase.<\/p>\n\n\n\n<p>This could mean waiting some years to roll out a black box AI system, after regulating the data collection with concrete rules from ethical and technical perspectives and collecting the data accordingly. However, a demonstrated close collaboration with ethical teams in AI development could enhance acceptance of AI within the healthcare sector.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I have scrolled through many LinkedIn posts, read articles and attended a couple of panel discussions about the need for explainability and transparency when working with AI models to ensure their safety and trustworthiness. 
The general idea and overall accepted view is that we do need to be able to explain the AI outputs and [&hellip;]<\/p>\n","protected":false},"author":2594,"featured_media":493413,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"advgb_blocks_editor_width":"","advgb_blocks_columns_visual_guide":"","footnotes":""},"categories":[7809],"tags":[69735,10373,82339,102,7807,82342],"location":[66582],"article_type":[55242],"serie":[],"archives":[],"internal_archives":[],"reboot-archive":[],"class_list":["post-464416","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-opinion","tag-ai-nl-3","tag-ai","tag-blackbox","tag-column","tag-column-en","tag-victoria-bruno-3","location-europe","article_type-column"],"blocksy_meta":[],"acf":{"subtitle":"Victoria Bruno takes on a critical counterpoint to AI for Innovation Origins. This week's column is about an AI black box for healthcare. ","text_display_homepage":false},"author_meta":{"display_name":"Victoria Bruno","author_link":"https:\/\/ioplus.nl\/archive\/author\/victoria-bruno\/"},"featured_img":"https:\/\/ioplus.nl\/archive\/wp-content\/uploads\/2023\/11\/cameronbrockbank_professional_corperate_illustration_of_a_black_d99148a3-a23c-47a7-96f6-a3f929e8567a.png","coauthors":[],"tax_additional":{"categories":{"linked":["<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/opinion\/\" class=\"advgb-post-tax-term\">Opinion<\/a>"],"unlinked":["<span class=\"advgb-post-tax-term\">Opinion<\/span>"]},"tags":{"linked":["<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/opinion\/\" class=\"advgb-post-tax-term\">AI<\/a>","<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/opinion\/\" class=\"advgb-post-tax-term\">AI<\/a>","<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/opinion\/\" class=\"advgb-post-tax-term\">blackbox<\/a>","<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/opinion\/\" 
class=\"advgb-post-tax-term\">Column<\/a>","<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/opinion\/\" class=\"advgb-post-tax-term\">Column<\/a>","<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/opinion\/\" class=\"advgb-post-tax-term\">Victoria Bruno<\/a>"],"unlinked":["<span class=\"advgb-post-tax-term\">AI<\/span>","<span class=\"advgb-post-tax-term\">AI<\/span>","<span class=\"advgb-post-tax-term\">blackbox<\/span>","<span class=\"advgb-post-tax-term\">Column<\/span>","<span class=\"advgb-post-tax-term\">Column<\/span>","<span class=\"advgb-post-tax-term\">Victoria Bruno<\/span>"]}},"comment_count":"0","relative_dates":{"created":"Posted 2 years ago","modified":"Updated 2 years ago"},"absolute_dates":{"created":"Posted on November 25, 2023","modified":"Updated on November 25, 2023"},"absolute_dates_time":{"created":"Posted on November 25, 2023 10:00 am","modified":"Updated on November 25, 2023 10:00 am"},"featured_img_caption":"","series_order":"","_links":{"self":[{"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/posts\/464416","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/users\/2594"}],"replies":[{"embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/comments?post=464416"}],"version-history":[{"count":0,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/posts\/464416\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/media\/493413"}],"wp:attachment":[{"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/media?parent=464416"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/categories?post=464416"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/
wp-json\/wp\/v2\/tags?post=464416"},{"taxonomy":"location","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/location?post=464416"},{"taxonomy":"article_type","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/article_type?post=464416"},{"taxonomy":"serie","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/serie?post=464416"},{"taxonomy":"archives","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/archives?post=464416"},{"taxonomy":"internal_archives","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/internal_archives?post=464416"},{"taxonomy":"reboot-archive","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/reboot-archive?post=464416"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}