{"id":117881,"date":"2017-11-20T06:30:08","date_gmt":"2017-11-20T05:30:08","guid":{"rendered":"https:\/\/e52.nl\/?p=117881"},"modified":"2017-11-20T06:30:08","modified_gmt":"2017-11-20T05:30:08","slug":"algorithms-neither-neutral-value-free","status":"publish","type":"post","link":"https:\/\/ioplus.nl\/archive\/en\/algorithms-neither-neutral-value-free\/","title":{"rendered":"\u201cAlgorithms are neither neutral nor value-free\u201d"},"content":{"rendered":"<p><a href=\"https:\/\/www.tue.nl\/universiteit\/faculteiten\/industrial-engineering-innovation-sciences\/de-faculteit\/medewerkers\/details\/ep\/e\/d\/ep-uid\/20170386\/\">Dr. Katleen Gabriels<\/a> is Assistant Professor at Eindhoven University of Technology. She specializes in computer ethics. In 2016, her book \u201cOnlife\u201d was published (in Dutch), in which she analyzes the potentials and pitfalls of the Internet of Things, digitization, and big data. Katleen is an elected steering committee member of Ethicomp, the international organization for ethical computing. <a href=\"https:\/\/jaxenter.com\/machine-learning-interview-gabriels-138569.html\">Jaxenter interviewed her<\/a> because of the keynote she will give at the Machine Learning Conference next month in Berlin.<\/p>\n<p><em><strong>By <a href=\"https:\/\/jaxenter.com\/machine-learning-interview-gabriels-138569.html#authors-block\">Melanie Feldman<\/a>, Jaxenter<\/strong><\/em><\/p>\n<p>Our first\u00a0<a href=\"https:\/\/mlconference.ai\/\">ML Conference<\/a>\u00a0will debut in December in Berlin. Until then, we\u2019d like to give you a taste of what\u2019s to come. We talked with Dr. 
Katleen Gabriels, Assistant Professor at Eindhoven University of Technology, about how algorithms influence our daily lives and why ethics are essential to the development of machine learning.<\/p>\n<h4>JAXenter: In your\u00a0<a href=\"https:\/\/mlconference.ai\/machine-learning-principles\/the-potentials-and-pitfalls-of-ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">ML Conference\u00a0keynote<\/a>, you will talk about the influence of algorithms on our daily life. What is your personal opinion on this topic \u2013 do you think the influence of algorithms is under- or overrated?<\/h4>\n<p>&nbsp;<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/archive.ioplus.nl\/wp-content\/uploads\/2017\/11\/Screen-Shot-2017-11-18-at-14.05.54.png\" width=\"1134\" height=\"1138\" class=\"aligncenter\" \/><\/p>\n<p><strong>Katleen Gabriels:<\/strong>\u00a0This influence is definitely underrated. We already live in the era of the Internet of Things (IoT), where algorithms increasingly make decisions for and about us on a daily basis. Algorithms already decide on our love lives on dating apps and dating websites, on our potential jobs (as companies can use them to scan our resumes), and play a role\u00a0<a href=\"https:\/\/www.wired.com\/2017\/04\/courts-using-ai-sentence-criminals-must-stop-now\/\" target=\"_blank\" rel=\"noopener noreferrer\">even in court cases<\/a>.<\/p>\n<p>Or consider, for instance, \u2018recommender engines\u2019 such as Google\u2019s search engine: numerous people worldwide inform themselves daily about the world on a platform where algorithms decide which information you will or will not see. And the company keeps the algorithms themselves secret. Unfortunately, too many people still think that the ranking of the results is based on \u2018reliability\u2019. 
We should increase awareness about this, not only about algorithms but also about \u2018search engine optimization\u2019, especially in an IoT-era with persuasive and predictive technologies that can easily violate our autonomy in undesirable ways.<\/p>\n<h4>JAXenter: You say algorithms can never be completely neutral because their creators (developers) are never neutral. What advice can you give developers to save them from falling into the trap of unwanted influence?<\/h4>\n<p><strong>Katleen Gabriels:\u00a0<\/strong>To realize that these algorithms are neither neutral nor value-free is an essential starting point. At Eindhoven University of Technology, where I work, all students (future engineers) have to take courses on ethics, such as engineering ethics. The non-neutrality of technology is an important part of these courses. The way you as an engineer design a technology influences how users can make use of it: this is just a simple example to illustrate that this is not a neutral process. With regard to algorithms, there is a plethora of examples that show how human biases slip into them, such as racist profiling in \u2018precrime methodology\u2019. Here as well it is important to increase awareness.<\/p>\n<h4><a href=\"https:\/\/jaxenter.com\/machine-learning-productivity-survey-138301.html\" target=\"_blank\" rel=\"noopener noreferrer\">SEE MORE:\u00a0Machine Learning \u2014 the new poster child for boosted productivity<\/a><\/h4>\n<h4>JAXenter:\u00a0An example of ML gone \u2018wrong\u2019 is chatbot Tay, which quickly\u00a0started saying inappropriate things on the internet. Do you think that artificial intelligence needs a\u00a0sort of moral guideline in the first place?<\/h4>\n<p><strong>Katleen Gabriels:\u00a0<\/strong>Definitely! And Microsoft should have considered this before \u2018releasing\u2019 Tay on Twitter. 
There are some positive developments: Google, for instance, has\u00a0<a href=\"https:\/\/www.theguardian.com\/technology\/2017\/oct\/04\/google-deepmind-ai-artificial-intelligence-ethics-group-problems\" target=\"_blank\" rel=\"noopener noreferrer\">an ethics board on AI<\/a>.<\/p>\n<p>However, this moral guideline, or code of ethics, should be part of an extensive public debate, and not one confined to the company or to academic and expert circles: we as a society have to reflect together on desirable and undesirable developments.<\/p>\n<h4>JAXenter: Where do you see the biggest potential for the positive use of artificial intelligence?<\/h4>\n<p><strong>Katleen Gabriels:\u00a0<\/strong>I welcome technological development and innovation, but this progress should go hand in hand with ethical progress (or at least not decay) and this does not happen automatically: we really have to work hard to attain it. AI can assist humans in so many positive ways that it is difficult to pick just one example. To give one, albeit a general one: AI offers great potential for healthcare, for instance\u00a0<a href=\"https:\/\/news.stanford.edu\/2017\/01\/25\/artificial-intelligence-used-identify-skin-cancer\/\" target=\"_blank\" rel=\"noopener noreferrer\">in the analysis of complex data<\/a>.<\/p>\n<p><strong>Dr. Katleen Gabriels<\/strong>\u00a0will deliver a talk at\u00a0<a href=\"https:\/\/mlconference.ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">ML Conference<\/a>, which will focus on why\u00a0algorithms and datasets are not neutral, as well as how we can anticipate and reduce undesirable consequences and pitfalls.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Dr. Katleen Gabriels is Assistant Professor at Eindhoven University of Technology. She specializes in computer ethics. In 2016, her book \u201cOnlife\u201d was published (in Dutch), in which she analyzes the potentials and pitfalls of the Internet of Things, digitization, and big data. 
Katleen is an elected steering committee member of Ethicomp, the international organization [&hellip;]<\/p>\n","protected":false},"author":1572,"featured_media":521088,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"advgb_blocks_editor_width":"","advgb_blocks_columns_visual_guide":"","footnotes":""},"categories":[42],"tags":[21660,9402,484,21662,21664,2519],"location":[],"article_type":[],"serie":[],"archives":[],"internal_archives":[],"reboot-archive":[],"class_list":["post-117881","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-sustainability-nl","tag-berlin","tag-data-studio","tag-innovation","tag-jaxenter","tag-kathleen-gabriels","tag-tue-en"],"blocksy_meta":[],"acf":{"subtitle":"","text_display_homepage":false},"author_meta":{"display_name":"Gastauteur","author_link":"https:\/\/ioplus.nl\/archive\/author\/gastauteur\/"},"featured_img":"https:\/\/ioplus.nl\/archive\/wp-content\/uploads\/2017\/11\/machine-learning-300x122.jpg","coauthors":[],"tax_additional":{"categories":{"linked":["<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/sustainability-nl\/\" class=\"advgb-post-tax-term\">Sustainability<\/a>"],"unlinked":["<span class=\"advgb-post-tax-term\">Sustainability<\/span>"]},"tags":{"linked":["<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/sustainability-nl\/\" class=\"advgb-post-tax-term\">Berlin<\/a>","<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/sustainability-nl\/\" class=\"advgb-post-tax-term\">data studio<\/a>","<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/sustainability-nl\/\" class=\"advgb-post-tax-term\">innovation<\/a>","<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/sustainability-nl\/\" class=\"advgb-post-tax-term\">JAXenter<\/a>","<a href=\"https:\/\/ioplus.nl\/archive\/en\/category\/sustainability-nl\/\" class=\"advgb-post-tax-term\">Kathleen Gabriels<\/a>","<a 
href=\"https:\/\/ioplus.nl\/archive\/en\/category\/sustainability-nl\/\" class=\"advgb-post-tax-term\">TU\/e<\/a>"],"unlinked":["<span class=\"advgb-post-tax-term\">Berlin<\/span>","<span class=\"advgb-post-tax-term\">data studio<\/span>","<span class=\"advgb-post-tax-term\">innovation<\/span>","<span class=\"advgb-post-tax-term\">JAXenter<\/span>","<span class=\"advgb-post-tax-term\">Kathleen Gabriels<\/span>","<span class=\"advgb-post-tax-term\">TU\/e<\/span>"]}},"comment_count":"0","relative_dates":{"created":"Posted 8 years ago","modified":"Updated 8 years ago"},"absolute_dates":{"created":"Posted on November 20, 2017","modified":"Updated on November 20, 2017"},"absolute_dates_time":{"created":"Posted on November 20, 2017 6:30 am","modified":"Updated on November 20, 2017 6:30 am"},"featured_img_caption":"","series_order":"","_links":{"self":[{"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/posts\/117881","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/users\/1572"}],"replies":[{"embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/comments?post=117881"}],"version-history":[{"count":0,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/posts\/117881\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/media\/521088"}],"wp:attachment":[{"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/media?parent=117881"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/categories?post=117881"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/tags?post=117881"},{"taxonomy":"location","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/locatio
n?post=117881"},{"taxonomy":"article_type","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/article_type?post=117881"},{"taxonomy":"serie","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/serie?post=117881"},{"taxonomy":"archives","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/archives?post=117881"},{"taxonomy":"internal_archives","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/internal_archives?post=117881"},{"taxonomy":"reboot-archive","embeddable":true,"href":"https:\/\/ioplus.nl\/archive\/wp-json\/wp\/v2\/reboot-archive?post=117881"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}