Artificial intelligence algorithms are increasingly permeating the world of work and daily life, often without us noticing. Machine learning methods, especially neural networks, sort people and objects into categories and can, in doing so, reinforce prejudices and stereotypes. Is it possible to make AI algorithms fair and non-discriminatory? And what does so-called explainable AI offer, which aims to make the decision-making process hidden inside an algorithm transparent and comprehensible?
On March 12, 2021, Prof. Dr. Frieder Stolzenburg (Harz University of Applied Sciences) and Francesca Schmidt (Gunda Werner Institute for Feminism and Gender Democracy) spoke in an interactive live stream, followed by an audience discussion, on the topic “Explainable AI: A possible solution to the problem of discrimination?”
The lecture series “We need to talk about AI” was initiated by KI & Wir*, the Convention on Artificial Intelligence and Social Justice, in cooperation with the universities of the state of Saxony-Anhalt. The initiative is funded by the Ministry of Economics, Science and Digitization of the State of Saxony-Anhalt.