|Organization||Graduate School and Research, Yrkeshögskolan Arcada, Helsinki, Finland; Graduate School of Data Science, Seoul National University|
|BA||Computer Science, Politecnico di Milano, Italy|
|MS||Computer Science, Politecnico di Milano, Italy|
|PhD||Computer Science, Politecnico di Milano, Italy|
|1991-2021||Computer Science Dept., Goethe University Frankfurt, Germany|
|1987-1990||Politecnico di Milano, Italy|
|1986||IBM Almaden Research Center, USA|
|1985-1986||Computer Science Dept., UC Berkeley, USA|
|How to Assess Trustworthy AI in Practice|
Applications based on machine learning and/or deep learning carry specific (mostly unintentional) risks that are addressed within AI ethics. As a consequence, the quest for trustworthy AI has become a central issue for governance and technology impact assessment, and these efforts have intensified over the last four years, with a focus on identifying both ethical and legal principles.
As AI capabilities have grown rapidly, it has become increasingly difficult to determine whether model outputs or system behaviors protect the rights and interests of an ever-wider group of stakeholders, let alone to evaluate whether they are ethical or legal, or whether they meet the goals of improving human welfare and freedom.
For example, what if decisions made using an AI-driven algorithm benefit some socially salient groups more than others? And what if we fail to identify and prevent these inequalities because we cannot explain how decisions were derived?
Moreover, we need to consider how the adoption of these new algorithms, together with the lack of knowledge and control over their inner workings, may affect those in charge of making decisions.
In this talk I will share our experiences in assessing Trustworthy AI systems in practice by using the Z-Inspection® process.
The Z-Inspection® process is the result of 4.5 years of applied research by the Z-Inspection® initiative, a network of leading international experts led by Prof. Roberto V. Zicari.
Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It applies the European Union High-Level Expert Group's (EU HLEG) guidelines for trustworthy AI.