BIOUNCERTAINTY - ERC Starting Grant no. 805498


9 June 2022 - Cristian Timmermann - Levels of Explicability for medical Artificial Intelligence: What do we need and what can we get?

We are pleased to invite you to another research seminar in the ‘BIOUNCERTAINTY’ research project. This week Cristian Timmermann will give a talk entitled "Levels of Explicability for medical Artificial Intelligence: What do we need and what can we get?". The seminar will take place on Thursday, 9 June, at 5:30 p.m. in room 25 of the Institute of Philosophy of the Jagiellonian University and via MS Teams.

The talk is based on joint work by Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch, and Cristian Timmermann.

Abstract:

Definition of the problem: The umbrella term “explicability” refers to efforts to reduce the opacity of artificial intelligence (AI) systems. These efforts present a challenge for diagnostic AI applications because higher accuracy often comes at the cost of increased opacity. This creates an ethical tension: doctors and patients want to trace how results are produced, but without risking reductions in the performance of AI systems. The centrality of explicability within the informed consent process for diagnostic AI systems invites ethical reflection on these trade-offs. What levels of explicability are needed to properly obtain informed consent when utilizing diagnostic AI systems?

Arguments: We proceed in four steps. First, we map the terms commonly associated with explicability in the literature, i.e. explainability, interpretability, understandability, comprehensibility, demonstrability, and transparency. Second, we conduct a conceptual analysis of the ethical requirements for explicability in the context of informed consent. Third, we relate the five elements of informed consent, i.e. information disclosure, understanding, competence, voluntariness, and acknowledging the decision, to the levels of explicability identified in the first step. This allows us to conclude which levels of explicability physicians must reach and what patients can expect. In a final step, we discuss whether and how the identified levels of explicability can technically be met from the perspective of computer science. Throughout our work, we take diagnostic systems in radiology as an example.

Conclusion: We identify four levels of explicability that need to be distinguished for ethically defensible informed consent processes and show how developers of medical AI can technically meet these requirements.


Link to the MS Teams meeting