BIOUNCERTAINTY - ERC Starting Grant no. 805498

9 June 2022 - Cristian Timmermann - Levels of Explicability for medical Artificial Intelligence: What do we need and what can we get?

The Interdisciplinary Centre for Ethics at the Jagiellonian University (INCET) invites you to the next research seminar in the BIOUNCERTAINTY project, at which Cristian Timmermann will give a talk. The meeting will take place on Thursday, 9 June, at 5:30 PM in room 25 of the Institute of Philosophy of the Jagiellonian University (Grodzka 52) and via the MS Teams platform.

The talk is based on collaborative work. Authors: Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch, Cristian Timmermann

Abstract:

Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts present a challenge for diagnostic AI applications because higher accuracy often comes at the cost of increased opacity. This creates ethical tensions: doctors and patients want to trace how results are produced, but without risking reductions in the performance of AI systems. The centrality of explicability within the informed consent process for diagnostic AI systems invites ethical reflection on these trade-offs. What levels of explicability are needed to properly obtain informed consent when utilizing diagnostic AI systems?

Arguments: We proceed in four steps. First, we map the terms commonly associated with explicability in the literature, i.e. explainability, interpretability, understandability, comprehensibility, demonstrability, and transparency. Second, we conduct a conceptual analysis of the ethical requirements for explicability when it comes to informed consent. Third, we relate the five elements of informed consent, i.e. information disclosure, understanding, competence, voluntariness, and acknowledgement of the decision, to the levels of explicability identified in the first step. This allows us to conclude which level of explicability physicians must reach and what patients can expect. In a final step, we discuss whether and how the identified levels of explicability can technically be met from the perspective of computer science. Throughout our work, we take diagnostic systems in radiology as an example.

Conclusion: We determined four levels of explicability that need to be distinguished for ethically defensible informed consent processes and showed how developers of medical AI can technically meet these requirements.

Link to the meeting in MS Teams