08 March 2022 | Dipl.-Journ. Constantin Schulte Strathaus, Press and Public Relations, KU Eichstätt-Ingolstadt

The use of artificial intelligence in medicine offers new ways of making more precise diagnoses and relieving doctors of routine tasks. How well do doctors really have to understand this technology in order to develop the “right” measure of trust in such systems? And does the use of AI lead to ethically relevant changes in the doctor-patient relationship? A project headed by the THI Ingolstadt and the Catholic University Eichstätt-Ingolstadt (KU) will be working on answers to these and similar questions.

The cooperation partners in this project are Prof. Dr. Matthias Uhl, who holds the Professorship for Social Implications and Ethical Aspects of AI, and Prof. Dr.-Ing. Marc Aubreville, Professor of Image Understanding and Medical Application of AI, both at the THI, as well as Prof. Dr. Alexis Fritz, holder of the Chair of Moral Theology at the KU. The project “Responsibility Gaps in Human-Machine Interactions: The Ambivalence of Trust in AI” is being funded by the bidt, the Bavarian Research Institute for Digital Transformation.

Monotonous tasks are time-consuming and tiring for humans. Having experienced doctors assess dozens of mammograms can have the unwanted side effect that small but diagnostically relevant details are overlooked. Putting AI to good use in this field has the potential to relieve humans of this burden and free up their capacities for decision-making. “This is based on the assumption that the human experts must be able to trust the AI system. This trust, however, can lead to the doctor not critically reassessing the AI’s decision”, says Prof. Dr. Marc Aubreville. Even systems typically used in the medical field are not infallible. That is, after all, why in all procedures humans are meant to be the final authority in the decision-making chain.

But is that enough to establish a reliable degree of accountability in the interaction of human and machine? “The simplest approach, which aims at introducing a human into the process only when wrong decisions have to be rectified, is too naive”, says Prof. Dr. Alexis Fritz. Just as humans feel less accountable when they have reached a decision in cooperation with other humans, studies have shown that the same holds true when human decision-makers have been counseled by a recommendation system. Prof. Dr. Matthias Uhl sums up the findings of his own empirical studies as follows: “In different morally relevant decision-making contexts, we have seen that humans keep following AI recommendations even if we give them good reason to doubt those recommendations, for example because the system was trained on poor-quality data.”

“Seen from an ethical perspective, merely optimizing AI systems technically is too narrow an approach. That is why we want to compare the first scenario with a situation in which the first step is not a recommendation by AI. Instead, a human doctor has made a diagnosis, which is only then, in a second step, validated by artificial intelligence”, says Professor Fritz. He will explore the normative requirements that help decision-makers remain aware of their own agency and thereby shift less responsibility onto an artificial intelligence. To this end, existing studies on the ethical ramifications of the interaction between doctors and AI-based systems are being gathered and analyzed. The different concepts of responsibility and accountability in medical practice will be evaluated in workshops and in qualitative interviews with doctors and engineers. Among other things, the relationship between doctor and patient will also play a role: do doctors, for example, feel that consulting a recommendation system undermines their authority in front of patients?

In general terms, the participants in this project want to provide data for the development of user-aware AI solutions. “Today, a lot of solutions are designed without any consideration of the subsequent decision-making processes. Based on our findings, we will therefore be able to evaluate how best to present the results and ambiguities of algorithms to the expert in charge. Our aim is to find the right balance of trust in algorithmic recommendations, especially in situations outside the norm, in which an algorithm might not be able to provide the best advice”, says Fritz.
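To make this idea concrete, here is a minimal, purely illustrative sketch (not taken from the project) of how a decision-support system might present a recommendation together with its ambiguity, rather than as a bare verdict. All names, fields, and thresholds (Recommendation, present_to_doctor, the 0.75 cut-off) are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch: surfacing an AI recommendation together with its
# uncertainty, so the doctor in charge can calibrate trust case by case.
from dataclasses import dataclass


@dataclass
class Recommendation:
    label: str             # e.g. "suspicious lesion" or "no finding"
    confidence: float      # calibrated probability in [0, 1]
    in_distribution: bool  # False if the case looks unlike the training data


def present_to_doctor(rec: Recommendation) -> str:
    """Render the AI output so its ambiguity stays visible to the expert."""
    if not rec.in_distribution:
        # A situation "outside the norm": the algorithm may not give the
        # best advice, so the system withholds a recommendation entirely.
        return "Case atypical for this model: no recommendation; full human review advised."
    if rec.confidence < 0.75:  # illustrative threshold, not a clinical standard
        return f"Tentative: {rec.label} (confidence {rec.confidence:.0%}). Please reassess critically."
    return f"Suggestion: {rec.label} (confidence {rec.confidence:.0%}). The final decision remains with you."


print(present_to_doctor(Recommendation("suspicious lesion", 0.62, True)))
```

The design choice this sketch illustrates is the one the project description points at: the interface explicitly distinguishes confident, tentative, and out-of-the-norm cases instead of always producing a recommendation, leaving the human expert as the final authority.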
