Humans and Machines
Suppose that a software program presents a serious diagnosis, like cancer, without providing any rationale for the decision. Would humans trust a machine’s judgement in such a case? “Machine learning can support medical diagnoses. But if the decisions made by an Artificial Intelligence (AI) system are not comprehensible to doctors and patients, they have to be taken with a grain of salt and might even have to be ignored in sensitive fields like medical diagnostics,” says Dr. Ute Schmid, Professor of Cognitive Systems at the University of Bamberg.
Since September 2018, Schmid and her research assistant Bettina Finzel have been involved in an interdisciplinary, multi-institutional project that aims to make automated diagnoses more transparent by means of concrete example cases.
Artificial Intelligence Diagnoses Illnesses
The so-called “Transparent Medical Expert Companion” comprises two prototypes: one model uses video material to recognize pain in patients unable to communicate their discomfort themselves—patients who may be sedated in intensive care, or who suffer from neurocognitive conditions like dementia. The model explains its classifications and is meant to help doctors find the most suitable and effective pain treatment. A second prototype, currently in development, assists in digital pathology by producing verifiable colon cancer diagnoses from microscopy imaging data.
Bettina Finzel, a research assistant at the University of Bamberg’s Faculty of Information Systems and Applied Computer Science, is also involved; the 29-year-old has been working on the project since she was a master’s student. “It’s fascinating to be part of a project that facilitates cooperative collaboration between doctors and a computerised information system,” says Finzel.
The doctoral candidate’s work is supported by her mentor and project director Professor Ute Schmid and by the Bamberg Graduate School of Affective and Cognitive Sciences, of which she is a member. The Graduate School offers regular courses, doctoral seminars, and workshops with internationally renowned researchers.
“Our work is very interdisciplinary. This means that I’m not only developing my skills in the field of computer science, but I’m also gaining insights into medicine, psychology, and cognitive science.”
– Bettina Finzel, doctoral candidate, Cognitive Systems research group
Faculty of Information Systems and Applied Computer Science
Professorships: 16 (soon to be increased to 35, including 7 professorships in AI)
Research Assistants: 69
Women’s Advancement: In 2018, the University of Bamberg became the first university in the German-speaking world to receive the “Minerva Informatics Equality Award” for its exemplary support of women’s careers in computer science.
Focus Research Topics
- Computer sciences: media informatics, cultural informatics, data science, human-computer interaction, and artificial intelligence
- Information systems: value-oriented design, development, and management of operational information systems
- Classic computer sciences: distributed systems, security, and software engineering
Teaching Opportunities in English: M.Sc. International Software Systems Science, M.Sc. International Information Systems Management
Interdisciplinary Cooperation for Comprehensive Solutions
Various research groups are involved in the development of the “Transparent Medical Expert Companion.” The Fraunhofer Institute for Integrated Circuits IIS in Erlangen is investigating deep learning approaches to classify tissue types in microscope images and to recognize facial expressions. The Fraunhofer Heinrich Hertz Institute HHI in Berlin is applying its Layer-wise Relevance Propagation (LRP) approach to make the classification decisions of these black-box learners transparent: LRP generates visual explanations by highlighting the pixels the deep neural network identified as most important when deriving its classification. Where individual cases require it, the project also draws on the expertise of Professor Arndt Hartmann of the University of Erlangen’s Pathological Institute and of Dr. Stefan Lautenbacher, Professor of Physiological Psychology at the University of Bamberg.
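As a rough illustration of the idea behind LRP (a sketch, not the project’s implementation), the relevance assigned to a network’s output can be redistributed backwards through each layer in proportion to every input’s contribution. The example below applies the common z+ rule to a single dense layer with invented numbers; all names and shapes are assumptions for illustration:

```python
import numpy as np

def lrp_zplus(activations, weights, relevance_out, eps=1e-9):
    """Redistribute one layer's output relevance back to its inputs (z+ rule).

    activations:   (n_in,)        layer input activations
    weights:       (n_in, n_out)  layer weight matrix
    relevance_out: (n_out,)       relevance arriving from the layer above
    """
    w_pos = np.maximum(weights, 0.0)   # z+ rule: keep only positive weights
    z = activations @ w_pos + eps      # total contribution per output neuron
    s = relevance_out / z              # relevance per unit of contribution
    return activations * (w_pos @ s)   # each input gets its share back

# Toy example: 3 input "pixels", 2 output neurons.
a = np.array([1.0, 2.0, 0.5])
W = np.array([[0.5, -1.0],
              [1.0,  0.3],
              [-0.2, 0.8]])
R_out = np.array([1.0, 0.0])  # all relevance sits on the predicted class

R_in = lrp_zplus(a, W, R_out)
print(R_in, R_in.sum())
```

A key property visible even in this toy case is conservation: the input relevances sum (up to the stabilizer `eps`) to the relevance that arrived from above, so a heatmap over `R_in` accounts for the whole decision.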
“This research project calls for knowledge from various fields,” explains Privatdozent and project coordinator Dr. Thomas Wittenberg of the Fraunhofer IIS. “Thanks to the interdisciplinary cooperation, it’s possible for us to develop companions for different medical experts that meet important criteria like transparency and explicability while providing sound diagnostic results.”
This is exactly what Bettina Finzel finds so appealing: “Our work is very interdisciplinary. This means that I’m not only developing my skills in the field of computer science, but I’m also gaining insights into medicine, psychology, and cognitive science.” Additionally, she enjoys the congenial atmosphere among the doctoral candidates and project partners who meet together regularly.
“This field is full of possibilities! I can imagine conducting future research on machine learning as it applies to other medical conditions.”
– Bettina Finzel, doctoral candidate, Cognitive Systems research group
The Bamberg team’s principal task is to program the components that explain the deep neural network’s decisions. In particular, the researchers apply the interpretable machine learning approach ILP (inductive logic programming) to generate verbal explanations that enrich the visual highlights produced by LRP. ILP identifies complex regularities in the data and generalizes over them; the learned white-box model can then be transformed into a verbal explanation. These combined visual-verbal explanations help developers evaluate the learned models (e.g., to recognize unwanted overfitting) and allow detailed, domain-specific explanations that can be communicated to medical experts.
The researchers’ goal is to develop an intelligent decision support system that not only reports a diagnosis—that a person is experiencing pain, or that a tissue sample shows a specific tumor class—but also provides the reasoning behind this assessment. A verbal explanation for pain might be, for instance: the patient’s eyebrows are lowered, the cheeks are raised, and the eyelids are pressed together. The explanation is linked to an image in which the relevant parts of the face are marked with coloration and arrows. A further feature of such a transparent, comprehensible system would be an estimate of how certain the system is of its diagnosis.
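The step from a learned white-box rule to such a verbal explanation can be sketched as follows. The rule body, feature names, and phrasing below are illustrative assumptions modeled on the pain example above, not the project’s actual model:

```python
# A learned rule is a conjunction of conditions; verbalizing it means
# rendering each satisfied condition as a phrase. (Illustrative sketch.)

RULE_PAIN = ["eyebrows_lowered", "cheeks_raised", "eyelids_pressed_together"]

PHRASES = {
    "eyebrows_lowered": "the patient's eyebrows are lowered",
    "cheeks_raised": "the cheeks are raised",
    "eyelids_pressed_together": "the eyelids are pressed together",
}

def classify(observed_features):
    """A case matches the rule if every condition in its body holds."""
    return all(f in observed_features for f in RULE_PAIN)

def explain(observed_features):
    """Render the satisfied rule body as a verbal explanation."""
    if not classify(observed_features):
        return "No pain detected: the rule's conditions are not all satisfied."
    return "Pain detected because " + ", ".join(PHRASES[f] for f in RULE_PAIN) + "."

case = {"eyebrows_lowered", "cheeks_raised", "eyelids_pressed_together"}
print(explain(case))
```

Because the rule is symbolic, the same conditions that drive the classification double as its explanation; nothing has to be approximated after the fact.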
Besides supporting medical experts, the AI system could also be used as an intelligent tutor for young doctors. To this end, the system’s explanatory component is expanded to include additional explanatory modalities: prototypes and near miss examples. A prototypical instance of a class—for example, a typical image of a certain type of tumor—can support novices in learning. A near miss example for a class can help them recognize the essential differences between instances that look similar but belong to different classes.
Bamberg Graduate School of Affective and Cognitive Sciences (BaGrACS)
Founded: 2014
Doctoral Candidates: 19, of whom 10 are male and 9 are female
Focal Research Topics
- Psychology, computer science
Target Level
- Graduates in psychology, with an emphasis on affective and cognitive sciences
- Doctoral candidates in neighboring subjects (applied computer science)
Curriculum
- BaGrACS is specifically focused on an introduction to affective and cognitive sciences, affective sciences, cognitive sciences, and perceptual sciences
- Research internships, workshops, language-centre supported courses, seminars, symposia
Doctors Tailor the System to Meet Their Needs
“The attending doctors decide whether or not they agree with the assessment,” says Finzel. “They can influence the algorithms by making amendments and corrections in the system. In this way, the software continues to learn and incorporate the experts’ invaluable knowledge.” Ultimately, responsibility lies with the person who is being assisted – not replaced – by the transparent companion.
Furthermore, transparent companions can be used to help train doctors in the future. The German Federal Ministry of Education and Research has funded the project through August 2021 with a total of €1.3 million, of which approximately €290,000 has been allocated to the University of Bamberg. Finzel even has plans to continue her engagement with doctor-AI cooperation after the project’s conclusion: “This field is full of possibilities! I can imagine conducting future research on machine learning as it applies to other medical conditions.”
Members of all four faculties at the University of Bamberg cooperate in the Digital Humanities, Social and Human Sciences research focus area towards the goal of developing innovative information technologies and testing digital solutions for research in participating disciplines.