AI-Driven Decision-Making in the Clinic. Ethical, Legal and Societal Challenges (vALID)

Short description

AI is on everyone’s lips. Applications of AI are becoming increasingly relevant in the field of clinical decision-making. While many of the conceivable use cases of clinical AI still lie in the future, others have already begun to shape practice. The project vALID provides a normative, legal, and technical analysis of how AI-driven Clinical Decision Support Systems could be aligned with the ideal of clinician and patient sovereignty. It examines how concepts of trustworthiness, transparency, agency, and responsibility are affected and shifted by clinical AI—both on a theoretical level and with regard to concrete moral and legal consequences. One of the hypotheses of vALID is that the basic concepts of our normative and bioethical instruments are shifting: depending on how a system is embedded in decision-making processes, the modes of interaction of the actors involved are transformed. vALID thus investigates the following questions: Who or what guides clinical decision-making if AI appears as a new (quasi-)agent on the scene? Who is responsible for errors? What would it mean to make AI black boxes transparent to clinicians and patients? How could the ideal of trustworthy AI be attained in the clinic?

The analysis is grounded in an empirical case study, which deploys mock-up simulations of AI-driven Clinical Decision Support Systems to systematically gather the attitudes of physicians and patients on a variety of designs and use cases. The aim is to develop an interdisciplinary governance perspective on the system design, regulation, and implementation of AI-based clinical systems that not only optimize decision-making processes, but also maintain and reinforce the sovereignty of physicians and patients.

Funder: Federal Ministry of Education and Research (BMBF)

Acronym: vALID – AI-Driven Decision-Making in the Clinic. Ethical, Legal and Societal Challenges

Project period: 2020–2023



Ethical subproject

The research profile of the Chair of Systematic Theology II (Ethics) at Friedrich-Alexander-Universität Erlangen-Nürnberg is characterized by a fundamental-theological perspective on ethics, with a particular focus on issues in bioethics and social ethics. In line with this emphasis, ethics is conceived of as an academic discipline that uncovers normative questions, analyses their presuppositions and implications, develops criteria for the assessment of situations of moral conflict, and informs policy making. In vALID, the team first analyses ethical challenges arising from artificial intelligence in general and in clinical contexts in particular. In a second step, the subproject critically reflects on gaps and aporias in current descriptive heuristics and in proposed strategies to navigate these challenges. Finally, in a third, constructive step and in close interaction with the other vALID subprojects, the team develops suggestions on how clinical AI-based decision support systems could be developed and deployed responsibly. One of the hypotheses is that, beyond their potential to improve clinical processes and decision-making, these applications can both advance and constrain the sovereignty of clinicians and patients.

Legal subproject

The vALID team at the University of Hanover (Chair of Criminal Law, Criminal Procedure Law, Comparative Criminal Law and Philosophy of Law, Prof. Dr. Susanne Beck) addresses legal issues pertaining to AI-supported decision-making in clinical settings.

Transparency and a clear allocation of responsibilities play a central role, especially in medical decisions. Only in this way can the doctors’ agency and the patients’ right to self-determination be guaranteed as far as possible. AI’s influence on decision-making is changing our current understanding of these concepts. The vALID team plans to first analyse the legal status quo regarding the use of AI in medical decision-making. After that, potential solutions will be identified and compared to those discussed in other legal cultures. In close exchange with the project partners, the empirical results will be legally evaluated, and the legal evaluation will then be linked to ethical considerations. Our focus will be on elaborating adequate practices of AI-assisted medical decision-making such that legal responsibility can still be attributed and the patient is involved in a way that preserves their autonomy. One of the most important goals is to collectively develop appropriate guidelines for the creation of new legal regulations governing the use of AI in a medical context.

Technical subproject

For many years, the Speech and Language Technology Lab at DFKI has addressed topics in the biomedical domain, such as processing clinical text to ease access to information, or predicting particular events using a combination of structured and unstructured data. In joint projects with Charité, multiple technologies and prototypes have been developed. Within the vALID project, DFKI and its partners will examine future interaction scenarios between AI systems, medical staff, and patients, particularly in the context of ethical, legal, and social aspects. In addition to existing solutions developed in other projects, we will create new mock-ups to explore future human-AI interaction scenarios. One of the goals is to derive requirements for future AI and language technologies in the medical context, such as explainability or transparency, in order to maximize the decision-making sovereignty of clinicians and the data sovereignty of patients.

Empirical subproject

Now in its third decade, the Digital Nephrology working group at Charité develops and refines an electronic health record covering all kidney transplantation patients in Berlin. Based on these high-quality data sets, and aiming for optimal patient care, the implementation of clinical decision support systems (CDSS) in clinical practice is a major goal. While pilot studies are already comparing the performance of AI algorithms and experienced nephrologists in predicting clinically important outcomes, systematic investigations of the acceptance of AI-driven systems by physicians and patients, and of the ethical and legal consequences involved, are sparse. Within the vALID project, we will deploy different use cases of CDSS with increasing autonomy and various machine-learning explainability techniques. Thereafter, in a study involving physicians working in our outpatient transplantation facility, we will collect data on the chances and risks from the healthcare professionals’ and patients’ perspectives, as well as on the practical, ethical, and legal issues that arise in the context of this distinctive human-AI interaction. This could be the starting point for a data-based, interdisciplinary discussion of AI in medicine.

Selected Publications