Coping with Certainty (CwiC)

Overview

Vision: In the foreseeable future, digitization and artificial intelligence will permanently change large parts of health research and health care: thanks to big data and learning algorithms, the factor ‘ignorance’ may soon be banished from the health sector. Nothing will remain unknown or hidden – everything will become predictable, assigned probability values and weighed against everything else. But what does it mean for individuals to know their concrete risk of illness? What does it mean for society to know statistically exactly who will fall ill with which disease, and when, and what costs will be incurred? Can social solidarity withstand this? Or must other, completely new regulatory mechanisms be tested?
In the CwiC project, we address these topics and questions – and design the legal, ethical and political guidelines that can protect us as a society from a Corpus Delicti-like health dystopia!

Funder: Federal Ministry of Education and Research (BMBF)

Acronym: CwiC – Coping with Certainty

Project duration: 2020–2023



Legal subproject

Responsible for the legal subproject

Description: Legal subproject

The legal subproject begins with a theoretical account of the concept of legal knowledge and, in particular, of how the law ignores certain facts. From there it proceeds to a comprehensive analysis of the situation de lege lata. Here, a fundamental-rights perspective sheds light on individual reasons for deliberate ignorance, which are then compared with the legal provisions, especially those of the statutory health insurance (SHI). These norms rest on the basic principle of solidarity, which means, among other things, that one's current state of health does not have to be disclosed to the insurance funds. The working hypothesis of the project is that both lines of argument – the individual and the collective – can be combined and complement each other. In this way, they help determine whether and where the law in its current form permits or even demands deliberate ignorance.

Ethical subproject

Responsible for the ethical subproject

Prof. Dr. Matthias Braun

Chair for Social Ethics

University of Bonn

Max Tretter
Department of Systematic Theology
Chair of Systematic Theology II

Description: Ethical subproject

With regard to solidarity, there is a deep-rooted notion that solidarity requires at least a degree of uncertainty. The ethical subproject ties in with this idea by examining existing normative discourses on solidarity and their respective entanglement with degrees of certainty.

  • The first task addresses the epistemological question of the extent to which acts of giving in solidarity are challenged by the promises of new AI-generated degrees of certainty.
  • A second task is to elaborate in detail to what extent the clinical use of AI questions trust in institutions – as a prerequisite for solidarity.
  • The third task examines changing forms of individual and collective controllability in times of clinical AI use. We already have a more or less sharp, culturally handed-down idea of individual means of control – such as the right not to know, claims to transparency, or responsibility and liability. These are necessary prerequisites not only for making free decisions, but also for deciding under which conditions giving in solidarity is an expression of individual freedom. An important question will be how these forms of controllability are challenged when they not only have to cope with degrees of uncertainty but are also confronted with a growing corpus of (postulated) certainty. While this is complex in itself, things become even more complicated once we consider modes of collective controllability and their embedding in more or less sharp concepts of spatiality and temporality.

Economic subproject

Responsible for the economic subproject

Prof. Dr. Nora Szech

Chair of Political Economy

Karlsruhe Institute of Technology

Description: Economic subproject

CwiC investigates the normative and behavioral challenges of dealing with the new possibilities for prediction in and with AI at the interface of science, society and technology. The ability to predict future developments with unprecedented accuracy affects many – if not all – areas of social life.
In the health sector in particular, this new kind of prediction allows for much more precise planning on the one hand, but on the other hand calls central points of reference into question: individual self-concepts, our general distinction between illness and health, and traditional norms such as the concept of basic solidarity, which is also fundamental to concepts of social security.
The economic subproject of CwiC therefore examines the handling of AI more closely from a behavioral-science perspective. How do individuals deal with the new possibilities? Are better forecasting capabilities welcome at all, or rather not? Are concepts of insurance called into question or changed?
The demand for and acceptance of more precise information through the use of AI is probably strongly shaped by norms. We are therefore also interested in how social information – especially normative information – influences the use of AI. To investigate behavioral effects, we conduct experimental economic studies at the KIT KD2 Lab, where participants make incentivized decisions – i.e. decisions with real, for example monetary, consequences – to ensure the high validity of the observed results.

Selected publications