Digital Ethics Foresight Lab
Researching tomorrow’s health technologies

The Digital Ethics Foresight Lab at the University of Potsdam explores current and emerging technological developments in digital medicine from an ethical perspective. The responsible research, development, and application of digital technologies in health and medicine face a central challenge: epistemic uncertainty.
- Project duration: 2022–2027
- Carried out at the University of Potsdam
- Administrative contact: Joschka Haltaufderheide
Researchers in the field of digital health:
For the University of Tübingen:
- Dr. Florian Funer, M.A.
- Prof. Dr. Hans-Jörg Ehni
- Prof. Dr. Dr. Urban Wiesing
For the University of Potsdam:
- Prof. Dr. Robert Ranisch
- Dr. Joschka Haltaufderheide
To address these challenges, the Digital Ethics Foresight Lab pursues three main goals: first, continuous monitoring and horizon scanning of new developments in digital health; second, the development of methods for ethical foresight and embedded guidance of innovation processes; and third, in-depth research on selected areas of concern.
The Foresight Lab identifies relevant topics through literature reviews, foresight workshops, consultations, and institutional exchange with partners. In doing so, it helps determine factors, actors, and trends that warrant particular ethical attention.
The results are regularly published and made available in the form of briefing notes and factsheets. This ensures continuous knowledge exchange within the expert community and provides the foundation for future research projects in medical ethics.
Topics researched at the Digital Ethics Foresight Lab
Since the broad attention triggered by OpenAI’s ChatGPT in 2022, large language models (LLMs) have rapidly established themselves as transformative tools in healthcare and medicine. Their general adaptability enables applications in clinical practice, patient support, research, and public health—ranging from diagnosis, triage, and treatment planning to literature search, workflow automation, and global health communication.
The Foresight Lab’s research highlights key ethical challenges arising from the growing use of LLMs in healthcare. Central concerns include epistemic risks such as hallucinations, opacity, and the spread of misleading information; data protection and privacy issues in handling sensitive patient data; biases stemming from non-representative training datasets; and the dangers of uncontrolled experimentation through premature implementation in clinical contexts.
These challenges reveal tensions between innovation and patient safety, professional responsibility, and equitable access to health technologies. Given their disruptive potential and accelerated dissemination, LLMs should be regarded as a form of “social experiment” in medicine—one that requires iterative evaluation, robust ethical frameworks, and clearly defined standards for human oversight. Addressing these challenges is essential to ensure that LLMs contribute to patient well-being, uphold professional integrity, and foster the fair advancement of medical care.
The results have been published in academic articles as well as in a factsheet.
Further information:
Haltaufderheide, Joschka, and Robert Ranisch. 2023. “Ethics of ChatGPT: A Systematic Review of Large Language Models in Healthcare and Medicine (Protocol CRD42023431326).” Unpublished manuscript. https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42023431326.
Haltaufderheide, Joschka, and Robert Ranisch. 2024. “The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs).” NPJ Digital Medicine 7 (1): 183. doi:10.1038/s41746-024-01157-x.
Ranisch, Robert, and Joschka Haltaufderheide. 2025. “Foundation Models in Medicine Are a Social Experiment: Time for an Ethical Framework.” NPJ Digital Medicine 8 (1): 525. doi:10.1038/s41746-025-01924-4.
Robot-assisted surgery (RAS) has rapidly evolved since the 1970s from an experimental technology into a widely established surgical practice. Systems such as Intuitive Surgical’s da Vinci platform are now used in millions of procedures worldwide. Despite this widespread adoption, ethical reflection on RAS has so far received comparatively little attention within medical ethics. Looking ahead, the trend toward increasing robotic autonomy sharpens questions of responsibility, human–machine interaction, professional competence, and ethical oversight. Ethical research must therefore move beyond viewing surgical robots as mere instruments and instead address the implications of increasingly autonomous systems for clinical practice, professional integrity, and patient well-being.
As part of this research focus, a systematic review was conducted. Its aim was to identify practically relevant ethical issues and to trace their development in light of ongoing technological progress. The results were also summarized in a factsheet.
Further information:
Haltaufderheide, Joschka, Stefanie Pfisterer-Heise, Dawid Pieper, and Robert Ranisch. 2023. “Ethical Aspects of Robot-Assisted Surgery: A Systematic Review (Protocol CRD4202339795).” Unpublished manuscript. https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD4202339795.
Haltaufderheide, Joschka, Stefanie Pfisterer-Heise, Dawid Pieper, and Robert Ranisch. 2025. “The Ethical Landscape of Robot-Assisted Surgery: A Systematic Review.” Journal of Robotic Surgery 19 (1): 102. doi:10.1007/s11701-025-02228-1.
Disease Interception is an emerging medical paradigm that aims to identify and treat diseases such as neurodegenerative disorders before the onset of clinical symptoms. Within the work of the Foresight Lab, the concept of Disease Interception has been examined from an ethical perspective. While the approach promises to reduce suffering, delay or prevent disease, and promote proactive health management, it also raises profound questions.
Disease Interception challenges conventional distinctions between health, illness, and disease by creating categories of “the healthy ill” – individuals who receive treatment despite the absence of symptoms. This paradigm shift carries risks such as medicalization, psychological burdens, and an erosion of patient autonomy, particularly when technological mediation—through AI, biomarkers, and predictive analytics—reshapes medical decision-making and places future health priorities above present well-being. Ethical challenges include questions of transparency, informed consent, the right not to know, and shifting responsibilities among patients, physicians, and technological systems. As predictive medicine advances, careful ethical scrutiny will be required to reconcile the promise of early intervention with respect for autonomy, justice, and the lived experience of health.
The findings have been published both as a book chapter and as a factsheet.
Further information:
- Haltaufderheide, Joschka, and Robert Ranisch. 2024. “Not Quite Ill, Not Quite Free? Disease Interception and the Ethical Implications of Technologically Shaped Decision-Making.” In Disease Interception as Opportunity and Challenge: An Interdisciplinary Analysis, edited by Lara Wiese, Anke Diehl, and Stefan Huster, 93–110. Bochumer Schriften zum Sozial- und Gesundheitsrecht 26. Baden-Baden: Nomos.