WerteRadar

Sovereign health data donation

In hospitals, administrative admission to inpatient care often takes place under considerable time pressure. The consequences of sharing their data are usually not sufficiently explained to patients, who have neither the time nor the necessary mental space to make an informed and considered decision. Moreover, the data collection itself is usually carried out by staff who can provide little or no further information on data security. This applies not only to the personal exchange between physicians and patients but also to technical services.

In an interdisciplinary collaboration between computer science, media education, and medicine, the process of data donation in the context of patient admission is to be reconceptualized. By means of a receptive data literacy, which can be fostered primarily through technical measures, patients are to be enabled to make sovereign decisions about sharing their data.

This concerns not only the ability to make an individual decision about sharing one's own data but also the social situation of the data donation. To enable this, a web-based interactive recommender system will be designed and implemented that breaks the conventional data-sharing process down into three steps: exploration, interpretation, and reflection. With the help of this three-stage interaction design, patients can carry out their data donation in a self-determined way according to their own value standards, make an informed decision, and reflect on their values regarding data sharing.

Goal of the Joint Project

The overarching goal of this interdisciplinary project is to redesign the sharing of personal health data by eliciting and analyzing the existing values of the stakeholder groups, based on the method of Value Sensitive Design. This is intended to enable patients to share their data in a self-determined and reflective manner while taking aspects of data security and data protection into account.

The consortium partners share a research interest in the theoretical question of how ethical-reflective and technical sovereignty in the sharing of personal health data should be shaped.

The consortium is structured into four subprojects; the project runs from July 1, 2020 to June 30, 2023.

Subproject 1

— Media Education Support

The WerteRadar thrives on incorporating current discussions and concepts around digital sovereignty, which are prepared from a media education perspective. The collaboration also serves to disclose basic assumptions about the target group and underlying conceptions of the person from a pedagogical point of view, and to reflect on them within the consortium. Above all, this subproject builds an understanding of ethical-reflective sovereignty in the data-sharing process and implements the measures required for it.

Subproject 2

— WerteRadar: Reflective Data Sharing

The theoretical basis for this subproject is a first set of principles for the design of an interactive system that allows patients to reflect on the sharing of their personal data. Within the reflection process, the different noise levels of differential privacy are taken into account in order to do justice to the protection interests of the individual. This reflection process, however, can be impeded by how difficult the technology is to understand. Corresponding communication approaches are therefore developed by designing, implementing, and evaluating different realizations of the WerteRadar. The result is an empirically validated demonstrator of the WerteRadar.

Subproject 3

— Usable Security: Data Security and Data Protection

Measures for data security and data protection are to be identified that are, on the one hand, understandable for patients and, on the other hand, suitable for the daily routines of medical staff. In coordination with stakeholders, it is investigated how patient data can be anonymized with differential privacy methods and still be used meaningfully for research purposes. The findings form the basis for a machine learning-based differential privacy concept. This concept must be presented visually to patients and researchers so that the mechanisms employed and the chosen compromise between preserving privacy and maximizing accuracy are comprehensible. The implementation is evaluated with regard to its suitability for use in medical research.
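
To make this trade-off concrete, the following minimal Python sketch (an illustration with assumed parameter values, not code from the project) applies the standard Laplace mechanism to a counting query. The privacy parameter epsilon controls the noise level: smaller values give stronger protection but less accurate results.

    import numpy as np

    def laplace_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
        # Release a count with epsilon-differential privacy via the Laplace mechanism.
        # Adding or removing one patient changes a count by at most 1,
        # so the sensitivity of a counting query is 1.
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return true_count + noise

    # Hypothetical query: how many patients in the dataset share a diagnosis?
    for eps in (0.1, 1.0, 10.0):
        print(f"epsilon = {eps}: noisy count = {laplace_count(120, eps):.1f}")

Since the expected absolute error of the Laplace mechanism is sensitivity/epsilon, the compromise between privacy preservation and accuracy can be quantified and, in a second step, visualized for patients and researchers.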

Subproject 4

— Practical Application of the WerteRadar

At the clinics of the Charité – Universitätsmedizin Berlin, several stakeholder groups will take part in the joint project, both actively and in a participatory role. This subproject embeds the project workflows into the existing structures at the Charité. This includes supporting the participating patients, researching physicians, and external partners, as well as assisting the associated consortium partners. In addition, it moderates the bridge-building required for technology transfer. Clear project organization structures and insight into responsibilities within the Charité are intended to enable the consortium to accompany the development over the entire project duration.

Publications

Advocating Values through Meaningful Participation: Introducing a Method to Elicit and Analyze Values for Enriching Data Donation Practices in Healthcare

Peter Sörries, David Leimstädtner, and Claudia Müller-Birn. 2024. Advocating Values through Meaningful Participation: Introducing a Method to Elicit and Analyze Values for Enriching Data Donation Practices in Healthcare. In Proceedings of the ACM on Human-Computer Interaction 8, CSCW1, Article 16 (April 2024), 32 pages. https://doi.org/10.1145/3637293
 
The secondary use of routinely collected patient data made possible by the broad consent form is seen as a prerequisite for developing data-driven health technologies. In Germany, relevant stakeholder groups (e.g., ethics committees and data protection authorities) specified the broad consent form; however, only one group of patient representatives was consulted, which may indicate asymmetries in engagement. This situation informed our research on medical data donation and emphasized foregrounding patient values. Drawing on participatory design, value sensitive design, and emerging research on value-led participation, we propose a method consisting of (1) a workshop concept for participatory value elicitation composed of four carefully coordinated phases and (2) an analysis procedure to examine the empirical data collected. This analysis allowed us to derive design requirements for medical data donation user interfaces. We conducted three workshops with patient advocates of vulnerable groups and patients in residential care of a psychosomatic unit. Our findings provide new directions to improve user interfaces for medical data donation: first, user interfaces need to enhance patients’ reflective thinking about the potential consequences of their data donation; second, a decision facilitator should support patients’ value-based decision-making (e.g., by providing simple language or tailoring descriptions to patient needs); and finally, a data intermediary should relieve patients’ decision-making and give them control over their data after donation. Moreover, we emphasize the need to increase the use of participatory approaches in health technology development.

Investigating Responsible Nudge Design for Informed Decision-Making Enabling Transparent and Reflective Decision-Making

David Leimstädtner, Peter Sörries, and Claudia Müller-Birn. 2023. Investigating Responsible Nudge Design for Informed Decision-Making Enabling Transparent and Reflective Decision-Making. In Mensch und Computer 2023 (MuC ’23), September 3–6, 2023, Rapperswil, Switzerland. ACM, New York, NY, USA, 17 pages. https://doi.org/10.1145/3603555.3603567

Consent interfaces are habitually designed to coerce people into sharing the maximum amount of data, rather than supporting decisions that align with their intentions and privacy attitudes, by leveraging cognitive biases to nudge users toward certain decision outcomes through interface design. Reflection and transparency have been proposed as two design dimensions of a choice architecture constituting a responsible nudge approach capable of counteracting these mechanisms by prompting reflected choice. In a crowdsourced experiment, we evaluate these capabilities of a proposed data-disclosure consent interface deploying the responsible nudge approach within a realistic setting, exploiting a status quo bias during the sign-up to an online survey platform embedded as a secondary task in a crowdsourcing context. Our results provide insights into the responsible design of consent interfaces, suggesting that prompting reflection significantly decreases the discrepancy between users’ privacy attitudes and decision outcomes. Meanwhile, making the presence of a nudge transparent had no significant effect on its influence. Furthermore, identifying individuals’ attitudes as a significant predictor of privacy behavior provides a promising direction for future research.

Foregrounding Values through Public Participation: Eliciting Values of Citizens in the Context of Mobility Data Donation

Peter Sörries, Daniel Franzen, Markus Sperl, and Claudia Müller-Birn. 2023. Foregrounding Values through Public Participation: Eliciting Values of Citizens in the Context of Mobility Data Donation. In Mensch und Computer 2023 (MuC ’23), September 3–6, 2023, Rapperswil, Switzerland. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3603555.3608531

Citizen science (CS) projects are conducted with interested volunteers and have already shown promise for large-scale scientific research. However, CS tends to cultivate the sharing of large amounts of data. Against this background, our research aims to better understand citizens’ potential privacy concerns in such participation formats. We therefore investigate how meaningful public participation can be facilitated to foreground citizens’ values regarding mobility data donation in CS. To this end, we developed a two-step method: (1) a workshop concept for participatory value elicitation and (2) an analysis procedure to systematically examine the empirical data collected. Our findings, based on three workshops, provide new directions for improving data donation practices in CS.

Endorsing Values through Participation: Facilitating Workshops for Participatory Value Elicitation in Two Different Contexts to Inform Sociotechnical Designs

Peter Sörries, David Leimstädtner, Markus Sperl, and Claudia Müller-Birn. 2023. Endorsing Values through Participation: Facilitating Workshops for Participatory Value Elicitation in Two Different Contexts to Inform Sociotechnical Designs. Published by Gesellschaft für Informatik e.V. in P. Fröhlich & V. Cobus (Eds.): Mensch und Computer 2023 (MuC ’23) – Workshopband, September 3–6, 2023, Rapperswil, Switzerland. https://doi.org/10.18420/muc2023-mci-ws02-412

Legal measures such as the GDPR aim to regulate the collection and use of personal data for scientific or commercial purposes. However, these measures might not be enough to protect individual privacy. Moreover, it is rarely possible for individuals to participate in and contribute to regulatory strategies. Informed by this situation, we asked how responsible data collection can be achieved while taking individuals’ values and needs into account. Based on our ongoing research in healthcare and urban mobility, we developed a two-step method: first, a workshop concept for participatory value elicitation, and second, an analysis procedure to systematically examine the empirical data collected. Our findings from the workshops show how values can inform sociotechnical designs.

Unfolding Values through Systematic Guidance: Conducting a Value-Centered Participatory Workshop for a Patient-Oriented Data Donation

David Leimstädtner, Peter Sörries, and Claudia Müller-Birn. 2022. Unfolding Values through Systematic Guidance: Conducting a Value-Centered Participatory Workshop for a Patient-Oriented Data Donation. In Mensch und Computer 2022 (MuC ’22), September 4–7, 2022, Darmstadt, Germany. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3543758.3547560

Routinely collected clinical patient data constitutes a valuable resource for data-driven medical innovation. Such secondary data use for medical research purposes depends on the patient’s consent. To gain an understanding of patients’ values and needs regarding medical data donation, we developed a participatory workshop method, integrating approaches from value-sensitive and reflective design to explore patients’ values and translate them into hypothetical, ideal design solutions. The data gathered in the workshop are used to derive practicable design requirements for patient-oriented data donation technologies. In this paper, we introduce the workshop process and evaluate its application.

Taking a Value Perspective on Medical Data Donation Through Participatory Workshops

Claudia Müller-Birn, David Leimstädtner, and Peter Sörries. 2022. Taking a Value Perspective on Medical Data Donation Through Participatory Workshops. Published by Gesellschaft für Informatik e.V. in K. Marky, U. Grünefeld & T. Kosch (Eds.): Mensch und Computer 2022 – Workshopband, September 4–7, 2022, Darmstadt, Germany. https://doi.org/10.18420/muc2022-ws02-235

Clinical patient data is a valuable resource for data-driven medical research. However, discussions around personal data privacy highlight the urgency of designing user interfaces that communicate the possibilities and limitations of the data security measures used when sharing personal health data. To better understand patients’ values regarding medical data sharing, we developed a methodical approach for value-centered participatory workshops. This approach draws on two strands, value-sensitive design and reflective design, to reveal values related to a data donation process in the medical field. The data collected in the workshop (the first of three) will be used to derive design recommendations for improving data donation processes.

Am I Private and If So, how Many?: Communicating Privacy Guarantees of Differential Privacy with Risk Communication Formats

Daniel Franzen, Saskia Nuñez von Voigt, Peter Sörries, Florian Tschorsch, and Claudia Müller-Birn. 2022. Am I Private and If So, how Many? Communicating Privacy Guarantees of Differential Privacy with Risk Communication Formats. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security (CCS ’22). Association for Computing Machinery, New York, NY, USA, 1125–1139. https://doi.org/10.1145/3548606.3560693

Every day, we have to decide multiple times whether and how much personal data we allow to be collected. This decision is not trivial, since there are many legitimate and important purposes for data collection, for example, the analysis of mobility data to improve urban traffic and transportation. However, the collected data can often reveal sensitive information about individuals. Recently visited locations can, for example, reveal information about political or religious views or even about an individual’s health. Privacy-preserving technologies, such as differential privacy (DP), can be employed to protect the privacy of individuals and, furthermore, provide mathematically sound guarantees on the maximum privacy risk. However, they can only support informed privacy decisions if individuals understand the provided privacy guarantees. This article proposes a novel approach for communicating privacy guarantees to support individuals in their privacy decisions when sharing data. For this, we adopt risk communication formats from the medical domain in conjunction with a model for privacy guarantees of DP to create quantitative privacy risk notifications. We conducted a crowdsourced study with 343 participants to evaluate how well our notifications conveyed the privacy risk information and how confident participants were about their own understanding of the privacy risk. Our findings suggest that these new notifications can communicate the objective information as well as currently used qualitative notifications, but left individuals less confident in their understanding. We also discovered that several of our notifications and the currently used qualitative notification disadvantage individuals with low numeracy: these individuals appear overconfident compared to their actual understanding of the associated privacy risks and are, therefore, less likely to seek the additional information needed before making an informed decision. The promising results allow for multiple directions in future research, for example, adding visual aids or tailoring privacy risk communication to characteristics of the individuals.
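
As a rough illustration of how such quantitative notifications can be constructed (a sketch using the standard two-hypothesis bound for epsilon-differential privacy, not necessarily the exact model from the paper): with a 50/50 prior, an adversary's probability of correctly guessing whether a specific person's data was included is at most e^epsilon / (1 + e^epsilon), which can be phrased as a natural frequency of the kind common in medical risk communication.

    import math

    def max_identification_risk(epsilon: float) -> float:
        # Two-hypothesis bound for epsilon-differential privacy: with a
        # 50/50 prior, an adversary guessing whether an individual's record
        # was used succeeds with probability at most e^eps / (1 + e^eps).
        return math.exp(epsilon) / (1.0 + math.exp(epsilon))

    def as_natural_frequency(epsilon: float, reference: int = 100) -> str:
        # Express the bound in a risk communication format borrowed
        # from the medical domain ("at most X out of 100").
        risk = max_identification_risk(epsilon)
        return f"at most {risk * reference:.0f} out of {reference} guesses succeed"

    for eps in (0.1, 0.5, 1.0, 2.0):
        print(f"epsilon = {eps}: {as_natural_frequency(eps)}")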

Personalized PATE: Differential Privacy for Machine Learning with Individual Privacy Guarantees

Christopher Mühl and Franziska Boenisch. 2022. Personalized PATE: Differential Privacy for Machine Learning with Individual Privacy Guarantees. Proceedings on Privacy Enhancing Technologies, 1, 158–176. arXiv preprint arXiv:2202.10517.

Applying machine learning (ML) to sensitive domains requires privacy protection of the underlying training data through formal privacy frameworks, such as differential privacy (DP). Yet, usually, the privacy of the training data comes at the cost of the resulting ML models’ utility. One reason for this is that DP uses one homogeneous privacy budget epsilon for all training data points, which has to align with the strictest privacy requirement encountered among all data holders. In practice, different data holders might have different privacy requirements, and data points of data holders with lower requirements could potentially contribute more information to the training process of the ML models. To account for this possibility, we propose three novel methods that extend the DP framework Private Aggregation of Teacher Ensembles (PATE) to support training an ML model with different personalized privacy guarantees within the training data. We formally describe the methods, provide theoretical analyses of their privacy bounds, and experimentally evaluate their effect on the final model’s utility using the MNIST and Adult income datasets. Our experiments show that our personalized privacy methods yield higher-accuracy models than the non-personalized baseline. Thereby, our methods can improve the privacy-utility trade-off in scenarios in which different data holders consent to contribute their sensitive data at different privacy levels.
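
For context, the following Python sketch shows the core aggregation step of standard, non-personalized PATE, which the paper extends: each teacher model votes for a label, Laplace noise is added to the vote histogram, and the noisy plurality label is released. The personalized variants proposed in the paper (e.g., duplicating or weighting teachers according to individual privacy budgets) are not reproduced here.

    import numpy as np

    def pate_noisy_argmax(teacher_votes: np.ndarray, num_classes: int, epsilon: float) -> int:
        # Standard PATE label aggregation: perturb the per-class vote counts
        # with Laplace noise and return the noisy plurality vote. Each entry
        # of teacher_votes is the class predicted by one teacher model
        # trained on a disjoint partition of the sensitive data.
        counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
        # One data point can change one teacher's vote, altering two counts
        # by 1 each; the classic analysis uses Laplace noise of scale 2/epsilon.
        noisy_counts = counts + np.random.laplace(scale=2.0 / epsilon, size=num_classes)
        return int(np.argmax(noisy_counts))

    # Hypothetical example: 250 teachers, 10 classes, a clear majority for class 3.
    votes = np.concatenate([np.full(180, 3), np.random.randint(0, 10, size=70)])
    print("aggregated label:", pate_noisy_argmax(votes, num_classes=10, epsilon=2.0))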

Situated Case Studies for a Human-Centered Design of Explanation User Interfaces

Claudia Müller-Birn, Katrin Glinka, Peter Sörries, Michael Tebbe, and Susanne Michl. 2021. Situated Case Studies for a Human-Centered Design of Explanation User Interfaces. In ACM CHI Workshop on Operationalizing Human-Centered Perspectives in Explainable AI. arXiv:2103.15462.

Researchers and practitioners increasingly consider a human-centered perspective in the design of machine learning-based applications, especially in the context of Explainable Artificial Intelligence (XAI). However, clear methodological guidance in this context is still missing because each new situation seems to require a new setup, which also creates different methodological challenges. Existing case study collections in XAI inspired us; therefore, we propose a similar collection of case studies for human-centered XAI that can provide methodological guidance or inspiration for others. We want to showcase our idea in this workshop by describing three case studies from our research. These case studies are selected to highlight how apparently small differences require a different set of methods and considerations. With this workshop contribution, we would like to engage in a discussion on how such a collection of case studies can provide methodological guidance and critical reflection.

Privacy Needs Reflection: Conceptional Design Rationales for Privacy-Preserving Explanation User Interfaces

Peter Sörries, Claudia Müller-Birn, Katrin Glinka, Franziska Boenisch, Marian Margraf, Sabine Sayegh-Jodehl, and Matthias Rose. 2021. Privacy Needs Reflection: Conceptional Design Rationales for Privacy-Preserving Explanation User Interfaces. In Proceedings of the Mensch und Computer 2021 Usable Security und Privacy Workshop. https://doi.org/10.18420/muc2021-mci-ws14-389

The application of machine learning (ML) in the medical domain has recently received a lot of attention. However, the constantly growing need for data in such ML-based approaches raises many privacy concerns, particularly when data originate from vulnerable groups, for example, people with a rare disease. In this context, a challenging but promising approach is the design of privacy-preserving computation technologies (e.g. differential privacy). However, design guidance on how to implement such approaches in practice has been lacking. In our research, we explore these challenges in the design process by involving stakeholders from medicine, security, ML, and human-computer interaction, as well as patients themselves. We emphasize the suitability of reflective design in this context by considering the concept of privacy by design. Based on a real-world use case situated in the healthcare domain, we explore the existing privacy needs of our main stakeholders, i.e. medical researchers or physicians and patients. Stakeholder needs are illustrated within two scenarios that help us to reflect on contradictory privacy needs. This reflection process informs conceptional design rationales and our proposal for privacy-preserving explanation user interfaces. We propose that the latter support both patients’ privacy preferences for a meaningful data donation and experts’ understanding of the privacy-preserving computation technology employed.

Activities

05/10/2023

Forum Privatheit: Value-oriented participatory methods for supporting citizens in exercising their data sovereignty

27 & 28/04/2023

Closing event: Human-Technology Interaction for Digital Sovereignty (DISO)

13/10/2022

Forum Privatheit: Value-oriented design as an approach to individual and collective self-determination

19/09/2022

How can the values of patients be taken into account when they donate their health data?

15/06/2022

Web talk: Digital self-determination – without us? Participation and digital innovation

12/05/2022

Networking meeting: Human-Technology Interaction for Digital Sovereignty

16/02/2022

Coding IxD — Interactive exhibition on "Digital:Sovereignty"

30/11/2021

BMBF research tour “Miteinander durch Innovation”: #IchZukunftUnd #DigitaleSouveränität

30/11/2021

Humans and Technology in Interaction — How Can Individual Digital Sovereignty Succeed?

16/06/2021

Networking meeting: Human-Technology Interaction for Digital Sovereignty

27/04/2021

The Value of Information — Donating Health Data Reflectively