Informed Consent and AI Transparency

Abstract

This panel discussion aims to highlight and explore the role of informed consent in large-scale data processing for machine learning and AI development, and its impact across a range of stakeholders:

  • For healthcare providers to use data in support of their services, and to determine whether informed consent is implicit in the provision of care.
  • For patients to have transparency on how their data is used and re-used in treatment and to power future machine learning.
  • For researchers to understand the scope of informed consent and the requirements to be explicit about how data is used, re-used, stored and handled for clinical trials and development purposes.
  • For industry to apply diligence and best practice in developing their products including discovery, drugs, treatments, software, and tech-enabled devices.
  • For DPOs to ensure that all data processing remains lawful, secure, and transparent whilst protecting data subject rights and governance standards.

The panel discussion will take a deep dive into the role of informed consent and the importance of transparency for all AI-powered interventions involving health data collected at the point of care, in clinical trials, during product development, and through patient self-management.

Heuristics and machine learning algorithms have been used in healthcare treatments and clinical trials for decades. Recent technological developments in AI are expanding the scope of these algorithms and their autonomy in decision making. This has escalated concerns about how individuals' health data, experiences, and records are used in machine learning to deliver benefits and enhance these capabilities and opportunities for all stakeholders.

The ethical and privacy concerns are varied and nuanced. Regulations that may address many of them, such as the AI Act, are coming into effect. Central to these concerns are issues of informed consent and data privacy and protection. Individuals, digital rights groups, health ethicists, and patient advocacy groups are actively engaging to promote meaningful transparency around the existing and emerging use of AI across a range of healthcare activities, including devices, diagnostics, remote care, wearables, clinical trials, and industry and commercial use.

Informed consent has long been standard in healthcare practice. However, implied consent cannot be assumed for the increasing use of AI, and existing and forthcoming regulations mandate clearer specification. A better understanding of what might reasonably be expected when consenting to treatment, or when using tooling to self-manage, must be prioritised. There are also implications for participants' understanding of what they are consenting to, especially in the case of vulnerable groups. This is essential for maintaining trust, autonomy, and equity as AI innovations continue to be deployed.

In this session we explore these questions: To what extent does consent to treatment imply acceptance of the training and use of AI? What is the role of meaningful transparency and implied consent, and is implied consent sufficient to meet regulatory and ethical standards? What must all stakeholders do to ensure transparent understanding and knowledge of how AI utilises patient data? What choices should stakeholders be offered around the use of AI, and can they opt out?

Structure

These two 90-minute sessions will begin with three 15-minute presentations:

This will be followed by a 40-minute panel discussion among the presenters on the challenges that the Regulation will place on different healthcare and research stakeholders, and the practical measures that might best enable EHDS success. Time will be included for audience questions and interaction. The session will close with a summing up of the key considerations that policy makers should take on board.