
RELEASE-AI: Protecting Sensitive Data Across the AI Lifecycle: Disclosure Risks and Mitigations in Trusted Research Environments

Research output: Contribution to conference › Poster


Abstract

Trusted Research Environments (TREs) provide secure access to personal and sensitive data, such as Electronic Healthcare Records (EHRs), for approved research. The increasing use of Artificial Intelligence (AI) and Machine Learning (ML) in health research introduces new challenges for managing privacy risks to individuals’ data. We present a comprehensive framework that embeds mitigation strategies throughout the entire AI project lifecycle, structured across six project phases: design, governance, development, evaluation, disclosure control, and release.

This framework aims to empower all stakeholders - researchers, project teams, output checkers, and TRE staff - with clear, phase-specific recommendations on which measures and checks are necessary before model release to help identify potential disclosure risks to data. It promotes early identification of risks with corresponding mitigations and ensures responsibilities are clearly assigned to relevant actors at each stage, from initial planning through to deployment and monitoring. Mitigation strategies include good AI/ML practice in both code and documentation, privacy-enhancing techniques during training and evaluation, restricting model access via secure query systems, licensing agreements, and adversarial attack testing using tools like SACRO-ML. The process also highlights the need for appropriate, role-specific training materials for everyone involved.

A novel tiering system for disclosure control is proposed, categorising AI projects based on the likelihood of attack and associated sensitive data leakage risks. By integrating a lifecycle-focused risk management process with a scalable disclosure control tiering system, this approach enables innovative AI research while maintaining rigorous data protection standards and public trust.
Original language: English
Publication status: Published - 23 Sept 2025
Event: Data and Digital Health: Scotland Cross-Sector Hub Event 3 - COSLA, Verity House, Edinburgh, United Kingdom
Duration: 23 Sept 2025 - 23 Sept 2025
https://www.tickettailor.com/events/nhsresearchscotland/1786651

Workshop

Workshop: Data and Digital Health
Country/Territory: United Kingdom
City: Edinburgh
Period: 23/09/25 - 23/09/25

Keywords

  • Trusted Research Environment
  • Machine Learning
  • Artificial Intelligence
  • Release
  • Disclosure Control

ASJC Scopus subject areas

  • Health Informatics

Projects

  • DARE Transformational Programme

    Cole, C. (Investigator)

    1/03/25 - 31/03/27

    Project: Research
