Machine learning models in trusted research environments -- understanding operational risks

F. Ritchie, Amy Tilbrook, Christian Cole, Emily Jefferson, Susan Krueger, Esma Mansouri-Bensassi, Simon Rogers, Jim Q. Smith

International Journal of Population Data Science, published 2023-12-14. DOI: 10.23889/ijpds.v8i1.2165
Abstract
Introduction: Trusted research environments (TREs) provide secure access to very sensitive data for research. All TREs carry out manual checks on outputs to ensure there is no residual disclosure risk. Machine learning (ML) models require very large amounts of data; if these data are personal, the TRE is a well-established data management solution. However, ML models present novel disclosure risks, in both type and scale.
Objectives: As part of a series on ML disclosure risk in TREs, this article introduces TRE managers to the conceptual problems and the work being done to address them.
Methods: We demonstrate how ML models present a qualitatively different type of disclosure risk compared to traditional statistical outputs. These risks arise from both the nature and the scale of ML modelling.
Results: We show that a large number of issues remain unresolved, although progress is being made in many areas. We identify where areas of uncertainty remain, as well as the remedial responses available to TREs.
Conclusions: At this stage, disclosure checking of ML models is very much a specialist activity. However, TRE managers need a basic awareness of the potential risks in ML models so that they can make sensible decisions about using TREs for ML model development.
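To make the kind of risk described in the abstract concrete, the following is a minimal, hypothetical sketch (not taken from the article) of a confidence-threshold membership inference check against an overfitted classifier, using scikit-learn and synthetic data. It illustrates why a trained ML model can itself be a disclosing output: the model is typically more confident on the records it was trained on than on unseen records, so an attacker can guess which individuals were in the training data.

```python
# Minimal sketch of a confidence-threshold membership inference attack.
# Assumptions: scikit-learn is available; synthetic data stands in for the
# sensitive records held inside a TRE; the attack and thresholds are
# illustrative, not the article's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic "sensitive" data with noisy labels, so memorisation is visible.
X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.3,
                           random_state=0)
X_train, y_train = X[:1000], y[:1000]   # "members": records used in training
X_out, y_out = X[1000:], y[1000:]       # "non-members": records never seen

# Deliberately overfitted model, as an extreme example of memorisation.
model = RandomForestClassifier(n_estimators=50, bootstrap=False,
                               random_state=0)
model.fit(X_train, y_train)

def true_class_confidence(model, X, y):
    """Probability the model assigns to each record's recorded class."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_members = true_class_confidence(model, X_train, y_train)
conf_nonmembers = true_class_confidence(model, X_out, y_out)

# Simple attack rule: flag a record as "member" if confidence is very high.
threshold = 0.95
member_hit_rate = np.mean(conf_members > threshold)
nonmember_false_alarm = np.mean(conf_nonmembers > threshold)

print(f"mean confidence  members={conf_members.mean():.2f}  "
      f"non-members={conf_nonmembers.mean():.2f}")
print(f"flagged as member: members={member_hit_rate:.2f}  "
      f"non-members={nonmember_false_alarm:.2f}")
```

In this sketch the hit rate on members is typically far higher than the false-alarm rate on non-members, which is exactly the kind of residual disclosure that traditional output checks on tables and regression coefficients were never designed to detect.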