GRAIMatter: Guidelines and Resources for AI Model Access from TrusTEd Research environments (GRAIMatter).

IF 1.6 · Q3 · Health Care Sciences & Services
E. Jefferson, Christian Cole, Alba Crespi i Boixader, Simon Rogers, Maeve Malone, F. Ritchie, Jim Q. Smith, Francesco Tava, A. Daly, J. Beggs, Antony Chuter
{"title":"GRAIMatter:TrusTEd研究环境中的人工智能模型访问指南和资源(GRAIMatter)。","authors":"E. Jefferson, Christian Cole, Alba Crespi i Boixader, Simon Rogers, Maeve Malone, F. Ritchie, Jim Q. Smith, Francesco Tava, A. Daly, J. Beggs, Antony Chuter","doi":"10.23889/ijpds.v7i3.2005","DOIUrl":null,"url":null,"abstract":"ObjectivesTo assess a range of tools and methods to support Trusted Research Environments (TREs) to assess output from AI methods for potentially identifiable information, investigate the legal and ethical implications and controls, and produce a set of guidelines and recommendations to support all TREs with export controls of AI algorithms. \nApproachTREs provide secure facilities to analyse confidential personal data, with staff checking outputs for disclosure risk before publication. Artificial intelligence (AI) has high potential to improve the linking and analysis of population data, and TREs are well suited to supporting AI modelling. However, TRE governance focuses on classical statistical data analysis. The size and complexity of AI models presents significant challenges for the disclosure-checking process. Models may be susceptible to external hacking: complicated methods to reverse engineer the learning process to find out about the data used for training, with more potential to lead to re-identification than conventional statistical methods. \nResultsGRAIMatter is: \n \nQuantitatively assessing the risk of disclosure from different AI models exploring different models, hyper-parameter settings and training algorithms over common data types \nEvaluating a range of tools to determine effectiveness for disclosure control \nAssessing the legal and ethical implications of TREs supporting AI development and identifying aspects of existing legal and regulatory frameworks requiring reform. \nRunning 4 PPIE workshops to understand their priorities and beliefs around safeguarding and securing data \nDeveloping a set of recommendations including \n \nsuggested open-source toolsets for TREs to use to measure and reduce disclosure risk \ndescriptions of the technical and legal controls and policies TREs should implement across the 5 Safes to support AI algorithm disclosure control \ntraining implications for both TRE staff and how they validate researchers \n \n \n \nConclusionGRAIMatter is developing a set of usable recommendations for TREs to use to guard against the additional risks when disclosing trained AI models from TREs.","PeriodicalId":36483,"journal":{"name":"International Journal of Population Data Science","volume":" ","pages":""},"PeriodicalIF":1.6000,"publicationDate":"2022-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GRAIMatter: Guidelines and Resources for AI Model Access from TrusTEd Research environments (GRAIMatter).\",\"authors\":\"E. Jefferson, Christian Cole, Alba Crespi i Boixader, Simon Rogers, Maeve Malone, F. Ritchie, Jim Q. Smith, Francesco Tava, A. Daly, J. Beggs, Antony Chuter\",\"doi\":\"10.23889/ijpds.v7i3.2005\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ObjectivesTo assess a range of tools and methods to support Trusted Research Environments (TREs) to assess output from AI methods for potentially identifiable information, investigate the legal and ethical implications and controls, and produce a set of guidelines and recommendations to support all TREs with export controls of AI algorithms. 
\\nApproachTREs provide secure facilities to analyse confidential personal data, with staff checking outputs for disclosure risk before publication. Artificial intelligence (AI) has high potential to improve the linking and analysis of population data, and TREs are well suited to supporting AI modelling. However, TRE governance focuses on classical statistical data analysis. The size and complexity of AI models presents significant challenges for the disclosure-checking process. Models may be susceptible to external hacking: complicated methods to reverse engineer the learning process to find out about the data used for training, with more potential to lead to re-identification than conventional statistical methods. \\nResultsGRAIMatter is: \\n \\nQuantitatively assessing the risk of disclosure from different AI models exploring different models, hyper-parameter settings and training algorithms over common data types \\nEvaluating a range of tools to determine effectiveness for disclosure control \\nAssessing the legal and ethical implications of TREs supporting AI development and identifying aspects of existing legal and regulatory frameworks requiring reform. \\nRunning 4 PPIE workshops to understand their priorities and beliefs around safeguarding and securing data \\nDeveloping a set of recommendations including \\n \\nsuggested open-source toolsets for TREs to use to measure and reduce disclosure risk \\ndescriptions of the technical and legal controls and policies TREs should implement across the 5 Safes to support AI algorithm disclosure control \\ntraining implications for both TRE staff and how they validate researchers \\n \\n \\n \\nConclusionGRAIMatter is developing a set of usable recommendations for TREs to use to guard against the additional risks when disclosing trained AI models from TREs.\",\"PeriodicalId\":36483,\"journal\":{\"name\":\"International Journal of Population Data Science\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2022-08-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Population Data Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23889/ijpds.v7i3.2005\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Population Data Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23889/ijpds.v7i3.2005","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0

Abstract

Objectives
To assess a range of tools and methods that support Trusted Research Environments (TREs) in checking the output of AI methods for potentially identifiable information, to investigate the legal and ethical implications and controls, and to produce a set of guidelines and recommendations to support all TREs with export controls for AI algorithms.

Approach
TREs provide secure facilities for analysing confidential personal data, with staff checking outputs for disclosure risk before publication. Artificial intelligence (AI) has high potential to improve the linking and analysis of population data, and TREs are well suited to supporting AI modelling. However, TRE governance focuses on classical statistical data analysis, and the size and complexity of AI models present significant challenges for the disclosure-checking process. Models may also be susceptible to external attack: sophisticated methods can reverse engineer the learning process to reveal the data used for training, with more potential to lead to re-identification than conventional statistical outputs (a minimal sketch of such an attack follows the abstract).

Results
GRAIMatter is:
- Quantitatively assessing the risk of disclosure from different AI models, exploring different model types, hyper-parameter settings, and training algorithms over common data types
- Evaluating a range of tools to determine their effectiveness for disclosure control
- Assessing the legal and ethical implications of TREs supporting AI development, and identifying aspects of existing legal and regulatory frameworks that require reform
- Running 4 patient and public involvement and engagement (PPIE) workshops to understand participants' priorities and beliefs around safeguarding and securing data
- Developing a set of recommendations, including:
  - suggested open-source toolsets that TREs can use to measure and reduce disclosure risk
  - descriptions of the technical and legal controls and policies that TREs should implement across the Five Safes to support disclosure control of AI algorithms
  - training implications for TRE staff and for how they validate researchers

Conclusion
GRAIMatter is developing a set of usable recommendations that TREs can use to guard against the additional risks that arise when trained AI models are disclosed from TREs.
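The attack class described in the Approach is typically a membership inference attack: an adversary with access to a released model tries to decide whether a given record was in its training data. As a rough illustration of the kind of quantitative risk assessment the Results describe, the sketch below scores a simple confidence-threshold attack against the same classifier trained under two hyper-parameter settings. It is a minimal sketch only: the dataset, model, and attack are illustrative assumptions, not GRAIMatter's published protocol or toolset.

```python
# Minimal membership inference sketch (illustrative assumptions throughout):
# an over-fitted model is systematically more confident on its training
# ("member") records than on unseen ("non-member") records, and that gap
# is what a TRE disclosure check would need to measure.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in public dataset; a real assessment would use the TRE's own data.
X, y = load_breast_cancer(return_X_y=True)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5,
                                              random_state=0)

def true_label_confidence(model, X, y):
    """Probability the model assigns to each record's true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

for leaf_size in (1, 20):  # 1 = heavily over-fitted; 20 = regularised
    model = RandomForestClassifier(n_estimators=100,
                                   min_samples_leaf=leaf_size,
                                   random_state=0)
    model.fit(X_mem, y_mem)

    # Attack score: the model's confidence on each record's true label.
    scores = np.concatenate([true_label_confidence(model, X_mem, y_mem),
                             true_label_confidence(model, X_non, y_non)])
    is_member = np.concatenate([np.ones(len(y_mem)), np.zeros(len(y_non))])

    # AUC near 0.5 means the attacker cannot separate members from
    # non-members; values well above 0.5 indicate disclosure risk that an
    # output check should flag before the model leaves the TRE.
    auc = roc_auc_score(is_member, scores)
    print(f"min_samples_leaf={leaf_size}: membership-inference AUC = {auc:.3f}")
```

Comparing the two AUC values shows why the project sweeps hyper-parameters as well as model types: the same algorithm can be low-risk or high-risk depending on how strongly it is regularised.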
Source journal: International Journal of Population Data Science
CiteScore: 2.50 · Self-citation rate: 0.00% · Annual articles: 386 · Review time: 20 weeks