Sensitivity of Machine Learning Approaches to Fake and Untrusted Data in Healthcare Domain

F. Marulli, S. Marrone, Laura Verde
{"title":"医疗保健领域机器学习方法对虚假和不可信数据的敏感性","authors":"F. Marulli, S. Marrone, Laura Verde","doi":"10.3390/jsan11020021","DOIUrl":null,"url":null,"abstract":"Machine Learning models are susceptible to attacks, such as noise, privacy invasion, replay, false data injection, and evasion attacks, which affect their reliability and trustworthiness. Evasion attacks, performed to probe and identify potential ML-trained models’ vulnerabilities, and poisoning attacks, performed to obtain skewed models whose behavior could be driven when specific inputs are submitted, represent a severe and open issue to face in order to assure security and reliability to critical domains and systems that rely on ML-based or other AI solutions, such as healthcare and justice, for example. In this study, we aimed to perform a comprehensive analysis of the sensitivity of Artificial Intelligence approaches to corrupted data in order to evaluate their reliability and resilience. These systems need to be able to understand what is wrong, figure out how to overcome the resulting problems, and then leverage what they have learned to overcome those challenges and improve their robustness. The main research goal pursued was the evaluation of the sensitivity and responsiveness of Artificial Intelligence algorithms to poisoned signals by comparing several models solicited with both trusted and corrupted data. A case study from the healthcare domain was provided to support the pursued analyses. The results achieved with the experimental campaign were evaluated in terms of accuracy, specificity, sensitivity, F1-score, and ROC area.","PeriodicalId":288992,"journal":{"name":"J. Sens. Actuator Networks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Sensitivity of Machine Learning Approaches to Fake and Untrusted Data in Healthcare Domain\",\"authors\":\"F. Marulli, S. Marrone, Laura Verde\",\"doi\":\"10.3390/jsan11020021\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine Learning models are susceptible to attacks, such as noise, privacy invasion, replay, false data injection, and evasion attacks, which affect their reliability and trustworthiness. Evasion attacks, performed to probe and identify potential ML-trained models’ vulnerabilities, and poisoning attacks, performed to obtain skewed models whose behavior could be driven when specific inputs are submitted, represent a severe and open issue to face in order to assure security and reliability to critical domains and systems that rely on ML-based or other AI solutions, such as healthcare and justice, for example. In this study, we aimed to perform a comprehensive analysis of the sensitivity of Artificial Intelligence approaches to corrupted data in order to evaluate their reliability and resilience. These systems need to be able to understand what is wrong, figure out how to overcome the resulting problems, and then leverage what they have learned to overcome those challenges and improve their robustness. The main research goal pursued was the evaluation of the sensitivity and responsiveness of Artificial Intelligence algorithms to poisoned signals by comparing several models solicited with both trusted and corrupted data. A case study from the healthcare domain was provided to support the pursued analyses. 
The results achieved with the experimental campaign were evaluated in terms of accuracy, specificity, sensitivity, F1-score, and ROC area.\",\"PeriodicalId\":288992,\"journal\":{\"name\":\"J. Sens. Actuator Networks\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-03-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"J. Sens. Actuator Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3390/jsan11020021\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"J. Sens. Actuator Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/jsan11020021","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Machine Learning models are susceptible to attacks such as noise injection, privacy invasion, replay, false data injection, and evasion, which undermine their reliability and trustworthiness. Evasion attacks, which probe a trained model to identify its vulnerabilities, and poisoning attacks, which produce skewed models whose behavior can be steered by specific inputs, remain severe open problems for assuring the security and reliability of critical domains and systems that rely on ML-based or other AI solutions, such as healthcare and justice. In this study, we performed a comprehensive analysis of the sensitivity of Artificial Intelligence approaches to corrupted data in order to evaluate their reliability and resilience. Such systems need to recognize that something is wrong, determine how to overcome the resulting problems, and then leverage what they have learned to improve their robustness. The main research goal was to evaluate the sensitivity and responsiveness of Artificial Intelligence algorithms to poisoned signals by comparing several models fed with both trusted and corrupted data. A case study from the healthcare domain supports the analyses. The experimental results were evaluated in terms of accuracy, specificity, sensitivity, F1-score, and ROC area.
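The abstract describes comparing models trained on trusted versus corrupted data and scoring them on accuracy, specificity, sensitivity, F1-score, and ROC area. Below is a minimal sketch of one common form of such an experiment, label-flipping poisoning, assuming scikit-learn; the breast-cancer dataset, random-forest classifier, flip rates, and the `poison_labels` helper are illustrative stand-ins, not the authors' actual case study or pipeline.

```python
# Minimal label-flipping poisoning sketch (assumed setup, not the paper's).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X, y = load_breast_cancer(return_X_y=True)  # stand-in healthcare dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

def poison_labels(y, rate, rng):
    """Flip the labels of a random fraction of training samples."""
    y = y.copy()
    n_flip = int(rate * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y[idx] = 1 - y[idx]  # binary label flip
    return y

def evaluate(y_true, y_pred, y_score):
    """Compute the metrics named in the abstract."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": recall_score(y_true, y_pred),  # tp / (tp + fn)
        "specificity": tn / (tn + fp),
        "f1": f1_score(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_score),
    }

# Train one model on trusted labels and one per poisoning rate,
# then compare their metrics on the same clean test set.
for rate in (0.0, 0.1, 0.3):
    y_poisoned = poison_labels(y_tr, rate, rng)
    clf = RandomForestClassifier(random_state=42).fit(X_tr, y_poisoned)
    scores = evaluate(y_te, clf.predict(X_te), clf.predict_proba(X_te)[:, 1])
    print(f"flip rate {rate:.0%}: " +
          ", ".join(f"{k}={v:.3f}" for k, v in scores.items()))
```

The sensitivity the study measures then shows up as the gap between the trusted-data run (flip rate 0%) and the corrupted-data runs on each reported metric.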