James Anibal, Jasmine Gunkel, Shaheen Awan, Hannah Huth, Hang Nguyen, Tram Le, Jean-Christophe Bélisle-Pipon, Micah Boyer, Lindsey Hazen, Yael Bensoussan, David Clifton, Bradford Wood
{"title":"医生现在要对你进行测谎:人工智能对病人进行事实核查的伦理问题。","authors":"James Anibal, Jasmine Gunkel, Shaheen Awan, Hannah Huth, Hang Nguyen, Tram Le, Jean-Christophe Bélisle-Pipon, Micah Boyer, Lindsey Hazen, Yael Bensoussan, David Clifton, Bradford Wood","doi":"","DOIUrl":null,"url":null,"abstract":"<p><p>Artificial intelligence (AI) methods have been proposed for the prediction of social behaviors which could be reasonably understood from patient-reported information. This raises novel ethical concerns about respect, privacy, and control over patient data. Ethical concerns surrounding clinical AI systems for social behavior verification can be divided into two main categories: (1) the potential for inaccuracies/biases within such systems, and (2) the impact on trust in patient-provider relationships with the introduction of automated AI systems for \"fact-checking\", particularly in cases where the data/models may contradict the patient. Additionally, this report simulated the misuse of a verification system using patient voice samples and identified a potential LLM bias against patient-reported information in favor of multi-dimensional data and the outputs of other AI methods (i.e., \"AI self-trust\"). Finally, recommendations were presented for mitigating the risk that AI verification methods will cause harm to patients or undermine the purpose of the healthcare system.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11468487/pdf/","citationCount":"0","resultStr":"{\"title\":\"The doctor will polygraph you now: ethical concerns with AI for fact-checking patients.\",\"authors\":\"James Anibal, Jasmine Gunkel, Shaheen Awan, Hannah Huth, Hang Nguyen, Tram Le, Jean-Christophe Bélisle-Pipon, Micah Boyer, Lindsey Hazen, Yael Bensoussan, David Clifton, Bradford Wood\",\"doi\":\"\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Artificial intelligence (AI) methods have been proposed for the prediction of social behaviors which could be reasonably understood from patient-reported information. This raises novel ethical concerns about respect, privacy, and control over patient data. Ethical concerns surrounding clinical AI systems for social behavior verification can be divided into two main categories: (1) the potential for inaccuracies/biases within such systems, and (2) the impact on trust in patient-provider relationships with the introduction of automated AI systems for \\\"fact-checking\\\", particularly in cases where the data/models may contradict the patient. Additionally, this report simulated the misuse of a verification system using patient voice samples and identified a potential LLM bias against patient-reported information in favor of multi-dimensional data and the outputs of other AI methods (i.e., \\\"AI self-trust\\\"). Finally, recommendations were presented for mitigating the risk that AI verification methods will cause harm to patients or undermine the purpose of the healthcare system.</p>\",\"PeriodicalId\":93888,\"journal\":{\"name\":\"ArXiv\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-11-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11468487/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ArXiv\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ArXiv","FirstCategoryId":"1085","ListUrlMain":"","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The doctor will polygraph you now: ethical concerns with AI for fact-checking patients.
Artificial intelligence (AI) methods have been proposed for the prediction of social behaviors which could be reasonably understood from patient-reported information. This raises novel ethical concerns about respect, privacy, and control over patient data. Ethical concerns surrounding clinical AI systems for social behavior verification can be divided into two main categories: (1) the potential for inaccuracies/biases within such systems, and (2) the impact on trust in patient-provider relationships with the introduction of automated AI systems for "fact-checking", particularly in cases where the data/models may contradict the patient. Additionally, this report simulated the misuse of a verification system using patient voice samples and identified a potential LLM bias against patient-reported information in favor of multi-dimensional data and the outputs of other AI methods (i.e., "AI self-trust"). Finally, recommendations were presented for mitigating the risk that AI verification methods will cause harm to patients or undermine the purpose of the healthcare system.