Effects of Artificial Intelligence Used in Healthcare on Patient and Physician Confidence

Elif Maltaş, Muhammet Gümüş, Emine Kızılkaya, Sibel Orhan
{"title":"在医疗保健中使用人工智能对病人和医生信心的影响","authors":"Elif Maltaş, Muhammet Gümüş, Emine Kızılkaya, Si̇bel Orhan","doi":"10.46648/gnj.301","DOIUrl":null,"url":null,"abstract":"Artificial intelligence and robotic systems are rapidly entering healthcare, playing key roles in certain medical functions, including diagnostics and clinical treatments. The focus in the development of health technology has been on human-machine interactions. This has led to a number of technology-centric problems. This study focuses on the impact of these technologies on the patient-doctor relationship and human-organization in the healthcare. It’s argued that artificial intelligence in health can have significant effects on patient-doctor trust. It focuses on three main drivers of trust that is potentially supported or disrupted by the introduction of artificial intelligence or robotic systems in healthcare. First, doctors are certified and licensed to practice medicine. A license; indicates that some individuals have certain skills, knowledge, and a high level of value. Second, physicians appear to play a role as part of an active duo tasked with providing care that promotes patient value. Finally, a patient's experiences with their doctor are a positive or negative driver of trust between patient-physician understanding. When a doctor interacts with a patient, he builds social and experiential \"capital\" with him, and this understanding leads to increased trust. It’s argued that healthcare artificial intelligence systems should be considered as assistive technologies that go beyond the usual functions of medical devices. As a result, AI systems in health need to be regulated to provide relevant values. It’s suggested for patients and physicians that three high-level principles can guide this effort. Medical professionals must be licensed in healthcare AI systems; Alternative care methods should be provided until the approval and “standard of care” given by the patient or caregiver prior to its implementation and in terms of artificial intelligence is accepted. In prioritizing these functions in regulatory measures, medical community favour the appropriate societal and interpersonal deployment and implementation of such health technologies.","PeriodicalId":394509,"journal":{"name":"Gevher Nesibe Journal IESDR","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Effects Of Artificial Intelligence Used In Healthcare On Confidence On Patient And Physician\",\"authors\":\"Elif Maltaş, Muhammet Gümüş, Emine Kızılkaya, Si̇bel Orhan\",\"doi\":\"10.46648/gnj.301\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence and robotic systems are rapidly entering healthcare, playing key roles in certain medical functions, including diagnostics and clinical treatments. The focus in the development of health technology has been on human-machine interactions. This has led to a number of technology-centric problems. This study focuses on the impact of these technologies on the patient-doctor relationship and human-organization in the healthcare. It’s argued that artificial intelligence in health can have significant effects on patient-doctor trust. It focuses on three main drivers of trust that is potentially supported or disrupted by the introduction of artificial intelligence or robotic systems in healthcare. 
First, doctors are certified and licensed to practice medicine. A license; indicates that some individuals have certain skills, knowledge, and a high level of value. Second, physicians appear to play a role as part of an active duo tasked with providing care that promotes patient value. Finally, a patient's experiences with their doctor are a positive or negative driver of trust between patient-physician understanding. When a doctor interacts with a patient, he builds social and experiential \\\"capital\\\" with him, and this understanding leads to increased trust. It’s argued that healthcare artificial intelligence systems should be considered as assistive technologies that go beyond the usual functions of medical devices. As a result, AI systems in health need to be regulated to provide relevant values. It’s suggested for patients and physicians that three high-level principles can guide this effort. Medical professionals must be licensed in healthcare AI systems; Alternative care methods should be provided until the approval and “standard of care” given by the patient or caregiver prior to its implementation and in terms of artificial intelligence is accepted. In prioritizing these functions in regulatory measures, medical community favour the appropriate societal and interpersonal deployment and implementation of such health technologies.\",\"PeriodicalId\":394509,\"journal\":{\"name\":\"Gevher Nesibe Journal IESDR\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Gevher Nesibe Journal IESDR\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.46648/gnj.301\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Gevher Nesibe Journal IESDR","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.46648/gnj.301","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Artificial intelligence and robotic systems are rapidly entering healthcare and already play key roles in certain medical functions, including diagnostics and clinical treatment. The development of health technology has focused on human-machine interaction, which has led to a number of technology-centric problems. This study examines the impact of these technologies on the patient-doctor relationship and on human organization in healthcare. It argues that artificial intelligence in health can have significant effects on patient-doctor trust, and it focuses on three main drivers of trust that may be supported or disrupted by the introduction of artificial intelligence or robotic systems in healthcare. First, doctors are certified and licensed to practice medicine; a license signals that an individual has particular skills, knowledge, and values. Second, physicians act as part of an active duo tasked with providing care that promotes patient value. Finally, a patient's experiences with their doctor are a positive or negative driver of trust and mutual understanding: when a doctor interacts with a patient, the two build social and experiential "capital" together, and this understanding leads to increased trust. The study argues that healthcare artificial intelligence systems should be regarded as assistive technologies that go beyond the usual functions of medical devices, and that AI systems in health therefore need to be regulated so that they uphold these values. Three high-level principles are suggested to guide this effort for patients and physicians: medical professionals must be licensed in the use of healthcare AI systems; alternative care methods should be provided until the patient or caregiver has approved the use of artificial intelligence prior to its implementation and it has been accepted as a "standard of care"; and by prioritizing these functions in regulatory measures, the medical community can favour the appropriate societal and interpersonal deployment and implementation of such health technologies.