{"title":"人工智能在心理健康护理中的风险。","authors":"Ceylon Dell","doi":"10.1111/jpm.13119","DOIUrl":null,"url":null,"abstract":"<p>Artificial intelligence (AI) has increasingly integrated into various aspects of healthcare, including diagnostics, care planning and patient management. In psychiatric healthcare specifically, conversational AI is seen as a potential solution to support psychiatric nurses in the assessment of psychiatric illnesses (Rebelo, Verboom, and Santos <span>2023</span>). However, it is important that the oversight of a psychiatric nurse is needed when integrating conversational AI into psychiatric care. This editorial explores the risk of using conversational AI for psychiatric assessments and emphasises the need for human intervention in AI-driven processes.</p><p>In community mental health settings, psychiatric nurses play a crucial role in managing a wide range of conditions. The increasing demand for care, driven by factors such as heightened public awareness and reduction of stigmatisation for psychiatric illnesses, has led to longer waiting times for services (British Medical Association <span>2024</span>). Conversational AI systems have been proposed to alleviate this pressure by streamlining the psychiatric assessment process (Rollwage et al. <span>2024</span>). One of the key benefits of AI is its potential to triage and prioritise those with the most urgent needs, thereby ensuring that critical cases are addressed promptly (Lu et al. <span>2023</span>).</p><p>Psychiatric assessments require a deep understanding of the patient's presentation, psychiatric symptoms and the context of patient behaviour. Conversational AI systems are based on language learning models (LLM) and analyse data they have been trained on previously, which is mainly in written format and often derived from patients' electronic health records initially, advises Yang et al. (<span>2022</span>). Electronic health records contain all patient notes relevant to the episode of care the patient is receiving. It can include quantitative data sets such as diagnoses, charts, patient demographics and patient-led questionnaires based on symptomology or more nuanced qualitative data sets, such as psychiatric nursing observations of patient behaviour and caregiver views (Goldstein et al. <span>2022</span>). These data points help to construct a picture of a patient's presentation as part of an assessment before a patient is spoken to. Indeed, the psychiatric nurse will often review referrals, previous history and information to gain some knowledge of the patient before assessing the needs for that episode of care. However, an AI could face significant challenges in interpreting patient information with the depth and nuance that a psychiatric nurse could, suggests Elyoseph, Levkovich, and Shinan-Altman (<span>2024</span>), potentially resulting in inaccuracies in assessment, which can lead to inaccurate severity assessment and poor treatment outcomes.</p><p>The notion of bias in AI systems is a significant issue in psychiatric care, particularly because these biases often originate from the clinicians themselves Meidert et al. (<span>2023</span>). As psychiatric nurses and other clinicians input information into electronic medical records, their own perceptions and potential biases towards the patient's culture and language, for example, can influence the information recorded, thus affecting training data for AI systems which are required to analyse a patient's psychiatric symptoms. 
In this way, the human bias impacts the AI bias. Particularly in mental health settings, where patients experience psychiatric symptoms in different ways, it is important to have accurate psychiatric assessment. For example, a conversational AI developed predominantly with patient data from Western healthcare settings could fail to interpret an observation from a psychiatric nurse and written expressions of psychiatric distress from the patient themselves in non-Western cultures (Teye et al. <span>2022</span>).</p><p>Psychiatric assessments involve more than recording symptoms and patient history. They require a range of skills such as empathetic listening, observation and effective communication, which are crucial for fostering trust and understanding. These skills help establish a connection that is vital for effective treatment (Stiger-Outshoven et al. <span>2024</span>). However, without the psychiatric nurse skill set, AI technologies may reduce these complex interactions to data points, failing to capture the patient's unique psychiatric experiences. This reductionist approach by AI may also overlook emotional, behavioural and psychological signals, often conveyed through less overt means such as changes in mood, tone of voice or pauses in conversation. These nonverbal cues are normally observed in face-to-face interactions. One limitation of AI is that it is currently reliant on textual data. Therefore, observations are not documented in the electronic medical record; it might not understand, misanalyse or simply overlook observed nonverbal communications from the psychiatric patient. This could potentially lead to inadequate care, highlighting the essential role of psychiatric nurse oversight in psychiatric assessments, whose keen observations, knowledge and understanding of these subtle cues are critical to delivering effective psychiatric care. As AI continues to support all aspects of healthcare, its application in psychiatric assessments presents opportunities and challenges. While conversational AI systems offer promising solutions, it is the specialism of psychiatric nurses that enables the interpretation of complex human behaviours and the establishment of therapeutic relationships—this cannot yet be fully replicated by AI technology but can be embraced by it, suggests Nashwan et al. (<span>2023</span>).</p><p>To truly benefit from conversational AI in psychiatric settings, AI systems should be designed and trained to enhance the efficiency and accuracy of psychiatric assessments with the guidance and expertise of skilled psychiatric nurses. With this method, AI systems can complement the experience of the psychiatric nurse and enhance the patient experience in psychiatric care.</p><p>The author has nothing to report.</p><p>The author declares no conflicts of interest.</p>","PeriodicalId":50076,"journal":{"name":"Journal of Psychiatric and Mental Health Nursing","volume":"32 2","pages":"382-383"},"PeriodicalIF":2.6000,"publicationDate":"2024-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jpm.13119","citationCount":"0","resultStr":"{\"title\":\"The Risks of Artificial Intelligence in Mental Health Care\",\"authors\":\"Ceylon Dell\",\"doi\":\"10.1111/jpm.13119\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Artificial intelligence (AI) has increasingly integrated into various aspects of healthcare, including diagnostics, care planning and patient management. 
In psychiatric healthcare specifically, conversational AI is seen as a potential solution to support psychiatric nurses in the assessment of psychiatric illnesses (Rebelo, Verboom, and Santos <span>2023</span>). However, it is important that the oversight of a psychiatric nurse is needed when integrating conversational AI into psychiatric care. This editorial explores the risk of using conversational AI for psychiatric assessments and emphasises the need for human intervention in AI-driven processes.</p><p>In community mental health settings, psychiatric nurses play a crucial role in managing a wide range of conditions. The increasing demand for care, driven by factors such as heightened public awareness and reduction of stigmatisation for psychiatric illnesses, has led to longer waiting times for services (British Medical Association <span>2024</span>). Conversational AI systems have been proposed to alleviate this pressure by streamlining the psychiatric assessment process (Rollwage et al. <span>2024</span>). One of the key benefits of AI is its potential to triage and prioritise those with the most urgent needs, thereby ensuring that critical cases are addressed promptly (Lu et al. <span>2023</span>).</p><p>Psychiatric assessments require a deep understanding of the patient's presentation, psychiatric symptoms and the context of patient behaviour. Conversational AI systems are based on language learning models (LLM) and analyse data they have been trained on previously, which is mainly in written format and often derived from patients' electronic health records initially, advises Yang et al. (<span>2022</span>). Electronic health records contain all patient notes relevant to the episode of care the patient is receiving. It can include quantitative data sets such as diagnoses, charts, patient demographics and patient-led questionnaires based on symptomology or more nuanced qualitative data sets, such as psychiatric nursing observations of patient behaviour and caregiver views (Goldstein et al. <span>2022</span>). These data points help to construct a picture of a patient's presentation as part of an assessment before a patient is spoken to. Indeed, the psychiatric nurse will often review referrals, previous history and information to gain some knowledge of the patient before assessing the needs for that episode of care. However, an AI could face significant challenges in interpreting patient information with the depth and nuance that a psychiatric nurse could, suggests Elyoseph, Levkovich, and Shinan-Altman (<span>2024</span>), potentially resulting in inaccuracies in assessment, which can lead to inaccurate severity assessment and poor treatment outcomes.</p><p>The notion of bias in AI systems is a significant issue in psychiatric care, particularly because these biases often originate from the clinicians themselves Meidert et al. (<span>2023</span>). As psychiatric nurses and other clinicians input information into electronic medical records, their own perceptions and potential biases towards the patient's culture and language, for example, can influence the information recorded, thus affecting training data for AI systems which are required to analyse a patient's psychiatric symptoms. In this way, the human bias impacts the AI bias. Particularly in mental health settings, where patients experience psychiatric symptoms in different ways, it is important to have accurate psychiatric assessment. 
For example, a conversational AI developed predominantly with patient data from Western healthcare settings could fail to interpret an observation from a psychiatric nurse and written expressions of psychiatric distress from the patient themselves in non-Western cultures (Teye et al. <span>2022</span>).</p><p>Psychiatric assessments involve more than recording symptoms and patient history. They require a range of skills such as empathetic listening, observation and effective communication, which are crucial for fostering trust and understanding. These skills help establish a connection that is vital for effective treatment (Stiger-Outshoven et al. <span>2024</span>). However, without the psychiatric nurse skill set, AI technologies may reduce these complex interactions to data points, failing to capture the patient's unique psychiatric experiences. This reductionist approach by AI may also overlook emotional, behavioural and psychological signals, often conveyed through less overt means such as changes in mood, tone of voice or pauses in conversation. These nonverbal cues are normally observed in face-to-face interactions. One limitation of AI is that it is currently reliant on textual data. Therefore, observations are not documented in the electronic medical record; it might not understand, misanalyse or simply overlook observed nonverbal communications from the psychiatric patient. This could potentially lead to inadequate care, highlighting the essential role of psychiatric nurse oversight in psychiatric assessments, whose keen observations, knowledge and understanding of these subtle cues are critical to delivering effective psychiatric care. As AI continues to support all aspects of healthcare, its application in psychiatric assessments presents opportunities and challenges. While conversational AI systems offer promising solutions, it is the specialism of psychiatric nurses that enables the interpretation of complex human behaviours and the establishment of therapeutic relationships—this cannot yet be fully replicated by AI technology but can be embraced by it, suggests Nashwan et al. (<span>2023</span>).</p><p>To truly benefit from conversational AI in psychiatric settings, AI systems should be designed and trained to enhance the efficiency and accuracy of psychiatric assessments with the guidance and expertise of skilled psychiatric nurses. 
With this method, AI systems can complement the experience of the psychiatric nurse and enhance the patient experience in psychiatric care.</p><p>The author has nothing to report.</p><p>The author declares no conflicts of interest.</p>\",\"PeriodicalId\":50076,\"journal\":{\"name\":\"Journal of Psychiatric and Mental Health Nursing\",\"volume\":\"32 2\",\"pages\":\"382-383\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2024-09-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jpm.13119\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Psychiatric and Mental Health Nursing\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/jpm.13119\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"NURSING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Psychiatric and Mental Health Nursing","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/jpm.13119","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"NURSING","Score":null,"Total":0}
Artificial intelligence (AI) has become increasingly integrated into many aspects of healthcare, including diagnostics, care planning and patient management. In psychiatric healthcare specifically, conversational AI is seen as a potential means of supporting psychiatric nurses in the assessment of psychiatric illness (Rebelo, Verboom, and Santos 2023). However, the oversight of a psychiatric nurse remains essential when conversational AI is integrated into psychiatric care. This editorial explores the risks of using conversational AI for psychiatric assessments and emphasises the need for human intervention in AI-driven processes.
In community mental health settings, psychiatric nurses play a crucial role in managing a wide range of conditions. The increasing demand for care, driven by factors such as heightened public awareness and reduced stigmatisation of psychiatric illness, has led to longer waiting times for services (British Medical Association 2024). Conversational AI systems have been proposed to alleviate this pressure by streamlining the psychiatric assessment process (Rollwage et al. 2024). One of the key benefits of AI is its potential to triage and prioritise those with the most urgent needs, thereby ensuring that critical cases are addressed promptly (Lu et al. 2023).
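To make the triage idea concrete, a minimal sketch follows. It is purely illustrative: the referral fields, indicator names and counting heuristic are assumptions made for this editorial rather than a validated triage instrument or any existing product. The point is only that such a system orders the waiting list and offers that ordering to the psychiatric nurse; it does not make the clinical decision.

```python
from dataclasses import dataclass, field


@dataclass
class Referral:
    """One referral awaiting psychiatric assessment (illustrative fields only)."""
    patient_id: str
    risk_indicators: dict[str, bool] = field(default_factory=dict)


def triage_score(referral: Referral) -> int:
    """Toy urgency score: counts the risk indicators flagged in the referral.
    A real service would use a clinically validated instrument, not this heuristic."""
    return sum(1 for flagged in referral.risk_indicators.values() if flagged)


def prioritise(waiting_list: list[Referral]) -> list[Referral]:
    """Order the waiting list so the most urgent referrals are reviewed first.
    The ordering is a suggestion offered to the psychiatric nurse, not an automatic decision."""
    return sorted(waiting_list, key=triage_score, reverse=True)


if __name__ == "__main__":
    waiting_list = [
        Referral("P-001", {"self_harm_mentioned": False, "recent_crisis_contact": False}),
        Referral("P-002", {"self_harm_mentioned": True, "recent_crisis_contact": True}),
    ]
    for referral in prioritise(waiting_list):
        print(referral.patient_id, triage_score(referral))  # P-002 is listed first
```

Even in this toy form, the quality of the ordering depends entirely on what has been recorded about the patient, which is precisely where the concerns discussed below arise.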
Psychiatric assessments require a deep understanding of the patient's presentation, psychiatric symptoms and the context of the patient's behaviour. Conversational AI systems are built on large language models (LLMs) and analyse data of the kind they have previously been trained on; this is mainly written material, often derived initially from patients' electronic health records, advises Yang et al. (2022). Electronic health records contain all patient notes relevant to the episode of care the patient is receiving. They can include quantitative data, such as diagnoses, charts, patient demographics and patient-led questionnaires based on symptomatology, as well as more nuanced qualitative data, such as psychiatric nursing observations of patient behaviour and caregiver views (Goldstein et al. 2022). These data points help to construct a picture of the patient's presentation as part of an assessment before the patient is spoken to. Indeed, the psychiatric nurse will often review referrals, previous history and other information to gain some knowledge of the patient before assessing the needs for that episode of care. However, an AI may struggle to interpret patient information with the depth and nuance that a psychiatric nurse can, suggest Elyoseph, Levkovich, and Shinan-Altman (2024), potentially leading to inaccurate severity assessment and poor treatment outcomes.
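As a rough illustration of how these quantitative and qualitative data points might be gathered into a pre-assessment picture, the sketch below models one episode of care. The field names and structure are assumptions invented for this illustration, not a real electronic health record schema, and the summary it produces is exactly the kind of text-only material a language model would work from.

```python
from dataclasses import dataclass, field


@dataclass
class EpisodeRecord:
    """Illustrative slice of an electronic health record for one episode of care.
    The field names are assumptions for this sketch, not a real EHR schema."""
    diagnoses: list[str] = field(default_factory=list)                  # coded, quantitative
    questionnaire_scores: dict[str, int] = field(default_factory=dict)  # patient-led measures
    nursing_observations: list[str] = field(default_factory=list)       # free text, qualitative
    caregiver_views: list[str] = field(default_factory=list)            # free text, qualitative


def pre_assessment_summary(record: EpisodeRecord) -> str:
    """Assemble a plain-text picture of the presentation, as a nurse might before
    speaking to the patient. A language model would consume something similar,
    but without the clinical judgement the nurse applies to it."""
    lines = [f"Coded diagnoses: {', '.join(record.diagnoses) or 'none recorded'}"]
    lines += [f"{measure}: {score}" for measure, score in record.questionnaire_scores.items()]
    lines += [f"Nursing observation: {note}" for note in record.nursing_observations]
    lines += [f"Caregiver view: {view}" for view in record.caregiver_views]
    if not record.nursing_observations:
        lines.append("Warning: no documented nursing observations; summary omits nonverbal cues.")
    return "\n".join(lines)
```

Anything the nurse observed but did not document never reaches this summary, which anticipates the limitation discussed further below.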
Bias in AI systems is a significant issue in psychiatric care, particularly because these biases often originate from the clinicians themselves (Meidert et al. 2023). As psychiatric nurses and other clinicians input information into electronic medical records, their own perceptions of, and potential biases towards, the patient's culture and language, for example, can influence the information recorded, thus affecting the training data for AI systems that are expected to analyse a patient's psychiatric symptoms. In this way, human bias feeds into AI bias. Accurate psychiatric assessment is especially important in mental health settings, where patients experience psychiatric symptoms in different ways. For example, a conversational AI developed predominantly with patient data from Western healthcare settings could fail to interpret a psychiatric nurse's observations, or a patient's own written expressions of psychiatric distress, in non-Western cultures (Teye et al. 2022).
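One modest, technical safeguard against this kind of skew is to audit how the training records represent different groups before a model is relied on. The sketch below is deliberately simple and assumption-laden (the record fields and attribute name are invented for illustration); it counts only what has been recorded and says nothing about subtler linguistic or clinical bias, so it complements rather than replaces clinical scrutiny.

```python
from collections import Counter


def representation_audit(training_records: list[dict],
                         attribute: str = "cultural_background") -> Counter:
    """Count how often each value of a demographic attribute appears in the training notes.
    A heavily skewed, or largely unrecorded, distribution is one crude warning sign that
    the model may misread presentations from under-represented groups.
    The record fields here are assumptions made for this sketch."""
    return Counter(record.get(attribute, "unrecorded") for record in training_records)


if __name__ == "__main__":
    records = [
        {"note": "Patient reports persistent low mood.", "cultural_background": "UK"},
        {"note": "Patient describes distress in somatic terms.", "cultural_background": "UK"},
        {"note": "Referral letter only."},  # attribute never recorded by the clinician
    ]
    print(representation_audit(records))  # Counter({'UK': 2, 'unrecorded': 1})
```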
Psychiatric assessments involve more than recording symptoms and patient history. They require a range of skills, such as empathetic listening, observation and effective communication, which are crucial for fostering trust and understanding. These skills help establish a connection that is vital for effective treatment (Stiger-Outshoven et al. 2024). However, without the psychiatric nurse's skill set, AI technologies may reduce these complex interactions to data points, failing to capture the patient's unique psychiatric experiences. This reductionist approach may also overlook emotional, behavioural and psychological signals that are often conveyed through less overt means, such as changes in mood, tone of voice or pauses in conversation. These nonverbal cues are normally observed in face-to-face interactions. One limitation of AI is that it is currently reliant on textual data: if observations are not documented in the electronic medical record, it may fail to recognise, may misanalyse, or may simply overlook nonverbal communication observed in the psychiatric patient. This could lead to inadequate care, highlighting the essential role of the psychiatric nurse in overseeing psychiatric assessments: their keen observation, knowledge and understanding of these subtle cues are critical to delivering effective psychiatric care. As AI continues to support all aspects of healthcare, its application in psychiatric assessments presents both opportunities and challenges. While conversational AI systems offer promising solutions, it is the specialism of psychiatric nurses that enables the interpretation of complex human behaviours and the establishment of therapeutic relationships; this cannot yet be fully replicated by AI technology, but it can be embraced by it, suggest Nashwan et al. (2023).
To truly benefit from conversational AI in psychiatric settings, AI systems should be designed and trained, with the guidance and expertise of skilled psychiatric nurses, to enhance the efficiency and accuracy of psychiatric assessments. With this approach, AI systems can complement the experience of the psychiatric nurse and enhance the patient experience in psychiatric care.
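What such nurse-guided design might look like in practice can be sketched in outline: the AI produces only a draft, and an explicit sign-off step keeps the psychiatric nurse as the decision-maker. The structure and names below are hypothetical, invented for this illustration rather than drawn from any existing system.

```python
from dataclasses import dataclass


@dataclass
class DraftAssessment:
    """An AI-generated draft: nothing here enters the care record until a nurse reviews it."""
    patient_id: str
    suggested_priority: str
    summary: str
    nurse_approved: bool = False
    nurse_comment: str = ""


def nurse_sign_off(draft: DraftAssessment, approved: bool, comment: str) -> DraftAssessment:
    """The psychiatric nurse remains the decision-maker: the draft becomes part of the
    record only once explicitly approved, and any correction is kept alongside it."""
    draft.nurse_approved = approved
    draft.nurse_comment = comment
    return draft


if __name__ == "__main__":
    draft = DraftAssessment(
        patient_id="P-002",
        suggested_priority="urgent",
        summary="Reports low mood and a recent crisis-line contact.",
    )
    reviewed = nurse_sign_off(draft, approved=True,
                              comment="Priority agreed; explore family context at assessment.")
    print(reviewed.nurse_approved, reviewed.nurse_comment)
```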
The author has nothing to report.

The author declares no conflicts of interest.

Journal Introduction:
The Journal of Psychiatric and Mental Health Nursing is an international journal which publishes research and scholarly papers that advance the development of policy, practice, research and education in all aspects of mental health nursing. We publish rigorously conducted research, literature reviews, essays and debates, and consumer practitioner narratives; all of which add new knowledge and advance practice globally.
All papers must have clear implications for mental health nursing either solely or part of multidisciplinary practice. Papers are welcomed which draw on single or multiple research and academic disciplines. We give space to practitioner and consumer perspectives and ensure research published in the journal can be understood by a wide audience. We encourage critical debate and exchange of ideas and therefore welcome letters to the editor and essays and debates in mental health.