{"title":"使用聊天 GPT 评估警察威胁、风险和伤害","authors":"Eric Halford , Andrew Webster","doi":"10.1016/j.ijlcj.2024.100686","DOIUrl":null,"url":null,"abstract":"<div><p>General purpose artificial intelligence (GPAI) is a form of advanced AI system that includes the recently introduced ChatGPT. GPAI is known for its capacity to understand and emulate human responses, and potentially offers an opportunity to reduce human error when conducting tasks that involve analysis, judgement, and reasoning. To support officers to do this, the police presently use a range of decision-making support tools, one of which is called THRIVE (Threat, Harm, Risk, Investigation, Vulnerability, and Engagement). THRIVE is designed to provide police practitioners with a model to improve their identification and response to vulnerability. Despite the existence of such decision models, a 2020 meta-analysis of police cases resulting in death or serious injury identified contributory failures that included poor risk identification, risk management, failure to adhere to evidentiary processes, poor criminal investigations, and inadequate police engagement with victims, including the level of care and assistance provided (Allnock, et al, 2020). Importantly, this report outlined human error as being a major underpinning factor of the failures.</p><p>Although GPAI offers an opportunity to improve analysis, judgement, and reasoning, such systems have not yet been tested in policing, a field where any reduction in human error, particularly in the assessment of threat, harm, risk, and vulnerability can potentially save lives. This study is the first attempt to do this by using the chain-of-thought prompt methodology to test the GPAI ChatGPT (3.5 vs 4) in a controlled environment using 30 life-like police scenarios, crafted, and analyzed by expert practitioners. In doing so, we identify that ChatGPT 4 significantly outperforms its 3.5 predecessor, indicating that GPAI presents considerable opportunity in policing. However, systems that use this technology require extensive directional prompting to ensure outputs that can be considered accurate, and therefore, potentially safe to utilize in an operational setting. The article concludes by discussing how practitioners and researchers can further refine police related chain-of-thought prompts or use application programming interfaces (APIs) to improve responses provided by such GPAI.</p></div>","PeriodicalId":46026,"journal":{"name":"International Journal of Law Crime and Justice","volume":"78 ","pages":"Article 100686"},"PeriodicalIF":1.0000,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1756061624000387/pdfft?md5=6e6e9e9c1d5fed6b47eb936ebe222ce0&pid=1-s2.0-S1756061624000387-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Using chat GPT to evaluate police threats, risk and harm\",\"authors\":\"Eric Halford , Andrew Webster\",\"doi\":\"10.1016/j.ijlcj.2024.100686\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>General purpose artificial intelligence (GPAI) is a form of advanced AI system that includes the recently introduced ChatGPT. GPAI is known for its capacity to understand and emulate human responses, and potentially offers an opportunity to reduce human error when conducting tasks that involve analysis, judgement, and reasoning. 
To support officers to do this, the police presently use a range of decision-making support tools, one of which is called THRIVE (Threat, Harm, Risk, Investigation, Vulnerability, and Engagement). THRIVE is designed to provide police practitioners with a model to improve their identification and response to vulnerability. Despite the existence of such decision models, a 2020 meta-analysis of police cases resulting in death or serious injury identified contributory failures that included poor risk identification, risk management, failure to adhere to evidentiary processes, poor criminal investigations, and inadequate police engagement with victims, including the level of care and assistance provided (Allnock, et al, 2020). Importantly, this report outlined human error as being a major underpinning factor of the failures.</p><p>Although GPAI offers an opportunity to improve analysis, judgement, and reasoning, such systems have not yet been tested in policing, a field where any reduction in human error, particularly in the assessment of threat, harm, risk, and vulnerability can potentially save lives. This study is the first attempt to do this by using the chain-of-thought prompt methodology to test the GPAI ChatGPT (3.5 vs 4) in a controlled environment using 30 life-like police scenarios, crafted, and analyzed by expert practitioners. In doing so, we identify that ChatGPT 4 significantly outperforms its 3.5 predecessor, indicating that GPAI presents considerable opportunity in policing. However, systems that use this technology require extensive directional prompting to ensure outputs that can be considered accurate, and therefore, potentially safe to utilize in an operational setting. The article concludes by discussing how practitioners and researchers can further refine police related chain-of-thought prompts or use application programming interfaces (APIs) to improve responses provided by such GPAI.</p></div>\",\"PeriodicalId\":46026,\"journal\":{\"name\":\"International Journal of Law Crime and Justice\",\"volume\":\"78 \",\"pages\":\"Article 100686\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2024-07-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S1756061624000387/pdfft?md5=6e6e9e9c1d5fed6b47eb936ebe222ce0&pid=1-s2.0-S1756061624000387-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Law Crime and Justice\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1756061624000387\",\"RegionNum\":4,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"CRIMINOLOGY & PENOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Law Crime and Justice","FirstCategoryId":"90","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1756061624000387","RegionNum":4,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"CRIMINOLOGY & PENOLOGY","Score":null,"Total":0}
Using chat GPT to evaluate police threats, risk and harm
General-purpose artificial intelligence (GPAI) refers to a class of advanced AI systems that includes the recently introduced ChatGPT. GPAI is known for its capacity to understand and emulate human responses, and it potentially offers an opportunity to reduce human error in tasks that involve analysis, judgement, and reasoning. To support officers in such tasks, the police currently use a range of decision-support tools, one of which is THRIVE (Threat, Harm, Risk, Investigation, Vulnerability, and Engagement). THRIVE is designed to provide police practitioners with a model for improving their identification of, and response to, vulnerability. Despite the existence of such decision models, a 2020 meta-analysis of police cases resulting in death or serious injury identified contributory failures that included poor risk identification and risk management, failure to adhere to evidentiary processes, poor criminal investigations, and inadequate police engagement with victims, including the level of care and assistance provided (Allnock et al., 2020). Importantly, the report identified human error as a major factor underpinning these failures.
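For illustration, a THRIVE assessment can be pictured as a structured record graded across the model's six dimensions. The Python sketch below is purely hypothetical: the three-level grading scale and the priority-response rule are assumptions made for this example, not part of the official THRIVE model.

    # Purely illustrative sketch of a THRIVE-style assessment record.
    # The three-level grading scale and the priority rule are assumptions
    # for this example, not part of the official THRIVE model.
    from dataclasses import dataclass
    from enum import Enum

    class Grading(Enum):
        LOW = "low"
        MEDIUM = "medium"
        HIGH = "high"

    @dataclass
    class ThriveAssessment:
        threat: Grading         # immediacy and severity of any threat posed
        harm: Grading           # actual or likely harm to those involved
        risk: Grading           # likelihood that the situation escalates
        investigation: Grading  # opportunities to secure and preserve evidence
        vulnerability: Grading  # vulnerability of the victim or others present
        engagement: Grading     # level of care and contact the victim needs

        def requires_priority_response(self) -> bool:
            # Illustrative rule: any single "high" grading triggers priority.
            return Grading.HIGH in vars(self).values()

    # Example: a domestic incident as a call handler might grade it.
    assessment = ThriveAssessment(
        threat=Grading.HIGH, harm=Grading.MEDIUM, risk=Grading.HIGH,
        investigation=Grading.MEDIUM, vulnerability=Grading.HIGH,
        engagement=Grading.MEDIUM,
    )
    assert assessment.requires_priority_response()

A practitioner's judgement on each dimension would populate such a record, which downstream systems could then use to grade and route the incident.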
Although GPAI offers an opportunity to improve analysis, judgement, and reasoning, such systems have not yet been tested in policing, a field where any reduction in human error, particularly in the assessment of threat, harm, risk, and vulnerability, can potentially save lives. This study is the first attempt to do so, using a chain-of-thought prompting methodology to test the GPAI ChatGPT (versions 3.5 and 4) in a controlled environment across 30 lifelike police scenarios crafted and analyzed by expert practitioners. We find that ChatGPT 4 significantly outperforms its 3.5 predecessor, indicating that GPAI presents considerable opportunity for policing. However, systems that use this technology require extensive directional prompting to ensure outputs that can be considered accurate and, therefore, potentially safe to use in an operational setting. The article concludes by discussing how practitioners and researchers can further refine police-related chain-of-thought prompts or use application programming interfaces (APIs) to improve the responses provided by such GPAI.
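To make the chain-of-thought and API points concrete, the sketch below shows how a THRIVE-style chain-of-thought prompt might be submitted to a model programmatically. It is a minimal sketch assuming the openai Python client (v1+); the prompt wording, scenario, and settings are illustrative and are not the authors' actual study materials.

    # Minimal sketch: issuing a chain-of-thought THRIVE prompt via an API.
    # Assumes the `openai` Python client (v1+) with OPENAI_API_KEY set in
    # the environment; prompt wording and scenario are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are assisting a police call handler. Assess the scenario with "
        "the THRIVE model. Reason step by step through each element in "
        "order: Threat, Harm, Risk, Investigation, Vulnerability, and "
        "Engagement. Explain your reasoning for each element before giving "
        "an overall grading and a recommended response."
    )

    scenario = (
        "A caller reports that her former partner, who is subject to a "
        "restraining order, has been seen outside her home twice tonight."
    )

    response = client.chat.completions.create(
        model="gpt-4",  # the study compared ChatGPT 3.5 and 4
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": scenario},
        ],
        temperature=0,  # favour reproducible, conservative outputs
    )

    print(response.choices[0].message.content)

Stepwise instructions of this kind are one form of the directional prompting the article describes; calling the model through an API additionally allows the system prompt, temperature, and output format to be fixed in advance rather than retyped for each query.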
Journal introduction:
The International Journal of Law, Crime and Justice is an international, fully peer-reviewed journal that welcomes high-quality, theoretically informed papers across a wide range of fields linked to criminological research and analysis. It invites submissions relating to: studies of crime and interpretations of forms and dimensions of criminality; analyses of criminological debates and contested theoretical frameworks of criminological analysis; research and analysis of criminal justice and penal policy and practices; and research and analysis of policing policies, forms, and practices. We particularly welcome submissions relating to more recent and emerging areas of criminological enquiry, including cyber-enabled crime, fraud-related crime, terrorism, and hate crime.