Enhancing trust in Large Language Models for streamlined decision-making in military operations

IF 4.2 · CAS Tier 3 (Computer Science) · JCR Q2, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Emanuela Marasco, Thirimachos Bourlai
DOI: 10.1016/j.imavis.2025.105489
Image and Vision Computing, Volume 158, Article 105489
Published: 2025-03-18 (Journal Article)
Citations: 0

Abstract

Large Language Models (LLMs) have the potential to enhance decision-making significantly in core military operational contexts that support training, readiness, and mission execution under low-risk conditions. Still, their implementation must be approached carefully, considering the associated risks. This paper examines the integration of LLMs into military decision-making, emphasizing the LLM’s ability to improve intelligence analysis, enhance situational awareness, support strategic planning, predict threats, optimize logistics, and strengthen cybersecurity. The paper also considers misinterpretation, bias, misinformation, or overreliance on AI-generated suggestions, potentially leading to errors in routine but critical decision-making processes. Our work concludes by proposing solutions and promoting the responsible implementation of LLMs to ensure their effective and ethical use in military operations. To build trust in LLMs, this paper advocates for developing cybersecurity frameworks, transparency, and ethical oversight. It further suggests using machine unlearning (MU) to selectively remove outdated or compromised data from LLM training datasets, preserving the integrity of the insights they generate. The paper underscores the imperative for integrating LLMs in low-risk military contexts, coupled with sustained research efforts to mitigate potential hazards.
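The abstract's proposal to use machine unlearning (MU) to remove outdated or compromised data can be illustrated with a toy sketch. This is not the paper's method: it shows exact unlearning only for the special case of a model trained via additive sufficient statistics (here, 1-D least squares), where subtracting a data point's contribution yields the same parameters as retraining from scratch. All class and method names are illustrative.

```python
# Toy sketch of exact machine unlearning (illustrative only, not the
# paper's approach): when training reduces to additive sufficient
# statistics, a compromised data point can be "unlearned" by subtracting
# its contribution, matching full retraining without the point.

class UnlearnableLinearModel:
    """1-D least-squares fit y ≈ a*x + b, maintained via running sums."""

    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxx = self.sxy = 0.0

    def learn(self, x, y):
        # Add the point's contribution to each sufficient statistic.
        self.n += 1
        self.sx += x
        self.sy += y
        self.sxx += x * x
        self.sxy += x * y

    def unlearn(self, x, y):
        # Exact removal: subtract the same contributions back out.
        self.n -= 1
        self.sx -= x
        self.sy -= y
        self.sxx -= x * x
        self.sxy -= x * y

    def params(self):
        # Closed-form least-squares solution from the running sums.
        denom = self.n * self.sxx - self.sx ** 2
        a = (self.n * self.sxy - self.sx * self.sy) / denom
        b = (self.sy - a * self.sx) / self.n
        return a, b


if __name__ == "__main__":
    model = UnlearnableLinearModel()
    # Three clean points on y = x, plus one "compromised" outlier.
    for x, y in [(0, 0), (1, 1), (2, 2), (3, 10)]:
        model.learn(x, y)
    model.unlearn(3, 10)  # selectively remove the compromised point
    print(model.params())  # recovers the clean fit a=1.0, b=0.0
```

For LLMs, where parameters are not additive in the data, MU instead relies on approximate techniques (e.g., fine-tuning-based forgetting), but the contract is the same: the post-unlearning model should behave as if the removed data had never been seen.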
Source Journal
Image and Vision Computing (Engineering: Electrical & Electronic)
CiteScore: 8.50
Self-citation rate: 8.50%
Articles per year: 143
Review time: 7.8 months
Aims and Scope: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.