{"title":"学会信任天网:在网络空间与人工智能对接","authors":"Christopher Whyte","doi":"10.1080/13523260.2023.2180882","DOIUrl":null,"url":null,"abstract":"ABSTRACT The use of AI to automate defense and intelligence tasks is increasing. And yet, little is known about how algorithmic analyses, data capture, and decisions will be perceived by elite decision-makers. This article presents the results of two experiments that explore manifestations of AI systems in the cyber conflict decision-making loop. Though findings suggest that technical expertise positively impacts respondents’ ability to gauge the potential utility and credibility of an input (indicating that training can, in fact, overcome bias), the perception of human agency in the loop even in the presence of AI inputs mitigates this effect and makes decision-makers more willing to operate on less information. This finding is worrying given the extensive challenges involved in effectively building human oversight and opportunity for intervention into any effective employment of AI for national security purposes. The article considers these obstacles and potential solutions in the context of data gathered.","PeriodicalId":46729,"journal":{"name":"Contemporary Security Policy","volume":"44 1","pages":"308 - 344"},"PeriodicalIF":4.0000,"publicationDate":"2023-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Learning to trust Skynet: Interfacing with artificial intelligence in cyberspace\",\"authors\":\"Christopher Whyte\",\"doi\":\"10.1080/13523260.2023.2180882\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT The use of AI to automate defense and intelligence tasks is increasing. And yet, little is known about how algorithmic analyses, data capture, and decisions will be perceived by elite decision-makers. This article presents the results of two experiments that explore manifestations of AI systems in the cyber conflict decision-making loop. Though findings suggest that technical expertise positively impacts respondents’ ability to gauge the potential utility and credibility of an input (indicating that training can, in fact, overcome bias), the perception of human agency in the loop even in the presence of AI inputs mitigates this effect and makes decision-makers more willing to operate on less information. This finding is worrying given the extensive challenges involved in effectively building human oversight and opportunity for intervention into any effective employment of AI for national security purposes. 
The article considers these obstacles and potential solutions in the context of data gathered.\",\"PeriodicalId\":46729,\"journal\":{\"name\":\"Contemporary Security Policy\",\"volume\":\"44 1\",\"pages\":\"308 - 344\"},\"PeriodicalIF\":4.0000,\"publicationDate\":\"2023-03-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Contemporary Security Policy\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1080/13523260.2023.2180882\",\"RegionNum\":1,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"INTERNATIONAL RELATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Contemporary Security Policy","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1080/13523260.2023.2180882","RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INTERNATIONAL RELATIONS","Score":null,"Total":0}
Learning to trust Skynet: Interfacing with artificial intelligence in cyberspace
ABSTRACT The use of AI to automate defense and intelligence tasks is increasing. And yet, little is known about how algorithmic analyses, data capture, and decisions will be perceived by elite decision-makers. This article presents the results of two experiments that explore manifestations of AI systems in the cyber conflict decision-making loop. Though findings suggest that technical expertise positively impacts respondents’ ability to gauge the potential utility and credibility of an input (indicating that training can, in fact, overcome bias), the perception of human agency in the loop even in the presence of AI inputs mitigates this effect and makes decision-makers more willing to operate on less information. This finding is worrying given the extensive challenges involved in effectively building human oversight and opportunity for intervention into any effective employment of AI for national security purposes. The article considers these obstacles and potential solutions in the context of data gathered.
Journal introduction:
One of the oldest peer-reviewed journals in international conflict and security, Contemporary Security Policy promotes theoretically based research on policy problems of armed conflict, intervention, and conflict resolution. Since it first appeared in 1980, CSP has established its unique place as a meeting ground for research at the nexus of theory and policy.
Bridging the gap between academic and policy approaches, CSP offers policy analysts a place to pursue fundamental issues, and academic writers a venue for addressing policy. Major fields of concern include:
War and armed conflict
Peacekeeping
Conflict resolution
Arms control and disarmament
Defense policy
Strategic culture
International institutions.
CSP is committed to a broad range of intellectual perspectives. Articles promote new analytical approaches, iconoclastic interpretations, and previously overlooked perspectives. Its pages encourage novel contributions and outlooks, not particular methodologies or policy goals. Its geographical scope is worldwide and includes security challenges in Europe, Africa, the Middle East, and Asia. Authors are encouraged to examine established priorities in innovative ways and to apply traditional methods to new problems.