IEEE Transactions on Technology and Society: Latest Articles

AI in Cybersecurity: The Paradox
IEEE Transactions on Technology and Society | Pub Date: 2023-06-14 | DOI: 10.1109/TTS.2023.3280109
Katina Michael; Roba Abbas; George Roussos
Abstract: Modern artificial intelligence is inherently paradoxical in many ways. While AI aims to increase automation, it also requires more intimate human involvement to reflect on the insights generated (the automation paradox). While AI results in job displacement, it also creates new jobs, some simply to provide the necessary support systems for those newly unemployed (the transition paradox). And as generative AI takes away control over the creative process, it also offers new creative opportunities (the creativity paradox). This article considers another paradox, which relates to the fact that computational systems created using AI can be used both for public good in civilian applications and for harm across a range of application areas and settings [A1]. This contradiction is explored within an organizational and governmental context, where modern AI relies on data that may be externally or internally sourced [A2]. External data sources [A3] include open-source intelligence (OSINT), such as information available on the Internet and the dark web, while internal data sources may include proprietary data found within an organizational or wider governmental context [A4]. A further relevant consideration is the expanding role of the Internet of Things in supporting smart infrastructures, which has created new vulnerabilities [A5].
Vol. 4, No. 2, pp. 104-109
Citations: 0
IEEE Transactions on Technology and Society Publication Information
IEEE Transactions on Technology and Society | Pub Date: 2023-06-14 | DOI: 10.1109/TTS.2023.3276155
Vol. 4, No. 2, pp. C2-C2
Citations: 0
Discerning Between the “Easy” and “Hard” Problems of AI Governance
IEEE Transactions on Technology and Society | Pub Date: 2023-04-17 | DOI: 10.1109/TTS.2023.3267382
Matti Minkkinen; Matti Mäntymäki
Abstract: While there is widespread consensus that artificial intelligence (AI) needs to be governed owing to its rapid diffusion and societal implications, the current scholarly discussion on AI governance is dispersed across numerous disciplines and problem domains. This paper clarifies the situation by discerning two problem areas, metaphorically titled the “easy” and “hard” problems of AI governance, using a dialectic theory synthesis approach. The “easy problem” of AI governance concerns how organizations’ design, development, and use of AI systems align with laws, values, and norms stemming from legislation, ethics guidelines, and the surrounding society. Organizations can provisionally solve the “easy problem” by implementing appropriate organizational mechanisms to govern data, algorithms, and algorithmic systems. The “hard problem” of AI governance concerns AI as a general-purpose technology that transforms organizations and societies. Rather than a matter to be resolved, the “hard problem” is a sensemaking process regarding socio-technical change. Partial solutions to the “hard problem” may open unforeseen issues. While societies should not lose track of the “hard problem” of AI governance, there is significant value in solving the “easy problem” for two reasons. First, the “easy problem” can be provisionally solved by tackling bias, harm, and transparency issues. Second, solving the “easy problem” helps solve the “hard problem,” as responsible organizational AI practices create virtuous rather than vicious cycles.
Vol. 4, No. 2, pp. 188-194
Citations: 1
The Ethics of Mandatory Exoskeleton Use in Commercial and Industrial Settings
IEEE Transactions on Technology and Society | Pub Date: 2023-04-05 | DOI: 10.1109/TTS.2023.3264807
Timothy R. Pote; Nicole V. Asbeck; Alan T. Asbeck
Abstract: Research shows that exoskeletons can reduce muscle activity and decrease the risk of injury for workers. Exoskeletons are therefore becoming more prevalent in industrial workplaces, and their use in some circumstances has already been mandated. It is probable that additional employers will mandate the use of exoskeletons as a means of mitigating injuries to their employees. This presents ethical concerns because employers hold power over employees’ wages and employment. Some employees who are required to wear exoskeletons may not be able to, while others may not wish to. How should workers’ privacy and preferences be weighted? Should employees be prohibited from jobs that use exoskeletons if the exoskeletons do not fit them or if they do not wish to disclose their body’s measurements? Should companies using exoskeletons be permitted to require workers to perform additional work with an exoskeleton? In this paper, we examine these and other ethical considerations related to mandatory exoskeleton use through the Six Pillars of Character framework of the Josephson Institute of Ethics (2002) and the Universal Moral Values for Corporate Codes of Ethics framework by Schwartz (2005). We discuss possible solutions consistent with ethical tenets, including running pilot studies before mandatory-use policies, offering several self-adjustable exoskeleton models, and allowing existing workers to transfer jobs if they are ill at ease with new exoskeleton policies. The best course of action may depend on specific individual circumstances.
Vol. 4, No. 4, pp. 302-313
Citations: 0
IEEE Transactions on Technology and Society Publication Information
IEEE Transactions on Technology and Society | Pub Date: 2023-03-29 | DOI: 10.1109/TTS.2023.3239124
Vol. 4, No. 1, pp. C2-C2
Citations: 0
ExTRUST: Reducing Exploit Stockpiles With a Privacy-Preserving Depletion System for Inter-State Relationships
IEEE Transactions on Technology and Society | Pub Date: 2023-03-29 | DOI: 10.1109/TTS.2023.3280356
Thomas Reinhold; Philipp Kuehn; Daniel Günther; Thomas Schneider; Christian Reuter
Abstract: Cyberspace is a fragile construct threatened by the malicious cyber operations of different actors, with vulnerabilities in IT hardware and software forming the basis for such activities and thus also posing a threat to global IT security. Advances in artificial intelligence accelerate this development, whether through AI-enabled cyber weapons, automated cyber defense measures, or AI-based threat and vulnerability detection. State actors in particular, with their long-term strategic security interests, often stockpile knowledge of vulnerabilities and exploits to enable cyberspace operations by their militaries or intelligence services. While treaties and regulations to limit these developments and to enhance global IT security by disclosing vulnerabilities are being discussed at the international level, these efforts are hindered by state concerns about disclosing unique knowledge and giving up tactical advantages. The result is a situation in which multiple states are likely to stockpile at least some identical exploits, yet technical measures that would enable a depletion process for these stockpiles while preserving state secrecy interests, and that take into account the special constraints of interacting states and the requirements of such environments, do not exist. This paper proposes such a privacy-preserving approach, which allows multiple state parties to privately compare their stocks of vulnerabilities and exploits and to check for items that occur in multiple stockpiles without revealing them, so that their disclosure can be considered. We call our system ExTRUST and show that it is scalable and can withstand several attack scenarios. Beyond the intergovernmental setting, ExTRUST can also be used for other zero-trust use cases, such as bug-bounty programs.
Vol. 4, No. 2, pp. 158-170
Citations: 0
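The core idea in the ExTRUST abstract, checking whether two stockpiles share exploits without revealing the stockpiles themselves, can be illustrated with a standard private set intersection construction based on commutative (Diffie-Hellman-style) blinding of hashed identifiers. The Python sketch below is only an illustration of that general technique under toy parameters, not the ExTRUST protocol; the class name, group modulus, and CVE identifiers are hypothetical placeholders.

import hashlib
import secrets

# Toy group parameter: 2**127 - 1 is prime but far too small for real security;
# a deployment would use a standardized large prime-order group.
P = (1 << 127) - 1

def hash_to_group(item: str) -> int:
    """Hash an exploit identifier (e.g., a CVE ID) into the multiplicative group mod P."""
    digest = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
    return digest % P or 1  # avoid the zero element

class Party:
    """One state actor holding a stockpile of exploit identifiers."""

    def __init__(self, stockpile):
        self.key = secrets.randbelow(P - 2) + 1  # private blinding exponent
        self.stockpile = list(stockpile)

    def blind_own(self):
        # First pass: blind own items so raw identifiers never leave this party.
        return [pow(hash_to_group(x), self.key, P) for x in self.stockpile]

    def blind_other(self, blinded_items):
        # Second pass: blind the other party's already-blinded items.
        return [pow(v, self.key, P) for v in blinded_items]

# Hypothetical stockpiles (toy data); only blinded values are ever exchanged.
state_a = Party({"CVE-2021-44228", "CVE-2017-0144", "CVE-2023-99999"})
state_b = Party({"CVE-2017-0144", "CVE-2021-44228", "CVE-2019-0708"})

a_once = state_a.blind_own()                # sent A -> B
b_once = state_b.blind_own()                # sent B -> A
a_twice = set(state_b.blind_other(a_once))  # H(x)^(ab) for A's items
b_twice = set(state_a.blind_other(b_once))  # H(y)^(ab) for B's items

# Items held by both parties produce identical doubly-blinded values,
# so the overlap can be detected without exposing the non-shared items.
print("Exploits present in both stockpiles:", len(a_twice & b_twice))

Because exponentiation with the two private keys commutes, an identifier that appears in both stockpiles ends up as the same doubly-blinded value on both sides, which is what makes the overlap detectable while the remaining items stay hidden.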
Designing AI Using a Human-Centered Approach: Explainability and Accuracy Toward Trustworthiness
IEEE Transactions on Technology and Society | Pub Date: 2023-03-29 | DOI: 10.1109/TTS.2023.3257627
Jordan Richard Schoenherr; Roba Abbas; Katina Michael; Pablo Rivas; Theresa Dirndorfer Anderson
Abstract: One of the major criticisms of artificial intelligence is its lack of explainability. Many critics claim that without knowing how an AI derives a result or comes to a given conclusion, it is impossible to trust its outcomes. This problem is especially concerning when AI-based systems and applications fail to perform their tasks successfully. In this Special Issue editorial, we focus on two main areas, explainable AI (XAI) and accuracy, and on how both dimensions are critical to building trustworthy systems. We review prominent XAI design themes, leading to a reframing of the design and development effort that highlights the significance of the human, thereby demonstrating the importance of human-centered AI (HCAI). The HCAI approach advocates for a range of deliberate design-related decisions, such as those pertaining to multi-stakeholder engagement and the dissolving of disciplinary boundaries. This enables the consideration and integration of deep interdisciplinary knowledge, as evidenced in our example of social cognitive approaches to AI design. The editorial then discusses ways forward, underscoring the value of a balanced approach to assessing the opportunities, risks, and responsibilities associated with AI design. We conclude by presenting the papers in the Special Issue and their contributions, pointing to future research endeavors.
Vol. 4, No. 1, pp. 9-23
Citations: 1
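The editorial's pairing of explainability with accuracy can be made concrete with a small, purely illustrative example. The sketch below (using scikit-learn; the dataset and model choices are assumptions, not drawn from the editorial) trains a black-box classifier, fits a shallow decision-tree surrogate to the black box's predictions, and reports both task accuracy and the surrogate's fidelity to the black box, which is one common way the XAI literature quantifies the trade-off between predictive performance and inspectable explanations.

# Post-hoc explainability via a surrogate model: an interpretable tree is
# trained to imitate a black-box classifier, and its fidelity (agreement with
# the black box) is reported alongside the black box's own accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box": accurate but hard to inspect directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Surrogate: trained to imitate the black box's predictions, not the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

accuracy = black_box.score(X_test, y_test)
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()

print(f"Black-box accuracy on held-out data: {accuracy:.3f}")
print(f"Surrogate fidelity to the black box: {fidelity:.3f}")
print(export_text(surrogate, feature_names=list(X.columns)))  # human-readable rules

A surrogate with high fidelity but low depth gives stakeholders a readable approximation of the black box's behavior; when fidelity drops, the explanation no longer reflects the system being trusted, which is exactly the tension the editorial highlights.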
Can God Be an AI With Robo-Priests?
IEEE Transactions on Technology and Society | Pub Date: 2023-03-29 | DOI: 10.1109/TTS.2023.3238960
M. G. Michael; Katina Michael; Terri Bookman
Abstract: “You may love your computer, but your computer [and AI] do not love you” (Kallistos Ware) [1, p. 26].
Vol. 4, No. 1, pp. 2-8
Citations: 0
AI Ethics Principles in Practice: Perspectives of Designers and Developers
IEEE Transactions on Technology and Society | Pub Date: 2023-03-15 | DOI: 10.1109/TTS.2023.3257303
Conrad Sanderson; David Douglas; Qinghua Lu; Emma Schleiger; Jon Whittle; Justine Lacey; Glenn Newnham; Stefan Hajkowicz; Cathy Robinson; David Hansen
Abstract: As consensus is approached across the various published AI ethics principles, a gap remains between high-level principles and practical techniques that can be readily adopted to design and develop responsible AI systems. We examine the practices and experiences of researchers and engineers from Australia’s national scientific research agency (CSIRO) who are involved in designing and developing AI systems for many application areas. Semi-structured interviews were used to examine how the participants’ practices relate to and align with a set of high-level AI ethics principles proposed by the Australian Government. The principles comprise: (1) privacy protection and security, (2) reliability and safety, (3) transparency and explainability, (4) fairness, (5) contestability, (6) accountability, (7) human-centred values, and (8) human, social and environmental well-being. Discussion of the insights gained from the interviews covers various tensions and trade-offs between the principles and provides suggestions for implementing each high-level principle. We also present suggestions aimed at enhancing the associated support mechanisms.
Vol. 4, No. 2, pp. 171-187
Citations: 16