2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS): Latest Publications

Development of Social Impact Considerations during Engineering Internships
2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS) Pub Date : 2023-05-18 DOI: 10.1109/ETHICS57328.2023.10154985
Malini Josiam, Sophia Vicente, Taylor T. Johnson
{"title":"Development of Social Impact Considerations during Engineering Internships","authors":"Malini Josiam, Sophia Vicente, Taylor T. Johnson","doi":"10.1109/ETHICS57328.2023.10154985","DOIUrl":"https://doi.org/10.1109/ETHICS57328.2023.10154985","url":null,"abstract":"Internships are known to be valuable experiences for engineering students, as they provide students with hands-on engineering experience and development of professional skills. However, less is known about internships in terms of how they develop engineering students' skills related to social impact considerations. In this work in progress paper, we conducted semi structured interviews with 10 engineering students who participated in engineering internships during the previous summer. Our preliminary results indicate that while students believe that engineers should consider the social impact of their work, those same engineering students are not always equipped with the tools to discuss the social impact of their internship projects. Thus, we demonstrate a need for more intentional development of connections between engineering work and social impact during internships and in engineering curriculum.","PeriodicalId":203527,"journal":{"name":"2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117130443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
ETHICS-2023 Session F4 - Workshop: ‘I can't teach ethics, I'm not an ethicist’: Transforming STEM ethics education begins with engaging faculty as ethical subjects
2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS) Pub Date : 2023-05-18 DOI: 10.1109/ethics57328.2023.10155087
{"title":"ETHICS-2023 Session F4 - Workshop: ‘I can't teach ethics, I'm not an ethicist’: Transforming STEM ethics education begins with engaging faculty as ethical subjects","authors":"","doi":"10.1109/ethics57328.2023.10155087","DOIUrl":"https://doi.org/10.1109/ethics57328.2023.10155087","url":null,"abstract":"","PeriodicalId":203527,"journal":{"name":"2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127434803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Artificial Moral Advisors: enhancing human ethical decision-making
2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS) Pub Date : 2023-05-18 DOI: 10.1109/ETHICS57328.2023.10155026
Marco Tassella, R. Chaput, Mathieu Guillermin
{"title":"Artificial Moral Advisors: enhancing human ethical decision-making","authors":"Marco Tassella, R. Chaput, Mathieu Guillermin","doi":"10.1109/ETHICS57328.2023.10155026","DOIUrl":"https://doi.org/10.1109/ETHICS57328.2023.10155026","url":null,"abstract":"This short paper focuses on understanding moral dilemmas, Artificial Moral Advisors, and their possible roles in ethical decision-making. After a brief analysis of the philosophical debate around dilemmas, we propose three different classes of dilemmas. We then discuss how AI-based advisors could be used to enhance human ethical decision-making, with a particular focus on three possible AI skills (identifying, presenting and settling dilemmas), as well as on their role as ethical experts. The resulting proposal opens up to new possible uses of AI moral advisors, and to the help they might offer in difficult decisions.","PeriodicalId":203527,"journal":{"name":"2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126740282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
ETHICS-2023: Conference and Technical Program Committee
2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS) Pub Date : 2023-05-18 DOI: 10.1109/ethics57328.2023.10155058
{"title":"ETHICS-2023: Conference and Technical Program Committee","authors":"","doi":"10.1109/ethics57328.2023.10155058","DOIUrl":"https://doi.org/10.1109/ethics57328.2023.10155058","url":null,"abstract":"","PeriodicalId":203527,"journal":{"name":"2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)","volume":"175 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126940425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ETHICS-2023 Session E2 - Panel: 4+1: The impacts of academia, industry, government and civil society on sustainable development (Sponsored by IEEE TechEthics)
2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS) Pub Date : 2023-05-18 DOI: 10.1109/ethics57328.2023.10154964
Kelly E. Bohrer, Sara Belligoni, B. Redd, Carson J. Reeling, M. A. Vasquez
{"title":"ETHICS-2023 Session E2 - Panel: 4+1: The impacts of academia, industry, government and civil society on sustainable development (Sponsored by IEEE TechEthics)","authors":"Kelly E. Bohrer, Sara Belligoni, B. Redd, Carson J. Reeling, M. A. Vasquez","doi":"10.1109/ethics57328.2023.10154964","DOIUrl":"https://doi.org/10.1109/ethics57328.2023.10154964","url":null,"abstract":"","PeriodicalId":203527,"journal":{"name":"2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126887628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Tempering Transparency in Human-Robot Interaction
2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS) Pub Date : 2023-05-18 DOI: 10.1109/ETHICS57328.2023.10154942
Kantwon Rogers, A. Howard
{"title":"Tempering Transparency in Human-Robot Interaction","authors":"Kantwon Rogers, A. Howard","doi":"10.1109/ETHICS57328.2023.10154942","DOIUrl":"https://doi.org/10.1109/ETHICS57328.2023.10154942","url":null,"abstract":"In recent years, particular interest has been taken by researchers and governments in examining and regulating aspects of transparency and explainability within artificially intelligent (AI) system. An AI system is “transparent” if humans can understand the mechanisms behind its behavior and use this understanding to make predictions about future behavior while the goal of explainable AI is to clarify an AI system's actions in a way that humans can understand. With this increased interest, research has presented conflicting views on the benefits of algorithmic transparency and explanations [1]. Moreover, research has also highlighted flaws within policy implementations of algorithmic transparency which generally remain too vague and often results in deficient adoption [2]. Even with these pitfalls of transparency, it seems as if the default view of many societies is that AI systems should be made more transparent and explainable; however, we argue that there needs to exist added skepticism of this position. In particular, we believe it is a useful exercise to consider exploring, as a counternarrative, an emerging area within computing that necessitates a lack of transparency-deceptive AI. The newly evolving area of research pertains to the creation (intentionally or not) of AI agents that learn to deceive humans and other AI agents. Here we define deception as “the process by which actions are chosen to manipulate beliefs so as to take advantage of the erroneous inferences” [3] and we use this interchangeably with “lying”. While there may be physically designed aspects of deception in embodied agents, such as the anthropomorphism and zoomorphism of robots [4], [5], here we wish to focus on deception related to utterances and actions of AI agents. On its surface, the idea of deceptive AI agents may not readily seem beneficial; however, there exists added effort to create AI agents that are able to be integrated socially within our societies. Seeing as deception is a foundational part of many human and animal groups, some argue that giving AI agents the ability to learn to deceive is necessary and inevitable for them to truly interact effectively [6], [7]. In fact, it has been found that deception can be an emergent behavior when training systems on human data [8]-thus strengthening the notion that behaving deceptively is a part of what it means to interact with humans. Moreover, prior research has shown that AI deception, rather than transparent truthfulness, can lead to better outcomes in human-robot interactions [9]–[11]. However, deception does of course have obvious drawbacks including an erosion of trust [12]–[15] and decreasing desired reutilization [12], [15]. Because of these negative aspects, and the clear possibly of malicious usage, some suggest the need for entirely truthful agents [16]. 
However, due to the infancy and lack of knowledge of the effects (short and long term) of deception within human-AI agent interaction, it is currently","PeriodicalId":203527,"journal":{"name":"2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123819285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Auditing Practitioner Judgment for Algorithmic Fairness Implications
2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS) Pub Date : 2023-05-18 DOI: 10.1109/ETHICS57328.2023.10154992
Ike Obi, Colin M. Gray
{"title":"Auditing Practitioner Judgment for Algorithmic Fairness Implications","authors":"Ike Obi, Colin M. Gray","doi":"10.1109/ETHICS57328.2023.10154992","DOIUrl":"https://doi.org/10.1109/ETHICS57328.2023.10154992","url":null,"abstract":"The development of Artificial Intelligence (AI) systems involves a significant level of judgment and decision making on the part of engineers and designers to ensure the safety, robustness, and ethical design of such systems. However, the kinds of judgments that practitioners employ while developing AI platforms are rarely foregrounded or examined to explore areas practitioners might need ethical support. In this short paper, we employ the concept of design judgment to foreground and examine the kinds of sensemaking software engineers use to inform their decisionmaking while developing AI systems. Relying on data generated from two exploratory observation studies of student software engineers, we connect the concept of fairness to the foregrounded judgments to implicate their potential algorithmic fairness impacts. Our findings surface some ways in which the design judgment of software engineers could adversely impact the downstream goal of ensuring fairness in AI systems. We discuss the implications of these findings in fostering positive innovation and enhancing fairness in AI systems, drawing attention to the need to provide ethical guidance, support, or intervention to practitioners as they engage in situated and contextual judgments while developing AI systems.","PeriodicalId":203527,"journal":{"name":"2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127710561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
‘Emerging proxies’ in information-rich machine learning: a threat to fairness?
2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS) Pub Date : 2023-05-18 DOI: 10.1109/ETHICS57328.2023.10155045
A. McLoughney, J. Paterson, M. Cheong, Anthony Wirth
{"title":"‘Emerging proxies’ in information-rich machine learning: a threat to fairness?","authors":"A. McLoughney, J. Paterson, M. Cheong, Anthony Wirth","doi":"10.1109/ETHICS57328.2023.10155045","DOIUrl":"https://doi.org/10.1109/ETHICS57328.2023.10155045","url":null,"abstract":"Anti-discrimination law in many jurisdictions effectively bans the use of race and gender in automated decision-making. For example, this law means that insurance companies should not explicitly ask about legally protected attributes, e.g., race, in order to tailor their premiums to particular customers. In legal terms, indirect discrimination occurs when a generally neutral rule or variable is used, but significantly negatively affects one demographic group. An emerging example of this concern is inclusion of proxy variables in Machine Learning (ML) models, where neutral variables are predictive of protected attributes. For example, postcodes or zip codes are representative of communities, and therefore racial demographics and social-economic class; i.e., a traditional example of ‘redlining’ pre-dating modern automated techniques [1]. The law struggles with proxy variables in machine learning: indirect discrimination cases are difficult to bring to court, particularly because finding substantial evidence that shows the indirect discrimination to be unlawful is difficult [2]. With more complex machine-learning models being developed for automated decision making, e.g., random forests or state-of-the-art deep neural networks, more data points on customers are accumulated [1], from a wide variety of sources. With such rich data, ML models can produce multiple interconnected correlations - such as that found in single neurons in a neural network, or single decision trees in a random forest - which are predictive of protected attributes, akin to traditional uses of discrete proxy variables. In this poster, we introduce the concept of \"emerging proxies\", that are a combination of several variables, from which the ML model could infer the protected attribute(s) of the individuals in the dataset. This concept differs from the traditional concept of proxies because rather than addressing a single proxy variable, a distribution of interconnected proxies would have to be addressed. Our contribution is to provide evidence for the capacity of complex ML models to identify protected attributes through the correlation of other variables. This correlation is not made explicitly through a discrete one to one relationship between variables, but through a many-to-one relationship. This contribution complements concerns raised in legal analyses of automated decision-making about proxies in ML models leading to indirect discrimination [3]. 
Our contribution shows that if an ML model contains “emerging proxies” for a protected attribute, the distribution of proxies will be a roadblock when attempting to de-bias the model, limiting the pathways available for addressing potential discrimination caused by the ML model.","PeriodicalId":203527,"journal":{"name":"2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)","volume":"170 1-2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123502756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
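The ‘emerging proxies’ idea in the abstract above can be illustrated with a minimal sketch on entirely hypothetical synthetic data (the feature names and numbers below are assumptions for illustration, not taken from the poster): several ostensibly neutral features, each only weakly correlated with a protected attribute, can jointly let a model recover that attribute much more reliably than any single feature.

```python
# Hypothetical illustration of an "emerging proxy" (synthetic data; not from the paper):
# each neutral feature carries only a weak signal about a protected attribute, but a
# model trained on the features together predicts that attribute noticeably better.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, n)  # hypothetical protected attribute (0/1)

# Three "neutral" features, each weakly correlated with the protected attribute.
x1 = 0.3 * protected + rng.normal(0, 1.0, n)  # e.g. a location-derived score
x2 = 0.3 * protected + rng.normal(0, 1.0, n)  # e.g. a spending-pattern statistic
x3 = 0.3 * protected + rng.normal(0, 1.0, n)  # e.g. a browsing-time statistic
X = np.column_stack([x1, x2, x3])

X_train, X_test, y_train, y_test = train_test_split(X, protected, random_state=0)

# Compare how well single features vs. the combined feature set recover the attribute.
for label, cols in [("x1 alone", [0]), ("x2 alone", [1]), ("x1+x2+x3 combined", [0, 1, 2])]:
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train[:, cols], y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test[:, cols])[:, 1])
    print(f"{label}: AUC for recovering the protected attribute = {auc:.2f}")
```

Under these assumed settings, the combined model should score a noticeably higher AUC than any single feature, which is the many-to-one correlation the poster describes; addressing it would mean handling the feature combination rather than removing any one proxy variable.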
Artificial Intelligence & Smart City Ethics: A Systematic Review
2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS) Pub Date : 2023-05-18 DOI: 10.1109/ETHICS57328.2023.10154961
Connor Phillips, J. Jiao
{"title":"Artificial Intelligence & Smart City Ethics: A Systematic Review","authors":"Connor Phillips, J. Jiao","doi":"10.1109/ETHICS57328.2023.10154961","DOIUrl":"https://doi.org/10.1109/ETHICS57328.2023.10154961","url":null,"abstract":"Smart city technologies have enabled the tracking of urban residents to a more granular degree than previously was possible. The increase in data collection and analysis, enabled by artificial intelligence, presents privacy, safety, and other ethical concerns. This systematic review collects and organizes the body of knowledge surrounding ethics of smart cities. Authors used a keyword search in 5 databases to highlight 34 academic publications dated between 2014 and 2022. The work demonstrates that articles are generally focused on ethical concerns of privacy, safety, and fairness, specific technology-based reviews, or frameworks and lenses to guide conversation. This paper helps to organize a cross-disciplinary topic and collects the body of knowledge around smart city ethics into a singular, comprehensive source for practitioners, researchers, and stakeholders.","PeriodicalId":203527,"journal":{"name":"2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126183955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Introduction to NSF's Ethical and Responsible Research (ER2) Program
2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS) Pub Date : 2023-05-18 DOI: 10.1109/ethics57328.2023.10155101
W. Bauchspies, Alex Romero, J. Borenstein, Michael Steele
{"title":"Introduction to NSF's Ethical and Responsible Research (ER2) Program","authors":"W. Bauchspies, Alex Romero, J. Borenstein, Michael Steele","doi":"10.1109/ethics57328.2023.10155101","DOIUrl":"https://doi.org/10.1109/ethics57328.2023.10155101","url":null,"abstract":"This poster describes key elements of the National Science Foundation's Ethical and Responsible Research (ER2) Program.","PeriodicalId":203527,"journal":{"name":"2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115820492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0