{"title":"From Pixels to Principles: A Decade of Progress and Landscape in Trustworthy Computer Vision.","authors":"Kexin Huang, Yan Teng, Yang Chen, Yingchun Wang","doi":"10.1007/s11948-024-00480-6","DOIUrl":"10.1007/s11948-024-00480-6","url":null,"abstract":"<p><p>The rapid development of computer vision technologies and applications has brought forth a range of social and ethical challenges. Due to the unique characteristics of visual technology in terms of data modalities and application scenarios, computer vision poses specific ethical issues. However, the majority of existing literature either addresses artificial intelligence as a whole or pays particular attention to natural language processing, leaving a gap in specialized research on ethical issues and systematic solutions in the field of computer vision. This paper uses bibliometrics and text-mining techniques to quantitatively analyze papers from prominent academic conferences in computer vision over the past decade. It first reveals the development trends and the specific distribution of attention regarding trustworthy aspects in the computer vision field, as well as the inherent connections between ethical dimensions and different stages of visual model development. A life-cycle framework for trustworthy computer vision is then presented that interconnects the relevant trustworthiness issues, the operation pipeline of AI models, and viable technical solutions, providing researchers and policymakers with references and guidance for achieving trustworthy CV. Finally, it discusses particular motivations for conducting trustworthy practices and underscores the consistency and ambivalence among various trustworthy principles and technical attributes.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"26"},"PeriodicalIF":2.7,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11164730/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141297147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Defending and Defining Environmental Responsibilities for the Health Research Sector.","authors":"Bridget Pratt","doi":"10.1007/s11948-024-00487-z","DOIUrl":"10.1007/s11948-024-00487-z","url":null,"abstract":"<p><p>Six planetary boundaries have already been exceeded, including climate change, loss of biodiversity, chemical pollution, and land-system change. The health research sector contributes to the environmental crisis we are facing, though to a lesser extent than the healthcare or agriculture sectors. It could take steps to reduce its environmental impact but generally has not done so, even as the planetary emergency worsens. So far, the normative case for why the health research sector should rectify that failure has not been made. This paper argues that strong philosophical grounds, derived from theories of health and social justice, exist to support the claim that the sector has a duty to avoid or minimise causing or contributing to ecological harms that threaten human health or worsen health inequity. The paper next develops ideas about the duty's content, explaining why it should entail more than reducing carbon emissions, and considers what limits might be placed on the duty.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"25"},"PeriodicalIF":2.7,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11156718/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141263338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare.","authors":"Laura Arbelaez Ossa, Stephen R Milford, Michael Rost, Anja K Leist, David M Shaw, Bernice S Elger","doi":"10.1007/s11948-024-00486-0","DOIUrl":"10.1007/s11948-024-00486-0","url":null,"abstract":"<p><p>While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI's beneficial outputs and concerns about the challenges of human-computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between their textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles; and (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals into AI ethics, such as over-optimism and resulting hyper-criticism. This research provides insights into the underlying ideas present in AI guidelines and how guidelines influence the practice and alignment of AI with the ethical, legal, and societal values expected to shape AI in healthcare.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"24"},"PeriodicalIF":2.7,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11150179/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141238656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing First-Year Engineering Student Conceptions of Ethical Decision-Making to Performance on Standardized Assessments of Ethical Reasoning.","authors":"Richard T Cimino, Scott C Streiner, Daniel D Burkey, Michael F Young, Landon Bassett, Joshua B Reed","doi":"10.1007/s11948-024-00488-y","DOIUrl":"10.1007/s11948-024-00488-y","url":null,"abstract":"<p><p>The Defining Issues Test 2 (DIT-2) and Engineering Ethical Reasoning Instrument (EERI) are designed to measure the ethical reasoning of general (DIT-2) and engineering-student (EERI) populations. These tools-and the DIT-2 especially-have gained wide usage for assessing the ethical reasoning of undergraduate students. This paper reports on a research study in which the ethical reasoning of first-year undergraduate engineering students at multiple universities was assessed with both of these tools. In addition to these two instruments, students were also asked to create personal concept maps of the phrase \"ethical decision-making.\" It was hypothesized that students whose instrument scores reflected more postconventional levels of moral development and more sophisticated ethical reasoning skills would likewise have richer, more detailed concept maps of ethical decision-making, reflecting their deeper levels of understanding of this topic and the complex of related concepts. In fact, there was no significant correlation between the instrument scores and concept map scoring, suggesting that the way first-year students conceptualize ethical decision-making does not predict the way they behave when performing scenario-based ethical reasoning (which is perhaps more situated). This disparity indicates a need to more precisely quantify engineering ethical reasoning and decision-making, if we wish to inform assessment outcomes using the results of such quantitative analyses.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"23"},"PeriodicalIF":2.7,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11150177/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141238660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rethinking Health Recommender Systems for Active Aging: An Autonomy-Based Ethical Analysis.","authors":"Simona Tiribelli, Davide Calvaresi","doi":"10.1007/s11948-024-00479-z","DOIUrl":"10.1007/s11948-024-00479-z","url":null,"abstract":"<p><p>Health Recommender Systems (HRS) are promising Artificial-Intelligence-based tools supporting healthy lifestyles and therapy adherence in healthcare and medicine. Among the most supported areas, it is worth mentioning active aging (AA). However, current HRS supporting AA raise ethical challenges that still need to be properly formalized and explored. This study proposes to rethink HRS for AA through an autonomy-based ethical analysis. In particular, a brief overview of the HRS' technical aspects allows us to shed light on the ethical risks and challenges they might raise on individuals' well-being as they age. Moreover, the study proposes a categorization, understanding, and possible preventive/mitigation actions for the elicited risks and challenges through rethinking the AI ethics core principle of autonomy. Finally, elaborating on autonomy-related ethical theories, the paper proposes an autonomy-based ethical framework and how it can foster the development of autonomy-enabling HRS for AA.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"22"},"PeriodicalIF":2.7,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11129984/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141155519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Epistemic Trust in Scientific Experts: A Moral Dimension.","authors":"George Kwasi Barimah","doi":"10.1007/s11948-024-00489-x","DOIUrl":"10.1007/s11948-024-00489-x","url":null,"abstract":"<p><p>In this paper, I develop and defend a moralized conception of epistemic trust in science against a particular kind of non-moral account defended by John (2015, 2018). I suggest that non-epistemic value considerations, non-epistemic norms of communication and affective trust properly characterize the relationship of epistemic trust between scientific experts and non-experts. I argue that it is through a moralized account of epistemic trust in science that we can make sense of the deep-seated moral undertones that are often at play when non-experts (dis)trust science.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"21"},"PeriodicalIF":2.7,"publicationDate":"2024-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11126506/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141094574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Anticipatory Approach to Ethico-Legal Implications of Future Neurotechnology.","authors":"Stephen Rainey","doi":"10.1007/s11948-024-00482-4","DOIUrl":"10.1007/s11948-024-00482-4","url":null,"abstract":"<p><p>This paper provides a justificatory rationale for recommending the inclusion of imagined future use cases in neurotechnology development processes, specifically for legal and policy ends. Including detailed imaginative engagement with future applications of neurotechnology can serve to connect ethical, legal, and policy issues potentially arising from the translation of brain stimulation research to the public consumer domain. Futurist scholars have for some time recommended approaches that merge creative arts with scientific development in order to theorise possible futures toward which current trends in technology development might be steered. Taking a creative, imaginative approach like this in the neurotechnology context can help move development processes beyond considerations of device functioning, safety, and compliance with existing regulation, and into an active engagement with potential future dynamics brought about by the emergence of the neurotechnology itself. Imagined scenarios can engage with potential consumer uses of devices that might come to challenge legal or policy contexts. An anticipatory, creative approach can imagine what such uses might consist in, and what they might imply. Justifying this approach also prompts a co-responsibility perspective for policymaking in technology contexts. Overall, this furnishes a mode of neurotechnology's emergence that can avoid crises of confidence in terms of ethico-legal issues, and promote policy responses balanced between knowledge, values, protected innovation potential, and regulatory safeguards.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"18"},"PeriodicalIF":2.7,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11096192/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140923671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Between Technological Utopia and Dystopia: Online Expression of Compulsory Use of Surveillance Technology.","authors":"Yu-Leung Ng, Zhihuai Lin","doi":"10.1007/s11948-024-00483-3","DOIUrl":"10.1007/s11948-024-00483-3","url":null,"abstract":"<p><p>This study investigated people's ethical concerns of surveillance technology. By adopting the spectrum of technological utopian and dystopian narratives, how people perceive a society constructed through the compulsory use of surveillance technology was explored. This study empirically examined the anonymous online expression of attitudes toward the society-wide, compulsory adoption of a contact tracing app that affected almost every aspect of all people's everyday lives at a societal level. By applying the structural topic modeling approach to analyze comments on four Hong Kong anonymous discussion forums, topics concerning the technological utopian, dystopian, and pragmatic views on the surveillance app were discovered. The findings showed that people with a technological utopian view on this app believed that the implementation of compulsory app use can facilitate social good and maintain social order. In contrast, individuals who had a technological dystopian view expressed privacy concerns and distrust of this surveillance technology. Techno-pragmatists took a balanced approach and evaluated its implementation practically.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"19"},"PeriodicalIF":2.7,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11096232/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140923672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Australia II: A Case Study in Engineering Ethics.","authors":"Peter van Oossanen, Martin Peterson","doi":"10.1007/s11948-024-00477-1","DOIUrl":"10.1007/s11948-024-00477-1","url":null,"abstract":"<p><p>Australia II became the first foreign yacht to win the America's Cup in 1983. The boat had a revolutionary wing keel and a better underwater hull form. In official documents, Ben Lexcen is credited with the design. He is also listed as the sole inventor of the wing keel in a patent application submitted on February 5, 1982. However, as reported in the New York Times, the Sydney Morning Herald, and Professional Boatbuilder, the wing keel was in fact designed by engineer Peter van Oossanen at the Netherlands Ship Model Basin in Wageningen, assisted by Dr. Joop Slooff at the National Aerospace Laboratory in Amsterdam. Based on telexes, letters, drawings, and other documents preserved in his personal archive, this paper presents van Oossanen's account of how the revolutionary wing keel was designed. This is followed by an ethical analysis by Martin Peterson, in which he applies the American NSPE and Dutch KIVI codes of ethics to the information provided by van Oossanen. The NSPE and KIVI codes give conflicting advice about the case, and it is not obvious which document is most relevant. This impasse is resolved by applying a method of applied ethics in which similarity-based reasoning is extended to cases that are not fully similar. The key idea, presented in Peterson's book The Ethics of Technology (Peterson, The ethics of technology: A geometric analysis of five moral principles, Oxford University Press, 2017), is to use moral paradigm cases as reference points for constructing a \"moral map\".</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 3","pages":"16"},"PeriodicalIF":2.7,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11078783/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140877814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}