{"title":"The State of Ethical AI in Practice: A Multiple Case Study of Estonian Public Service Organizations","authors":"Charlene Hinton","doi":"10.4018/ijt.322017","DOIUrl":"https://doi.org/10.4018/ijt.322017","url":null,"abstract":"Despite the prolific introduction of ethical frameworks, empirical research on AI ethics in the public sector is limited. This empirical research investigates how the ethics of AI is translated into practice and the challenges of its implementation by public service organizations. Using the Value Sensitive Design as a framework of inquiry, semi-structured interviews are conducted with eight public service organizations across the Estonian government that have piloted or developed an AI solution for delivering a public service. Results show that the practical application of AI ethical principles is indirectly considered and demonstrated in different ways in the design and development of the AI. However, translation of these principles varies according to the maturity of the AI and the public servant's level of awareness, knowledge, and competences in AI. Data-related challenges persist as public service organizations work on fine-tuning their AI applications.","PeriodicalId":287069,"journal":{"name":"Int. J. Technoethics","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133256994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Legitimacy of Artificial Intelligence in Judicial Decision Making: Chinese Experience","authors":"Zichun Xu","doi":"10.4018/ijt.311032","DOIUrl":"https://doi.org/10.4018/ijt.311032","url":null,"abstract":"Since the birth of artificial intelligence, the discussion of the legitimacy of its application to judicial scenarios has never stopped. The domestic academic circles question the legality of artificial intelligence decision-making mainly embodies four aspects: the judge's subjectivity crisis, the power legitimacy crisis, the imputation difficulty crisis, and the damage to the justice realization crisis. Therefore, it is urgent to clarify the legal logic of artificial intelligence in judicial decision-making and clarify its decision-making limits. Therefore, this paper aims to prove the legality of intelligent judicial operation simultaneously from the four-dimensional perspectives of artificial intelligence's intervention in judicial decision-making, such as the judge's subjectivity, the legitimacy of power, the attribution of fault, and the realization of justice, with a view to the subject, power, responsibility, justice, four aspects of the governance of China's intelligent judiciary to make recommendations.","PeriodicalId":287069,"journal":{"name":"Int. J. Technoethics","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126255017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of Production Line Project Based on Value Sensitive Design","authors":"L. Kong, Jihua Li","doi":"10.4018/ijt.291550","DOIUrl":"https://doi.org/10.4018/ijt.291550","url":null,"abstract":"Value sensitive design is a new method to embed moral value into the design process and possesses broad research prospects. However, there is a gap between the industrial application and the practical application of VSD since its practical application focuses on human-computer interaction and medical ethics. In this paper, the conceptual, empirical, and technical investigation of VSD are analyzed, and the feasibility of VSD for production line design is demonstrated. It was applied to the production line design process in Shenyang, Liaoning Province, China. Then, specific design issues such as environmental sustainability and safety are solved by analyzing the value demands of stakeholders and balancing the value tension. Thus, the human value of the production line becomes more sensitive, and the value conflict between natural and technical artifact is alleviated. In this process, we reflect on the design problems to be solved and obtain valuable opinions, enabling VSD to better adapt to the industrial production line design.","PeriodicalId":287069,"journal":{"name":"Int. J. Technoethics","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123325453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Operationalizing the Ethics of Connected and Automated Vehicles: An Engineering Perspective","authors":"Fabio Fossa, S. Arrigoni, G. Caruso, Hafeez Husain Cholakkal, Pragyan Dahal, Matteo Matteucci, F. Cheli","doi":"10.4018/ijt.291553","DOIUrl":"https://doi.org/10.4018/ijt.291553","url":null,"abstract":"In response to the many social impacts of automated mobility, in September 2020 the European Commission published Ethics of Connected and Automated Vehicles, a report in which recommendations on road safety, privacy, fairness, explainability, and responsibility are drawn from a set of eight overarching principles. This paper presents the results of an interdisciplinary research where philosophers and engineers joined efforts to operationalize the guidelines advanced in the report. To this aim, we endorse a function-based working approach to support the implementation of values and recommendations into the design of automated vehicle technologies. Based on this, we develop methodological tools to tackle issues related to personal autonomy, explainability, and privacy as domains that most urgently require fine-grained guidance due to the associated ethical risks. Even though each tool still requires further inquiry, we believe that our work might already prove the productivity of the function-based approach and foster its adoption in the CAV scientific community.","PeriodicalId":287069,"journal":{"name":"Int. J. Technoethics","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121898993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Epistemic Democracy and Technopolitics: Four Models of Deliberation","authors":"Pierpaolo Marrone","doi":"10.4018/ijt.291551","DOIUrl":"https://doi.org/10.4018/ijt.291551","url":null,"abstract":"In this article I examine the structure of four deliberative models: epistemic democracy, epistocracy, dystopic algocracy, and utopian algocracy. Epistocracy and algocracy (which in its two versions is an extremization of epistocracy) represent a challenge to the alleged epistemic superiority of democracy: epistocracy for its emphasis on the role of experts; algocracy for its emphasis on technique as a cognitively and ethically superior tool. In the concluding remarks I will advance the thesis that these challenges can only be answered by emphasizing the value of citizens’ political participation, which can also represent both an increase in their cognitive abilities and a value for public ethics.","PeriodicalId":287069,"journal":{"name":"Int. J. Technoethics","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129448590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Fairness Impact Assessment: Conceptualizing Problems of Fairness in Technological Design","authors":"C. Shelley","doi":"10.4018/ijt.291554","DOIUrl":"https://doi.org/10.4018/ijt.291554","url":null,"abstract":"As modern life becomes ever more mediated by technology, technology assessment becomes ever more important. Tools that help to anticipate and evaluate social impacts of technological designs are crucial to understanding this relationship. This paper presents an assessment tool called the Fairness Impact Assessment (FIA). For present purposes, fairness refers to conflicts of interest between social groups that result from the configuration of technological designs. In these situations, designs operate in a way such that advantages they provide to one social group impose disadvantages on another. The FIA helps to make clear the nature of these conflicts and possibilities for their resolution. As a broad, qualitative framework, the FIA can be applied more generally than specifically quantitative frameworks currently being explored in the field of machine learning. Though not a formula for solving difficult social issues, the FIA provides a systematic means for the investigation of fairness problems in technology design that are otherwise not always well understood or addressed.","PeriodicalId":287069,"journal":{"name":"Int. J. Technoethics","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115553024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bridging the Theory-Practice Gap: Design-Experts on Capability Sensitive Design","authors":"Naomi Jacobs, W. Ijsselsteijn","doi":"10.4018/IJT.2021070101","DOIUrl":"https://doi.org/10.4018/IJT.2021070101","url":null,"abstract":"Many of the choices that designers and engineers make during a design process impact not only the functionality, usability, or aesthetics of a technology, but also impact the values that might be supported or undermined via the technology design. Designers can actively design for values, and this awareness has led to the development of various ‘ethics by design' approaches. One such approach is capability sensitive design (CSD). Thus far, CSD is only developed from a theoretical-ethical point of view. This article aims to bridge the theory-practice gap by entering into dialogue with various design-experts on ethics by design in general and CSD in particular. An empirical study, consisting of thematic interviews with nine design-experts, was conducted in order to explore design-experts' experiences with designing for values, what they regard as the strengths and weaknesses of CSD, and if CSD could be of practical use to their design (research) practice.","PeriodicalId":287069,"journal":{"name":"Int. J. Technoethics","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115903847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Theodicy for Artificial Universes: Moral Considerations on Simulation Hypotheses","authors":"S. Gualeni","doi":"10.4018/ijt.2021010102","DOIUrl":"https://doi.org/10.4018/ijt.2021010102","url":null,"abstract":"“Simulation hypotheses” are imaginative scenarios that are typically employed in philosophy to speculate on how likely it is that one is currently living within a simulated universe as well as on the possibility for ever discerning whether one does in fact inhabit one. These philosophical questions in particular overshadowed other aspects and potential uses of simulation hypotheses, some of which are foregrounded in this article. More specifically, “A Theodicy for Artificial Universes” focuses on the moral implications of simulation hypotheses with the objective of speculatively answering questions concerning computer simulations such as: If one is indeed living in a computer simulation, what might be its purpose? What aspirations and values could be inferentially attributed to its alleged creators? And would living in a simulated universe affect the value and meaning one attributes to the existence?","PeriodicalId":287069,"journal":{"name":"Int. J. Technoethics","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123948477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development and Psychometric Analysis of Cyber Ethics Instrument (CEI)","authors":"Winfred Yaokumah","doi":"10.4018/ijt.2021010104","DOIUrl":"https://doi.org/10.4018/ijt.2021010104","url":null,"abstract":"This study developed and validated the psychometric properties of a new instrument, cyber ethics instrument (CEI), for assessing cyber ethics. Items related to cyber ethics were generated from a review of both scholarly and practitioner literature for the development of the instrument. The instrument was administered to university students. A sample of 503 responses was used for exploratory factor analysis (EFA) to extract the factor structure. The results of EFA suggested a six-factor structure (cyber privacy, computer ethics, academic integrity, intellectual property, netiquette, cyber safety), explaining 67.7% of the total variance. The results of confirmatory factor analysis (CFA) showed acceptable model fit indices. Therefore, the results established the viability of CEI for measuring cyber ethics. The instrument is essential for advancing the field of cyber ethics research as it will serve as a tool educators and researchers can use to measure the current stage of cyber ethics. The results obtained from using CEI can help identify and recommend cyber ethics interventions.","PeriodicalId":287069,"journal":{"name":"Int. J. Technoethics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123362732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Confounding Complexity of Machine Action: A Hobbesian Account of Machine Responsibility","authors":"H. Sætra","doi":"10.4018/ijt.20210101.oa1","DOIUrl":"https://doi.org/10.4018/ijt.20210101.oa1","url":null,"abstract":"In this article, the core concepts in Thomas Hobbes's framework of representation and responsibility are applied to the question of machine responsibility and the responsibility gap and the retribution gap. The method is philosophical analysis and involves the application of theories from political theory to the ethics of technology. A veil of complexity creates the illusion that machine actions belong to a mysterious and unpredictable domain, and some argue that this unpredictability absolves designers of responsibility. Such a move would create a moral hazard related to both (a) strategically increasing unpredictability and (b) taking more risk if responsible humans do not have to bear the costs of the risks they create. Hobbes's theory allows for the clear and arguably fair attribution of action while allowing for necessary development and innovation. Innovation will be allowed as long as it is compatible with social order and provided the beneficial effects outweigh concerns about increased risk. Questions of responsibility are here considered to be political questions.","PeriodicalId":287069,"journal":{"name":"Int. J. Technoethics","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125106713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}