{"title":"Driving into the Loop: Mapping Automation Bias and Liability Issues for Advanced Driver Assistance Systems","authors":"Katie Szilagyi, Jason Millar, AJung Moon, Shalaleh Rismani","doi":"10.1007/s44206-023-00066-y","DOIUrl":"https://doi.org/10.1007/s44206-023-00066-y","url":null,"abstract":"Advanced driver assistance systems (ADAS) are transforming the modern driving experience. Today’s vehicles seem better equipped than ever to augment safety by automating routine driving activities. The assumption appears straightforward: automation will necessarily improve road safety because automation replaces the human driver, thereby reducing human driving errors. But is this truly a straightforward assumption? In our contention, this assumption has potentially dangerous limits. This paper explores how well-understood and well-researched psychological and cognitive phenomena pertaining to human interaction with automation should not be properly labelled as misuse. Framing the problem through an automation bias lens, we argue that such so-called instances of misuse can instead be seen as predictable by-products of specific engineering design choices. We engage empirical data to problematize the assumption that automating driving functions directly leads to increased safety. Our conclusion calls for more transparent testing and safety data on the part of manufacturers, for updated notions of misuse in legal contexts, and for updated driver training regimes.","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135254976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are Concerns Related to Artificial Intelligence Development and Use Really Necessary: A Philosophical Discussion","authors":"Levent Uzun","doi":"10.1007/s44206-023-00070-2","DOIUrl":"https://doi.org/10.1007/s44206-023-00070-2","url":null,"abstract":"This article explores the philosophical considerations, concerns, and recommendations surrounding the development and use of artificial intelligence and large language models like ChatGPT. It addresses the concerns raised by educators and academics regarding academic integrity and the potential negative effects of LLMs. The article discusses the challenges posed by LLMs, such as plagiarism, and the opportunities they present, such as assisting students in the writing process and improving the quality of their work. It examines different philosophical approaches, including utilitarianism, deontological ethics, and virtue ethics, and their implications for the development and use of AI. The article also delves into key concerns related to privacy, bias, discrimination, and the impact on employment. It provides suggestions for a responsible and ethical approach, including prioritizing ethics and transparency in AI development, establishing clear regulations, and fostering responsible use by users. The importance of ongoing philosophical reflection, ethical considerations, and collaboration among stakeholders is emphasized. The article concludes by highlighting the need for future research to address these concerns and ensure that AI is developed and used in a manner consistent with ethical principles, values, and societal well-being.","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136278627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Commercial mHealth Apps and the Providers’ Responsibility for Hope","authors":"Leon Rossmaier, Yashar Saghai, Philip Brey","doi":"10.1007/s44206-023-00071-1","DOIUrl":"https://doi.org/10.1007/s44206-023-00071-1","url":null,"abstract":"Abstract In this paper, we ask whether the providers of commercial mHealth apps for self-tracking create inflated or false hopes for vulnerable user groups and whether they should be held responsible for this. This question is relevant because hopes created by the providers determine the modalities of the apps’ use. Due to the created hopes, users who may be vulnerable to certain design features of the app can experience bad outcomes in various dimensions of their well-being. This adds to structural injustices sustaining or exacerbating the vulnerable position of such user groups. We define structural injustices as systemic disadvantages for certain social groups that may be sustained or exacerbated by unfair power relations. Inflated hopes can also exclude digitally disadvantaged users. Thus, the hopes created by the providers of commercial mHealth apps for self-tracking press the question of whether the deployment and use of mHealth apps meet the requirements for qualifying as a just public health endeavor.","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135579132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a Citizen- and Citizenry-Centric Digitalization of the Urban Environment: Urban Digital Twinning as Commoning","authors":"Stefano Calzati, Bastiaan van Loenen","doi":"10.1007/s44206-023-00064-0","DOIUrl":"https://doi.org/10.1007/s44206-023-00064-0","url":null,"abstract":"Abstract In this paper, we make a case for (1) a sociotechnical understanding and (2) a commoning approach to the governance of digital twin technologies applied to the urban environment. The European Union has reinstated many times over the willingness to pursue a citizen-centric approach to digital transformation. However, recent studies show the limits of a human right-based only approach in that this overlooks consequences of data-driven technologies at societal level. The need to synthesize an individual-based and collective-based approach within an ecosystemic vision is key, especially when it comes to cities, which are complex systems affected by problems whose solutions require forms of self-organization. Tackling the limitations of current tech-centered and practice-first city digital twin (CDT) projects in Europe, in this article, we conceptualize the idea of urban digital twinning (UDT) as a process that is contextual, iterative, and participatory. Unpacking the normative understanding of data-as-resource, we claim that a commoning approach to data allows enacting a fair ecosystemic vision of the digitalization of the urban environment which is ultimately both citizen- and citizenry-centric.","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135011518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Study of AI Artists for Changing the Movie Industries","authors":"Araya Sookhom, Piyachat Klinthai, Pimpakarn A-masiri, Chutisant Kerdvibulvech","doi":"10.1007/s44206-023-00065-z","DOIUrl":"https://doi.org/10.1007/s44206-023-00065-z","url":null,"abstract":"Due to the rise of artificial intelligence (AI) in the arts, this paper aims to explore the use of AI for reducing film production costs through the creation of realistic images. Additionally, we investigate whether AI can recreate the same character at the same age. Without needing to replace the original actor, qualitative data collection tools were employed to study three distinct population groups within the film industry: film industry professionals, moviegoers, and technologists. Our research reveals that AI, or AI artists in film production, still face limitations in significantly reducing production costs. Furthermore, it is crucial to engage a text expert in the image production process for films who possesses a comprehensive understanding of film principles in order to achieve images that align with the project’s requirements. Moreover, the introduction of the AI artist technique allows for the recreation of a character at the same age portrayed by the same actor, even if that actor may have passed away. Consequently, obtaining consent from the relatives of the actor or actress becomes a necessary step. Furthermore, the aspect of audience acceptance does not hold significant interest, as it demands a greater level of realism in both the image and the actors, surpassing what AI can provide. Therefore, this paper underscores the increasing influence of AI in the arts, particularly within film production, and examines its potential to reduce costs and recreate characters.","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134912235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Proposal for a Definition of General Purpose Artificial Intelligence Systems","authors":"Carlos I. Gutierrez, Anthony Aguirre, Risto Uuk, Claire C. Boine, Matija Franklin","doi":"10.1007/s44206-023-00068-w","DOIUrl":"https://doi.org/10.1007/s44206-023-00068-w","url":null,"abstract":"Abstract The European Union (EU) is in the middle of comprehensively regulating artificial intelligence (AI) through an effort known as the AI Act. Within the vast spectrum of issues under the Act’s aegis, the treatment of technologies classified as general purpose AI systems (GPAIS) merits special consideration. Particularly, existing proposals to define GPAIS do not provide sufficient guidance to distinguish these systems from those designed to perform specific tasks, denominated as fixed-purpose. Thus, our working paper has three objectives: first, to highlight the variance and ambiguity in the interpretation of GPAIS in the literature; second, to examine the dimensions of the generality of purpose available to define GPAIS; lastly, to propose a functional definition of the term that facilitates its governance within the EU. Our intention with this piece is to offer policymakers an alternative perspective on GPAIS that improves the hard and soft law efforts to mitigate these systems’ risks and protect the well-being and future of constituencies in the EU and globally.","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135825574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lessons Learned from Assessing Trustworthy AI in Practice","authors":"Dennis Vetter, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Georgios Kararigas, P. Kringen, V. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Magnus Westerlund, Renee Wurth, R. Zicari","doi":"10.1007/s44206-023-00063-1","DOIUrl":"https://doi.org/10.1007/s44206-023-00063-1","url":null,"abstract":"","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"163 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80273006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Debiasing Strategies for Conversational AI: Improving Privacy and Security Decision-Making","authors":"Anna Leschanowsky, Birgit Popp, Nils Peters","doi":"10.1007/s44206-023-00062-2","DOIUrl":"https://doi.org/10.1007/s44206-023-00062-2","url":null,"abstract":"","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75606463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Thermal Imaging in Robotics as a Privacy-Enhancing or Privacy-Invasive Measure? Misconceptions of Privacy when Using Thermal Cameras in Robots","authors":"Naomi Lintvedt","doi":"10.1007/s44206-023-00060-4","DOIUrl":"https://doi.org/10.1007/s44206-023-00060-4","url":null,"abstract":"","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"55 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74034808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Innovation Commons for the Data Economy","authors":"Sara Guidi","doi":"10.1007/s44206-023-00059-x","DOIUrl":"https://doi.org/10.1007/s44206-023-00059-x","url":null,"abstract":"","PeriodicalId":72819,"journal":{"name":"Digital society : ethics, socio-legal and governance of digital technology","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74698037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}