AI & Society, Pub Date: 2023-06-14, DOI: 10.1007/s00146-023-01708-y
Joshua C. Gellers
{"title":"AI ethics discourse: a call to embrace complexity, interdisciplinarity, and epistemic humility","authors":"Joshua C. Gellers","doi":"10.1007/s00146-023-01708-y","DOIUrl":"10.1007/s00146-023-01708-y","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2593 - 2594"},"PeriodicalIF":2.9,"publicationDate":"2023-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123474350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society, Pub Date: 2023-06-13, DOI: 10.1007/s00146-023-01698-x
Robert Sparrow
{"title":"Friendly AI will still be our master. Or, why we should not want to be the pets of super-intelligent computers","authors":"Robert Sparrow","doi":"10.1007/s00146-023-01698-x","DOIUrl":"10.1007/s00146-023-01698-x","url":null,"abstract":"<div><p>When asked about humanity’s future relationship with computers, Marvin Minsky famously replied “If we’re lucky, they might decide to keep us as pets”. A number of eminent authorities continue to argue that there is a real danger that “super-intelligent” machines will enslave—perhaps even destroy—humanity. One might think that it would swiftly follow that we should abandon the pursuit of AI. Instead, most of those who purport to be concerned about the existential threat posed by AI default to worrying about what they call the “Friendly AI problem”. Roughly speaking this is the question of how we might ensure that the AI that will develop from the first AI that we create will remain sympathetic to humanity and continue to serve, or at least take account of, our interests. In this paper I draw on the “neo-republican” philosophy of Philip Pettit to argue that solving the Friendly AI problem would not change the fact that the advent of super-intelligent AI would be disastrous for humanity by virtue of rendering us the slaves of machines. A key insight of the republican tradition is that freedom requires equality of a certain sort, which is clearly lacking between pets and their owners. Benevolence is not enough. As long as AI has the power to interfere in humanity’s choices, and the capacity to do so without reference to our interests, then it will dominate us and thereby render us unfree. The pets of kind owners are still pets, which is not a status which humanity should embrace. If we really think that there is a risk that research on AI will lead to the emergence of a superintelligence, then we need to think again about the wisdom of researching AI at all.\u0000</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2439 - 2444"},"PeriodicalIF":2.9,"publicationDate":"2023-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01698-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123898757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society, Pub Date: 2023-06-10, DOI: 10.1007/s00146-023-01707-z
Gabriel Lanyi
{"title":"The galloping editor","authors":"Gabriel Lanyi","doi":"10.1007/s00146-023-01707-z","DOIUrl":"10.1007/s00146-023-01707-z","url":null,"abstract":"<div><p>Classical natural language processing endeavored to understand the language of native speakers. When this proved to lie beyond the horizon, a scaled-down version settled for text analysis and processing but retained the old name and acronym. But text ≠ language. Any combination of signs and symbols qualifies as text. Language presupposes meaning, which is what connects it to real life. Failing to distinguish between the two results in confusing humanoids (machines thinking like humans) with machinoids (humans thinking like machines). As scientific English (SciEng) became the lingua franca of science, it has acquired all the traits of a machine language: reduced vocabulary, where fewer and fewer words have taken on more and more meanings; prescribed use of pronouns; depersonalized rigid syntactic forms and rules of composition. Compliance with SciEng standards can be automatically verified, which means that Sci Eng can be automatically imitated, what is referred to as AI writing (ChatGPT). The article discusses an attempt to automatically correct deviations from the rules by what is touted as AI editing.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2457 - 2461"},"PeriodicalIF":2.9,"publicationDate":"2023-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128554995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society, Pub Date: 2023-06-03, DOI: 10.1007/s00146-023-01697-y
Pamela Robinson
{"title":"Moral disagreement and artificial intelligence","authors":"Pamela Robinson","doi":"10.1007/s00146-023-01697-y","DOIUrl":"10.1007/s00146-023-01697-y","url":null,"abstract":"<div><p>Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. <i>Moral solutions</i> apply a moral theory or related principles and largely ignore the details of the disagreement. <i>Compromise solutions</i> apply a method of finding a compromise and taking information about the disagreement as input. <i>Epistemic solutions</i> apply an evidential rule that treats the details of the disagreement as evidence of moral truth. Proposals for all three kinds of solutions can be found in the AI ethics and value alignment literature, but little has been said to justify choosing one over the other. I argue that the choice is best framed in terms of <i>moral risk</i>.\u0000</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2425 - 2438"},"PeriodicalIF":2.9,"publicationDate":"2023-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01697-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134974333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society, Pub Date: 2023-06-02, DOI: 10.1007/s00146-023-01699-w
Uwe Klein, Jana Depping, Laura Wohlfahrt, Pantaleon Fassbender
{"title":"Application of artificial intelligence: risk perception and trust in the work context with different impact levels and task types","authors":"Uwe Klein, Jana Depping, Laura Wohlfahrt, Pantaleon Fassbender","doi":"10.1007/s00146-023-01699-w","DOIUrl":"10.1007/s00146-023-01699-w","url":null,"abstract":"<div><p>Following the studies of Araujo et al. (AI Soc 35:611–623, 2020) and Lee (Big Data Soc 5:1–16, 2018), this empirical study uses two scenario-based online experiments. The sample consists of 221 subjects from Germany, differing in both age and gender. The original studies are not replicated one-to-one. New scenarios are constructed as realistically as possible and focused on everyday work situations. They are based on the AI acceptance model of Scheuer (Grundlagen intelligenter KI-Assistenten und deren vertrauensvolle Nutzung. Springer, Wiesbaden, 2020) and are extended by individual descriptive elements of AI systems in comparison to the original studies. The first online experiment examines decisions made by artificial intelligence with varying degrees of impact. In the high-impact scenario, applicants are automatically selected for a job and immediately received an employment contract. In the low-impact scenario, three applicants are automatically invited for another interview. In addition, the relationship between age and risk perception is investigated. The second online experiment tests subjects’ perceived trust in decisions made by artificial intelligence, either semi-automatically through the assistance of human experts or fully automatically in comparison. Two task types are distinguished. The task type that requires “human skills”—represented as a performance evaluation situation—and the task type that requires “mechanical skills”—represented as a work distribution situation. In addition, the extent of negative emotions in automated decisions is investigated. The results are related to the findings of Araujo et al. (AI Soc 35:611–623, 2020) and Lee (Big Data Soc 5:1–16, 2018). Implications for further research activities and practical relevance are discussed.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2445 - 2456"},"PeriodicalIF":2.9,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01699-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128787268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society, Pub Date: 2023-06-02, DOI: 10.1007/s00146-023-01691-4
Ignacy Sitnicki
{"title":"The approach to AI emergence from the standpoint of future contingents","authors":"Ignacy Sitnicki","doi":"10.1007/s00146-023-01691-4","DOIUrl":"10.1007/s00146-023-01691-4","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2385 - 2387"},"PeriodicalIF":2.9,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127933717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society, Pub Date: 2023-06-01, DOI: 10.1007/s00146-023-01695-0
Mihai Nadin
{"title":"Intelligence at any price? A criterion for defining AI","authors":"Mihai Nadin","doi":"10.1007/s00146-023-01695-0","DOIUrl":"10.1007/s00146-023-01695-0","url":null,"abstract":"<div><p>According to how AI has defined itself from its beginning, thinking in non-living matter, i.e., without life, is possible. The premise of symbolic AI is that operating on representations of reality machines can understand it. When this assumption did not work as expected, the mathematical model of the neuron became the engine of artificial “brains.” Connectionism followed. Currently, in the context of Machine Learning success, attempts are made at integrating the symbolic and connectionist paths. There is hope that Artificial General Intelligence (AGI) performance can be achieved. As encouraging as neuro-symbolic AI seems to be, it remains unclear whether AGI is actually a moving target as long as AI itself remains ambiguously defined. This paper makes the argument that the intelligence of machines, expressed in their performance, reflects how adequate the means used for achieving it are. Therefore, energy use and the amount of data necessary qualify as a good metric for comparing natural and artificial performance. \u0000</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"38 5","pages":"1813 - 1817"},"PeriodicalIF":3.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49998908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society, Pub Date: 2023-05-31, DOI: 10.1007/s00146-023-01688-z
Jon Eklöf, Thomas Hamelryck, Cadell Last, Alexander Grima, Ulrika Lundh Snis
{"title":"Abstraction, mimesis and the evolution of deep learning","authors":"Jon Eklöf, Thomas Hamelryck, Cadell Last, Alexander Grima, Ulrika Lundh Snis","doi":"10.1007/s00146-023-01688-z","DOIUrl":"10.1007/s00146-023-01688-z","url":null,"abstract":"<div><p>Deep learning developers typically rely on deep learning software frameworks (DLSFs)—simply described as pre-packaged libraries of programming components that provide high-level access to deep learning functionality. New DLSFs progressively encapsulate mathematical, statistical and computational complexity. Such higher levels of abstraction subsequently make it easier for deep learning methodology to spread through mimesis (i.e., imitation of models perceived as successful). In this study, we quantify this increase in abstraction and discuss its implications. Analyzing publicly available code from Github, we found that the introduction of DLSFs correlates both with significant increases in the number of deep learning projects and substantial reductions in the number of lines of code used. We subsequently discuss and argue the importance of abstraction in deep learning with respect to ephemeralization, technological advancement, democratization, adopting timely levels of abstraction, the emergence of mimetic deadlocks, issues related to the use of black box methods including privacy and fairness, and the concentration of technological power. Finally, we also discuss abstraction as a symptom of an ongoing technological metatransition.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2349 - 2357"},"PeriodicalIF":2.9,"publicationDate":"2023-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01688-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131563495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI & Society, Pub Date: 2023-05-30, DOI: 10.1007/s00146-023-01693-2
Cian Murphy, Peter J. Carew, Larry Stapleton
{"title":"A human-centred systems manifesto for smart digital immersion in Industry 5.0: a case study of cultural heritage","authors":"Cian Murphy, Peter J. Carew, Larry Stapleton","doi":"10.1007/s00146-023-01693-2","DOIUrl":"10.1007/s00146-023-01693-2","url":null,"abstract":"<div><p>Emergent digital technologies provide cultural heritage spaces with the opportunity to reassess their current user journey. An immersive user experience can be developed that is innovative, dynamic, and customised for each attendee. Museums have already begun to move towards interactive exhibitions utilising Artificial Intelligence (AI) and the Internet of Things (IOT), and more recently, the use of Virtual Reality (VR) and Augmented Reality (AR) has become more common in cultural heritage spaces to present items of historical significance. VR concentrates on the provision of full immersion within a digitised environment utilising a headset, whilst AR focuses on the inclusion of digitised content within the existing physical environment that can be accessed through a medium such as a mobile phone application. Machine learning techniques such as a recommender system can support an immersive user journey by issuing personalised recommendations regarding a user’s preferred future content based on their previous activity. An ethical approach is necessary to take the precautions required to protect the welfare of human participants and eliminate any aspect of stereotyping or biased behaviour. This paper sets out a human-centred manifesto intended to provide guidance when inducing smart digital immersion in cultural heritage spaces. A review of existing digital cultural heritage projects was conducted to determine their adherence to the manifesto with the findings indicating that Education was a primary focus across all projects and that Personalisation, Respect and Empathy, and Support were also highly valued. Additionally, the findings indicated that there were areas with room for improvement such as Fairness to ensure that a well-balanced human-centred system is implemented.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2401 - 2416"},"PeriodicalIF":2.9,"publicationDate":"2023-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135643162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}