Morals & Machines · Pub Date: 2021-05-31 · DOI: 10.5771/2747-5182-2021-1-62
H. Sætra, E. Fosch-Villaronga
{"title":"Research in AI has Implications for Society: How do we Respond?","authors":"H. Sætra, E. Fosch-Villaronga","doi":"10.5771/2747-5182-2021-1-62","DOIUrl":"https://doi.org/10.5771/2747-5182-2021-1-62","url":null,"abstract":"Artificial intelligence (AI) offers previously unimaginable possibilities, solving problems faster and more creatively than before, representing and inviting hope and change, but also fear and resistance. Unfortunately, while the pace of technology development and application dramatically accelerates, the understanding of its implications does not follow suit. Moreover, while mechanisms to anticipate, control, and steer AI development to prevent adverse consequences seem necessary, the current power dynamics on which society should frame such development is causing much confusion. In this article we ask whether AI advances should be restricted, modified, or adjusted based on their potential legal, ethical, societal consequences. We examine four possible arguments in favor of subjecting scientific activity to stricter ethical and political control and critically analyze them in light of the perspective that science, ethics, and politics should strive for a division of labor and balance of power rather than a conflation. We argue that the domains of science, ethics, and politics should not conflate if we are to retain the ability to adequately assess the adequate course of action in light of AI‘s implications. We do so because such conflation could lead to uncertain and questionable outcomes, such as politicized science or ethics washing, ethics constrained by corporate or scientific interests, insufficient regulation, and political activity due to a misplaced belief in industry self-regulation. As such, we argue that the different functions of science, ethics, and politics must be respected to ensure AI development serves the interests of society.","PeriodicalId":105767,"journal":{"name":"Morals & Machines","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134522642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Morals & Machines · Pub Date: 2021-05-31 · DOI: 10.5771/2747-5182-2021-1-46
S. Ghose
{"title":"Beyond the Binary: Building a Quantum Future","authors":"S. Ghose","doi":"10.5771/2747-5182-2021-1-46","DOIUrl":"https://doi.org/10.5771/2747-5182-2021-1-46","url":null,"abstract":"Quantum mechanics has not only revolutionized our understanding of the fundamental laws of the universe, but has also transformed modern computing and communications technologies, leading to our current information age. The inherently nondeterministic nature of the theory is now leading to radical and powerful new frameworks for information processing and data transmission. This new quantum revolution raises social, political and ethical questions, but also provides an opportunity to develop quantum-inspired frameworks to examine and build the quantum information era.","PeriodicalId":105767,"journal":{"name":"Morals & Machines","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126850762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Morals & Machines · Pub Date: 2021-05-31 · DOI: 10.5771/2747-5182-2021-1-32
S. Soekadar, J. Chandler, M. Ienca, C. Bublitz
{"title":"On The Verge of the Hybrid Mind","authors":"S. Soekadar, J. Chandler, M. Ienca, C. Bublitz","doi":"10.5771/2747-5182-2021-1-32","DOIUrl":"https://doi.org/10.5771/2747-5182-2021-1-32","url":null,"abstract":"Recent advances in neurotechnology allow for an increasingly tight integration of the human brain and mind with artificial cognitive systems, blending persons with technologies and creating an assemblage that we call a hybrid mind. In some ways the mind has always been a hybrid, emerging from the interaction of biology, culture (including technological artifacts) and the natural environment. However, with the emergence of neurotechnologies enabling bidirectional flows of information between the brain and AI-enabled devices, integrated into mutually adaptive assemblages, we have arrived at a point where the specific examination of this new instantiation of the hybrid mind is essential. Among the critical questions raised by this development are the effects of these devices on the user’s perception of the self, and on the user’s experience of their own mental contents. Questions arise related to the boundaries of the mind and body and whether the hardware and software that are functionally integrated with the body and mind are to be viewed as parts of the person or separate artifacts subject to different legal treatment. Other questions relate to how to attribute responsibility for actions taken as a result of the operations of a hybrid mind, as well as how to settle questions of the privacy and security of information generated and retained within a hybrid mind.","PeriodicalId":105767,"journal":{"name":"Morals & Machines","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121561660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Morals & Machines · Pub Date: 2021-05-11 · DOI: 10.2139/SSRN.3843631
Thales Costa Bertaglia, Adrien Dubois, Catalina Goanta
"Clout Chasing for the Sake of Content Monetization: Gaming Algorithmic Architectures with Self-Moderation Strategies"
Abstract: This short discussion paper addresses how controversy is monetized online by reflecting on a new iteration of shock value in media production, identified on social media as the ‘clout chasing’ phenomenon. We first exemplify controversial behavior and subsequently proceed to define clout chasing, which we discuss in relation to existing frameworks for the understanding of controversy on social media. We then outline what clout chasing entails as a content monetization strategy and address the risks associated with this approach. In doing so, we introduce the concept of ‘content self-moderation’, which encompasses how creators use content moderation as a way to hedge monetization risks arising out of their reliance on controversy for economic growth. This concept is discussed in the context of the automated content governance entailed by algorithmic platform architectures, to contribute to existing scholarship on platform governance.
Morals & Machines · Pub Date: 2021-05-04 · DOI: 10.5771/2747-5174-2021-1-86
S. Ranchordás
{"title":"Experimental Regulations for AI: Sandboxes for Morals and Mores","authors":"S. Ranchordás","doi":"10.5771/2747-5174-2021-1-86","DOIUrl":"https://doi.org/10.5771/2747-5174-2021-1-86","url":null,"abstract":"Recent EU legislative and policy initiatives aim to offer flexible, innovation-friendly, and future-proof regulatory frameworks. Key examples are the EU Coordinated Plan on AI and the recently published EU AI Regulation Proposal which refer to the importance of experimenting with regulatory sandboxes so as to balance innovation in AI against its potential risks. Originally developed in the Fintech sector, regulatory sandboxes create a test bed for a selected number of innovative projects, by waiving otherwise applicable rules, guiding compliance, or customizing enforcement. Despite the burgeoning literature on regulatory sandboxes and the regulation of AI, the legal, methodological, and ethical challenges of these anticipatory or, at times, adaptive regulatory frameworks have remained understudied. This exploratory article delves into the some of the benefits and intricacies of allowing for experimental instruments in the context of the regulation of AI. This article’s contribution is twofold: first, it contextualizes the adoption of regulatory sandboxes in the broader discussion on experimental approaches to regulation; second, it offers a reflection on the steps ahead for the design and implementation of AI regulatory sandboxes.","PeriodicalId":105767,"journal":{"name":"Morals & Machines","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132775194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Morals & Machines · DOI: 10.5771/2747-5174-2021-1-74
Alexander Buhmann, Christian Fieseler
{"title":"Tackling the Grand Challenge of Algorithmic Opacity Through Principled Robust Action","authors":"Alexander Buhmann, Christian Fieseler","doi":"10.5771/2747-5174-2021-1-74","DOIUrl":"https://doi.org/10.5771/2747-5174-2021-1-74","url":null,"abstract":"Organizations increasingly delegate agency to artificial intelligence. However, such systems can yield unintended negative effects as they may produce biases against users or reinforce social injustices. What pronounces them as a unique grand challenge, however, are not their potentially problematic outcomes but their fluid design. Machine learning algorithms are continuously evolving; as a result, their functioning frequently remains opaque to humans. In this article, we apply recent work on tackling grand challenges though robust action to assess the potential and obstacles of managing the challenge of algorithmic opacity. We stress that although this approach is fruitful, it can be gainfully complemented by a discussion regarding the accountability and legitimacy of solutions. In our discussion, we extend the robust action approach by linking it to a set of principles that can serve to evaluate organisational approaches of tackling grand challenges with respect to their ability to foster accountable outcomes under the intricate conditions of algorithmic opacity.","PeriodicalId":105767,"journal":{"name":"Morals & Machines","volume":"384 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133045511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Morals & Machines · DOI: 10.5771/2747-5174-2021-1-52
Valentin Jeutner
{"title":"Addressing the Legal Dimensionof Quantum Computers","authors":"Valentin Jeutner","doi":"10.5771/2747-5174-2021-1-52","DOIUrl":"https://doi.org/10.5771/2747-5174-2021-1-52","url":null,"abstract":"Quantum computers are legal things which are going to affect our lives in a tangible manner. As such, their operation and development must be regulated and supervised. No doubt, the transformational potential of quantum computing is remarkable. But if it goes unchecked the evelopment of quantum computers is also going to impact social and legal power-relations in a remarkable manner. Legal principles that can guide regulatory action must be developed in order to hedge the risks associated with the development of quantum computing. This article contributes to the development of such principles by proposing the quantum imperative. The quantum imperative provides that regulators and developers must ensure that the development of quantum computers: (1) does not create or exacerbate inequalities, (2) does not undermine individual autonomy, and that it (3) does not occur without consulting those whose interests they affect.","PeriodicalId":105767,"journal":{"name":"Morals & Machines","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122540481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Morals & Machines · DOI: 10.5771/2747-5174-2021-1-10
Miriam Meckel, Léa Steinacker
{"title":"Hybrid Reality: The Rise of Deepfakes and Diverging Truths","authors":"Miriam Meckel, Léa Steinacker","doi":"10.5771/2747-5174-2021-1-10","DOIUrl":"https://doi.org/10.5771/2747-5174-2021-1-10","url":null,"abstract":"While the manipulation of media has existed as long as their creation, recent advances in Artificial Intelligence (AI) have expedited the range of tampering techniques. Pictures, sound and moving images can now be altered and even generated entirely by computation. We argue that this development contributes to a “hybrid reality”, a construct of both human perception and technologically driven fabrications. In using synthetic media involving deep learning, called deepfakes, as one manifestation,we show how this technological progress leads to a distorted marketplace of ideas and truths that necessitates a renegotiation of democratic processes. We synthesize implications and conclude with recommendations for how to reach a new consensus on the construction of reality.","PeriodicalId":105767,"journal":{"name":"Morals & Machines","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127035056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}