Do We Collaborate With What We Design?
Katie D. Evans, Scott A. Robbins, Joanna J. Bryson
Topics in Cognitive Science, pages 392-411. Published 2025-04-01 (Epub 2023-08-15). DOI: 10.1111/tops.12682
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12093928/pdf/
Citations: 0
Abstract
The use of terms like "collaboration" and "co-workers" to describe interactions between human beings and certain artificial intelligence (AI) systems has gained significant traction in recent years. Yet, it remains an open question whether such anthropomorphic metaphors provide either a fertile or even a purely innocuous lens through which to conceptualize designed commercial products. Rather, a respect for human dignity and the principle of transparency may require us to draw a sharp distinction between real and faux peers. At the heart of the concept of collaboration lies the assumption that the collaborating parties are (or behave as if they are) of similar status: two agents capable of comparable forms of intentional action, moral agency, or moral responsibility. In application to current AI systems, this not only seems to fail ontologically but also from a socio-political perspective. AI in the workplace is primarily an extension of capital, not of labor, and the AI "co-workers" of most individuals will likely be owned and operated by their employer. In this paper, we critically assess both the accuracy and desirability of using the term "collaboration" to describe interactions between humans and AI systems. We begin by proposing an alternative ontology of human-machine interaction, one which features not two equivalently autonomous agents, but rather one machine that exists in a relationship of heteronomy to one or more human agents. In this sense, while the machine may have a significant degree of independence concerning the means by which it achieves its ends, the ends themselves are always chosen by at least one human agent, whose interests may differ from those of the individuals interacting with the machine. We finally consider the motivations and risks inherent to the continued use of the term "collaboration," exploring its strained relation to the concept of transparency, and consequences for the future of work.
Journal introduction:
Topics in Cognitive Science (topiCS) is an innovative new journal that covers all areas of cognitive science, including cognitive modeling, cognitive neuroscience, cognitive anthropology, and cognitive science and philosophy. topiCS aims to provide a forum for:
- New communities of researchers
- New controversies in established areas
- Debates and commentaries
- Reflections and integration

The publication features multiple scholarly papers dedicated to a single topic. Some of these topics will appear together in one issue, but others may appear across several issues or develop into a regular feature. Controversies or debates started in one issue may be followed up by commentaries in a later issue, etc. However, the format and origin of the topics will vary greatly.