Title: The Inner Loop of Collective Human-Machine Intelligence
Authors: Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto
Topics in Cognitive Science, pp. 248-267, published April 1, 2025. DOI: 10.1111/tops.12642
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12093933/pdf/
Abstract: With the rise of artificial intelligence (AI) and the desire to ensure that such machines work well with humans, it is essential for AI systems to actively model their human teammates, a capability referred to as Machine Theory of Mind (MToM). In this paper, we introduce the inner loop of human-machine teaming, expressed as communication with MToM capability. We present three approaches to MToM: (1) constructing models of human inference from well-validated psychological theories and empirical measurements; (2) modeling the human as a copy of the AI; and (3) incorporating well-documented domain knowledge about human behavior into the first two approaches. We offer a formal language for machine communication and MToM in which each term has a clear mechanistic interpretation, and we exemplify the overarching formalism and the specific approaches in two concrete scenarios. Related work that demonstrates these approaches is highlighted along the way. The formalism, examples, and empirical support provide a holistic picture of the inner loop of human-machine teaming as a foundational building block of collective human-machine intelligence.
Title: The Role of Adaptation in Collective Human-AI Teaming
Authors: Michelle Zhao, Reid Simmons, Henny Admoni
Topics in Cognitive Science, pp. 291-323, published April 1, 2025. DOI: 10.1111/tops.12633
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12093936/pdf/
Abstract: This paper explores a framework for defining artificial intelligence (AI) that adapts to individuals within a group, and discusses the technical challenges for collaborative AI systems that must work with different human partners. Collaborative AI is not one-size-fits-all, and thus AI systems must tune their output to each human partner's needs and abilities. For example, when communicating with a partner, an AI should consider how prepared that partner is to receive and correctly interpret the information. Forgoing such individual considerations may adversely impact the partner's mental state and proficiency. On the other hand, successfully adapting to each person's (or team member's) behavior and abilities can yield performance benefits for the human-AI team. Under this framework, an AI teammate adapts to human partners by first learning components of the human's decision-making process and then updating its own behaviors to positively influence the ongoing collaboration. This paper explains the role of this AI adaptation formalism in dyadic human-AI interactions and examines its application through a case study in a simulated navigation domain.
Title: Hunting for Paradoxes: A Research Strategy for Cognitive Science
Author: Nick Chater
Topics in Cognitive Science, published April 1, 2025. DOI: 10.1111/tops.70004
Abstract: How should we identify interesting topics in cognitive science? This paper suggests that one useful research strategy is to hunt for, and attempt to resolve, paradoxes: that is, apparent or real contradictions in our understanding of the mind and of thought. The rationale for this strategy is the assumption that our current thinking, and our various partial theories, of any topic are typically ill-defined, inconsistent, or both. Thus, contradictions and confusions abound. Isolating paradoxes helps us expose vagueness and contradictions and demands that we formulate our ideas more precisely. From this point of view, finding a robust and puzzling contradiction in our current thinking should be celebrated as an achievement in itself. Ideally, of course, we then make further progress by clarifying how the paradox may be resolved, by clarifying our theories or finding new data that may decide between inconsistent assumptions. This approach is illustrated through examples from the author's research over several decades, which seems in retrospect to involve a repeated, if largely unwitting, application of this strategy.
Title: Fostering Collective Intelligence in Human-AI Collaboration: Laying the Groundwork for COHUMAIN
Authors: Pranav Gupta, Thuy Ngoc Nguyen, Cleotilde Gonzalez, Anita Williams Woolley
Topics in Cognitive Science, pp. 189-216, published April 1, 2025. DOI: 10.1111/tops.12679
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12093911/pdf/
Abstract: Artificial intelligence (AI) powered machines are increasingly mediating our work and many of our managerial, economic, and cultural interactions. While technology enhances individual capability in many ways, how do we know that the sociotechnical system as a whole, consisting of a complex web of hundreds of human-machine interactions, is exhibiting collective intelligence? Research on human-machine interactions has been conducted within different disciplinary silos, resulting in social science models that underestimate technology and vice versa. Bringing together these different perspectives and methods at this juncture is critical. To truly advance our understanding of this important and quickly evolving area, we need vehicles that help research connect across disciplinary boundaries. This paper advocates for establishing an interdisciplinary research domain, Collective Human-Machine Intelligence (COHUMAIN), and outlines a research agenda for a holistic approach to designing and developing the dynamics of sociotechnical systems. To illustrate the kind of approach we envision in this domain, we describe recent work on a sociocognitive architecture, the transactive systems model of collective intelligence, that articulates the critical processes underlying the emergence and maintenance of collective intelligence, and we extend it to human-AI systems. We connect this with synergistic work on a compatible cognitive architecture, instance-based learning theory, and apply it to the design of AI agents that collaborate with humans. We present this work as a call to researchers working on related questions to not only engage with our proposal but also develop their own sociocognitive architectures and unlock the real potential of human-machine intelligence.
Title: Shifting Between Models of Mind: New Insights Into How Human Minds Give Rise to Experiences of Spiritual Presence and Alternative Realities
Authors: Kara Weisman, Tanya Marie Luhrmann
Topics in Cognitive Science, pp. 144-179, published April 1, 2025. DOI: 10.1111/tops.70002
Abstract: Phenomenal experiences of immaterial spiritual beings (hearing the voice of God, seeing the spirit of an ancestor) are a valuable and largely untapped resource for the field of cognitive science. Such experiences, we argue, are experiences of the mind, tied to mental models and cognitive-epistemic attitudes about the mind, and thus provide a striking example of how, with the right combination of mental models and cognitive-epistemic attitudes, one's own thoughts and inner sensations can be experienced as coming from somewhere or someone else. In this paper, we present results from a large-scale study of U.S. adults (N = 1779) that provides new support for our theory that spiritual experiences are facilitated by a dynamic interaction between mental models and cognitive-epistemic attitudes: A person is more likely to hear God speak if they have the epistemic flexibility and cultural support to shift, temporarily, away from a mundane model of mind into a more "porous" way of thinking and being. This, in turn, lays the foundation for a meditation on how mental models and cognitive-epistemic attitudes might also interact to facilitate other phenomena of interest to cognitive science, such as fiction writing and scientific discovery.
Title: Human Performance in Competitive and Collaborative Human-Machine Teams
Authors: Murray S Bennett, Laiton Hedley, Jonathon Love, Joseph W Houpt, Scott D Brown, Ami Eidels
Topics in Cognitive Science, pp. 324-348, published April 1, 2025. DOI: 10.1111/tops.12683
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12093930/pdf/
Abstract: In the modern world, many important tasks have become too complex for a single unaided individual to manage. Teams conduct some safety-critical tasks to improve task performance and minimize the risk of error. These teams have traditionally consisted of human operators, yet nowadays artificial intelligence and machine systems are incorporated into team environments to improve performance and capacity. We used a computerized task modeled after a classic arcade game to investigate the performance of human-machine and human-human teams. We manipulated the group conditions between team members: they were instructed to collaborate, to compete, or to work separately. We evaluated players' performance in the main task (gameplay) and, in post hoc analyses, examined participants' behavioral patterns to infer group strategies. We compared game performance between team types (human-human vs. human-machine) and group conditions (competitive, collaborative, independent). Adapting workload capacity analysis to human-machine teams, we found that performance under both team types and all group conditions suffered an efficiency cost. However, this cost was reduced in collaborative relative to competitive teams within human-human pairings, an effect that was diminished when playing with a machine partner. The implications of workload capacity analysis as a powerful tool for measuring human-machine team performance are discussed.
Title: Self-beliefs, Transactive Memory Systems, and Collective Identification in Teams: Articulating the Socio-Cognitive Underpinnings of COHUMAIN
Authors: Ishani Aggarwal, Gabriela Cuconato, Nüfer Yasin Ateş, Nicoleta Meslec
Topics in Cognitive Science, pp. 217-247, published April 1, 2025. DOI: 10.1111/tops.12681
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12093922/pdf/
Abstract: Socio-cognitive theory conceptualizes individual contributors as both enactors of cognitive processes and targets of a social context's determinative influences. The present research investigates how contributors' metacognition, or self-beliefs, combines with others' views of them to inform collective team states related to learning about other agents (i.e., transactive memory systems) and forming social attachments with other agents (i.e., collective team identification), both important teamwork states that have implications for team collective intelligence. We test the predictions in a longitudinal study with 78 teams. Additionally, we provide interview data from industry experts in human-artificial intelligence teams. Our findings contribute to an emerging socio-cognitive architecture for COllective HUman-MAchine INtelligence (i.e., COHUMAIN) by articulating its underpinnings in individual and collective cognition and metacognition. Our resulting model has implications for the critical inputs necessary to design and enable a higher level of integration of human and machine teammates.
Title: Do We Collaborate With What We Design?
Authors: Katie D Evans, Scott A Robbins, Joanna J Bryson
Topics in Cognitive Science, pp. 392-411, published April 1, 2025. DOI: 10.1111/tops.12682
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12093928/pdf/
Abstract: The use of terms like "collaboration" and "co-workers" to describe interactions between human beings and certain artificial intelligence (AI) systems has gained significant traction in recent years. Yet, it remains an open question whether such anthropomorphic metaphors provide either a fertile or even a purely innocuous lens through which to conceptualize designed commercial products. Rather, a respect for human dignity and the principle of transparency may require us to draw a sharp distinction between real and faux peers. At the heart of the concept of collaboration lies the assumption that the collaborating parties are (or behave as if they are) of similar status: two agents capable of comparable forms of intentional action, moral agency, or moral responsibility. In application to current AI systems, this not only seems to fail ontologically but also from a socio-political perspective. AI in the workplace is primarily an extension of capital, not of labor, and the AI "co-workers" of most individuals will likely be owned and operated by their employer. In this paper, we critically assess both the accuracy and desirability of using the term "collaboration" to describe interactions between humans and AI systems. We begin by proposing an alternative ontology of human-machine interaction, one which features not two equivalently autonomous agents, but rather one machine that exists in a relationship of heteronomy to one or more human agents. In this sense, while the machine may have a significant degree of independence concerning the means by which it achieves its ends, the ends themselves are always chosen by at least one human agent, whose interests may differ from those of the individuals interacting with the machine. We finally consider the motivations and risks inherent to the continued use of the term "collaboration," exploring its strained relation to the concept of transparency, and its consequences for the future of work.
Title: Establishing Human Observer Criterion in Evaluating Artificial Social Intelligence Agents in a Search and Rescue Task
Authors: Lixiao Huang, Jared Freeman, Nancy J Cooke, Myke C Cohen, Xiaoyun Yin, Jeska Clark, Matt Wood, Verica Buchanan, Christopher Corral, Federico Scholcover, Anagha Mudigonda, Lovein Thomas, Aaron Teo, John Colonna-Romano
Topics in Cognitive Science, pp. 349-373, published April 1, 2025. DOI: 10.1111/tops.12648
Abstract: Artificial social intelligence (ASI) agents have great potential to aid the success of individuals, human-human teams, and human-artificial intelligence teams. To develop helpful ASI agents, we created an urban search and rescue task environment in Minecraft to evaluate ASI agents' ability to infer participants' knowledge training conditions and predict the next victim type a participant would rescue. We evaluated ASI agents' capabilities in three ways: (a) comparison to ground truth, that is, the actual knowledge training condition and participant actions; (b) comparison among different ASI agents; and (c) comparison to a human observer criterion, whose accuracy served as a reference point. The human observers and the ASI agents used video data and timestamped event messages from the testbed, respectively, to make inferences about the same participants and topic (knowledge training condition) and the same instances of participant actions (rescue of victims). Overall, ASI agents performed better than human observers in inferring knowledge training conditions and predicting actions. Refining the human criterion can guide the design and evaluation of ASI agents for complex task environments and team composition.