The Inner Loop of Collective Human-Machine Intelligence
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto
Topics in Cognitive Science, pp. 248-267. Published online 2023-02-20; issue date 2025-04-01.
DOI: 10.1111/tops.12642 (https://doi.org/10.1111/tops.12642)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12093933/pdf/
Citations: 0
Abstract
With the rise of artificial intelligence (AI) and the desire to ensure that such machines work well with humans, it is essential for AI systems to actively model their human teammates, a capability referred to as Machine Theory of Mind (MToM). In this paper, we introduce the inner loop of human-machine teaming, expressed as communication with MToM capability. We present three different approaches to MToM: (1) constructing models of human inference with well-validated psychological theories and empirical measurements; (2) modeling the human as a copy of the AI; and (3) incorporating well-documented domain knowledge about human behavior into the above two approaches. We offer a formal language for machine communication and MToM, where each term has a clear mechanistic interpretation. We exemplify the overarching formalism and the specific approaches in two concrete example scenarios. Related work that demonstrates these approaches is highlighted along the way. The formalism, examples, and empirical support provide a holistic picture of the inner loop of human-machine teaming as a foundational building block of collective human-machine intelligence.
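The abstract does not reproduce the paper's formalism, but approach (2), modeling the human as a copy of the AI, can be illustrated with a minimal recursive Bayesian communication sketch in the style of rational-speech-acts/Bayesian-teaching models. Everything below is an illustrative assumption rather than the authors' notation: the hypothesis and message spaces, the consistency matrix, the rationality parameter, and all function names are placeholders. The point of the sketch is only that the machine chooses messages by simulating how a listener that reasons exactly like itself would update its beliefs.

```python
import numpy as np

# Minimal sketch of MToM approach (2): the machine models the human
# learner as a copy of its own Bayesian inference. All spaces and
# parameters below are toy placeholders, not the paper's formalism.

hypotheses = ["h1", "h2", "h3"]   # candidate concepts the machine wants to convey
messages = ["m1", "m2", "m3"]     # signals the machine can send

# consistency[h, m] = 1 if message m is literally compatible with hypothesis h
consistency = np.array([
    [1.0, 1.0, 0.0],   # h1 is compatible with m1, m2
    [0.0, 1.0, 1.0],   # h2 is compatible with m2, m3
    [1.0, 0.0, 1.0],   # h3 is compatible with m1, m3
])

prior = np.ones(len(hypotheses)) / len(hypotheses)

def literal_listener(consistency, prior):
    """P_L0(h | m): Bayesian update of the prior by literal consistency."""
    joint = consistency.T * prior              # rows: messages, cols: hypotheses
    return joint / joint.sum(axis=1, keepdims=True)

def machine_speaker(consistency, prior, rationality=4.0):
    """P_S(m | h): the machine favors messages in proportion to how well a
    copy of itself (the literal listener) would recover h from m."""
    listener = literal_listener(consistency, prior)   # shape (M, H)
    utility = np.log(listener.T + 1e-9)                # shape (H, M)
    scores = np.exp(rationality * utility)
    return scores / scores.sum(axis=1, keepdims=True)

def human_model(consistency, prior, rationality=4.0):
    """P_L1(h | m): the machine's model of the human, assumed to reason
    about the machine's message choice -- the MToM inner loop."""
    speaker = machine_speaker(consistency, prior, rationality)  # shape (H, M)
    joint = speaker.T * prior                                    # shape (M, H)
    return joint / joint.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    speaker = machine_speaker(consistency, prior)
    listener = human_model(consistency, prior)
    best_msg = messages[int(np.argmax(speaker[0]))]
    print(f"To convey {hypotheses[0]}, the machine sends {best_msg}")
    print("Machine's predicted human belief P(h | m):")
    print(np.round(listener, 3))
```

In this sketch the machine's model of the human (human_model) is built directly from the machine's own inference (literal_listener and machine_speaker), which is the sense in which the human is treated as a copy of the AI. Approaches (1) and (3) from the abstract would, by contrast, replace or augment that self-copy with empirically validated psychological models or documented domain knowledge about human behavior.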
Journal description:
Topics in Cognitive Science (topiCS) is an innovative new journal that covers all areas of cognitive science, including cognitive modeling, cognitive neuroscience, cognitive anthropology, and cognitive science and philosophy. topiCS aims to provide a forum for:
- New communities of researchers
- New controversies in established areas
- Debates and commentaries
- Reflections and integration
The publication features multiple scholarly papers dedicated to a single topic. Some of these topics will appear together in one issue, but others may appear across several issues or develop into a regular feature. Controversies or debates started in one issue may be followed up by commentaries in a later issue, and so on. However, the format and origin of the topics will vary greatly.