{"title":"增强认知和人机交互","authors":"M. Crosby, J. Scholtz, T. Downs","doi":"10.1109/HICSS.2003.1174283","DOIUrl":null,"url":null,"abstract":"The theme of this minitrack is how people use robots or computer systems to facilitate their performance. We sought papers that concern problems in augmented cognition and human-robot interaction. A goal of augmented cognition is to reduce the complexity of tasks. Suggested ways of accomplishing this goal include utilizing technology to adapt either the task or the way the task is represented. We encouraged papers related to all facets of augmented cognition and human-robot interaction and are pleased to present papers that not only describe systems designed to augment cognition but also provide empirical studies, field studies and case studies that evaluate these systems. Scholtz introduces the human-robot interaction portion of this minitrack by describing the rationale behind the theory and evaluation of these interactions. In describing their version of embedded interfaces for human-robot interaction, Daly, Cho, Martin and Payton show how they use techniques from augmented reality to communicate information from large numbers of small scale robots operating as a coordinated swarm. In their paper on human-robot interaction for intelligent assisted viewing during teleoperation, McKee and Brooks report finding a simpler reactive algorithm to replace their visual acts algorithm. Experimental evidence showed the newer algorithm performed as well as the previous algorithm as well as encouraging the operator to be more aware of depth information. Nicolescu and Mataric linked perception and action in a unique architecture for representation of robots' behaviors. Kawamura, Nilas, Muguruma and Johnson describe efforts to develop an adaptive graphical user interface for mixed-initiative interaction between a human and robot. In a practical application, Bruemmer, Marble, Dudenhoeffer, Anderson, and McKay present a case study that examines the human-robot dynamic of a teleoperated task. They outline a mixed-initiative command and control architecture for hazardous environments. Biagioni and Sasaki propose and analyze efficient wireless sensor placement to satisfy communication and data collection requirements. Several of the papers employ either user models or physiological measurements to assess skills or mental processes. For example, Brezillon, focuses on","PeriodicalId":159242,"journal":{"name":"36th Annual Hawaii International Conference on System Sciences, 2003. Proceedings of the","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Augmented cognition and human-robot interaction\",\"authors\":\"M. Crosby, J. Scholtz, T. Downs\",\"doi\":\"10.1109/HICSS.2003.1174283\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The theme of this minitrack is how people use robots or computer systems to facilitate their performance. We sought papers that concern problems in augmented cognition and human-robot interaction. A goal of augmented cognition is to reduce the complexity of tasks. Suggested ways of accomplishing this goal include utilizing technology to adapt either the task or the way the task is represented. 
We encouraged papers related to all facets of augmented cognition and human-robot interaction and are pleased to present papers that not only describe systems designed to augment cognition but also provide empirical studies, field studies and case studies that evaluate these systems. Scholtz introduces the human-robot interaction portion of this minitrack by describing the rationale behind the theory and evaluation of these interactions. In describing their version of embedded interfaces for human-robot interaction, Daly, Cho, Martin and Payton show how they use techniques from augmented reality to communicate information from large numbers of small scale robots operating as a coordinated swarm. In their paper on human-robot interaction for intelligent assisted viewing during teleoperation, McKee and Brooks report finding a simpler reactive algorithm to replace their visual acts algorithm. Experimental evidence showed the newer algorithm performed as well as the previous algorithm as well as encouraging the operator to be more aware of depth information. Nicolescu and Mataric linked perception and action in a unique architecture for representation of robots' behaviors. Kawamura, Nilas, Muguruma and Johnson describe efforts to develop an adaptive graphical user interface for mixed-initiative interaction between a human and robot. In a practical application, Bruemmer, Marble, Dudenhoeffer, Anderson, and McKay present a case study that examines the human-robot dynamic of a teleoperated task. They outline a mixed-initiative command and control architecture for hazardous environments. Biagioni and Sasaki propose and analyze efficient wireless sensor placement to satisfy communication and data collection requirements. Several of the papers employ either user models or physiological measurements to assess skills or mental processes. For example, Brezillon, focuses on\",\"PeriodicalId\":159242,\"journal\":{\"name\":\"36th Annual Hawaii International Conference on System Sciences, 2003. Proceedings of the\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2003-02-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"36th Annual Hawaii International Conference on System Sciences, 2003. Proceedings of the\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HICSS.2003.1174283\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"36th Annual Hawaii International Conference on System Sciences, 2003. Proceedings of the","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HICSS.2003.1174283","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The theme of this minitrack is how people use robots or computer systems to facilitate their performance. We sought papers addressing problems in augmented cognition and human-robot interaction. A goal of augmented cognition is to reduce the complexity of tasks; suggested ways of accomplishing this goal include using technology to adapt either the task or the way the task is represented. We encouraged papers related to all facets of augmented cognition and human-robot interaction, and we are pleased to present papers that not only describe systems designed to augment cognition but also provide empirical studies, field studies, and case studies that evaluate these systems. Scholtz introduces the human-robot interaction portion of this minitrack by describing the rationale behind the theory and evaluation of these interactions. In describing their version of embedded interfaces for human-robot interaction, Daly, Cho, Martin, and Payton show how they use techniques from augmented reality to communicate information from large numbers of small-scale robots operating as a coordinated swarm. In their paper on human-robot interaction for intelligent assisted viewing during teleoperation, McKee and Brooks report finding a simpler reactive algorithm to replace their visual acts algorithm. Experimental evidence showed that the newer algorithm performed as well as the previous one while also encouraging the operator to be more aware of depth information. Nicolescu and Mataric link perception and action in a unique architecture for representing robots' behaviors. Kawamura, Nilas, Muguruma, and Johnson describe efforts to develop an adaptive graphical user interface for mixed-initiative interaction between a human and a robot. In a practical application, Bruemmer, Marble, Dudenhoeffer, Anderson, and McKay present a case study that examines the human-robot dynamics of a teleoperated task, outlining a mixed-initiative command and control architecture for hazardous environments. Biagioni and Sasaki propose and analyze efficient wireless sensor placement to satisfy communication and data collection requirements. Several of the papers employ either user models or physiological measurements to assess skills or mental processes. For example, Brezillon focuses on