CHIL computing to overcome techno-clutter

A. Waibel
sOc-EUSAI '05 (2005-10-12). DOI: 10.1145/1107548.1107551

Abstract

After building computers that paid no attention to communicating with humans, we have in recent years developed ever more sophisticated interfaces that put the "human in the loop" of computers. These interfaces have improved usability by providing more appealing output (graphics, animations), easier-to-use input methods (mouse, pointing, clicking, dragging) and more natural interaction modes (speech, vision, gesture, etc.). Yet the promised productivity gains have largely failed to materialize, and human-machine interaction remains a partly frustrating and tedious experience, full of techno-clutter and of the excessive attention demanded by the technical artifact.

In this talk, I will argue that we must transition to a third paradigm of computer use, in which we let people interact with people and move the machine into the background, where it observes human activities and provides services implicitly, that is, to the extent possible, without explicit request. Putting the "Computer in the Human Interaction Loop" (CHIL), rather than the other way round, however, brings formidable technical challenges. The machine must now continuously observe and understand humans, model their activities, their interactions with other humans, their state and the state of the space they are in, and finally infer intentions and needs. From a perceptual user interface point of view, we must process signals from sensors that are always on, frequently positioned inappropriately, and subject to much greater variability. We must also recognize not only WHAT was seen or said in a given space, but also a broad range of additional information, such as the WHO, WHERE, HOW, TO WHOM, WHY and WHEN of human interaction and engagement.

In this talk, I will describe a variety of multimodal interface technologies that we have developed to answer these questions, as well as some preliminary CHIL-type services that take advantage of such perceptual interfaces.
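The abstract describes this observation layer only at a conceptual level. As a purely illustrative sketch, not something specified in the source, the hypothetical Python data model below shows one way a CHIL-style system might represent the WHO, WHAT, WHERE, WHEN, HOW, TO WHOM and WHY of an observed interaction and then decide whether to offer a service implicitly; all names here (InteractionEvent, suggest_implicit_service, the "needs_document" intention label) are invented for this example.

# Illustrative sketch only: the source does not specify an implementation.
# A hypothetical record of one observed human interaction, covering the
# WHO / WHAT / WHERE / WHEN / HOW / TO WHOM / WHY the talk enumerates,
# plus a toy rule for offering a service without an explicit request.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class InteractionEvent:
    who: str                       # identified actor or speaker
    what: str                      # recognized speech content or activity
    where: str                     # location within the observed space
    when: datetime                 # time of the observation
    how: str                       # modality: speech, gesture, gaze, ...
    to_whom: Optional[str] = None  # addressee, if any
    why: Optional[str] = None      # inferred intention, if one could be inferred


def suggest_implicit_service(event: InteractionEvent) -> Optional[str]:
    """Toy decision rule: act only when an intention has been inferred;
    otherwise stay in the background and add no techno-clutter."""
    if event.why == "needs_document":
        return f"Quietly show {event.who} the document mentioned to {event.to_whom}."
    return None  # no inferred need, so the machine remains silent


if __name__ == "__main__":
    event = InteractionEvent(
        who="Alice",
        what="asks about last week's meeting notes",
        where="meeting room, near the whiteboard",
        when=datetime(2005, 10, 12, 14, 30),
        how="speech",
        to_whom="Bob",
        why="needs_document",
    )
    print(suggest_implicit_service(event))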