{"title":"An improved algorithm for hairstyle dynamics","authors":"Wenjun Lao, Dehui Kong, Baocai Yin","doi":"10.1109/ICMI.2002.1167052","DOIUrl":"https://doi.org/10.1109/ICMI.2002.1167052","url":null,"abstract":"This paper introduces an efficient and flexible hair modeling method for developing intricate hairstyle dynamics. A prominent contribution of the present work is an evaluation approach for the spring coefficient: the spring coefficient can be obtained through a combination of the large-deflection deformation model and the spring-hinge model. This is based on the fact that the spring coefficient is directly proportional to the stiffness coefficient, a variable determined by hair shape. Moreover, the damping coefficient is no longer regarded as a constant but as a function of hair density, a treatment that has proved successful in solving the problem of hair-hair collision. As a result, a dynamic model that fits a great variety of hairstyles is proposed.","PeriodicalId":208377,"journal":{"name":"Proceedings. Fourth IEEE International Conference on Multimodal Interfaces","volume":"63 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114093835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3-D articulated pose tracking for untethered deictic reference","authors":"D. Demirdjian, Trevor Darrell","doi":"10.1109/ICMI.2002.1167005","DOIUrl":"https://doi.org/10.1109/ICMI.2002.1167005","url":null,"abstract":"Arm and body pose are useful cues for deictic reference - users naturally extend their arms to objects of interest in a dialog. We present recent progress on untethered sensing of articulated arm and body configuration using robust stereo vision techniques. These techniques allow robust, accurate, real-time tracking of 3D position and orientation. We demonstrate users' performance with our system on object selection tasks and describe our initial efforts to integrate this system into a multimodal conversational dialog framework.","PeriodicalId":208377,"journal":{"name":"Proceedings. Fourth IEEE International Conference on Multimodal Interfaces","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121241444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Body-based interfaces","authors":"Changseok Cho, Huichul Yang, G. Kim, S. Han","doi":"10.1109/ICMI.2002.1167040","DOIUrl":"https://doi.org/10.1109/ICMI.2002.1167040","url":null,"abstract":"This research explores different ways to use features of one's own body for interacting with computers. In the future, such \"body-based\" interfaces may be put to good use in wearable computing or virtual reality systems as part of a 3D multi-modal interface, freeing the user from holding interaction devices. We have identified four types of body-based interfaces: the Body-inspired-metaphor uses various parts of the body metaphorically for interaction; the Body-as-interaction-surface simply uses parts of the body as points of interaction; Mixed-mode mixes the former two; Object-mapping spatially maps the interaction object to the human body. These four body-based interfaces were applied to three different applications (and associated tasks) and were tested for their performance and utility. It was generally found that, while the Body-inspired-metaphor produced the lowest error rate, it required a longer task completion time and caused more fatigue due to the longer hand moving distance. On the other hand, the Body-as-interaction-surface was the fastest, but produced many more errors.","PeriodicalId":208377,"journal":{"name":"Proceedings. Fourth IEEE International Conference on Multimodal Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130514576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Requirements for automatically generating multi-modal interfaces for complex appliances","authors":"Jeffrey Nichols, B. Myers, T. Harris, R. Rosenfeld, S. Shriver, Michael Higgins, Joseph Hughes","doi":"10.1109/ICMI.2002.1167024","DOIUrl":"https://doi.org/10.1109/ICMI.2002.1167024","url":null,"abstract":"Several industrial and academic research groups are working to simplify the control of appliances and services by creating a truly universal remote control. Unlike the preprogrammed remote controls available today, these new controllers download a specification from the appliance or service and use it to automatically generate a remote control interface. This promises to be a useful approach because the specification can be made detailed enough to generate both speech and graphical interfaces. Unfortunately, generating good user interfaces can be difficult. Based on user studies and prototype implementations, this paper presents a set of requirements that we have found are needed for automatic interface generation systems to create high-quality user interfaces.","PeriodicalId":208377,"journal":{"name":"Proceedings. Fourth IEEE International Conference on Multimodal Interfaces","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130626544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust noisy speech recognition with adaptive frequency bank selection","authors":"Ye Tian, Ji Wu, Zuoying Wang, Dajin Lu","doi":"10.1109/icmi.2002.1166972","DOIUrl":"https://doi.org/10.1109/icmi.2002.1166972","url":null,"abstract":"","PeriodicalId":208377,"journal":{"name":"Proceedings. Fourth IEEE International Conference on Multimodal Interfaces","volume":"235 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115762106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}