{"title":"Texture overlay for virtual clothing based on PCA of silhouettes","authors":"Jun Ehara, H. Saito","doi":"10.1109/ISMAR.2006.297805","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297805","url":null,"abstract":"In this paper, we propose a method for overlaying an arbitrary texture image onto a surface of a plain T-shirt worn by a user. For overlaying arbitrary textures onto the surface of the T-shirt, we need to know the deformation of the surface. For estimating the deformation of the surface from the input images, we use a two-phase process: learning and searching. In the learning phase, the system learns the relationship between the deformation of the surface and the silhouette of the T-shirt region in the image. A database of a number of training images in which a person wearing a T-shirt with markers moves through a variety of positions is used for this learning. Using the database, the system can learn the relationship between the shape of the silhouette and the surface deformation that is provided by the 2D positions of the markers on the surface of the T-shirt. In the searching phase, the silhouette of the user's T-shirt is extracted from the input image, and then, a search for a similar silhouette in the database is conducted in the subspace of the silhouette, which is computed using a PCA of the database. By using the proposed method for estimating the deformation of the surface of the T-shirt, we perform experiments for overlaying virtual clothing.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128962278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transitional interface: concept, issues and framework","authors":"R. Grasset, J. Looser, M. Billinghurst","doi":"10.1109/ISMAR.2006.297819","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297819","url":null,"abstract":"Transitional Interfaces have emerged as a new way to interact and collaborate between different interactive spaces such as reality, virtual reality and augmented reality environments. In this paper we explore this concept further. We introduce a descriptive model of the concept, its collaborative aspect and how it can be generalized to describe natural and continuous transitions between contexts (e.g. across space, scale, viewpoints, and representation).","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"94 2 Suppl 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129007977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A registration evaluation system using an industrial robot","authors":"K. Satoh, Kazuki Takemoto, Shinji Uchiyama, Hiroyuki Yamamoto","doi":"10.1109/ISMAR.2006.297797","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297797","url":null,"abstract":"This paper describes an evaluation system using an industrial robot, constructed for the purpose of evaluating registration technology for Mixed Reality. In this evaluation system, the tip of the robot arm plays the role of the user's head, where a head- mounted display is mounted. By using an industrial robot, we can obtain the ground truth of the camera pose with a high level of accuracy and robustness. Additionally, we have the ability to play back the same specified operations repeatedly under identical conditions. In addition to the system implementation, we propose evaluation methods for motion robustness, relative orientation robustness, relative distance robustness, jitter, and an overall evaluation. We verify the validity of this system through some experiments.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121844963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"\"Move the couch where?\" : developing an augmented reality multimodal interface","authors":"S. Irawati, S. Green, M. Billinghurst, Andreas Dünser, H. Ko","doi":"10.1109/ISMAR.2006.297812","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297812","url":null,"abstract":"This paper describes an augmented reality (AR) multimodal interface that uses speech and paddle gestures for interaction. The application allows users to intuitively arrange virtual furniture in a virtual room using a combination of speech and gestures from a real paddle. Unlike other multimodal AR applications, the multimodal fusion is based on the combination of time-based and semantic techniques to disambiguate a users speech and gesture input. We describe our AR multimodal interface architecture and discuss how the multimodal inputs are semantically integrated into a single interpretation by considering the input time stamps, the object properties, and the user context.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121704749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of three input techniques for selection and annotation of physical objects through an augmented reality view","authors":"B. Thomas","doi":"10.1109/ISMAR.2006.297791","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297791","url":null,"abstract":"This paper presents results from a study into the usability issues of two tasks (selection and annotation of a physical object) for users operating mobile augmented reality systems. The study compared the following three different modes of cursor manipulation: a handheld mouse, a head cursor, and an image-plane vision-tracked device. The selection task was evaluated based on number of mouse button clicks, completion time, and a subjective survey. The annotation task was evaluated based on accuracy of the annotation, completion time, and a subjective survey.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114938266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}