{"title":"输入和移动交互的对象识别研讨会","authors":"H. Yeo, Gierad Laput, N. Gillian, A. Quigley","doi":"10.1145/3098279.3119839","DOIUrl":null,"url":null,"abstract":"Today we can see an increasing number of object recognition systems of very different sizes, portability, embedability and form factors which are starting to become part of the ubiquitous, tangible, mobile and wearable computing ecosystems that we might make use of in our daily lives. These systems rely on a variety of technologies including computer vision, radar, acoustic sensing, tagging and smart objects. Such systems open up a wide-range of new forms of touchless and mobile interaction. With systems deployed in mobile products then using everyday objects that can be found in the office or home, we can realise new applications and novel types of interaction. Object based interactions might revolutionise how people interact with a computer. System could be used in conjunction with a mobile phone, for example it could be trained to open a recipe app when you hold a phone to your stomach, or change its settings when operating with a gloved hand. Although the last few years have seen an increasing amount of research in this area, knowledge about this subject remains under explored, fragmented, and cuts across a set of related but heterogeneous issues. This workshop brings together researchers and practitioners interested in the challenges posed by Object Recognition for Input and Mobile Interaction.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Workshop on object recognition for input and mobile interaction\",\"authors\":\"H. Yeo, Gierad Laput, N. Gillian, A. Quigley\",\"doi\":\"10.1145/3098279.3119839\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Today we can see an increasing number of object recognition systems of very different sizes, portability, embedability and form factors which are starting to become part of the ubiquitous, tangible, mobile and wearable computing ecosystems that we might make use of in our daily lives. These systems rely on a variety of technologies including computer vision, radar, acoustic sensing, tagging and smart objects. Such systems open up a wide-range of new forms of touchless and mobile interaction. With systems deployed in mobile products then using everyday objects that can be found in the office or home, we can realise new applications and novel types of interaction. Object based interactions might revolutionise how people interact with a computer. System could be used in conjunction with a mobile phone, for example it could be trained to open a recipe app when you hold a phone to your stomach, or change its settings when operating with a gloved hand. Although the last few years have seen an increasing amount of research in this area, knowledge about this subject remains under explored, fragmented, and cuts across a set of related but heterogeneous issues. 
This workshop brings together researchers and practitioners interested in the challenges posed by Object Recognition for Input and Mobile Interaction.\",\"PeriodicalId\":120153,\"journal\":{\"name\":\"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-09-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3098279.3119839\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3098279.3119839","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Today we can see an increasing number of object recognition systems of very different sizes, portability, embeddability and form factors, which are becoming part of the ubiquitous, tangible, mobile and wearable computing ecosystems we use in our daily lives. These systems rely on a variety of technologies, including computer vision, radar, acoustic sensing, tagging and smart objects. Such systems open up a wide range of new forms of touchless and mobile interaction. With such systems deployed in mobile products and used with everyday objects found in the office or home, we can realise new applications and novel types of interaction. Object-based interactions might revolutionise how people interact with a computer. Such a system could be used in conjunction with a mobile phone: for example, it could be trained to open a recipe app when you hold the phone to your stomach, or to change its settings when it is operated with a gloved hand. Although the last few years have seen an increasing amount of research in this area, knowledge about this subject remains under-explored and fragmented, and cuts across a set of related but heterogeneous issues. This workshop brings together researchers and practitioners interested in the challenges posed by Object Recognition for Input and Mobile Interaction.
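To make the object-to-action interaction idea concrete, the following is a minimal, hypothetical Python sketch of how a recognised object or context label might be mapped to a phone-side action. The classifier stub, labels and actions are illustrative assumptions, not part of any system presented at the workshop.

```python
# Hypothetical sketch: dispatch a phone action when an object/context is recognised.
from typing import Callable, Dict

def classify_object(sensor_frame: bytes) -> str:
    """Stand-in for a real recogniser (e.g. a vision, radar or acoustic model)."""
    # A real implementation would run a trained model over the sensor frame;
    # here we return a fixed label purely for illustration.
    return "stomach"

# Map recognised object/context labels to actions on the phone.
ACTIONS: Dict[str, Callable[[], None]] = {
    "stomach": lambda: print("Opening recipe app"),
    "gloved_hand": lambda: print("Switching to glove-friendly settings"),
}

def on_sensor_frame(sensor_frame: bytes) -> None:
    """Run recognition on one frame and trigger the matching action, if any."""
    label = classify_object(sensor_frame)
    action = ACTIONS.get(label)
    if action is not None:
        action()

if __name__ == "__main__":
    on_sensor_frame(b"\x00" * 16)  # dummy frame; prints "Opening recipe app"
```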