Injecting life into toys

Songchun Fan, Hyojeong Shin, Romit Roy Choudhury
{"title":"给玩具注入生命","authors":"Songchun Fan, Hyojeong Shin, Romit Roy Choudhury","doi":"10.1145/2565585.2565606","DOIUrl":null,"url":null,"abstract":"This paper envisions a future in which smartphones can be inserted into toys, such as a teddy bear, to make them interactive to children. Our idea is to leverage the smartphones' sensors to sense children's gestures, cues, and reactions, and interact back through acoustics, vibration, and when possible, the smartphone display. This paper is an attempt to explore this vision, ponder on applications, and take the first steps towards addressing some of the challenges. Our limited measurements from actual kids indicate that each child is quite unique in his/her \"gesture vocabulary\", motivating the need for personalized models. To learn these models, we employ signal processing-based approaches that first identify the presence of a gesture in a phone's sensor stream, and then learn its patterns for reliable classification. Our approach does not require manual supervision (i.e., the child is not asked to make any specific gesture); the phone detects and learns through observation and feedback. Our prototype, while far from a complete system, exhibits promise -- we now believe that an unsupervised sensing approach can enable new kinds of child-toy interactions.","PeriodicalId":360291,"journal":{"name":"Proceedings of the 15th Workshop on Mobile Computing Systems and Applications","volume":"181 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Injecting life into toys\",\"authors\":\"Songchun Fan, Hyojeong Shin, Romit Roy Choudhury\",\"doi\":\"10.1145/2565585.2565606\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper envisions a future in which smartphones can be inserted into toys, such as a teddy bear, to make them interactive to children. Our idea is to leverage the smartphones' sensors to sense children's gestures, cues, and reactions, and interact back through acoustics, vibration, and when possible, the smartphone display. This paper is an attempt to explore this vision, ponder on applications, and take the first steps towards addressing some of the challenges. Our limited measurements from actual kids indicate that each child is quite unique in his/her \\\"gesture vocabulary\\\", motivating the need for personalized models. To learn these models, we employ signal processing-based approaches that first identify the presence of a gesture in a phone's sensor stream, and then learn its patterns for reliable classification. Our approach does not require manual supervision (i.e., the child is not asked to make any specific gesture); the phone detects and learns through observation and feedback. 
Our prototype, while far from a complete system, exhibits promise -- we now believe that an unsupervised sensing approach can enable new kinds of child-toy interactions.\",\"PeriodicalId\":360291,\"journal\":{\"name\":\"Proceedings of the 15th Workshop on Mobile Computing Systems and Applications\",\"volume\":\"181 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-02-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 15th Workshop on Mobile Computing Systems and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2565585.2565606\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 15th Workshop on Mobile Computing Systems and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2565585.2565606","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

This paper envisions a future in which smartphones can be inserted into toys, such as a teddy bear, to make them interactive to children. Our idea is to leverage the smartphones' sensors to sense children's gestures, cues, and reactions, and interact back through acoustics, vibration, and when possible, the smartphone display. This paper is an attempt to explore this vision, ponder on applications, and take the first steps towards addressing some of the challenges. Our limited measurements from actual kids indicate that each child is quite unique in his/her "gesture vocabulary", motivating the need for personalized models. To learn these models, we employ signal processing-based approaches that first identify the presence of a gesture in a phone's sensor stream, and then learn its patterns for reliable classification. Our approach does not require manual supervision (i.e., the child is not asked to make any specific gesture); the phone detects and learns through observation and feedback. Our prototype, while far from a complete system, exhibits promise -- we now believe that an unsupervised sensing approach can enable new kinds of child-toy interactions.
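The abstract names a two-stage pipeline: first detect that a gesture is present in the phone's sensor stream, then learn its recurring patterns without labels. The details are not given here, so the sketch below is only illustrative: it flags high-energy windows in an accelerometer trace as candidate gesture segments and clusters simple per-segment statistics with k-means. The 50 Hz sampling rate, the energy threshold, the feature set, and the choice of k-means are all assumptions for illustration, not the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_gesture_segments(accel, fs=50, win_s=0.5, thresh=1.5):
    """Return (start, end) index pairs where motion energy spikes.

    accel: (N, 3) accelerometer samples; fs: sampling rate in Hz.
    Thresholding against the median energy is an assumed heuristic.
    """
    mag = np.linalg.norm(accel, axis=1)
    mag -= np.median(mag)                      # crude gravity-offset removal
    win = int(win_s * fs)
    energy = np.convolve(mag ** 2, np.ones(win) / win, mode="same")
    active = energy > thresh * np.median(energy)

    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= win:               # drop very short blips
                segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

def featurize(accel, segments):
    """Per-segment features: duration plus per-axis mean and std."""
    return np.array([
        np.hstack([[e - s], accel[s:e].mean(axis=0), accel[s:e].std(axis=0)])
        for s, e in segments
    ])

if __name__ == "__main__":
    # Synthetic demo: a quiet stream with two injected "shake" bursts.
    rng = np.random.default_rng(0)
    accel = rng.normal(0.0, 0.05, size=(1500, 3))
    accel[300:400] += rng.normal(0.0, 2.0, size=(100, 3))
    accel[900:1000] += rng.normal(0.0, 2.0, size=(100, 3))

    segs = detect_gesture_segments(accel)
    # Group detected segments into an unlabeled "gesture vocabulary".
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(featurize(accel, segs))
    print(segs, labels)
```

In a real deployment the cluster assignments would be refined per child over time (the paper's "observation and feedback"), since each child's gesture vocabulary differs; the fixed cluster count here is a simplification.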