Magic Touch: Interacting with 3D Printed Graphics
Lei Shi, Ross McLachlan, Yuhang Zhao, Shiri Azenkot
Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility
Published: October 23, 2016 · DOI: 10.1145/2982142.2982153 · Citations: 28
Graphics like maps and models are important learning materials. Recent projects have shown that 3D printers can produce tactile graphics that are more accessible to blind people. However, current 3D printed graphics can convey only limited information through their shapes and textures. We present Magic Touch (MT), a computer vision-based system that augments printed graphics with audio files associated with specific locations, or hotspots, on the model. A user can access the audio file associated with a hotspot by touching it with a pointing gesture. The system detects the user's gesture and determines the hotspot location with computer vision algorithms, comparing a video feed of the user's interaction against a digital representation of the model and its hotspots. To enable MT, a model designer must add a single tracker with fiducial tags to the model. After the tracker is added, MT requires only an RGB camera, so it can be easily deployed on many devices such as mobile phones, laptops, and smart glasses.
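The final step the abstract describes — mapping a fingertip position, already registered into the model's coordinate frame via the fiducial tracker, to the audio file of the nearest hotspot — can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the hotspot table, coordinate values, and the `radius` threshold are all invented assumptions.

```python
import math

# Hypothetical hotspot table: model-space (x, y) coordinates -> audio file.
# In the real system these would come from the model's digital representation.
HOTSPOTS = {
    (12.0, 30.0): "capitol.mp3",
    (55.0, 18.0): "river.mp3",
    (40.0, 62.0): "museum.mp3",
}

def find_hotspot(fingertip, hotspots=HOTSPOTS, radius=5.0):
    """Return the audio file of the hotspot nearest the fingertip position,
    or None if no hotspot lies within `radius` model units (assumed tolerance)."""
    best_file, best_dist = None, radius
    for (hx, hy), audio in hotspots.items():
        d = math.hypot(fingertip[0] - hx, fingertip[1] - hy)
        if d <= best_dist:
            best_file, best_dist = audio, d
    return best_file
```

A touch detected at (13.0, 29.0), for example, falls within 5 units of the first hotspot and would trigger its audio, while a touch far from every hotspot returns nothing. The nearest-within-threshold rule is one plausible way to tolerate the pointing and tracking error inherent in a camera-only setup.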