InFrame++: Achieve Simultaneous Screen-Human Viewing and Hidden Screen-Camera Communication
Anran Wang, Z. Li, Chunyi Peng, G. Shen, Gan Fang, B. Zeng
Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys 2015), May 18, 2015
DOI: 10.1145/2742647.2742652
Citations: 105
Abstract
Recent efforts in visible light communication over screen-camera links have exploited the display for data communication. Such practices, albeit convenient, create contention between screen space allocated to users and content reserved for devices, in addition to the visual unpleasantness and distraction they cause. In this paper, we propose INFRAME++, a system that enables concurrent, dual-mode, full-frame communication for both users and devices. INFRAME++ leverages the spatial-temporal flicker-fusion property of the human visual system and the fast frame rates of modern displays. It multiplexes data onto full-frame video content through novel complementary frame composition, a hierarchical frame structure, and CDMA-like modulation. It thus enables opportunistic and unobtrusive screen-camera data communication without affecting the primary video-viewing experience for human users. Our prototype and experiments confirm that it delivers data to devices while keeping video artifacts imperceptible to viewers. INFRAME++ achieves 150-240 kbps at 120 FPS over a 24-inch LCD monitor with one data frame per 12 display frames, and supports up to 360 kbps when the data-to-video frame ratio is 1:6.
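To make the mechanism concrete, below is a minimal sketch, assuming an 8x8 grayscale frame, a hypothetical embedding strength ALPHA, and Walsh codes as a stand-in for the paper's CDMA-like modulation. The core idea of complementary frame composition is that a data pattern is added to one display frame and subtracted from the next, so the pair averages back to the original video for human viewers (flicker fusion), while a camera sampling individual frames recovers the pattern by differencing them. The function names, block layout, and parameter values are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not the authors' implementation) of complementary
# frame composition plus a toy Walsh-code spreading of data bits.
# ALPHA, the block layout, and all names below are assumptions.
import numpy as np

ALPHA = 4.0  # assumed embedding strength, in 8-bit intensity units


def walsh_codes(n: int) -> np.ndarray:
    """Rows of an n x n Hadamard/Walsh matrix (n must be a power of two)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h


def embed_pair(video_frame: np.ndarray, pattern: np.ndarray):
    """Compose two complementary display frames from one video frame.

    The temporal mean of the returned pair equals the original frame, so at
    a high display rate flicker fusion hides the pattern from viewers.
    """
    f_plus = np.clip(video_frame + ALPHA * pattern, 0, 255)
    f_minus = np.clip(video_frame - ALPHA * pattern, 0, 255)
    return f_plus, f_minus


def recover_pattern(f_plus: np.ndarray, f_minus: np.ndarray) -> np.ndarray:
    """Camera-side decoding: differencing the pair cancels the video content
    and leaves a scaled estimate of the embedded pattern."""
    return (f_plus.astype(float) - f_minus.astype(float)) / (2 * ALPHA)


def tile(code: np.ndarray) -> np.ndarray:
    """Expand a length-4 spreading code into a 4x4 block of chips."""
    return code.reshape(2, 2).repeat(2, axis=0).repeat(2, axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.integers(30, 220, size=(8, 8)).astype(float)  # stand-in video frame
    codes = walsh_codes(4)          # 4 orthogonal spreading codes
    bits = np.array([1, -1, 1, 1])  # data symbols in {-1, +1}

    # Each bit is spread by one Walsh code and placed in its own 4x4 block.
    pattern = np.zeros((8, 8))
    for i, (b, c) in enumerate(zip(bits, codes)):
        r, col = divmod(i, 2)
        pattern[4 * r:4 * r + 4, 4 * col:4 * col + 4] = b * tile(c)

    f1, f2 = embed_pair(video, pattern)  # what the screen would display
    est = recover_pattern(f1, f2)        # what the camera would decode

    decoded = []
    for i, c in enumerate(codes):
        r, col = divmod(i, 2)
        block = est[4 * r:4 * r + 4, 4 * col:4 * col + 4]
        decoded.append(int(np.sign((block * tile(c)).sum())))
    print("sent:", bits.tolist(), "decoded:", decoded)
```

In the actual system, the hierarchical frame structure and CDMA-like modulation operate across many complementary frame pairs, and decoding must also cope with camera exposure, rolling shutter, and imperfect frame alignment; the sketch above ignores those issues and only shows why the embedding can stay invisible to viewers yet recoverable by a camera.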