Madhurima Pore, Koosha Sadeghi, Vinaya Chakati, Ayan Banerjee, S. Gupta
Title: Enabling Real-Time Collaborative Brain-Mobile Interactive Applications on Volunteer Mobile Devices
DOI: 10.1145/2799650.2799660 (https://doi.org/10.1145/2799650.2799660)
Published in: Proceedings of the 2nd International Workshop on Hot Topics in Wireless
Publication date: 2015-09-11
Citations: 15
Abstract
Commercially available wearable brain sensors and devices that convert smartphones into virtual reality systems open up the potential to implement real-time collaborative brain-mobile interactive applications. These applications may derive psychological contexts from electroencephalogram (EEG) data collected in a wireless setting, and provide individualized sensory feedback through devices such as Google Cardboard. Psychological contexts are affected not only by a user's own behavior but also by her interaction with the environment and possibly with other individuals. Hence, deriving psychological context information requires not only sensing an individual's brain but also data from her neighbors. Further, the data needs to be processed by computationally intensive machine learning algorithms, which may not execute within the desired latency on resource-limited mobile devices. In such a scenario, real-time computation of psychological contexts and administration of sensory feedback may be infeasible. In this work, we consider the idea of offloading psychological context estimation and sensory feedback computation to volunteer mobile devices, and study the feasibility of large-scale real-time ad hoc brain-mobile interface applications. We present the BraiNet architecture, which can be used to write a custom application that performs computation on brain data, derives group-level aggregate inferences, and provides feedback. Further, heavy brain-signal-processing computation can be offloaded to networked mobile devices for ad hoc real-time execution without the need for a dedicated server. We demonstrate the use of BraiNet by developing "Neuro Movie" (nMovie), which modulates movie frames based on individuals' subconscious preferences.
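The offloading idea in the abstract can be illustrated with a minimal sketch. This is not the paper's BraiNet implementation: the function names (`extract_feature`, `group_inference`, `offload`) are hypothetical, a thread pool stands in for networked volunteer devices, and the per-user feature is a toy mean-square power estimate rather than a real EEG classifier.

```python
# Hypothetical sketch of volunteer-device offloading for group-level EEG
# inference. A ThreadPoolExecutor models a pool of volunteer mobile devices;
# a real deployment would dispatch each task over the network instead.
from concurrent.futures import ThreadPoolExecutor
import statistics


def extract_feature(eeg_window):
    """Stand-in for a heavy per-user computation (e.g. a band-power estimate)."""
    return sum(x * x for x in eeg_window) / len(eeg_window)


def group_inference(features):
    """Aggregate per-user features into a single group-level context score."""
    return statistics.mean(features)


def offload(eeg_windows, n_volunteers=4):
    # Each "volunteer" processes one user's EEG window in parallel, so the
    # heavy computation never runs on the users' own (resource-limited) phones.
    with ThreadPoolExecutor(max_workers=n_volunteers) as pool:
        features = list(pool.map(extract_feature, eeg_windows))
    return group_inference(features)


# Three users' EEG windows (toy data).
windows = [[0.1, -0.2, 0.4], [0.3, 0.0, -0.1], [0.2, 0.2, 0.2]]
score = offload(windows)
```

The design point the sketch captures is that aggregation happens after per-user tasks complete, so adding volunteers scales the per-user stage without changing the group-inference step.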