Comparing the Effect of Audio and Visual Notifications on Workspace Awareness Using Head-Mounted Displays for Remote Collaboration in Augmented Reality
Marina Cidota, Stephan Lukosch, Dragos Datcu, Heide Lukosch
Augmented Human Research, vol. 1, no. 1, published 2016-10-10. DOI: 10.1007/s41133-016-0003-x

Abstract: In many fields of activity, working in teams is necessary for completing tasks properly and often requires visual, context-related information to be exchanged between team members. In such a collaborative environment, awareness of other people's activity is an important feature of shared-workspace collaboration. We have developed an augmented reality framework for virtual colocation that supports visual communication between two people in different physical locations: the remote user, who uses a laptop, and the local user, who wears a head-mounted display with an RGB camera. The remote user can assist the local user in solving a spatial problem by providing instructions in the form of virtual objects placed in the local user's view. For annotating the shared workspace, we use a state-of-the-art markerless localization and mapping algorithm that provides "anchors" in 3D space for placing virtual content. In this paper, we report on a user study that explores how automatic audio and visual notifications about the remote user's activities affect the local user's workspace awareness. We used an existing game to research virtual colocation, addressing a spatial challenge at increasing levels of task complexity. The results of the user study show that participants clearly preferred visual notifications over audio or no notifications, regardless of task difficulty.
{"title":"A Communication Paradigm Using Subvocalized Speech: Translating Brain Signals into Speech","authors":"Kusuma Mohanchandra, Snehanshu Saha","doi":"10.1007/s41133-016-0001-z","DOIUrl":"10.1007/s41133-016-0001-z","url":null,"abstract":"<div><p>Recent science and technology studies in neuroscience, rehabilitation, and machine learning have focused attention on the EEG-based brain–computer interface (BCI) as an exciting field of research. Though the primary goal of the BCI has been to restore communication in the severely paralyzed, BCI for speech communication has acquired recognition in a variety of non-medical fields. These fields include silent speech communication, cognitive biometrics, and synthetic telepathy, to name a few. Though potentially a very sensitive issue on various counts, it is likely to revolutionize the whole system of communication. Considering the wide range of application, this paper presents innovative research on BCI for speech communication. Since imagined speech suffers from quite a few factors, we have chosen to focus on subvocalized speech for the current work. The current work is considered to be the first to utilize the subvocal verbalization for EEG-based BCI in speech communication. The electrical signals generated by the human brain during subvocalized speech are captured, analyzed, and interpreted as speech. Further, the processed EEG signals are used to drive a speech synthesizer, enabling communication and acoustical feedback for the user. We attempt to demonstrate and justify that the BCI is capable of providing good results. The basis of this effort is the presumption that, whether the speech is overt or covert, it always originates in the mind. The scalp maps provide evidence that subvocal speech prediction, from the neurological signals, is achievable. The statistical results obtained from the current study demonstrate that speech prediction is possible. EEG signals suffer from the curse of dimensionality due to the intrinsic biological and electromagnetic complexities. Therefore, in the current work, the subset selection method, using pairwise cross-correlation, is proposed to reduce the size of the data while minimizing loss of information. The prominent variances obtained from the SSM, based on principal representative features, were deployed to analyze multiclass EEG signals. A multiclass support vector machine is used for the classification of EEG signals of five subvocalized words extracted from scalp electrodes. Though the current work identifies many challenges, the promise of this technology is exhibited.</p></div>","PeriodicalId":100147,"journal":{"name":"Augmented Human Research","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s41133-016-0001-z","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50018468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}