{"title":"Disambiguated linear word translation in medium European languages","authors":"Márton Makrai","doi":"10.1109/COGINFOCOM.2015.7390618","DOIUrl":"https://doi.org/10.1109/COGINFOCOM.2015.7390618","url":null,"abstract":"An earlier paper used triangulated word translations as seed in linear translation between medium European languages. The present work improves upon it by handling word ambiguity both in the main (i.e. source and target) languages and in the pivot by training a multi-prototype vector space model in the former, filtering triangles based on scores computed by a linear model trained with direct (non-triangulated) translations, and finally translating the whole vocabulary with a linear translation trained on filtered triangles (in addition to direct ones).","PeriodicalId":377891,"journal":{"name":"2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128531247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"\"Empirical identification\" of the creative cognitive unconscious processes in the collective individuation concerning the \"World-Clock models\": Part I. Pauli's World Clock dreams and some historical \"World-Clock models\"","authors":"P. Várlaki, P. Baranyi","doi":"10.1109/COGINFOCOM.2015.7390660","DOIUrl":"https://doi.org/10.1109/COGINFOCOM.2015.7390660","url":null,"abstract":"This paper discusses the empirical identification of hypothetical unconscious creative processes of collective individuation on the basis of Pauli-Jung \"World-Clock\" interpretations comparing them with their historical ante-descents starting from the \"revolutionary\" double rotating Sephirotic (partly astronomical partly pleromatic `World-Clocklike') circles as \"models\" of the Book Bahir and the Royal Mirror of St Stephen. According to our Jungian depth-psychological hypothesis the latter can identify the significant broadening of the creative reflective consciousness.","PeriodicalId":377891,"journal":{"name":"2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126253905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Q-learning vs. FRIQ-learning in the Maze problem","authors":"T. Tompa, S. Kovács","doi":"10.1109/COGINFOCOM.2015.7390652","DOIUrl":"https://doi.org/10.1109/COGINFOCOM.2015.7390652","url":null,"abstract":"The goal of this paper is to give a demonstrative example for introducing the benefits of the FRIQ-learning (Fuzzy Rule Interpolation-based Q-learning) versus the traditional discrete Q-learning. The chosen example is an easily scalable discrete state and discrete action space task the Maze problem. The main difference of the two studied reinforcement learning methods, that the traditional Q-learning has discrete state, action and Q-function representation. While the FRIQ-learning has continuous state, action space and a Fuzzy Rule Interpolation based Q-function representation. For comparing the convergence speed of the two methods, both will start from an empty knowledge base, zero Q-table for the Q-learning and empty rule-base for the FRIQ-learning and following the same policy stops at the same performance condition. In the example of the paper the Maze problem will be studied in different obstacle configurations and different scaling.","PeriodicalId":377891,"journal":{"name":"2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130823917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"InfoPlant: Multimodal augmentation of plants for enhanced human-computer interaction","authors":"Jan Hammerschmidt, T. Hermann, Alex Walender, Niels Kromker","doi":"10.1109/COGINFOCOM.2015.7390646","DOIUrl":"https://doi.org/10.1109/COGINFOCOM.2015.7390646","url":null,"abstract":"In this work, we present and evaluate a novel ambient information display that is designed to provide unobtrusive yet engaging feedback. The basis of this display is a natural, living plant, which is augmented in several ways to enable it to indicate information in various different ways. We describe the design and the construction of the InfoPlant, discuss its different modalities and present two demonstrator systems, including a novel eco-feedback display. A subsequent study showed that the InfoPlant was indeed perceived as unobtrusive by the large majority of participants and that it was easily accepted as a possible new entity in a living-room context. Also, the provided feedback was assessed as generally very helpful and that it would make users aware of their resource consumption and could have an influence on their consumption behavior.","PeriodicalId":377891,"journal":{"name":"2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130894025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supervising Biofeedback-based serious games","authors":"Dorottya Bodolai, László Gazdi, B. Forstner, Luca Szegletes","doi":"10.1109/COGINFOCOM.2015.7390603","DOIUrl":"https://doi.org/10.1109/COGINFOCOM.2015.7390603","url":null,"abstract":"Children with learning disabilities require the supervision of special expert teachers during their learning process. The recent years, educational pieces of software for students with dyslexia, dysgraphia, dyscalculia or Attention Deficit Hyperactivity Disorder (ADHD) were developed. The problem of using such applications is twofold. First, there is no solution for distance supervising those students who have no direct access to the appropriate experts. Second, in classroom situations or frontal education, the attention of the supervisor is shared. In this paper we present a framework that gives universal solution to supervising students learning with mobile applications. In addition, biofeedback technologies are applied to represent the mental state of the children for the teacher, enabling her to infer their mood to keep the learning process on an optimal track.","PeriodicalId":377891,"journal":{"name":"2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134416394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye tracking precision in a virtual CAVE environment","authors":"M. Köles, K. Hercegfi","doi":"10.1109/COGINFOCOM.2015.7390611","DOIUrl":"https://doi.org/10.1109/COGINFOCOM.2015.7390611","url":null,"abstract":"Engaging in collaborative work using a virtual CAVE environment can be a stimulating experience for the user. However, recording eye movements in such an environment poses new challenges for the experimenter. In our study we used a collaborative information management task where one of the participants was in a CAVE. A three wall setup with electromagnetic head tracking and head mounted eye tracker was used. Each time there was a subtask inserted before and after the main task where specific points in space were pointed out to the participant to measure gaze tracking accuracy. In this paper we report our experiences and some preliminary results about eye tracking accuracy in the CAVE environment.","PeriodicalId":377891,"journal":{"name":"2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132443736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual analytics support for the SOI VLSI layout design for multiple patterning technology","authors":"V. Shakhnov, L. Zinchenko, V. Makarchuk, V. Verstov","doi":"10.1109/COGINFOCOM.2015.7390566","DOIUrl":"https://doi.org/10.1109/COGINFOCOM.2015.7390566","url":null,"abstract":"In the paper, we discuss visualization techniques for SOI VLSI layout design. Our goal is visual analytics support of time-consuming SOI VLSI layout design process. Our analytics are based on graph models for VLSI layout representation. We propose classification and clustering approaches for data visualization. We illustrate our approach for contradictions visualization for multiple patterning technology case study.","PeriodicalId":377891,"journal":{"name":"2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114345758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classifying document categories based on physiological measures of analyst responses","authors":"Christopher Chow, Tom Gedeon","doi":"10.1109/COGINFOCOM.2015.7390631","DOIUrl":"https://doi.org/10.1109/COGINFOCOM.2015.7390631","url":null,"abstract":"Improvements in the collection and analysis of physiological signals has increased the potential for computer systems to assist human analysts in various workplace tasks. We have constructed a data set of documents with three main categories of documents, being related to national security, natural disasters and computer science, ranging from stressful to non-stressful. We include some documents which contain more than one of these categories and some which contain none of these categories. The document collection is designed to mimic the range of documents an intelligence analyst would need to read quickly and categorize in the few days after the seizure of computers from suspects in a national security investigation. Our participants were university students, primarily our own computer science students, hence the inclusion of the computer science category. We found that on our dataset our participants were 79% correct on average, which we could replicate with 88% accuracy, that is, by a 70% correctness on the underlying task. The worst results by our participants was on the computer science task which was surprising, but this did not reduce the performance of our replicating the results using AI techniques.","PeriodicalId":377891,"journal":{"name":"2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122828851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Server sounds and network noises","authors":"S. Rinderle-Ma, Tobias Hildebrandt","doi":"10.1109/COGINFOCOM.2015.7390562","DOIUrl":"https://doi.org/10.1109/COGINFOCOM.2015.7390562","url":null,"abstract":"For server and network administrators, it is a challenge to keep an overview of their systems to detect potential intrusions and security risks in real-time as well as in retrospect. Most security tools leverage our inherent ability for pattern detection by visualizing different types of security data. Several studies suggest that complementing visualization with sonification (the presentation of data using sound) can alleviate some of the challenges of visual monitoring (such as the need for constant visual focus). This paper therefore provides an overview of the current state of research regarding auditory-based and multimodal tools in computer security. Most existing research in this area is geared towards supporting users in real-time network and server monitoring, while there are only few approaches that are designed for retrospective data analysis. There exist several sonification-based tools in a mature state, but their effectiveness has hardly been tested in formal user and usability studies. Such studies are however needed to provide a solid basis for deciding which type of sonification is most suitable for which kind of scenarios and how to best combine the two modalities, visualization and sonification, to support users in their daily routines.","PeriodicalId":377891,"journal":{"name":"2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","volume":"352 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122849969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The MOBOT human-robot communication model","authors":"Stavroula-Evita Fotinea, E. Efthimiou, Maria Koutsombogera, Athanasia-Lida Dimou, Theodore Goulas, P. Maragos, C. Tzafestas","doi":"10.1109/COGINFOCOM.2015.7390590","DOIUrl":"https://doi.org/10.1109/COGINFOCOM.2015.7390590","url":null,"abstract":"This paper reports on work related to the modelling of Human-Robot Communication on the basis of multimodal and multisensory human behaviour analysis. A primary focus in this framework of analysis is the definition of semantics of human actions, i.e. verbal and non-verbal signals, in a specific context with distinct Human-Robot interaction states. These states are captured and represented in terms of communicative behavioural patterns that influence, and in turn are adapted to the interaction flow with the goal to feed a multimodal human-robot communication system. This multimodal HRI model is defined upon, and ensures the usability of a multimodal sensory corpus acquired as a primary source of data retrieval, analysis and testing of mobility assistive robot prototypes.","PeriodicalId":377891,"journal":{"name":"2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122860497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}