{"title":"Facial Expression Editing in Face Sketch Using Shape Space Theory","authors":"Chenlei Lv, Zhongke Wu, Xingce Wang, Dan Zhang, Xiangyuan Liu, Mingquan Zhou","doi":"10.1109/CW.2018.00019","DOIUrl":"https://doi.org/10.1109/CW.2018.00019","url":null,"abstract":"Facial expression editing in face sketch is an important and challenging problem in computer vision community as facial animation and modeling. For criminal investigation and portrait drawing, automatic expression editing tools for face sketch improve work efficiency obviously and reduce professional requirements for users. In this paper, we propose a novel method for facial expression editing in face sketch using shape space theory. The new facial expressions in the sketch images can be regenerated automatically. The method includes two components: 1) face sketch modeling; 2) expression editing. The face sketch modeling constructs 3D face sketch data from 3D facial database to match the 2D face sketch. Using facial landmarks, the \"shape\" of the face sketch is represented in shape space. The shape space is a manifold space which removes the rigid transform group. In shape space, the accurate 3D face sketch model is obtained which is consistent to the original 2D face sketch. For expression editing, we change the parameters of 3D face sketch model in the shape space to obtain new expressions. The expression transfer in 3D face sketch model can be mapped into the 2D face sketch. The advantages of our method are: full-automatic in modeling process; no requirements of drawing skills to user and friendly interaction; robustness to head poses and different scales. In experiments, we use the 3D facial database, FaceWareHouse, to construct the 3D face sketch model and use face sketch images from database: CUHK Face sketch Database (CUFS) to show the performance of expression editing. Experimental results demonstrate that our method can effectively edit facial expressions in face sketch with high consistency and fidelity.","PeriodicalId":388539,"journal":{"name":"2018 International Conference on Cyberworlds (CW)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122662714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Automatic Optical Inspection of Soldering Defects","authors":"Wenting Dai, A. Mujeeb, Marius Erdt, A. Sourin","doi":"10.1109/CW.2018.00074","DOIUrl":"https://doi.org/10.1109/CW.2018.00074","url":null,"abstract":"This paper proposes a method for automatic image-based classification of solder joint defects in the context of Automatic Optical Inspection (AOI) of Printed Circuit Boards (PCBs). Machine learning-based approaches are frequently used for image-based inspection. However, a main challenge is to manually create sufficiently large labeled training databases to allow for high accuracy of defect detection. Creating such large training databases is time-consuming, expensive, and often unfeasible in industrial production settings. In order to address this problem, an active learning framework is proposed which starts with only a small labeled subset of training data. The labeled dataset is then enlarged step-by-step by combining K-means clustering with active user input to provide representative samples for the training of an SVM classifier. Evaluations on two databases with insufficient and shifting solder joints samples have shown that the proposed method achieved high accuracy while requiring only minimal user input. The results also demonstrated that the proposed method outperforms random and representative sampling by ~ 3.2% and ~ 2.7%, respectively, and it outperforms the uncertainty sampling method by ~ 0.5%.","PeriodicalId":388539,"journal":{"name":"2018 International Conference on Cyberworlds (CW)","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124222713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Powering Up Attentional Focus: Validating a School-Based Deep Breathing Intervention with Mobile EEG—A Pilot Exploration","authors":"Kiat Hui Khng, R. Mane","doi":"10.1109/CW.2018.00051","DOIUrl":"https://doi.org/10.1109/CW.2018.00051","url":null,"abstract":"Electrophysiological and neuroimaging data are important sources of information for validating the efficacy or effects of interventions. Many interventions for children are carried out in the schools especially if they are educationally relevant. However, factors such as high costs and physical constraints have typically limited the use of electrophysiological and neuroimaging tools to laboratory settings. Despite their reduced capabilities, the appearance of low-cost, quick-to-set-up mobile equipment in recent years have renewed the possibility of applying such techniques to monitor effects in school-based interventions. The current study explores the utility of a low-cost, mobile electroencephalography (EEG) headset system in detecting neurophysiological effects of a school-based deep breathing intervention, found in a previous behavioral study to be efficacious in reducing self-reported state anxiety and enhancing test performance in children. As part of a larger pilot study, EEG, respiration, and behavioral data were collected from a group of right-handed 11-year-olds as they performed a flanker task of attentional focus twice, once with a deep breathing intervention and once without. Results from power spectral analyses suggest that the low-cost, low-resolution, mobile EEG system is able to detect power spectra differences associated with flanker interference and intervention effects.","PeriodicalId":388539,"journal":{"name":"2018 International Conference on Cyberworlds (CW)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127623930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computer-Aided Sugoroku Games in the Edo Period Using Interactive Techniques for Museum Exhibits","authors":"A. Soga, Masahito Shiba, Takuzi Suzuki","doi":"10.1109/CW.2018.00033","DOIUrl":"https://doi.org/10.1109/CW.2018.00033","url":null,"abstract":"The purpose of this study is to raise interest in a kind of Japanese board game Sugoroku in the Edo period, and to support exhibits of it at museums. We developed a computer-aided Sugoroku games using modern interactive techniques. In this system, the user rolls a dice-type device equipped with a microcomputer. Since the system detects the values of the dice-type device, the players can simply play by just throwing the die. By projecting the game's progress on the Sugoroku sheet with a ceiling projector, the system shows the current positions of the players and the candidate destinations. With this guide, they can play Sugoroku games even without knowing the rules. The system was used at a special exhibition of the National Museum of Japanese History for eight weeks. We evaluated our computer-aided Sugoroku games with visitors on three days. Almost half of the visitors marked the best score for all items, indicating that this system was successfully accepted by them.","PeriodicalId":388539,"journal":{"name":"2018 International Conference on Cyberworlds (CW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130080680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-Pocket Gait Recognition","authors":"Patrick A. H. Bours, Thilo Denzer","doi":"10.1109/CW.2018.00067","DOIUrl":"https://doi.org/10.1109/CW.2018.00067","url":null,"abstract":"Gait authentication using a mobile phone's acceleration sensor offers a convenient, user-friendly and subtle procedure of authenticating individuals to their mobile phone. This study analyses the possibility of cross-pocket gait recognition, which means creating the reference and the probe with the accelerometer sensor in different trouser pockets (left and right). The results of our analysis show that there is a significant performance degradation when comparing same-pocket gait recognition with cross-pocket gait recognition. In our analysis we have used a new distance metric that shows to give good (same-pocket) performance results compared to known analysis methods. We have also shown that a multi-reference template can give excellent performance without any performance degradation for cross-pocket gait recognition.","PeriodicalId":388539,"journal":{"name":"2018 International Conference on Cyberworlds (CW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130858100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rhu Keystroke Touchscreen Benchmark","authors":"Mohamad El-Abed, Mostafa Dafer, C. Rosenberger","doi":"10.1109/CW.2018.00072","DOIUrl":"https://doi.org/10.1109/CW.2018.00072","url":null,"abstract":"Biometric systems are currently widely used in many applications to control and verify individual's identity. Keystroke dynamics modality has been shown as a promising solution that would be used in many applications such as e-payment and banking applications. However, such systems suffer from several performance limitations (such as cross-devices problem) that prevent their widespread of use in real applications. The objective of this paper is to provide researchers and developers with a public touchscreen-based benchmark collected using a mobile phone and a tablet (both portrait and landscape orientation each). Such a benchmark can be used to assess keystroke-based matching algorithms. Furthermore, It is mainly developed to measure the robustness of keystroke matching algorithms vis-'a-vis cross-devices and orientation variations. An online visualizer for the database is also given to researchers allowing them to visualize the acquired keystroke signals.","PeriodicalId":388539,"journal":{"name":"2018 International Conference on Cyberworlds (CW)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133556410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Appearance-Driven Method for Converting Polygon Soup Building Models for 3D Geospatial Applications","authors":"Kan Chen, H. Johan, Marius Erdt","doi":"10.1109/CW.2018.00075","DOIUrl":"https://doi.org/10.1109/CW.2018.00075","url":null,"abstract":"Polygon soup building models are fine for visualization purposes such as in games and movies. They, however, are not suitable for 3D geospatial applications which require geometrical analysis, since they lack connectivity information and may contain intersections internally between their parts. In this paper, we propose an appearance-driven method to interactively convert an input polygon soup building model to a two-manifold mesh, which is more suitable for 3D geospatial applications. Since a polygon soup model is not suitable for geometrical analysis, our key idea is to extract and utilize the visual appearance of the input building model for the conversion. We extract the silhouettes and use them to identify the features of the building. We then generate horizontal cross sections based on the locations of the features and then reconstruct the building by connecting two neighbouring cross sections. We propose to integrate various rasterization techniques to facilitate the conversion. Experimental results show the effectiveness of the proposed method.","PeriodicalId":388539,"journal":{"name":"2018 International Conference on Cyberworlds (CW)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130074292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"[Copyright notice]","authors":"","doi":"10.1109/cw.2018.00003","DOIUrl":"https://doi.org/10.1109/cw.2018.00003","url":null,"abstract":"","PeriodicalId":388539,"journal":{"name":"2018 International Conference on Cyberworlds (CW)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123826294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What User Interface to Use for Virtual Reality? 2D, 3D or Speech–A User Study","authors":"Y. Weib, Daniel Hepperle, Andreas Siess, Matthias Wölfel","doi":"10.1109/CW.2018.00021","DOIUrl":"https://doi.org/10.1109/CW.2018.00021","url":null,"abstract":"In virtual reality different demands on the user interface have to be addressed than on classic screen applications. That's why established strategies from other digital media cannot be transferred unreflected and at least adaptation is required. So one of the leading questions is: which form of interface is preferable for virtual reality? Are 2D interfaces—that are mostly used in combination with mouse or touch interactions— the means of choice, although they do not use the medium's full capabilities? What about 3D interfaces that can be naturally integrated into the virtual space? And last but not least: are speech interfaces, the fastest and most natural form of human interaction/communication, which have recently established themselves in other areas (e.g. digital assistants), ready to conquer the world of virtual reality? To answer these question this work compares these three approaches based on a quantitative user study and highlights advantages and disadvantages of the respective interfaces for virtual reality applications.","PeriodicalId":388539,"journal":{"name":"2018 International Conference on Cyberworlds (CW)","volume":"778 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134509948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigation on the Correlation between Eye Movement and Reaction Time under Mental Fatigue Influence","authors":"Vianney Renata, Fan Li, Ching-Hung Lee, Chun-Hsien Chen","doi":"10.1109/CW.2018.00046","DOIUrl":"https://doi.org/10.1109/CW.2018.00046","url":null,"abstract":"With the recent development of eye tracking technology, research in eye movement and pattern has increased due to its potential to be a non-obstructive physiological measure tool. This study attempts to understand to which extent the eye behavior is relatable with human's mental chronometry in responding to changes subjected to different levels of mental fatigue. An analysis of the eye movement metrics when interacting with multiple short performance-based tasks under different states of mental fatigue is performed. It is concluded that the eye movement has influence in the resulting reaction time and the mental fatigue state of the individual. Thus, indicating the relationship as a strong potential to predict an individual's mental fatigue state. Another finding is that the relationship between the eye movement metrics and mental chronometry becomes stronger as the subjective mental fatigue level increases.","PeriodicalId":388539,"journal":{"name":"2018 International Conference on Cyberworlds (CW)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114631981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}