2019 International Conference on Cyberworlds (CW): Latest Publications

How does Augmented Reality Improve the Play Experience in Current Augmented Reality Enhanced Smartphone Games?
2019 International Conference on Cyberworlds (CW) Pub Date: 2019-10-01 DOI: 10.1109/CW.2019.00079
Matthias Wölfel, Melinda C. Braun, Sandra Beuck
{"title":"How does Augmented Reality Improve the Play Experience in Current Augmented Reality Enhanced Smartphone Games?","authors":"Matthias Wölfel, Melinda C. Braun, Sandra Beuck","doi":"10.1109/CW.2019.00079","DOIUrl":"https://doi.org/10.1109/CW.2019.00079","url":null,"abstract":"This paper investigates the current state of handheld augmented reality (AR) gaming apps available on the App Store (iOS) and the Play Store (Android). To be able to directly compare the differences between games played with and without AR, only games in which the AR mode can be switched on/off were investigated. Because the main scope of this paper is on the evaluation of the experience provided by AR, parts of the game experience questionnaire (GEQ) have been included in the empirical study. It showed that AR has big potential to improve immersion or flow in the game-play. This paper also identifies differences in the implementation of AR features and investigates how and what parameter in the GEQ can be positively influenced.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114587948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Electroencephalography Based Motor Imagery Classification Using Unsupervised Feature Selection
2019 International Conference on Cyberworlds (CW) Pub Date: 2019-10-01 DOI: 10.1109/CW.2019.00047
Abdullah Al Shiam, M. Islam, Toshihisa Tanaka, M. I. Molla
{"title":"Electroencephalography Based Motor Imagery Classification Using Unsupervised Feature Selection","authors":"Abdullah Al Shiam, M. Islam, Toshihisa Tanaka, M. I. Molla","doi":"10.1109/CW.2019.00047","DOIUrl":"https://doi.org/10.1109/CW.2019.00047","url":null,"abstract":"The major challenge in Brain Computer Interface (BCI) is to obtain reliable classification accuracy of motor imagery (MI) task. This paper mainly focuses on unsupervised feature selection for electroencephalography (EEG) classification leading to BCI implementation. The multichannel EEG signal is decomposed into a number of subband signals. The features are extracted from each subband by applying spatial filtering technique. The features are combined into a common feature space to represent the effective event MI classification. It may inevitably include some irrelevant features yielding the increase of dimension and mislead the classification system. The unsupervised discriminative feature selection (UDFS) is employed here to select the subset of extracted features. It effectively selects the dominant features to improve classification accuracy of motor imagery task acquired by EEG signals. The classification of MI tasks is performed by support vector machine. The performance of the proposed method is evaluated using publicly available dataset obtained from BCI Competition III (IVA). The experimental results show that the performance of this method is better than that of the recently developed algorithms.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134094401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
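The abstract above outlines a filter-bank pipeline: subband decomposition, spatial filtering, feature selection, and SVM classification. The following is a minimal sketch of that pipeline, not the authors' implementation: it assumes epoched EEG arrays, uses CSP as the spatial filter, and substitutes a simple variance-based selector for UDFS.

```python
# Sketch of a filter-bank MI pipeline: subband filtering -> CSP -> feature
# selection -> SVM. UDFS is replaced by a variance-threshold stand-in.
# Requires numpy, scipy, mne, and scikit-learn.
import numpy as np
from scipy.signal import butter, filtfilt
from mne.decoding import CSP
from sklearn.feature_selection import VarianceThreshold
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def bandpass(X, lo, hi, fs, order=4):
    """Zero-phase band-pass filter applied along the time axis."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, X, axis=-1)

def filter_bank_features(X, y, fs,
                         bands=((8, 12), (12, 16), (16, 20), (20, 24), (24, 28), (28, 32))):
    """X: (n_trials, n_channels, n_samples) -> per-subband CSP log-variance features.

    For brevity CSP is fit on all trials; in practice it should be fit
    inside each cross-validation fold to avoid leakage."""
    feats = []
    for lo, hi in bands:
        Xb = bandpass(X, lo, hi, fs)
        csp = CSP(n_components=4)            # spatial filtering per subband
        feats.append(csp.fit_transform(Xb, y))
    return np.hstack(feats)

# Toy example with random data standing in for BCI Competition III (IVa).
rng = np.random.default_rng(0)
fs = 100
X = rng.standard_normal((40, 22, 3 * fs))    # 40 trials, 22 channels, 3 s
y = rng.integers(0, 2, size=40)              # two MI classes

F = filter_bank_features(X, y, fs)
F = VarianceThreshold(threshold=1e-3).fit_transform(F)   # stand-in for UDFS
print("CV accuracy:", cross_val_score(SVC(kernel="rbf"), F, y, cv=5).mean())
```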
An Interactive System for Modeling Fish Shapes
2019 International Conference on Cyberworlds (CW) Pub Date: 2019-10-01 DOI: 10.1109/CW.2019.00076
Masayuki Tamiya, Y. Dobashi
{"title":"An Interactive System for Modeling Fish Shapes","authors":"Masayuki Tamiya, Y. Dobashi","doi":"10.1109/CW.2019.00076","DOIUrl":"https://doi.org/10.1109/CW.2019.00076","url":null,"abstract":"Recently, computer graphics is widely used in movies and games, etc., and modeling three-dimensional virtual objects is important for synthesizing realistic images. Since modeling realistic objects often requires special skills and takes long time, many methods have been developed to help the user generate models such as plants and buildings. However, little attention has been paid to the modeling of fish shapes because of the complexity of their shapes. We propose an interactive system for modeling a realistic fish shape from a single image. We also introduce a method called Direct Manipulation Blendshapes for improving the usability of our system.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127112180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
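The paper names a "Direct Manipulation Blendshapes" technique; the abstract gives no details, so the following is only a generic sketch of the underlying idea: a blendshape mesh is a base shape plus a weighted sum of deltas, and direct manipulation solves a least-squares problem for the weights that move selected vertices to user-specified targets. The function names and the regularization term are illustrative assumptions, not the paper's method.

```python
# Generic direct-manipulation blendshape solve (illustrative only).
# Mesh model: V(w) = B + sum_k w_k * D_k, with B (n_verts, 3) and deltas D (K, n_verts, 3).
import numpy as np

def solve_weights(B, D, handle_ids, targets, reg=1e-3):
    """Find blend weights w so that the handle vertices move toward the targets.

    B          : (n_verts, 3) base mesh
    D          : (K, n_verts, 3) blendshape deltas
    handle_ids : indices of the vertices the user dragged
    targets    : (len(handle_ids), 3) desired positions of those vertices
    reg        : Tikhonov regularization keeping the weights small
    """
    K = D.shape[0]
    # Rows: the 3 coordinates of each handle vertex; columns: one per blendshape.
    A = D[:, handle_ids, :].reshape(K, -1).T           # (3*h, K)
    b = (targets - B[handle_ids]).reshape(-1)           # (3*h,)
    # Regularized least squares: (A^T A + reg I) w = A^T b
    return np.linalg.solve(A.T @ A + reg * np.eye(K), A.T @ b)

# Toy usage: 100-vertex mesh, 5 blendshapes, drag vertex 0 upward by 0.2.
rng = np.random.default_rng(1)
B = rng.standard_normal((100, 3))
D = 0.1 * rng.standard_normal((5, 100, 3))
w = solve_weights(B, D, handle_ids=[0], targets=B[[0]] + [0.0, 0.2, 0.0])
V = B + np.tensordot(w, D, axes=1)                      # deformed mesh
print("weights:", np.round(w, 3))
```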
Person Identification from Visual Aesthetics Using Gene Expression Programming
2019 International Conference on Cyberworlds (CW) Pub Date: 2019-10-01 DOI: 10.1109/CW.2019.00053
Brandon Sieu, M. Gavrilova
{"title":"Person Identification from Visual Aesthetics Using Gene Expression Programming","authors":"Brandon Sieu, M. Gavrilova","doi":"10.1109/CW.2019.00053","DOIUrl":"https://doi.org/10.1109/CW.2019.00053","url":null,"abstract":"The last decade has witnessed an increase in online human interactions, covering all aspects of personal and professional activities. Identification of people based on their behavior rather than physical traits is a growing industry, spanning diverse spheres such as online education, e-commerce and cyber security. One prominent behavior is the expression of opinions, commonly as a reaction to images posted online. Visual aesthetic is a soft, behavioral biometric that refers to a person's sense of fondness to a certain image. Identifying individuals using their visual aesthetics as discriminatory features is an emerging domain of research. This paper introduces a new method for aesthetic feature dimensionality reduction using gene expression programming. The advantage of this method is that the resulting system is capable of using a tree-based genetic approach for feature recombination. Reducing feature dimensionality improves classifier accuracy, reduces computation runtime, and minimizes required storage. The results obtained on a dataset of 200 Flickr users evaluating 40000 images demonstrates a 94% accuracy of identity recognition based solely on users' aesthetic preferences. This outperforms the best-known method by 13.5%.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126084260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
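The abstract describes evolving tree-structured recombinations of aesthetic features to reduce dimensionality. As a rough illustration, not the paper's gene expression programming system, the sketch below uses gplearn's genetic-programming SymbolicTransformer as a stand-in to evolve a small set of recombined features and then trains a classifier on them; the data, dimensions, and hyperparameters are placeholders.

```python
# Genetic-programming feature recombination as a stand-in for the paper's GEP method.
# Requires gplearn (pip install gplearn) and scikit-learn. Data here is random,
# so the reported accuracy is only illustrative of the workflow.
import numpy as np
from gplearn.genetic import SymbolicTransformer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_users, feats_per_user = 20, 50              # placeholder aesthetic feature vectors
X = rng.standard_normal((n_users * 30, feats_per_user))
y = np.repeat(np.arange(n_users), 30)         # one label per user

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Evolve 10 tree-based feature recombinations from the raw features.
gp = SymbolicTransformer(
    generations=10, population_size=500, n_components=10,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.0005, random_state=0)
gp.fit(X_tr, y_tr)

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(gp.transform(X_tr), y_tr)
print("accuracy on recombined features:", clf.score(gp.transform(X_te), y_te))
```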
Automatic Image Enhancement Taking into Account User Preference
2019 International Conference on Cyberworlds (CW) Pub Date: 2019-10-01 DOI: 10.1109/CW.2019.00070
Yuri Murata, Y. Dobashi
{"title":"Automatic Image Enhancement Taking into Account User Preference","authors":"Yuri Murata, Y. Dobashi","doi":"10.1109/CW.2019.00070","DOIUrl":"https://doi.org/10.1109/CW.2019.00070","url":null,"abstract":"In these days, we can take many pictures everyday and everywhere with mobile devices such as smartphones. After taking a picture, we often modify it by using some image enhancement tools so that the appearance of the picture becomes favorable to his/her own preference. However, since there are many parameters in the enhancement functions, it is not an easy task to find an appropriate parameter set to obtain the desired result. Some tools have a function that automatically determine the parameters but they do not take into account the user's preference. In this paper, we present a system to address this problem. Our system first estimates the user's preference by using RankNet. Next, the image enhancement parameters are optimized to maximize the estimated preference. We show some experimental results to demonstrate the usefulness of our system.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116534723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
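The system trains a pairwise preference model (RankNet) and then searches for enhancement parameters that maximize the predicted preference. The sketch below shows the two stages with placeholder image features and a simple grid search over brightness and contrast; it is an assumption-laden illustration, not the authors' code.

```python
# Two-stage sketch: (1) train a RankNet-style scorer on preference pairs,
# (2) search enhancement parameters that maximize the predicted score.
import numpy as np
import torch
import torch.nn as nn

def features(img):
    """Placeholder image features: mean and std per RGB channel of a float image in [0, 1]."""
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

class Scorer(nn.Module):
    def __init__(self, dim=6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_ranknet(scorer, preferred, other, epochs=200, lr=1e-2):
    """Pairwise logistic loss: P(preferred > other) = sigmoid(s_p - s_o)."""
    opt = torch.optim.Adam(scorer.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    target = torch.ones(len(preferred))
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(scorer(preferred) - scorer(other), target)
        loss.backward()
        opt.step()

def enhance(img, brightness, contrast):
    """Simple brightness/contrast adjustment used during the parameter search."""
    return np.clip((img - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)

# Toy data: pairs of (preferred, rejected) feature vectors standing in for rated photos.
rng = np.random.default_rng(3)
pref = torch.tensor(rng.normal(0.6, 0.1, (64, 6)), dtype=torch.float32)
rej = torch.tensor(rng.normal(0.4, 0.1, (64, 6)), dtype=torch.float32)
scorer = Scorer()
train_ranknet(scorer, pref, rej)

# Grid search over enhancement parameters for one input image.
img = rng.random((64, 64, 3))
best = max(
    ((b, c) for b in np.linspace(-0.2, 0.2, 9) for c in np.linspace(0.6, 1.6, 11)),
    key=lambda p: scorer(torch.tensor(features(enhance(img, *p)), dtype=torch.float32)).item())
print("best brightness/contrast:", best)
```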
Stylized Line Drawing of 3D Models using CNN
2019 International Conference on Cyberworlds (CW) Pub Date: 2019-10-01 DOI: 10.1109/CW.2019.00015
Mitsuhiro Uchida, S. Saito
{"title":"Stylized Line Drawing of 3D Models using CNN","authors":"Mitsuhiro Uchida, S. Saito","doi":"10.1109/CW.2019.00015","DOIUrl":"https://doi.org/10.1109/CW.2019.00015","url":null,"abstract":"Techniques to render 3D models like hand-drawings are often required. In this paper, we propose an approach that generates line-drawing with various styles by machine learning. We train two Convolutional neural networks (CNNs), of which one is a line extractor from the depth and normal images of a 3D object, and the other is a line thickness applicator. The following process to CNNs interprets the thickness of the lines as intensity to control properties of a line style. Using the obtained intensities, non-uniform line styled drawings are generated. The results show the efficiency of combining the machine learning method and the interpreter.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125388611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
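The first network maps depth and normal images of a 3D object to a line map. The sketch below shows one plausible shape for such a network in PyTorch, with a 4-channel input (1 depth + 3 normal channels) and a 1-channel line-probability output; the architecture and sizes are assumptions, not the ones used in the paper.

```python
# A plausible fully convolutional line extractor: 4-channel (depth + normal) input,
# 1-channel line-probability output. Architecture and sizes are illustrative only.
import torch
import torch.nn as nn

class LineExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True))
        self.body = nn.Sequential(
            block(4, 32), block(32, 64), block(64, 64), block(64, 32),
            nn.Conv2d(32, 1, kernel_size=1))   # per-pixel line logit

    def forward(self, depth, normals):
        # depth: (B, 1, H, W), normals: (B, 3, H, W) -> line probability (B, 1, H, W)
        x = torch.cat([depth, normals], dim=1)
        return torch.sigmoid(self.body(x))

# Toy forward pass on a 128x128 render.
net = LineExtractor()
depth = torch.rand(1, 1, 128, 128)
normals = torch.rand(1, 3, 128, 128) * 2 - 1
print(net(depth, normals).shape)   # torch.Size([1, 1, 128, 128])
```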
Human Movements Classification Using Multi-channel Surface EMG Signals and Deep Learning Technique
2019 International Conference on Cyberworlds (CW) Pub Date: 2019-10-01 DOI: 10.1109/CW.2019.00051
Jianhua Zhang, C. Ling, Sunan Li
{"title":"Human Movements Classification Using Multi-channel Surface EMG Signals and Deep Learning Technique","authors":"Jianhua Zhang, C. Ling, Sunan Li","doi":"10.1109/CW.2019.00051","DOIUrl":"https://doi.org/10.1109/CW.2019.00051","url":null,"abstract":"Electromyography (EMG) signals can be used for human movements classification. Nonetheless, due to their nonlinear and time-varying properties, it is difficult to classify the EMG signals and it is critical to use appropriate algorithms for EMG feature extraction and pattern classification. In literature various machine learning (ML) methods have been applied to the EMG signal classification problem in question. In this paper, we extracted four time-domain features of the EMG signals and use a generative graphical model, Deep Belief Network (DBN), to classify the EMG signals. A DBN is a fast, greedy deep learning algorithm that can rapidly find a set of optimal weights of a deep network with many hidden layers. To evaluate the DBN model, we acquired EMG signals, extracted their time-domain features, and then utilized the DBN model to classify human movements. The real data analysis results are presented to show the effectiveness of the proposed deep learning technique for both binary and 4-class recognition of human movements using the measured 8-channel EMG signals. The proposed DBN model may find applications in design of EMG-based user interfaces.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125494127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
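The abstract mentions four time-domain features per channel but does not name them; the classic Hudgins set (mean absolute value, waveform length, zero crossings, slope-sign changes) is a common choice, and the sketch below computes it for an 8-channel window. Treat the choice of features and thresholds as assumptions.

```python
# Classic time-domain EMG features (assumed here; the paper does not name its four)
# computed per channel over a sliding window of 8-channel EMG.
import numpy as np

def td_features(window, eps=1e-6):
    """window: (n_channels, n_samples) -> (n_channels * 4,) feature vector."""
    diff = np.diff(window, axis=1)
    mav = np.mean(np.abs(window), axis=1)                                 # mean absolute value
    wl = np.sum(np.abs(diff), axis=1)                                     # waveform length
    zc = np.sum(np.abs(np.diff(np.sign(window), axis=1)) > eps, axis=1)   # zero crossings
    ssc = np.sum(np.abs(np.diff(np.sign(diff), axis=1)) > eps, axis=1)    # slope-sign changes
    return np.concatenate([mav, wl, zc, ssc]).astype(float)

# Toy usage: a 200 ms window of 8-channel EMG sampled at 1 kHz.
rng = np.random.default_rng(4)
window = rng.standard_normal((8, 200))
print(td_features(window).shape)   # (32,) -> fed to a DBN or another classifier
```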
Detection of Humanoid Robot Design Preferences Using EEG and Eye Tracker
2019 International Conference on Cyberworlds (CW) Pub Date: 2019-10-01 DOI: 10.1109/CW.2019.00044
Yisi Liu, Fan Li, L. Tang, Zirui Lan, Jian Cui, O. Sourina, Chun-Hsien Chen
{"title":"Detection of Humanoid Robot Design Preferences Using EEG and Eye Tracker","authors":"Yisi Liu, Fan Li, L. Tang, Zirui Lan, Jian Cui, O. Sourina, Chun-Hsien Chen","doi":"10.1109/CW.2019.00044","DOIUrl":"https://doi.org/10.1109/CW.2019.00044","url":null,"abstract":"Currently, many modern humanoid robots have little appeal due to their simple designs and bland appearances. To provide recommendations for designers and improve the designs of humanoid robots, a study of human's perception on humanoid robot designs is conducted using Electroencephalogram (EEG), eye tracking information and questionnaires. We proposed and carried out an experiment with 20 subjects to collect the EEG and eye tracking data to study their reaction to different robot designs and the corresponding preference towards these designs. This study can possibly give us some insights on how people react to the aesthetic designs of different humanoid robot models and the important traits in a humanoid robot design, such as the perceived smartness and friendliness of the robots. Another point of interest is to investigate the most prominent feature of the robot, such as the head, facial features and the chest. The result shows that the head and facial features are the focus. It is also discovered that more attention is paid to the robots that appear to be more appealing. Lastly, it is affirmed that the first impressions of the robots generally do not change over time, which may imply that a good humanoid robot design impress the observers at first sight.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114890485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Music in the Air with Leap Motion Controller
2019 International Conference on Cyberworlds (CW) Pub Date: 2019-10-01 DOI: 10.1109/cw.2019.00018
A. Sourin
{"title":"Music in the Air with Leap Motion Controller","authors":"A. Sourin","doi":"10.1109/cw.2019.00018","DOIUrl":"https://doi.org/10.1109/cw.2019.00018","url":null,"abstract":"Not many people know about the first electronic musical instrument-the theremin-and can play it. The idea of this instrument is very groundbreaking: it is played without physical contact with it and in the same way as we sing but by using hands in place of our vocal cords. In this paper we consider how to implement the theremin with a computer using very different physical principles of optical hand tracking and by adding advantages of visual interfaces. The goal of this research is to eventually fulfill the dream of the inventor to make the theremin a musical instrument for everyone and to prove that everyone can play music.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127057391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
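A software theremin boils down to continuously mapping tracked hand positions to pitch and volume. The sketch below shows that mapping with a stubbed hand-position source standing in for the Leap Motion SDK and simple sine synthesis; the ranges, the stub, and the synthesis are all illustrative assumptions, not the paper's implementation.

```python
# Theremin-style mapping: right-hand distance -> pitch, left-hand height -> volume.
# get_hand_positions() is a stub standing in for Leap Motion hand tracking.
import numpy as np

def get_hand_positions():
    """Stub: returns (right_hand_x_mm, left_hand_y_mm) as a tracker would."""
    return 180.0, 120.0

def hands_to_tone(right_x, left_y,
                  x_range=(50.0, 350.0), y_range=(50.0, 300.0),
                  f_range=(130.8, 1046.5)):      # roughly C3..C6
    """Map hand coordinates to (frequency in Hz, amplitude in [0, 1])."""
    tx = np.clip((right_x - x_range[0]) / (x_range[1] - x_range[0]), 0.0, 1.0)
    ty = np.clip((left_y - y_range[0]) / (y_range[1] - y_range[0]), 0.0, 1.0)
    freq = f_range[0] * (f_range[1] / f_range[0]) ** tx    # exponential pitch mapping
    return freq, ty

def sine_block(freq, amp, fs=44100, dur=0.05):
    """One short audio block; a real app would stream these continuously."""
    t = np.arange(int(fs * dur)) / fs
    return amp * np.sin(2 * np.pi * freq * t)

freq, amp = hands_to_tone(*get_hand_positions())
block = sine_block(freq, amp)
print(f"{freq:.1f} Hz at volume {amp:.2f}, {block.size} samples")
```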
Vehicle Rear-Lamp Detection at Nighttime via Probabilistic Bitwise Genetic Algorithm
2019 International Conference on Cyberworlds (CW) Pub Date: 2019-10-01 DOI: 10.1109/CW.2019.00027
Takumi Nakane, Tatsuya Takeshita, Shogo Tokai, Chao Zhang
{"title":"Vehicle Rear-Lamp Detection at Nighttime via Probabilistic Bitwise Genetic Algorithm","authors":"Takumi Nakane, Tatsuya Takeshita, Shogo Tokai, Chao Zhang","doi":"10.1109/CW.2019.00027","DOIUrl":"https://doi.org/10.1109/CW.2019.00027","url":null,"abstract":"Rear-lamp detection of a vehicle at nighttime is an important technique for advanced driver-assistance systems. We present a detection method by employing a variant of genetic algorithm, which utilizes bitwise genetic operation instead of classic crossover and mutation. That is, the detection task is cast to a localization problem under an evolutionary optimization framework. Specifically, geometric parameters of a rectangle pair form a model to represent the detected rear-lamp pair. The fitness function for evaluating each candidate solution is combinatorial, which consists of multiple fitness functions designed under handcrafted rules from the observation. In addition, the solution space is narrowed down by extracting the red-light sources, which yields in more efficient solution exploration. Experiment with a publicly available dataset which involves images captured in various traffic situations shows the effectiveness of our method qualitatively and quantitatively.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132526982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
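Two pieces of the pipeline translate naturally into code: narrowing the search space by extracting red-light pixels, and scoring a candidate rectangle pair with handcrafted terms. The sketch below shows a generic version with OpenCV HSV thresholding and a toy two-term fitness (red coverage plus vertical alignment); the thresholds and weights are assumptions, and the probabilistic bitwise GA itself is not reproduced here.

```python
# Red-light mask extraction and a toy rectangle-pair fitness, as used inside a
# GA-based rear-lamp localizer. Thresholds and weights are illustrative only.
import numpy as np
import cv2

def red_light_mask(bgr):
    """Binary mask of saturated, bright red pixels (red wraps around hue 0 in HSV)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lo = cv2.inRange(hsv, (0, 80, 120), (10, 255, 255))
    hi = cv2.inRange(hsv, (170, 80, 120), (180, 255, 255))
    return cv2.bitwise_or(lo, hi)

def rect_fill(mask, rect):
    """Fraction of red pixels inside a rectangle given as (x, y, w, h)."""
    x, y, w, h = rect
    patch = mask[y:y + h, x:x + w]
    return float(patch.mean()) / 255.0 if patch.size else 0.0

def pair_fitness(mask, left, right, w_fill=1.0, w_sym=0.5):
    """Combinatorial fitness: red coverage of both rectangles + vertical alignment."""
    fill = rect_fill(mask, left) + rect_fill(mask, right)
    sym = -abs(left[1] - right[1]) / mask.shape[0]     # penalize vertical offset
    return w_fill * fill + w_sym * sym

# Toy usage on a synthetic night image with two bright red blobs.
img = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.circle(img, (80, 150), 12, (0, 0, 255), -1)    # left lamp (BGR red)
cv2.circle(img, (240, 150), 12, (0, 0, 255), -1)   # right lamp
mask = red_light_mask(img)
print("fitness:", pair_fitness(mask, (65, 135, 30, 30), (225, 135, 30, 30)))
```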