{"title":"Web Defacement and Intrusion Monitoring Tool: WDIMT","authors":"M. Masango, Francois Mouton, P. Antony, Bokang Mangoale","doi":"10.1109/CW.2017.55","DOIUrl":"https://doi.org/10.1109/CW.2017.55","url":null,"abstract":"Websites have become a form of information distributor; their use has seen a significant rise in the amount of information circulated on the Internet. Some businesses have created websites that display the services the business renders or information about a particular product; businesses use the Internet to expand business opportunities or advertise their services on a global scale. This does not only apply to businesses: other entities such as celebrities, socialites, bloggers and vloggers use the Internet to expand personal or business opportunities too. These entities make use of websites that are hosted by a web host, with the contents of the website stored on a web server. However, not all websites undergo penetration testing, which leaves them vulnerable. Penetration testing is a costly exercise that most companies or website owners find they cannot afford. Web defacement remains one of the most common attacks on websites; these attacks aim to alter the content of web pages or to render the website inactive. This paper proposes a Web Defacement and Intrusion Monitoring Tool that could be a possible solution for the rapid identification of altered or deleted web pages. The proposed tool will have web defacement detection capabilities that may also be used for intrusion detection. The proposed solution will also be used to regenerate the original content of a website after the website has been defaced.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":"141 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134161527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VR Cardiovascular Blood Simulation as Decision Support for the Future Cyber Hospital","authors":"Mark Ian Holland, S. Pop, N. John","doi":"10.1109/CW.2017.49","DOIUrl":"https://doi.org/10.1109/CW.2017.49","url":null,"abstract":"Planning the treatment of acute cardiac events that limit the blood supply to major organs is particularly difficult for interventional cardiologists. The treatment of pathologies such as vascular stenosis can have numerous unforeseen consequences as the blood resumes its flow, and their decisions are largely based on 2D medical imagery and their own experience. This work-in-progress paper presents a virtual reality blood simulation tool that will augment and improve a clinician's decision-making arsenal, and an outline of the technologies required for this component of the cyber hospital of the future. The tool displays and provides interaction with vital information on how blood may continue to flow through the cardiovascular system after treatment. The blood flow will be simulated using a bespoke implementation of the increasingly effective smoothed particle hydrodynamics model.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114532487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Method for Autonomous Positioning Avatars in a Group","authors":"F. Kuijk","doi":"10.1109/CW.2017.40","DOIUrl":"https://doi.org/10.1109/CW.2017.40","url":null,"abstract":"In this paper, we describe a method to position a group of avatars in a virtual environment. The method aims at a group setting that seems natural for a group of people attending a guided tour, and was developed in particular to assist participants by autonomously positioning their avatars at each stop of a virtual tour. The geometry of the virtual environment is the key input, but the engagement of participants and possible social networks are also taken into account. Consequently, the method may serve to position avatars in similar types of situations.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":"2020 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131789927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wheelchair-MR: A Mixed Reality Wheelchair Training Environment","authors":"Thomas W. Day","doi":"10.1109/CW.2017.12","DOIUrl":"https://doi.org/10.1109/CW.2017.12","url":null,"abstract":"In previous work we have demonstrated that Virtual Reality can be used to help train driving skills for users of a powered wheelchair. However, cybersickness was a particular problem. This work-in-progress paper presents a Mixed Reality alternative to our wheelchair training software that overcomes this problem. The design and implementation of the application are discussed. Early results show some promise and confirm that the cybersickness issue is overcome. More work is needed before a larger-scale study can be undertaken.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122225794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synthesis of Facial Images Based on Relevance Feedback","authors":"Caie Xu, Shota Fushimi, M. Toyoura, Jiayi Xu, Honglin Li, Xiaoyang Mao","doi":"10.1109/CW.2017.53","DOIUrl":"https://doi.org/10.1109/CW.2017.53","url":null,"abstract":"We propose a dialogic system based on a relevance feedback strategy that allows the semiautomatic synthesis of a facial image that exists only in a user's mind. The user is presented with several facial images and judges whether each one resembles the face that he or she is imagining. Based on the feedback from the user, a set of sample facial images is used to train an Optimum-Path Forest classifier that assesses the relevance of facial images. An interpolation method is then employed to synthesize new facial images that closely resemble the imagined face. A series of experiments is conducted to evaluate and verify the effectiveness and efficiency of the proposed technique.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125168679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised Feature Learning for EEG-based Emotion Recognition","authors":"Zirui Lan, O. Sourina, Lipo Wang, Reinhold Scherer, G. Müller-Putz","doi":"10.1109/CW.2017.19","DOIUrl":"https://doi.org/10.1109/CW.2017.19","url":null,"abstract":"Spectral band power features are among the most widely used features in studies of electroencephalogram (EEG)-based emotion recognition. The power spectral density of EEG signals is partitioned into bands such as the delta, theta, alpha, and beta bands. Though grounded in neuroscientific findings, this partitioning of frequency bands is somewhat ad hoc, and the definition of the frequency ranges of the bands of interest can vary between studies. It is also arguable whether one definition of power bands can perform equally well on all subjects. In this paper, we propose using an autoencoder to automatically learn, for each subject, the salient frequency components from the power spectral density estimated as a periodogram by the Fast Fourier Transform (FFT). We propose a network architecture designed specifically for EEG feature extraction, one that adopts hidden-unit clustering with an added pooling neuron per cluster. The classification accuracy with features extracted by our proposed method is benchmarked against that with standard power features. Experimental results show that our proposed feature extraction method achieves accuracy ranging from 44% to 59% for three-emotion classification, a 4-20% accuracy improvement over standard band power features.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127750593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OpenGLD - A Multi-user Single State Architecture for Multiplayer Game Development","authors":"Karsten Pedersen, Wen Tang, C. Gatzidis","doi":"10.1109/CW.2017.45","DOIUrl":"https://doi.org/10.1109/CW.2017.45","url":null,"abstract":"Multi-user applications can be complex to develop due to their large or intricate nature. Many of the issues encountered are related to performance and security, and these issues are exacerbated as the scale of the application increases. This paper introduces a novel distributed architecture called OpenGL|D (OpenGL Distributed). This technology passes graphical calls across a network between a Virtual Machine (VM) and the graphics processing unit (GPU) on the native host, allowing applications to run inside a VM whilst still benefiting from hardware-accelerated performance from the GPU for the computationally intensive graphical processing. This allows for the development of 3D software with no dependencies on specific hardware or technology other than ANSI C and a network stack, demonstrating our approach to platform-agnostic development and digital preservation.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127975548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EEG-based Mental Workload and Stress Recognition of Crew Members in Maritime Virtual Simulator: A Case Study","authors":"Yisi Liu, Salem Chandrasekaran Harihara Subramaniam, O. Sourina, S. Liew, Gopala Krishnan, D. Konovessis, H. E. Ang","doi":"10.1109/CW.2017.37","DOIUrl":"https://doi.org/10.1109/CW.2017.37","url":null,"abstract":"Many studies have shown that the majority of maritime accidents/incidents are attributed to human errors as the initiating cause, and efforts have been made to study the human factors that can lead to safer maritime transportation. Among available techniques, electroencephalography (EEG) has advantages such as high time resolution and the possibility to continuously monitor brain states with high accuracy, enabling recognition of human mental workload, emotion, stress, vigilance, etc. In this paper, we designed and carried out an experiment to collect EEG signals to study stress and the sharing of mental workload among crew members performing collaborative tasks on a ship's bridge virtual simulator. Four maritime trainees were monitored in the experiment, each in a role such as officer on watch, captain, pilot, or steersman. The results show that the captain had the highest stress and workload, while the other three trainees experienced low workload and stress due to shared work and responsibility. EEG is a promising evaluation tool for human factors studies in the maritime domain.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130028789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User Friendly Calibration for Tracking of Optical Stereo See-Through Head Worn Displays for Augmented Reality","authors":"F. Bernard, T. Engelke, Arjan Kuijper","doi":"10.1109/CW.2017.14","DOIUrl":"https://doi.org/10.1109/CW.2017.14","url":null,"abstract":"In recent years, devices like Google Glass and the Oculus Rift have gained a lot of public attention, and the field of Virtual and Augmented Reality has become a more and more attractive field of study. Optical Stereo See-Through Head Worn Displays (OST-HWD or OST-HMD) can be used for Augmented Reality, but they have to be calibrated: one has to find a configuration that aligns the image shown on the displays with the environment observed by the built-in camera. If this is not done, the augmented virtual image will not align with the real world. In this paper, the calibration process is divided into two stages, hardware and user calibration, but with fewer constraints on the positions of the cameras, which makes it easier to use. We aim at a more user-friendly suite for the calibration of OST-HWD devices. Therefore, both of the aforementioned stages are combined in a new, quick, step-by-step installation wizard, written in HTML and JavaScript to ensure easy usability. We apply a new minimization model to make the calculation of the virtual plane simpler and more robust. In addition, the required hardware components, including the camera and calibration rig, were simplified. The implemented software has been evaluated with respect to the computed virtual plane, intrinsic data, and the eye positions of the user. Finally, a user study was conducted to rate the usability of the calibration process.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":"65 s252","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113953496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RAD-AR: RADiotherapy - Augmented Reality","authors":"F. Cosentino, N. John, J. Vaarkamp","doi":"10.1109/CW.2017.56","DOIUrl":"https://doi.org/10.1109/CW.2017.56","url":null,"abstract":"We have developed an augmented reality tool for radiotherapy that shows the real-world scene, i.e. the patient on a treatment couch, combined with computer graphics content such as planning image data and any defined outlines of organ structures. We have deployed our software to a number of consumer electronics devices (iPad, Android tablets, MS HoloLens). We suggest that, in contrast to other augmented reality tools explored for radiotherapy [1], the wide availability and low cost of the hardware platforms considered, together with their increasing computational and graphics power, give our system strong potential as a tool for the visualization of medical information for clinicians and other radiotherapy professionals, as a device for patient positioning for radiotherapy treatment, and as an educational tool for patients to visualize their treatment and to demonstrate, e.g., the importance of compliance with instructions around bladder filling and rectal suppositories. The accuracy of virtual content placement and a user evaluation of our system have been experimentally investigated.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115815394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}