{"title":"Development and application of a surgical process simulation system using VR technology","authors":"Lang Zhou, R. Sato","doi":"10.1109/GCCE50665.2020.9291758","DOIUrl":"https://doi.org/10.1109/GCCE50665.2020.9291758","url":null,"abstract":"For medical students, instruction via on-site observation of surgery is often a mandatory part of their training. Being present in an operating room allows students to deepen their understanding of various surgical procedures while giving them an opportunity to experience the hands-on environment and general atmosphere. However, traditional instruction methods through observation contain many disadvantages that make learning surgical procedures difficult and problematic. For example, during long sessions of endoscopic surgery, the pictures taken display only a small portion of the internal body, making it difficult for students to fully understand the details of the entire process. Locating the positions of internal organs and understanding the direction of instrument movement is also inconvenient. However, with the aid of a virtual reality support system, students can learn through real-time interactive demonstrations of the surgical process that include details about operating position and other specificities. The result is increased accessibility to facilitate the understanding of the entire surgery process coupled with the reduction of fatigue and other distractions that arise from prolonged observation in a traditional operating room. So far, in medical research, the \"Production of patient explanation video for transrectal prostate biopsy\" jointly researched and produced with the Department of Renal Urology, Kansai Medical University, has been reviewed by the Research Ethics Review Board (IRB) and obtained permission for use Later, through actual use by patients and medical practitioners, great results have been obtained.","PeriodicalId":179456,"journal":{"name":"2020 IEEE 9th Global Conference on Consumer Electronics (GCCE)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115119911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comparative Approach to Naïve Bayes Classifier and Support Vector Machine for Email Spam Classification","authors":"Thae Ma Ma, K. Yamamori, A. Thida","doi":"10.1109/GCCE50665.2020.9291921","DOIUrl":"https://doi.org/10.1109/GCCE50665.2020.9291921","url":null,"abstract":"Spam or unsolicited emails that are used by spammers can cause huge loss to both the email users and the email server. Therefore, in order to detect spam emails not to enter into our mailbox, a developed email spam classification system is required. This paper proposes two popular machine learning methods, Naïve Bayes Classifier and Support Vector Machine, to classify the emails into spam or ham based on the body or content of the emails. In Naïve Bayes Classifier, independent words are considered as features. Support Vector Machine can be used to represent an email in vector space in which each feature means one dimension. Finally, two methods are compared in terms of precision, recall, F-measure performance metrics with the aim of finding the best method.","PeriodicalId":179456,"journal":{"name":"2020 IEEE 9th Global Conference on Consumer Electronics (GCCE)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115288691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of an Interactive Robot for More Natural Communication","authors":"T. Miyamoto, M. Yamada, Tsanming Ou, Yuko Hoshino","doi":"10.1109/GCCE50665.2020.9291842","DOIUrl":"https://doi.org/10.1109/GCCE50665.2020.9291842","url":null,"abstract":"Robots that interact with humans was thought to be important for nonverbal communication with users. Therefore, we propose an inexpensive interactive robot that can move eyes and lips by combining displays as if you were talking. Here, we describe the system configuration of the prototype robot. In the future, we will prepare interlocutors to study what kind of psychological effect it will have. And, we plan to realize a robot that can communicate with a viewer with eye contact and can talk with him/her by incorporating a camera into the robot.","PeriodicalId":179456,"journal":{"name":"2020 IEEE 9th Global Conference on Consumer Electronics (GCCE)","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123161884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Study on Minimum Spectral Error Analysis of Speech","authors":"Takuma Hayasaka, Takashi Nose, A. Ito","doi":"10.1109/GCCE50665.2020.9291840","DOIUrl":"https://doi.org/10.1109/GCCE50665.2020.9291840","url":null,"abstract":"Conventional source-filter vocoders, such as WORLD, can quickly synthesize speech. However, the quality of synthetic speech is degraded due to speech parameters extraction errors. Therefore, this paper proposes minimum spectral error analysis, a speech analysis method that extracts speech parameters using Analysis-by-Synthesis (A-b-S), to improve the quality of speech synthesized by WORLD. We update speech parameters to minimize the error between the amplitude spectra of natural and synthetic speech. We developed the calculation process of the amplitude spectrum of synthetic speech from speech parameters to perform this analysis. A preliminary experiment shows that we have successfully constructed the calculation process.","PeriodicalId":179456,"journal":{"name":"2020 IEEE 9th Global Conference on Consumer Electronics (GCCE)","volume":"241 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116707855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SNS Fatigue Extraction by Analyzing Twitter Data","authors":"Tohma Okafuji, Yuanyuan Wang, Yukiko Kawai","doi":"10.1109/GCCE50665.2020.9292029","DOIUrl":"https://doi.org/10.1109/GCCE50665.2020.9292029","url":null,"abstract":"SNS fatigue has become a problem in SNS, i.e., Twitter and Facebook. In this work, we define it as “physical and mental fatigue caused by using SNS” This is one of the most widely used stress experiences on Twitter among young people. Therefore, we analyze the causes of SNS fatigue on Twitter to extract SNS fatigue using tweet data, we aim to create an index to determine SNS fatigue to reduce SNS stress in the future. In this paper, we collected questionnaires about how much stress was felt by 10 Twitter users on 25 events that could cause stress in Twitter usage. Then, we classified the causes of SNS fatigue into three main labels by a principal component analysis. For extracting SNS fatigue, we collect tweets and label those collected tweets to extract feature words of the tweets for each label. Also, we create a classifier for the causes of SNS fatigue using a machine learning algorithm. In this way, SNS fatigue prediction and SNS stress reduction can be expected using feature words for SNS fatigue. Finally, we verified the effectiveness of feature word extraction for SNS fatigue and the classification accuracy of the causes of SNS fatigue.","PeriodicalId":179456,"journal":{"name":"2020 IEEE 9th Global Conference on Consumer Electronics (GCCE)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117167383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Varying human whistle pitch without varying oral cavity shape while enlarging the larynx using a vocal tract model","authors":"Kotaro Ishiguro, Mikio Mori","doi":"10.1109/GCCE50665.2020.9291892","DOIUrl":"https://doi.org/10.1109/GCCE50665.2020.9291892","url":null,"abstract":"The principle of sound production in human whistling is not adequately understood. In a previous study, we built a physical model of a human vocal tract on whistling developed using an X-ray computed tomography (CT) image of the vocal tract during whistling. This study investigates the effects of variations in the larynx on whistling. To this end, we expanded the cross-sectional plate area corresponding to the larynx of the vocal tract model during human whistling. We found that whistle pitch can vary without varying the oral cavity shape when the larynx was enlarged.","PeriodicalId":179456,"journal":{"name":"2020 IEEE 9th Global Conference on Consumer Electronics (GCCE)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121233879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image Processing and Statistical Analysis Approach to Predict Calving Time in Dairy Cows","authors":"Swe Zar Maw, Thi Thi Zin, P. Tin","doi":"10.1109/GCCE50665.2020.9291919","DOIUrl":"https://doi.org/10.1109/GCCE50665.2020.9291919","url":null,"abstract":"An accurate prediction of calving time in dairy cows is one of the most important factors to make an optimal reproduction process in dairy farming. This paper proposes an image processing and statistical analysis approach to predict calving time in dairy cows. Specifically, we extract the behavior changes patterns of the expected cows by using simple effective motion history images (MHI) a few days before the occurrence of calving event from the video sequences taken in the maternity bans. We then classify extracted features with support vector machine (SVM) and analyze the behavior changes by using statistical method, Hidden Markov model (HMM) for prediction process. To confirm the validity of proposed method, we perform some experiments by installing 360-degree view cameras at the top of calving bans. At the first stage, we analyzed the behaviors of 25 dairy cows for 72 hours before giving birth. As a result, we find that the proposed method is promising.","PeriodicalId":179456,"journal":{"name":"2020 IEEE 9th Global Conference on Consumer Electronics (GCCE)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127204957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Verification of Visibility Improvement by Perceptual Sensitivity Correction in Line Display","authors":"Yuuki Machida, Naoki Kawasaki, Takayuki Misu, Keiichi Abe, Hiroshi Sugimura, Makiko Okumura","doi":"10.1109/GCCE50665.2020.9292013","DOIUrl":"https://doi.org/10.1109/GCCE50665.2020.9292013","url":null,"abstract":"In this paper, we have developed a multi-tone controllable line display with 64 LEDs, with which we can perceive images in the retina during eye movement called saccade. We evaluated the image dependence on visibility for this flashing line display. In the experiment, we performed a subjective evaluation of the visibility of four images by comparing linear gradation with gradation that changed exponentially based on the Weber-Fechner law of perceptual sensitivity. As a result, the visibility was improved in all images by considering the perceptual sensitivity.","PeriodicalId":179456,"journal":{"name":"2020 IEEE 9th Global Conference on Consumer Electronics (GCCE)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124831242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spoken Term Detection Based on Acoustic Models Trained in Multiple Languages for Zero-Resource Language","authors":"Satoru Mizuochi, Yuya Chiba, Takashi Nose, A. Ito","doi":"10.1109/GCCE50665.2020.9291761","DOIUrl":"https://doi.org/10.1109/GCCE50665.2020.9291761","url":null,"abstract":"In this paper, we study a spoken term detection method for zero-resource languages by using rich-resource languages. The examined method combines phonemic posteriorgrams (PPGs) extracted from phonemic classifiers of multiple languages and detects a query word based on dynamic time warping. As a result, the method showed better detection performance in a zero-resource language compared with the method using PPGs of a single language.","PeriodicalId":179456,"journal":{"name":"2020 IEEE 9th Global Conference on Consumer Electronics (GCCE)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125097779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Design of a Concealed Fingerprint Access Control System based on Commodity Smartphones and APP Software","authors":"Chih-Chiang Wang, Jing-Wein Wang, Tsung-Chieh Cheng","doi":"10.1109/GCCE50665.2020.9291799","DOIUrl":"https://doi.org/10.1109/GCCE50665.2020.9291799","url":null,"abstract":"A typical fingerprint access control system requires mounting of a fingerprint reader on the outside wall next to the entrance door. However, such an externally mounted fingerprint reader is vulnerable to sabotage and fake fingerprint spoofing attacks. In this work, we propose a concealed fingerprint access control system based on commodity smartphone devices and APP software. Our proposed system replaces the use of externally mounted fingerprint readers by exploiting smartphone’s camera function and Wi-Fi communication with cryptographic techniques. To further improve the system performance, we devise an enhanced fingerprint recognition method based on tensors of wavelet subbands of fingerprint images.","PeriodicalId":179456,"journal":{"name":"2020 IEEE 9th Global Conference on Consumer Electronics (GCCE)","volume":"374 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126716231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}