{"title":"Stress Level Classifier: Taiwanese College Table Tennis Athletes’ Electroencephalography Analysis Based on Decision Trees","authors":"Pingping Cheng, Meng-Hsiun Tsai, Chung-Hao Hsueh, Sheng Kuang Wu","doi":"10.1109/ICPAI51961.2020.00019","DOIUrl":"https://doi.org/10.1109/ICPAI51961.2020.00019","url":null,"abstract":"This study aims to provide a method to quantify stress levels with numerical EEG values, identify key brainwave features, and assess the stress levels of table tennis players. Data from College Division 1 and Division 2 players are collected and analyzed with the decision tree algorithms C4.5, CART, Random Forest, and Random Tree. Random Forest obtains the highest accuracy rate among the algorithms: 79.21% for all players, 79.3% for Division 1, and 80.68% for Division 2. According to the resulting decision trees, the top attribute for Division 1 players was the Theta wave, which differed from the results for Division 2 players. The results also reveal the difference in brainwaves between Division 2 and Division 1 players during highly stressful competitions.","PeriodicalId":330198,"journal":{"name":"2020 International Conference on Pervasive Artificial Intelligence (ICPAI)","volume":"224 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123974817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robot Eye: Automatic Object Detection And Recognition Using Deep Attention Network to Assist Blind People","authors":"Ervin Yohannes, Paul Lin, Chih-Yang Lin, T. Shih","doi":"10.1109/ICPAI51961.2020.00036","DOIUrl":"https://doi.org/10.1109/ICPAI51961.2020.00036","url":null,"abstract":"Detection and recognition are well-known topics in computer vision that still face many unresolved issues. One of the main contributions of this research is a method to guide blind people around an outdoor environment with the assistance of a ZED stereo camera, a camera that can calculate depth information. In this paper, we propose a deep attention network to automatically detect and recognize objects. The objects are not limited to general people or cars, but include convenience stores and traffic lights as well, in order to help blind people cross a road and make purchases in a store. Since public datasets are limited, we also create a novel dataset with images captured by the ZED stereo camera and collected from Google Street View. When testing with images of different resolutions, our method achieves an accuracy rate of about 81%, which is better than naive YOLO v3.","PeriodicalId":330198,"journal":{"name":"2020 International Conference on Pervasive Artificial Intelligence (ICPAI)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128040867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TrackNetV2: Efficient Shuttlecock Tracking Network","authors":"Nien-En Sun, Yu-Ching Lin, Shao-Ping Chuang, Tzu-Han Hsu, Dung-Ru Yu, Ho-Yi Chung, Tsì-Uí İk","doi":"10.1109/ICPAI51961.2020.00023","DOIUrl":"https://doi.org/10.1109/ICPAI51961.2020.00023","url":null,"abstract":"TrackNet, a deep learning network, was proposed to track high-speed and tiny objects such as tennis balls and shuttlecocks in videos. To overcome low image quality issues such as blur, afterimages, and short-term occlusion, a number of consecutive images are input together to detect a flying object. In this work, TrackNetV2 is proposed to improve TrackNet in various aspects, especially processing speed, prediction accuracy, and GPU memory usage. First, the processing speed is improved from 2.6 FPS to 31.8 FPS. This boost is achieved by reducing the input image size and re-engineering the network from a Multiple-In Single-Out (MISO) design to a Multiple-In Multiple-Out (MIMO) design. Then, to improve prediction accuracy, a comprehensive dataset from diverse badminton match videos is collected and labeled for training and testing. The dataset consists of 55563 frames from 18 badminton match videos. In addition, the network is composed of not only VGG16 and upsampling layers but also U-Net. Last, to reduce GPU memory usage, the data structure of the heatmap layer is remodeled from a pixel-wise one-hot-encoded 3D array to a real-valued 2D array. To reflect this change in the heatmap representation, the loss function is redesigned from an RMSE-based function to a weighted cross-entropy-based function. An overall validation shows that the accuracy, precision, and recall of TrackNetV2 reach 96.3%, 97.0%, and 98.7% respectively in the training phase, and 85.2%, 97.2%, and 85.4% in a test on a brand-new match. The processing speed of the 3-in and 3-out version of TrackNetV2 can reach 31.84 FPS. The dataset and source code of this work are available at https://nol.cs.nctu.edu.tw:234/open-source/TrackNetv2/.","PeriodicalId":330198,"journal":{"name":"2020 International Conference on Pervasive Artificial Intelligence (ICPAI)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133675414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bidirectional Perspective with Topic Information for Stance Detection","authors":"Sheng-Xuan Lin, Bo-Yi Wu, Tzu-Hsuan Chou, Ying-Jia Lin, Hung-Yu kao","doi":"10.1109/ICPAI51961.2020.00009","DOIUrl":"https://doi.org/10.1109/ICPAI51961.2020.00009","url":null,"abstract":"Owing to the convenience of the Internet, many websites and online news outlets spread misinformation, causing panic and trepidation in society. Automatic fake news detection can classify fake news and help society verify whether information is true or false without human checking. Detecting fake news by analyzing stance is one of the mainstream methods, and stance detection has become a popular research field in recent years. Accurately detecting stance has thus become a primary goal of fake news detection. This research aims to detect news stance accurately, and we propose a method based on a pre-trained BERT language model. Most previous work used knowledge from only a single inference direction when classifying stance, which may lose important information. Therefore, we propose a bidirectional-inference stance detection model, which can leverage bidirectional perspective information to classify stance more comprehensively. We also define the stance detection task as a hierarchical task, using hierarchical classification and incorporating topic information to aid stance classification. Experimental results show that our model can classify stance more accurately.","PeriodicalId":330198,"journal":{"name":"2020 International Conference on Pervasive Artificial Intelligence (ICPAI)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124043605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine Learning in Empirical Asset Pricing Models","authors":"Huei-Wen Teng, Yu-Hsien Li, S. Chang","doi":"10.1109/ICPAI51961.2020.00030","DOIUrl":"https://doi.org/10.1109/ICPAI51961.2020.00030","url":null,"abstract":"Although machine learning has achieved great success in computer science, its performance on the canonical problem of asset pricing in finance is yet to be fully investigated. To compare machine learning techniques with traditional models, we use 8 macroeconomic predictors and 102 firm characteristics to predict stock returns on a monthly basis. Neural networks are shown to outperform the other approaches: specifically, when building bottom-up portfolios based on the predicted stock-level returns for both buy-and-hold and long-short strategies, XGBoost and neural networks produce portfolios with the highest Sharpe ratios. Limitations and challenges in using machine learning techniques in empirical asset pricing models are also discussed.","PeriodicalId":330198,"journal":{"name":"2020 International Conference on Pervasive Artificial Intelligence (ICPAI)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130210559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimization of Deep Learning Inference on Edge Devices","authors":"Endah Kristiani, Chao-Tung Yang, K. L. Phuong Nguyen","doi":"10.1109/ICPAI51961.2020.00056","DOIUrl":"https://doi.org/10.1109/ICPAI51961.2020.00056","url":null,"abstract":"For Artificial Intelligence (AI)-based applications, it is necessary to reduce latency in real-time inference. This paper implements and compares two separate models, Inception V3 and MobileNet, using the Intel Neural Compute Stick (NCS) 2 and a Raspberry Pi 4 as the edge devices. The Model Optimizer (MO), which generates an Intermediate Representation (IR) of the network, is used to optimize these models. The IR models are then run for inference on the edge device. Finally, a comparison of frames-per-second (FPS) speed and precision is provided. The results show that Inception V3 runs at 9 frames per second, while MobileNet runs at 24 frames per second. Meanwhile, the accuracy reaches 41.28% on Inception V3, which misclassifies the Nissan Altima 2014, and 71.29% on MobileNet, which correctly classifies the Toyota Camry 2014.","PeriodicalId":330198,"journal":{"name":"2020 International Conference on Pervasive Artificial Intelligence (ICPAI)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128855201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Segmentation to Enhance Frame Prediction in a Multi-Scale Spatial-Temporal Feature Extraction Network","authors":"Michael Mu-Chien Hsu, Richard Jui-Chun Shyur","doi":"10.1109/ICPAI51961.2020.00038","DOIUrl":"https://doi.org/10.1109/ICPAI51961.2020.00038","url":null,"abstract":"Designing a machine to predict future events is a challenging problem even for existing state-of-the-art approaches. It requires great computational power, whether in adversarial training or in segmentation and optical flow. By combining conventional segmentation with the DNN we propose in this paper, we obtain a simpler architecture that effectively and efficiently predicts both future frames and semantics more precisely than previous approaches. The input is a raw image sequence; each frame is segmented for semantics, extracted for spatial features, and analyzed for temporal features at different scales in a top-down path, and then the predictions of frames and segmentation are synthesized in the bottom-up path. Our results show that our model outperforms other state-of-the-art models in (1) precision of frames and (2) accuracy of segmentation masks.","PeriodicalId":330198,"journal":{"name":"2020 International Conference on Pervasive Artificial Intelligence (ICPAI)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128069528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SlowFast-GCN: A Novel Skeleton-Based Action Recognition Framework","authors":"Cheng-Hung Lin, Po-Yung Chou, Cheng-Hsien Lin, Min-Yen Tsai","doi":"10.1109/ICPAI51961.2020.00039","DOIUrl":"https://doi.org/10.1109/ICPAI51961.2020.00039","url":null,"abstract":"Human action recognition plays an important role in video surveillance, human-computer interaction, video understanding, and virtual reality. Unlike two-dimensional object recognition, human action recognition is dynamic object recognition with a time-series relationship, and it faces many challenges from complex environments, such as color shifts, light and shadow changes, and sampling angles. To improve the accuracy of human action recognition, many studies have proposed skeleton-based action recognition methods that are not affected by the background, but current frameworks offer little discussion of how the time dimension is integrated. In this paper, we propose a novel SlowFast-GCN framework which combines the advantages of ST-GCN and SlowFastNet with dynamic human skeletons to improve the accuracy of human action recognition. The proposed framework uses two streams: one stream captures fine-grained motion changes, and the other captures static semantics. Through these two streams, we merge human skeleton features from two different time dimensions. Experimental results show that the proposed framework outperforms state-of-the-art approaches on the NTU-RGBD dataset.","PeriodicalId":330198,"journal":{"name":"2020 International Conference on Pervasive Artificial Intelligence (ICPAI)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132559893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transferring a facial depression model to estimate mood in a natural web browsing task","authors":"Giri Basant Raj, J. Morita","doi":"10.1109/ICPAI51961.2020.00017","DOIUrl":"https://doi.org/10.1109/ICPAI51961.2020.00017","url":null,"abstract":"Because people are living in a stressful era, they are prone to common mental health problems, which cause them to experience low mood and loss of interest or pleasure. Although many suffer from depression or low mood, they hesitate to undergo clinical check-ups. Therefore, a systematic and efficient web-based system that automatically detects emotions is necessary. The purpose of this study was to design and develop a system that can automatically detect negative and positive mood states, and to investigate the relationship between the depression and mood states of individuals. Users' facial expression features are detected and analyzed in real time, and after they finish using the system, the determined emotion is provided to them. A facial depression model constructed from a dataset obtained in a human-agent interaction (HAI) experiment was applied to a general human-computer interaction (HCI) situation to classify negative and positive mood states. The model exhibits the highest accuracy rate for classifying mood states. These findings suggest that faces provide strong evidence of depression-related mood induction and can guide the construction of an automatic web-based mental health care system for assessing preliminary mental states.","PeriodicalId":330198,"journal":{"name":"2020 International Conference on Pervasive Artificial Intelligence (ICPAI)","volume":"993 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133166894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Stock Trend Prediction Models by Mining Relational Graphs of Stock Prices","authors":"Hung-Yang Li, V. Tseng, Philip S. Yu","doi":"10.1109/ICPAI51961.2020.00028","DOIUrl":"https://doi.org/10.1109/ICPAI51961.2020.00028","url":null,"abstract":"Stock trend prediction has recently attracted much attention from a diverse range of fields. Despite advances from the cooperation of the artificial intelligence and finance domains, a large number of works are still limited to the use of technical indicators to capture the principles of a stock's price movement, while few consider both its historical patterns and the relations of its correlated stocks. In this work, we propose a novel framework named RGStocknet (Relational Graph Stock Enhancing Network) that can boost the performance of an arbitrary time series prediction backbone model. Our approach automatically extracts a relational graph into which a graph embedding model can easily be integrated. Treated as an additional input feature, the company embedding from the graph embedding model aims to improve performance without the need for external knowledge graph resources. The experiment results show that the three benchmark baselines benefit from our proposed RGStocknet module on the S&P500 dataset, with relative performance gains of 2.97%, 2.48%, and 7.03% on profit-score and 25.50%, 17.53%, and 12.75% on accuracy, respectively. Applied to a real-world trading simulation environment, our approach also outperformed the backbone model and doubled the average return of ResNet over the buy-and-hold (BH) strategy, from 4.42% to 7.38%. Visualization of the generated relational graph and company embedding also shows that the proposed method can capture the hidden dynamics of correlated stocks and learn representations across the whole stock market. Moreover, the proposed method was shown to carry the potential to incorporate relations from external resources to achieve still higher performance.","PeriodicalId":330198,"journal":{"name":"2020 International Conference on Pervasive Artificial Intelligence (ICPAI)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130313042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}