{"title":"Utilizing Subject-Specific Discriminative EEG Features for Classification of Motor Imagery Directions","authors":"Kavitha P. Thomas, Neethu Robinson, A. P. Vinod","doi":"10.1109/ICAwST.2019.8923216","DOIUrl":"https://doi.org/10.1109/ICAwST.2019.8923216","url":null,"abstract":"Electroencephalogram (EEG)-based Brain-Computer Interface (BCI) technology needs efficient algorithms to find distinct EEG patterns/features to realize applications with high-dimensional control signals. This paper proposes a novel feature extraction methodology for separating EEG patterns associated with right-hand motor imagery performed towards the left and right directions. The most discriminative subject-specific feature set is chosen based on Fisher’s ratio of absolute EEG phase values in six low-frequency sub-bands. Using this, the proposed BCI system provides better classification results than a state-of-the-art methodology with fixed channels, by fusing absolute phase and spatial features from selected subject-specific discriminative channels. Experimental analysis shows that although the parietal lobe is vital in providing distinguishable features, the channel set that provides maximum accuracy is highly subject-specific.
Hence, subject-specific BCIs that can decode finer parameters of imagined movement are feasible, and further research to understand the activations elicited in the parietal lobe can contribute towards robust BCI systems.","PeriodicalId":156538,"journal":{"name":"2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125132092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
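The feature selection described above ranks candidate features by Fisher's ratio. A minimal plain-Python sketch of that ranking step (the toy values below are illustrative, not the paper's EEG phase features):

```python
# Fisher's ratio: (mean difference)^2 / (sum of variances) -- a simple
# measure of how well one feature separates two classes (here: left- vs.
# right-directed imagined movement).
from statistics import mean, pvariance

def fishers_ratio(class_a, class_b):
    """Discriminability of a single feature between two classes."""
    num = (mean(class_a) - mean(class_b)) ** 2
    den = pvariance(class_a) + pvariance(class_b)
    return num / den if den else 0.0

def rank_features(features_a, features_b):
    """Rank feature indices by Fisher's ratio, most discriminative first.
    features_a[i] holds the values of feature i over class-A trials."""
    scores = [fishers_ratio(a, b) for a, b in zip(features_a, features_b)]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Toy values: feature 0 barely separates the classes, feature 1 separates well.
a = [[0.10, 0.20, 0.15], [1.0, 1.1, 0.9]]
b = [[0.12, 0.18, 0.16], [2.0, 2.1, 1.9]]
print(rank_features(a, b))  # feature 1 ranked first: [1, 0]
```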
{"title":"Improve the generalization of the cross-task emotion classifier using EEG based on feature selection and SVR","authors":"Shuang Liu, Wenyi Wu, Siyu Zhai, Xiaoya Liu, Yufeng Ke, X. An, Dong Ming","doi":"10.1109/ICAwST.2019.8923256","DOIUrl":"https://doi.org/10.1109/ICAwST.2019.8923256","url":null,"abstract":"Emotion is a state that comprehensively represents human feeling, thought and behavior. Emotion plays an increasingly important role in daily life, and emotion recognition has become a research focus with broad application prospects at home and abroad. Most existing studies identified emotion under specific tasks, but in practice emotion classifiers are required to recognize emotion under any conditions. Therefore, cross-task emotion recognition is a necessary step in moving from the laboratory to practical use. In this work, we designed three different emotion-induction tasks: picture-induced, music-induced and video-induced. Thirteen participants (8 females and 5 males) were recruited, and positive, neutral and negative states were evoked in each. The results using support vector regression (SVR) highlighted that the correlation coefficient was higher for intra-task classification in the video-induced and music-induced tasks, while it deteriorated significantly in cross-task classification. By combining recursive feature screening with SVR to optimize the feature set, the optimal feature set performed better than the full feature set, achieving a correlation coefficient above 0.8.
These results indicate that SVR can achieve better cross-task emotion recognition performance, partly because it avoids the problem of emotion-intensity mismatch across tasks.","PeriodicalId":156538,"journal":{"name":"2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST)","volume":"328 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124301418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
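The correlation coefficient used above as the evaluation metric can be computed as the Pearson correlation between regression outputs and target ratings; a small self-contained sketch (the ratings and predictions below are hypothetical, not the study's data):

```python
# Pearson correlation coefficient between predicted and true continuous
# ratings -- the kind of metric reported for the SVR outputs above.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

true_ratings = [1.0, 2.0, 3.0, 4.0, 5.0]  # hypothetical valence labels
predicted = [1.2, 1.9, 3.3, 3.8, 5.1]     # hypothetical regression outputs
print(round(pearson(true_ratings, predicted), 3))  # -> 0.991
```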
{"title":"Performance Comparison of Machine Learning Algorithms to Predict Labor Complications and Birth Defects Based On Stress","authors":"V. Madhusri, G. Kesavkrishna, R. Marimuthu, R. Sathyanarayanan","doi":"10.1109/ICAwST.2019.8923370","DOIUrl":"https://doi.org/10.1109/ICAwST.2019.8923370","url":null,"abstract":"Stress affects both the physical and mental health of people, and it is one of the major causes of complications during pregnancy, such as hypertension. Hence it is necessary to ascertain the effects of stress on the health of the mother as well as the baby to find possible complications during pregnancy and delivery. This may also help predict and avoid birth defects, since there have been many instances where stress-related complications are known to lead to cognitive disorders in the child. The goal of this study is to design and develop a prediction model for stress-based complications during pregnancy, based on physical, social, environmental and biological factors. For this, the dataset was generated using a personalized interview-based survey administered to women who had undergone pregnancy and delivery in the past. The questions were based on the factors mentioned above. The generated data were used to check the correctness of the hypothesis and to evaluate the performance of the proposed stress prediction model using different machine learning algorithms: Support Vector Machine (SVM), Naive Bayes (NB), K-Nearest Neighbor (KNN) and Decision Tree (DT). The experimental results showed that the proposed model achieved an accuracy of 90% when the Naive Bayes algorithm was used.
The other algorithms produced slightly lower but comparable results.","PeriodicalId":156538,"journal":{"name":"2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123705492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Knowledge Acquisition with Deep Fuzzy Inference Model and Its Application to a Medical Diagnosis","authors":"Y. Mori, Hirosato Seki, M. Inuiguchi","doi":"10.1109/ICAwST.2019.8923443","DOIUrl":"https://doi.org/10.1109/ICAwST.2019.8923443","url":null,"abstract":"In this paper, we reduce the number of fuzzy rules in a fuzzy inference model and acquire knowledge in the form of fuzzy rules. The number of input items used in the inference model is reduced by randomly selecting input items in each layer. As a result, the total number of rules in this model can be made smaller than in an inference model that uses all the original input items at once. However, in the previous model by Zhang, although the consequent part of each fuzzy rule was learned, the antecedent part was not. Since we must deal with situations where no prior knowledge of the target problem is available and knowledge has to be acquired from data, learning the antecedent part is also required. In this paper, we propose a learning method for the antecedent fuzzy sets of fuzzy rules in order to obtain the input-output relationship of the learning data from actual data.
Then, as an example, the proposed method is applied to the medical diagnosis of diabetes, and the accuracy of the previous method is compared with that of the proposed method.","PeriodicalId":156538,"journal":{"name":"2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129767287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
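To make the rule-based machinery concrete, here is a minimal single-stage fuzzy inference step in plain Python (a generic simplified sketch; the paper's deep model stacks such stages over randomly selected input subsets and learns both rule parts, none of which is reproduced here):

```python
# Single-input fuzzy inference: triangular antecedent membership functions,
# crisp consequents, weighted-average defuzzification.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b on the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(x, rules):
    """rules: list of ((a, b, c), consequent) pairs."""
    weights = [triangular(x, *mf) for mf, _ in rules]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * y for w, (_, y) in zip(weights, rules)) / total

rules = [((0.0, 0.0, 0.5), 0.0),   # "low input  -> low output"
         ((0.0, 0.5, 1.0), 0.5),   # "medium in  -> medium output"
         ((0.5, 1.0, 1.0), 1.0)]   # "high input -> high output"
print(infer(0.25, rules))  # -> 0.25
```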
{"title":"Implementation of multi-modal interface for VR application","authors":"Yushi Machidori, Ko Takayama, Kaoru Sugita","doi":"10.1109/ICAwST.2019.8923551","DOIUrl":"https://doi.org/10.1109/ICAwST.2019.8923551","url":null,"abstract":"Recently, several Head-Mounted Displays (HMDs) have been released for consumers. A general VR system provides a virtual experience of a virtual world according to the user’s responses and is organized into three types of components: an input system, an output system and a simulation system. The input system consists of devices such as a controller, a mouse, a keyboard and a head-tracking device. These devices are operated physically in the real world, but they are invisible in the virtual world while the HMD is in use. In this paper, we introduce a multi-modal interface for VR applications supporting both voice and gesture input on a general HMD. We also discuss a prototype system that uses low-cost devices such as an HMD, a gesture input device, a general PC and a USB microphone.","PeriodicalId":156538,"journal":{"name":"2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST)","volume":"165 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132727436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning for Automatic Identification of Nodule Morphology Features and Prediction of Lung Cancer","authors":"Weilun Wang, G. Chakraborty","doi":"10.1109/ICAwST.2019.8923147","DOIUrl":"https://doi.org/10.1109/ICAwST.2019.8923147","url":null,"abstract":"Lung cancer is the most common and deadly cancer in the world. Correct prognosis affects the survival rate of patients. The most important sign for early diagnosis is nodule images in CT scans. Diagnosis in hospital is divided into two steps: (1) detect nodules from the CT scan; (2) evaluate the morphological features of the nodules and give the diagnostic result. In this work, we propose an automatic lung cancer prognosis system with three steps: (1) two models, one based on a convolutional neural network (CNN) and the other on a recurrent neural network (RNN), are trained to detect nodules in CT scans; (2) CNNs are trained to evaluate the values of nine morphological features of the nodules; (3) a logistic regression from feature values to cancer probability is trained using an XGBoost model. In addition, we analyze which features are important for cancer prediction. Overall, we achieved 82.39% accuracy for lung cancer prediction.
By logistic regression analysis, we find that the diameter, spiculation and lobulation features are useful for reducing false positives.","PeriodicalId":156538,"journal":{"name":"2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132202834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
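The final step above maps nodule feature values to a cancer probability. A plain-Python logistic-regression sketch of that mapping (the three-feature toy data, weights, and gradient-descent training are illustrative assumptions; the paper trains this stage with XGBoost):

```python
# Logistic model: probability = sigmoid(w . x + b), fitted here by simple
# stochastic gradient descent on a tiny separable toy dataset.
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Toy nodules: [diameter, spiculation, lobulation] scores; label 1 = malignant.
X = [[0.2, 0.1, 0.0], [0.3, 0.0, 0.1], [0.8, 0.9, 0.7], [0.9, 0.6, 0.8]]
y = [0, 1 - 1, 1, 1][:]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
p = sigmoid(sum(wj * xj for wj, xj in zip(w, [0.85, 0.8, 0.75])) + b)
print(p > 0.5)  # a high-feature nodule is scored as likely malignant
```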
{"title":"Imbalanced Twitter Sentiment Analysis using Minority Oversampling","authors":"Kushankur Ghosh, Arghasree Banerjee, Sankhadeep Chatterjee, S. Sen","doi":"10.1109/ICAwST.2019.8923218","DOIUrl":"https://doi.org/10.1109/ICAwST.2019.8923218","url":null,"abstract":"Micro-blogging platforms have become a popular medium reflecting opinions and sentiments about social events and entities. Machine-learning-based sentiment analysis has proven successful in finding people’s opinions using the abundantly available data. However, the present study points out that the data used to train such machine learning models can be highly imbalanced. Here, live tweets from Twitter have been used to systematically study the effect of the class-imbalance problem in sentiment analysis. A minority-oversampling method is employed to manage the imbalanced-class problem. Two well-known classifiers, Support Vector Machine and Multinomial Naïve Bayes, have been used to classify tweets into positive or negative sentiment classes. Results reveal that minority-oversampling-based methods can overcome the imbalanced-class problem to a great extent.","PeriodicalId":156538,"journal":{"name":"2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133116554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
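Minority oversampling can be illustrated with simple random duplication of minority-class samples until the classes balance (a generic plain-Python sketch; the paper's exact oversampling variant, e.g. SMOTE-style synthetic interpolation, is not reproduced here):

```python
# Random minority oversampling: duplicate minority-class samples until every
# class has as many samples as the largest class.
import random

def oversample(samples, labels, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for x, lab in zip(samples, labels):
        by_class.setdefault(lab, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for lab, xs in by_class.items():
        picked = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picked)
        out_y.extend([lab] * target)
    return out_x, out_y

X = [[0], [1], [2], [3], [4]]
y = ["pos", "pos", "pos", "pos", "neg"]  # imbalanced: 4 vs 1
Xb, yb = oversample(X, y)
print(yb.count("pos"), yb.count("neg"))  # -> 4 4
```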
{"title":"Equilibrium Selective Role Coordination for Autonomous Driving","authors":"N. Iwahashi","doi":"10.1109/ICAwST.2019.8923170","DOIUrl":"https://doi.org/10.1109/ICAwST.2019.8923170","url":null,"abstract":"Role coordination is crucial in multi-agent collaboration because the collaboration may fail if the roles played by agents are inconsistent. In this paper, we present a role coordination method, Equilibrium Selective Role Coordination (ESRC), for decentralized continuous mutual action control in autonomous driving. In ESRC, the roles of agents are represented by game-theoretic equilibrium points that the agents try to achieve. ESRC comprises three hierarchical functions: (1) action due to given dynamics and constraints, (2) prediction of mutual actions, and (3) selection of roles. Corresponding to this functional hierarchy, a three-layered mutual-belief hierarchy is adopted. Each agent acts to achieve equilibrium with other agents while selecting an equilibrium point as an appropriate role assignment adaptively and online to reduce risk. The results of the simulation experiments demonstrate that our proposed method could produce appropriate actions even in complicated situations where several possible collisions needed to be considered.
ESRC can be used to model a wide range of decentralized multi-agent based phenomena, such as human-robot physical interactions, dialogues, economic activities, artificial muscles, and neural information dynamics.","PeriodicalId":156538,"journal":{"name":"2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128205007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Smart Insoles and RGB Camera for Identifying Stationary Human Targets","authors":"Sevendi Eldrige Rifki Poluan, Yan-Ann Chen","doi":"10.1109/ICAwST.2019.8923255","DOIUrl":"https://doi.org/10.1109/ICAwST.2019.8923255","url":null,"abstract":"Identity recognition is an important component for creating personalized services in IoT applications. Current prevailing technologies have to be pre-trained with large datasets or need privacy-sensitive information from users such as facial features, voice features and fingerprints. In this work, we address the problem of identifying stationary human targets (those making few movements), which cannot be solved by motion-based fusion mechanisms. In the future IoT world, many wearable sensors on human beings are foreseeable. We exploit an RGB camera and smart insoles to design a system for stationary identity recognition. We utilize machine learning algorithms to explore the correlation of lower-body postures from the viewpoints of heterogeneous sensors, and then perform identity matching according to the trained models. Evaluation results show that our mechanism achieves good performance when users' postures are distinguishable.","PeriodicalId":156538,"journal":{"name":"2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134075408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating Parallelization of Cross-language Plagiarism Detection System Using The Winnowing Algorithm in Cloud Based Implementation","authors":"A. A. P. Ratna, F. A. Ekadiyanto, Ihsan Ibrahim, Diyanatul Husna, Fathimah Rahimullah","doi":"10.1109/ICAwST.2019.8923539","DOIUrl":"https://doi.org/10.1109/ICAwST.2019.8923539","url":null,"abstract":"The computational performance of the cross-language plagiarism detection system using the winnowing algorithm, developed at the Electrical Engineering Department, Universitas Indonesia, became an issue for real-world application. This research investigates the parallelization of the system on a lab-scale, multicore-based private cloud platform using OpenStack. Parallelization was applied to the portion of the program where the paragraphs of reference documents are processed. The parallelized computation achieved a speedup of 1.07 to 3.52 times over the original serial computation.","PeriodicalId":156538,"journal":{"name":"2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121841794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
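The winnowing fingerprinting step at the core of such a detection system can be sketched as follows (a generic implementation of the standard algorithm with illustrative parameters k and w; the authors' cross-language pipeline and its parallelization are not reproduced):

```python
# Winnowing: hash every k-gram of the text, slide a window of w consecutive
# hashes, and keep the minimum hash of each window (rightmost occurrence on
# ties). Documents sharing a long enough passage share fingerprints.
import zlib

def winnow(text, k=5, w=4):
    grams = [text[i:i + k] for i in range(len(text) - k + 1)]
    hashes = [zlib.crc32(g.encode()) for g in grams]  # deterministic hash
    fingerprints = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        low = min(window)
        j = max(t for t in range(w) if window[t] == low)  # rightmost minimum
        fingerprints.add((i + j, window[j]))
    return fingerprints

doc_a = "the quick brown fox jumps over the lazy dog"
doc_b = "a quick brown fox jumped over one lazy dog"
shared = {h for _, h in winnow(doc_a)} & {h for _, h in winnow(doc_b)}
print(len(shared) > 0)  # shared fingerprints flag the common passage
```

Any passage of at least w + k - 1 characters common to two documents is guaranteed to contribute at least one shared fingerprint, which is what makes the selection safe to sparsify.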