{"title":"Estimation of Facial Expression Intensity for Lifelog Videos Retrieval","authors":"Yamato Shinohara, Hiroki Nomiya, T. Hochin","doi":"10.1109/CSII.2018.00030","DOIUrl":"https://doi.org/10.1109/CSII.2018.00030","url":null,"abstract":"Facial expression intensity has been proposed to estimate the intensity of facial expression for the purpose of retrieving impressive scenes from lifelog videos. However, estimation of facial expression intensity involves manual work, and can only be relatively evaluated. We propose a new estimation method of facial expression intensity for reducing manual work, and for absolute evaluation. We estimate the proposed expression intensity for the lifelog videos used in previous research and MMI datasets, and compare it with the previous research, and evaluate the proposed method. As a result, it is shown it is possible to reduce the manual work while maintaining the estimation accuracy.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115750867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Publisher's Information","authors":"","doi":"10.1109/csii.2018.00038","DOIUrl":"https://doi.org/10.1109/csii.2018.00038","url":null,"abstract":"","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115357109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measurement of Line-of-Sight Detection Using Pixel Quantity Variation and Application for Autism","authors":"T. Niwa, Ippei Torii, N. Ishii","doi":"10.1109/CSII.2018.00020","DOIUrl":"https://doi.org/10.1109/CSII.2018.00020","url":null,"abstract":"In this study, we develop a tool to support physically disabled people's communication and an assessment tool to measure the intelligence index of autistic children, which uses eye movements with image processing. For the measurement of eye movements, we newly developed a pixel center of gravity method that detects in which the direction of the eye movement is shown in the point where the weights of the black pixels moved. This method is different from using the conventional black eye detection or ellipse detection. The method enables accurate detection even when a physically handicapped person uses. On the other hand, the assessment tool that measures the intelligence index of autistic children prepares dedicated goggles that combines light emitting diodes and near-infrared cameras. It is a study to measure the response speed of left and right eye movements by applying the results so far and to explore the relationship with autism.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114812801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generation of Convex Cones Based on Nearest Neighbor Relations","authors":"N. Ishii, Ippei Torii, K. Iwata, Kazuya Ogagiri, Toyoshiro Nakashima","doi":"10.1109/CSII.2018.00022","DOIUrl":"https://doi.org/10.1109/CSII.2018.00022","url":null,"abstract":"Dimension reduction of data is an important issue in the data processing and it is needed for the analysis of higher dimensional data in the application domain. Rough set is fundamental and useful to reduce higher dimensional data to lower one for the classification. We develop generation of reducts based on nearest neighbor relation for the classification. In this paper, the nearest neighbor relation is shown to play a fundamental role for the classification from the geometric easoning of reducts by convex cones. Then, it is shown that reducts are generated based on the convex cones construction. Finally, using nearest neighbor relation, algebraic operations are derived on the degenerate convex cones.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131391951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effective Fusion of Disaster-Relief Agent in RoboCupRescue Simulation","authors":"Taishun Kusaka, Yukinobu Miyamoto, Akira Hasegawa, Shunki Takami, K. Iwata, N. Ito","doi":"10.1109/CSII.2018.00021","DOIUrl":"https://doi.org/10.1109/CSII.2018.00021","url":null,"abstract":"The RoboCupRescue Simulation project is one of the responses to recent large-scale natural disasters. In particular, the project provides a platform for studying disaster-relief agents and simulations. We designed and implemented an agent based on the results of a combinational experiment of various modules taken from teams that participated at RoboCup 2017. We developed a new fusional agent with better modules in the Agent Development Framework. This paper presents the results of the combination experiment in detail. We confirm that our fusional agent based on the experimental results obtained a better score than the champion agent at RoboCup 2017.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127712464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection of Dangerous Behavior by Estimation of Head Pose and Moving Direction","authors":"K. Miyoshi, Hiroki Nomiya, T. Hochin","doi":"10.1109/CSII.2018.00028","DOIUrl":"https://doi.org/10.1109/CSII.2018.00028","url":null,"abstract":"We propose a detection system of hazardous behavior using depth information, focusing attention on head position and movement direction. The purpose of this system is to estimate the line-of-sight direction from the head pose and to detect the dangerous behavior that the movement direction is greatly different from the head direction. In the experiment, the risk of behavior was classified into three levels from the direction of the head and the direction of movement, and the accuracy of recognition was confirmed. Experimental results showed the validity of the accuracy of detecting dangerous behavior in this system.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133155890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Message from the CSII 2018 Program Chair","authors":"","doi":"10.1109/csii.2018.00006","DOIUrl":"https://doi.org/10.1109/csii.2018.00006","url":null,"abstract":"","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115736110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synchronizing Method of Music and Movie Clips Considering Temporal Harmony","authors":"Toshihiro Ozaki, T. Hochin, Hiroki Nomiya","doi":"10.1109/CSII.2018.00027","DOIUrl":"https://doi.org/10.1109/CSII.2018.00027","url":null,"abstract":"A synchronization method is proposed to match music and movie clips. For this end, harmonic intervals of a movie are proposed. These correspond to BPM of a music material. Harmonic intervals are obtained based on the changes in the motion of images. In the video analysis, we propose a method for recognizing objects and a method for tracking moving objects even when the background moves. The proposed method is evaluated through a subjective evaluation experiment. The experimental result shows the proposed method is effective.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133612389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Personalized Impression-Based Music Information Retrieval Method","authors":"Yuta Uenoyama, A. Ogino","doi":"10.1109/CSII.2018.00032","DOIUrl":"https://doi.org/10.1109/CSII.2018.00032","url":null,"abstract":"Along with the spread of music distribution service, there are a growing interest in music information retrieval (MIR) systems. This research proposed a music search method appropriate to more suitable impression for impression by personalizing sound model used in previous research. In this research, we use data collected from 15 subjects and select songs suitable for individuals. This method presumes the impression of music by using a personal sound model and lyric model according to the rules of previous research. The impression to be estimated is based on three impressions which the subject clearly distinguished from the prior questionnaire. 14 subjects who are 20's have evaluated the impressions of three music which the proposed system predicted. The results show that more than 75% of subjects, except one impression, are consistent with the suggested impression based on the method.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132742248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improvement of Emotional Video Scene Retrieval System for Lifelog Videos Based on Facial Expression Intensity","authors":"Kazuya Sugawara, Hiroki Nomiya, T. Hochin","doi":"10.1109/CSII.2018.00026","DOIUrl":"https://doi.org/10.1109/CSII.2018.00026","url":null,"abstract":"Lifelog has been proposed, in which various data of daily life are acquired and accumulated, and utilized later. However, it is a problem that we can not immediately retrieve the necessary data from a large amount of accumulated data, so the lifelog data are not effectively used. This paper deals with lifelog videos. In order to make it easy to search the scene that the user wants to watch from the lifelog videos, Morikuni tried to construct a system that could search the scene considered to be important with a change in facial expression of the person and to present it in an easy-to-understand manner. After that, \"facial expression intensity\" which is a numerical representation of facial expressions was devised, and Maeda designed and constructed a video scene retrieval system for lifelog videos based on the facial expression intensity. In this paper, we aim to improve the user interface of this retrieval system and establish a method to estimate the threshold values of the facial expression intensity level. We propose and implement a method to calculate the threshold values using the k-means clustering. 
We compare the performance of the threshold values with the threshold values of the previous method, and show that the performance was improved.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134524533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}