2018 5th International Conference on Computational Science/Intelligence and Applied Informatics (CSII): Latest Publications

Estimation of Facial Expression Intensity for Lifelog Videos Retrieval
Yamato Shinohara, Hiroki Nomiya, T. Hochin
{"title":"Estimation of Facial Expression Intensity for Lifelog Videos Retrieval","authors":"Yamato Shinohara, Hiroki Nomiya, T. Hochin","doi":"10.1109/CSII.2018.00030","DOIUrl":"https://doi.org/10.1109/CSII.2018.00030","url":null,"abstract":"Facial expression intensity has been proposed to estimate the intensity of facial expression for the purpose of retrieving impressive scenes from lifelog videos. However, estimation of facial expression intensity involves manual work, and can only be relatively evaluated. We propose a new estimation method of facial expression intensity for reducing manual work, and for absolute evaluation. We estimate the proposed expression intensity for the lifelog videos used in previous research and MMI datasets, and compare it with the previous research, and evaluate the proposed method. As a result, it is shown it is possible to reduce the manual work while maintaining the estimation accuracy.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115750867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
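The abstract above does not spell out how the intensity value is computed. As a hedged illustration only, not the authors' method, a per-frame expression intensity can be sketched as the mean displacement of facial landmarks from a neutral-face baseline, normalized by inter-ocular distance so the score is comparable across recordings; the 68-point landmark indexing and the normalization choice are assumptions.

```python
import numpy as np

def expression_intensity(landmarks, neutral_landmarks):
    """Illustrative per-frame expression intensity: mean landmark displacement
    from a neutral-face baseline, scale-normalized. Not the paper's method."""
    # Indices 36 and 45 are the outer eye corners in the common 68-point
    # landmark scheme (an assumption, not taken from the paper).
    iod = np.linalg.norm(neutral_landmarks[36] - neutral_landmarks[45])
    disp = np.linalg.norm(landmarks - neutral_landmarks, axis=1)  # per-landmark movement
    return float(disp.mean() / iod)
```

A score of this kind is "absolute" in the sense that it can be read directly per frame, without ranking one expression against another.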
Publisher's Information
{"title":"Publisher's Information","authors":"","doi":"10.1109/csii.2018.00038","DOIUrl":"https://doi.org/10.1109/csii.2018.00038","url":null,"abstract":"","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115357109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Measurement of Line-of-Sight Detection Using Pixel Quantity Variation and Application for Autism
T. Niwa, Ippei Torii, N. Ishii
{"title":"Measurement of Line-of-Sight Detection Using Pixel Quantity Variation and Application for Autism","authors":"T. Niwa, Ippei Torii, N. Ishii","doi":"10.1109/CSII.2018.00020","DOIUrl":"https://doi.org/10.1109/CSII.2018.00020","url":null,"abstract":"In this study, we develop a tool to support physically disabled people's communication and an assessment tool to measure the intelligence index of autistic children, which uses eye movements with image processing. For the measurement of eye movements, we newly developed a pixel center of gravity method that detects in which the direction of the eye movement is shown in the point where the weights of the black pixels moved. This method is different from using the conventional black eye detection or ellipse detection. The method enables accurate detection even when a physically handicapped person uses. On the other hand, the assessment tool that measures the intelligence index of autistic children prepares dedicated goggles that combines light emitting diodes and near-infrared cameras. It is a study to measure the response speed of left and right eye movements by applying the results so far and to explore the relationship with autism.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114812801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
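A minimal sketch of the pixel center-of-gravity idea from the entry above, assuming a cropped grayscale eye image; the threshold value and the left/center/right decision rule are illustrative assumptions, and the actual tool works on calibrated video rather than single frames.

```python
import numpy as np

def gaze_direction(eye_gray, dark_thresh=60):
    """Estimate gaze direction from the centroid ("center of gravity") of
    dark pixels in a cropped grayscale eye image (2-D uint8 array).
    Directions are in image coordinates."""
    _, xs = np.nonzero(eye_gray < dark_thresh)    # columns of dark (iris/pupil) pixels
    if xs.size == 0:
        return "unknown"                          # eye closed or threshold too strict
    cx = xs.mean() / eye_gray.shape[1]            # horizontal centroid in [0, 1]
    if cx < 0.4:                                  # cut-offs are illustrative assumptions
        return "left"
    if cx > 0.6:
        return "right"
    return "center"
```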
Generation of Convex Cones Based on Nearest Neighbor Relations
N. Ishii, Ippei Torii, K. Iwata, Kazuya Ogagiri, Toyoshiro Nakashima
{"title":"Generation of Convex Cones Based on Nearest Neighbor Relations","authors":"N. Ishii, Ippei Torii, K. Iwata, Kazuya Ogagiri, Toyoshiro Nakashima","doi":"10.1109/CSII.2018.00022","DOIUrl":"https://doi.org/10.1109/CSII.2018.00022","url":null,"abstract":"Dimension reduction of data is an important issue in the data processing and it is needed for the analysis of higher dimensional data in the application domain. Rough set is fundamental and useful to reduce higher dimensional data to lower one for the classification. We develop generation of reducts based on nearest neighbor relation for the classification. In this paper, the nearest neighbor relation is shown to play a fundamental role for the classification from the geometric easoning of reducts by convex cones. Then, it is shown that reducts are generated based on the convex cones construction. Finally, using nearest neighbor relation, algebraic operations are derived on the degenerate convex cones.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131391951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
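In the authors' related work, a nearest neighbor relation is, roughly, a pair of instances from different classes that are nearest to each other; the sketch below collects such pairs under that assumed reading and does not attempt the reduct or convex cone construction itself.

```python
import numpy as np

def nearest_neighbor_relation(X, y):
    """For each instance, pair it with its nearest instance from a different
    class (assumed reading of "nearest neighbor relation"; the convex cone
    generation built on such pairs is not reproduced here)."""
    pairs = []
    for i, xi in enumerate(X):
        other = np.flatnonzero(y != y[i])              # indices of differently labeled instances
        dists = np.linalg.norm(X[other] - xi, axis=1)  # distances to those instances
        pairs.append((i, int(other[np.argmin(dists)])))
    return pairs
```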
Effective Fusion of Disaster-Relief Agent in RoboCupRescue Simulation
Taishun Kusaka, Yukinobu Miyamoto, Akira Hasegawa, Shunki Takami, K. Iwata, N. Ito
{"title":"Effective Fusion of Disaster-Relief Agent in RoboCupRescue Simulation","authors":"Taishun Kusaka, Yukinobu Miyamoto, Akira Hasegawa, Shunki Takami, K. Iwata, N. Ito","doi":"10.1109/CSII.2018.00021","DOIUrl":"https://doi.org/10.1109/CSII.2018.00021","url":null,"abstract":"The RoboCupRescue Simulation project is one of the responses to recent large-scale natural disasters. In particular, the project provides a platform for studying disaster-relief agents and simulations. We designed and implemented an agent based on the results of a combinational experiment of various modules taken from teams that participated at RoboCup 2017. We developed a new fusional agent with better modules in the Agent Development Framework. This paper presents the results of the combination experiment in detail. We confirm that our fusional agent based on the experimental results obtained a better score than the champion agent at RoboCup 2017.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127712464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detection of Dangerous Behavior by Estimation of Head Pose and Moving Direction
K. Miyoshi, Hiroki Nomiya, T. Hochin
{"title":"Detection of Dangerous Behavior by Estimation of Head Pose and Moving Direction","authors":"K. Miyoshi, Hiroki Nomiya, T. Hochin","doi":"10.1109/CSII.2018.00028","DOIUrl":"https://doi.org/10.1109/CSII.2018.00028","url":null,"abstract":"We propose a detection system of hazardous behavior using depth information, focusing attention on head position and movement direction. The purpose of this system is to estimate the line-of-sight direction from the head pose and to detect the dangerous behavior that the movement direction is greatly different from the head direction. In the experiment, the risk of behavior was classified into three levels from the direction of the head and the direction of movement, and the accuracy of recognition was confirmed. Experimental results showed the validity of the accuracy of detecting dangerous behavior in this system.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133155890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
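The core decision in the entry above, comparing where the head is facing with where the person is moving, can be illustrated as below; the two angle thresholds are assumptions, not values from the paper.

```python
def risk_level(head_yaw_deg, move_dir_deg):
    """Three-level risk from the angle between head direction and movement
    direction (threshold values are illustrative assumptions)."""
    # Wrapped absolute angle difference in [0, 180].
    diff = abs((head_yaw_deg - move_dir_deg + 180.0) % 360.0 - 180.0)
    if diff < 30.0:
        return "low"       # moving roughly where one is looking
    if diff < 90.0:
        return "medium"
    return "high"          # moving in a direction far from the line of sight
```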
Message from the CSII 2018 Program Chair
{"title":"Message from the CSII 2018 Program Chair","authors":"","doi":"10.1109/csii.2018.00006","DOIUrl":"https://doi.org/10.1109/csii.2018.00006","url":null,"abstract":"","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115736110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Synchronizing Method of Music and Movie Clips Considering Temporal Harmony
Toshihiro Ozaki, T. Hochin, Hiroki Nomiya
{"title":"Synchronizing Method of Music and Movie Clips Considering Temporal Harmony","authors":"Toshihiro Ozaki, T. Hochin, Hiroki Nomiya","doi":"10.1109/CSII.2018.00027","DOIUrl":"https://doi.org/10.1109/CSII.2018.00027","url":null,"abstract":"A synchronization method is proposed to match music and movie clips. For this end, harmonic intervals of a movie are proposed. These correspond to BPM of a music material. Harmonic intervals are obtained based on the changes in the motion of images. In the video analysis, we propose a method for recognizing objects and a method for tracking moving objects even when the background moves. The proposed method is evaluated through a subjective evaluation experiment. The experimental result shows the proposed method is effective.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133612389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
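A minimal sketch of how a "harmonic interval" could be turned into a tempo-like figure for matching against a music clip's BPM, assuming a per-frame motion-magnitude signal (for example, mean absolute frame difference) and a simple peak rule; the paper's actual analysis also involves object recognition and tracking, which is not reproduced here.

```python
import numpy as np

def harmonic_interval_bpm(motion, fps=30.0):
    """Typical spacing between motion peaks, expressed as an equivalent BPM.

    motion: 1-D per-frame motion magnitudes; the peak rule (local maximum
    above the mean) is an illustrative assumption."""
    m = np.asarray(motion, dtype=float)
    peaks = [i for i in range(1, len(m) - 1)
             if m[i] > m[i - 1] and m[i] > m[i + 1] and m[i] > m.mean()]
    if len(peaks) < 2:
        return None                                      # not enough motion events to estimate
    interval_frames = float(np.median(np.diff(peaks)))   # typical peak spacing in frames
    return 60.0 * fps / interval_frames                  # beats-per-minute equivalent
```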
Personalized Impression-Based Music Information Retrieval Method
Yuta Uenoyama, A. Ogino
{"title":"Personalized Impression-Based Music Information Retrieval Method","authors":"Yuta Uenoyama, A. Ogino","doi":"10.1109/CSII.2018.00032","DOIUrl":"https://doi.org/10.1109/CSII.2018.00032","url":null,"abstract":"Along with the spread of music distribution service, there are a growing interest in music information retrieval (MIR) systems. This research proposed a music search method appropriate to more suitable impression for impression by personalizing sound model used in previous research. In this research, we use data collected from 15 subjects and select songs suitable for individuals. This method presumes the impression of music by using a personal sound model and lyric model according to the rules of previous research. The impression to be estimated is based on three impressions which the subject clearly distinguished from the prior questionnaire. 14 subjects who are 20's have evaluated the impressions of three music which the proposed system predicted. The results show that more than 75% of subjects, except one impression, are consistent with the suggested impression based on the method.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132742248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Improvement of Emotional Video Scene Retrieval System for Lifelog Videos Based on Facial Expression Intensity
Kazuya Sugawara, Hiroki Nomiya, T. Hochin
{"title":"Improvement of Emotional Video Scene Retrieval System for Lifelog Videos Based on Facial Expression Intensity","authors":"Kazuya Sugawara, Hiroki Nomiya, T. Hochin","doi":"10.1109/CSII.2018.00026","DOIUrl":"https://doi.org/10.1109/CSII.2018.00026","url":null,"abstract":"Lifelog has been proposed, in which various data of daily life are acquired and accumulated, and utilized later. However, it is a problem that we can not immediately retrieve the necessary data from a large amount of accumulated data, so the lifelog data are not effectively used. This paper deals with lifelog videos. In order to make it easy to search the scene that the user wants to watch from the lifelog videos, Morikuni tried to construct a system that could search the scene considered to be important with a change in facial expression of the person and to present it in an easy-to-understand manner. After that, \"facial expression intensity\" which is a numerical representation of facial expressions was devised, and Maeda designed and constructed a video scene retrieval system for lifelog videos based on the facial expression intensity. In this paper, we aim to improve the user interface of this retrieval system and establish a method to estimate the threshold values of the facial expression intensity level. We propose and implement a method to calculate the threshold values using the k-means clustering. We compare the performance of the threshold values with the threshold values of the previous method, and show that the performance was improved.","PeriodicalId":202365,"journal":{"name":"2018 5th International Conference on Computational Science/ Intelligence and Applied Informatics (CSII)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134524533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
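A small sketch of the k-means thresholding idea from the entry above: cluster the observed intensity values into k levels and place each threshold between adjacent cluster centers. The midpoint rule and the k=3 default are assumptions; the paper may derive the boundaries differently.

```python
import numpy as np

def kmeans_thresholds(intensities, k=3, iters=100, seed=0):
    """Threshold values separating k facial-expression-intensity levels,
    obtained by 1-D k-means (illustrative sketch, not the paper's exact rule)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(intensities, dtype=float)
    centers = rng.choice(x, size=k, replace=False)      # initialize centers from the data
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new = np.array([x[labels == j].mean() if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):                   # converged
            break
        centers = new
    centers = np.sort(centers)
    return (centers[:-1] + centers[1:]) / 2.0           # one threshold between adjacent levels
```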