Latest Publications: 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)

Free-Head Appearance-Based Eye Gaze Estimation on Mobile Devices
Jigang Liu, Bu-Sung Lee, D. Rajan
{"title":"Free-Head Appearance-Based Eye Gaze Estimation on Mobile Devices","authors":"Jigang Liu, Bu-Sung Lee, D. Rajan","doi":"10.1109/ICAIIC.2019.8669057","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669057","url":null,"abstract":"Eye gaze tracking plays an important role in human-computer interaction applications. In recent years, many research have been performed to explore gaze estimation methods to handle free-head movement, most of which focused on gaze direction estimation. Gaze point estimation on the screen is another important application. In this paper, we proposed a two-step training network, called GazeEstimator, to improve the estimation accuracy of gaze location on mobile devices. The first step is to train an eye landmarks localization network on 300W-LP dataset [1], and the second step is to train a gaze estimation network on GazeCapture dataset [2]. Some processing operations are performed between the two networks for data cleaning. The first network is able to localize eye precisely on the image, while the gaze estimation network use only eye images and eye grids as inputs, and it is robust to facial expressions and occlusion.Compared with state-of-the-art gaze estimation method, iTracker, our proposed deep network achieves higher accuracy and is able to estimate gaze location even in the condition that the full face cannot be detected.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115693873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Towards a Cybersecurity Investment Assessment Method Using Primitive Cognitive Network Process
K. Yuen
{"title":"Towards a Cybersecurity Investment Assessment method using Primitive Cognitive Network Process","authors":"K. Yuen","doi":"10.1109/ICAIIC.2019.8668842","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8668842","url":null,"abstract":"Cybercrime is everywhere in cyberspace. Cybersecurity is a set of technologies, processes and policies to protect the enterprise information asset to prevent business loss assets from the cyber attackers. Planning a sustainable and reliable cybersecurity system is an essential initial step since the large amount of investment on the system has long lasting impact. This paper introduces a multiple criteria decision making method using the Primitive Cognitive Network Process (PCNP) for the cyber-security investment decision making.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128945413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
A Study on Facial Expression and First Impression Through Machine Learning
Won-Chan Lee, DaeKyun Yoon
{"title":"A Study on Facial Expression and First Impression Through Machine Learning","authors":"Won-Chan Lee, DaeKyun Yoon","doi":"10.1109/ICAIIC.2019.8669065","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669065","url":null,"abstract":"The first impression is very important in human relationships. It forms in a short time and lasts long. The influence of the first impression is so great that many people want to have a good first impression. Unfortunately, the specific way to make a good first impression was not revealed. Instead, there are many abstract ways like make a smile. However, it is not clear what kind of smile is fit. Because there are so many kinds of smile. So, it’s not a specific way for anyone. In this research, we are going to analyze facial expressions and first impression with numerical data by utilizing machine learning. After finding the feature that determines the good first impression and proposes a specific way to make a good first impression.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127594183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Automatic Detection of Hate Speech on Facebook Using Sentiment and Emotion Analysis
Axel Rodriguez, C. Argueta, Yi-Ling Chen
{"title":"Automatic Detection of Hate Speech on Facebook Using Sentiment and Emotion Analysis","authors":"Axel Rodriguez, C. Argueta, Yi-Ling Chen","doi":"10.1109/ICAIIC.2019.8669073","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669073","url":null,"abstract":"Hate speech has been an issue since the start of the Internet, but the advent of social media has brought it to unimaginable heights. To address such an important issue, in this paper, we explore a novel framework to effectively detect highly discussed topics that generate hate speech on Facebook. With the use of graph, sentiment, and emotion analysis techniques, we cluster and analyze posts on prominent Facebook pages. Consequently, the proposed framework is able to identify the pages that promote hate speech in the comment sections regarding sensitive topics automatically.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129993001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 54
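The paper combines graph, sentiment, and emotion analysis; its implementation is not reproduced here. As a minimal, hypothetical sketch of one ingredient, the following flags pages whose comments skew strongly negative using lexicon-based polarity scoring. The word lists, threshold, and function names are illustrative assumptions, not the authors' method:

```python
# Hypothetical sentiment lexicons; a real system would use far larger,
# curated lists plus emotion categories as in the paper's framework.
POSITIVE = {"love", "great", "kind", "support"}
NEGATIVE = {"hate", "attack", "disgusting", "vile"}

def polarity(text):
    """Score a comment in [-1, 1]: (positive hits - negative hits) / word count."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

def flag_pages(pages, threshold=-0.1):
    """Return names of pages whose mean comment polarity falls below threshold.

    `pages` maps a page name to a list of comment strings.
    """
    flagged = []
    for name, comments in pages.items():
        avg = sum(polarity(c) for c in comments) / len(comments)
        if avg < threshold:
            flagged.append(name)
    return flagged
```

This only captures average negativity; the actual framework additionally clusters posts by topic and uses emotion signals before deciding which pages promote hate speech.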
Fuzzy-Neural-Network Based Audio-Visual Fusion for Speech Recognition
Gin-Der Wu, Hao-Shu Tsai
{"title":"Fuzzy-Neural-Network Based Audio-Visual Fusion for Speech Recognition","authors":"Gin-Der Wu, Hao-Shu Tsai","doi":"10.1109/ICAIIC.2019.8669019","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669019","url":null,"abstract":"Speech recognition is an important classification problem in signal processing. Its performance is easily affected by noisy environment due to movements of desks, door slams, etc. To solve the problem, a fuzzy-neural-network based audio-visual fusion is proposed in this study. Since human speech perception is bimodal, the input features include both audio and image information. In the fuzzy-neural-network, type-2 fuzzy sets are used in the antecedent parts to deal with the noisy data. Furthermore, a linear-discriminant-analysis (LDA) is applied in to the consequent parts to increase the “discriminability”. Compared with pure audio-based speech recognition, the fuzzy-neural-network based audio-visual fusion method is more robust in noisy environment.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116756436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A Combined PDR and Wi-Fi Trilateration Algorithm for Indoor Localization
Alwin Poulose, Odongo Steven Eyobu, D. Han
{"title":"A Combined PDR and Wi-Fi Trilateration Algorithm for Indoor Localization","authors":"Alwin Poulose, Odongo Steven Eyobu, D. Han","doi":"10.1109/ICAIIC.2019.8669059","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669059","url":null,"abstract":"Indoor localization using Wi-Fi or pedestrian dead reckoning (PDR) has several limitations in terms of Wi-Fi signal fluctuations and PDR drift errors. To overcome these limitations, we propose a sensor fusion framework for Wi-Fi and PDR systems. The proposed sensor fusion will overcome the PDR drift errors by analysing the Wi-Fi signal strength and the PDR results will compensate the Wi-Fi signal fluctuations. Based on the experiments conducted, results show that the proposed fusion indoor positioning algorithm shows high position accuracy over Wi-Fi localization and PDR systems when used independently. Our proposed combined position estimation algorithm achieves an improved average localization accuracy of 1.6 m when compared to the Wi-Fi and PDR systems when used independently.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116390989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
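Wi-Fi trilateration, one half of the proposed fusion, can be sketched by linearizing the three range equations into a 2x2 linear system. This is a minimal illustration assuming ideal, noise-free distances in 2-D, not the authors' implementation (which must also handle signal fluctuation):

```python
def trilaterate(aps, dists):
    """Estimate a 2-D position from three AP positions and measured distances.

    Each AP i gives a circle (x - xi)^2 + (y - yi)^2 = di^2. Subtracting the
    first equation from the other two cancels the quadratic terms, leaving a
    2x2 linear system A [x, y]^T = b, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = aps
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero iff the APs are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With noisy RSSI-derived distances the circles do not intersect in a single point, which is exactly why the paper fuses the Wi-Fi estimate with PDR instead of using it alone.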
Harmful Wildlife Detection System Utilizing Deep Learning for Radio Wave Sensing on Multiple Frequency Bands
Ryota Ogami, Hiroshi Yamamoto, Takuya Kato, E. Utsunomiya
{"title":"Harmful Wildlife Detection System Utilizing Deep Learning for Radio Wave Sensing on Multiple Frequency Bands","authors":"Ryota Ogami, Hiroshi Yamamoto, Takuya Kato, E. Utsunomiya","doi":"10.1109/ICAIIC.2019.8668967","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8668967","url":null,"abstract":"In recent years, the number of accidents of damage to crops and injures caused by harmful wildlife in various places is increasing in Japan, hence research and development of techniques for observing ecology of the wildlife are attracting attention [1]. The existing observation system is mainly utilizing a camera device and an image processing [2]. However, the camera based system should treat a large capacity of data, hence it is not suitable in a place where a broadband communication line cannot be prepared. Therefore, in this research, we propose a new harmful wildlife detection system that can detect an approach of wildlife by utilizing a radio wave sensing. The proposed system obtains time series data of received signal strength of radio waves transmitted between a transmitter / receiver, and estimate the number/type of the wildlife by analyzing the data by utilizing a deep learning technology. 
Through the experimental evaluation, it has been clarified that the number / type of the wildlife can be identified to accuracy of higher than 90%.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116391722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Effective Feature-Based Automatic Modulation Classification Method Using DNN Algorithm
Sang Hoon Lee, Kwang-Yul Kim, Jae Hyun Kim, Y. Shin
{"title":"Effective Feature-Based Automatic Modulation Classification Method Using DNN Algorithm","authors":"Sang Hoon Lee, Kwang-Yul Kim, Jae Hyun Kim, Y. Shin","doi":"10.1109/ICAIIC.2019.8669036","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669036","url":null,"abstract":"In this paper, we propose an effective feature-based automatic modulation classification (AMC) method using a deep neural network (DNN). In order to classify the modulation type, we consider effective features according to the modulation signals. The proposed method removes the meaningless features that have little influence on the classification and only uses the effective features that have high influence by analyzing the correlation coefficients. From the simulation results, we observe that the proposed method can make the AMC system low complexity.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122754314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
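The correlation-based pruning the abstract describes can be sketched as keeping only features whose absolute Pearson correlation with the class label clears a threshold. This is a rough illustration under assumed data and an assumed threshold; the paper's actual features are modulation-signal statistics and its criterion may differ:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_features(X, y, threshold=0.3):
    """Return indices of feature columns whose |correlation| with the
    label exceeds `threshold`; low-influence columns are dropped.

    `X` is a list of samples (rows of feature values), `y` the labels.
    """
    keep = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        if abs(pearson(col, y)) > threshold:
            keep.append(j)
    return keep
```

Feeding only the surviving columns to the DNN is what shrinks the input layer and, with it, the classifier's complexity.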
A Deep Learning Facial Expression Recognition based Scoring System for Restaurants
W. Chang, Miriam Schmelzer, Florian Kopp, Chia-Hao Hsu, Jian-Ping Su, Liang-Bi Chen, Ming-Che Chen
{"title":"A Deep Learning Facial Expression Recognition based Scoring System for Restaurants","authors":"W. Chang, Miriam Schmelzer, Florian Kopp, Chia-Hao Hsu, Jian-Ping Su, Liang-Bi Chen, Ming-Che Chen","doi":"10.1109/ICAIIC.2019.8668998","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8668998","url":null,"abstract":"Recently, the popularity of automated and unmanned restaurants has increased. Due to the absence of staff, there is no direct perception of the customers' impressions in order to find out what their experiences with the restaurant concept are like. For this purpose, this paper presents a rating system based on facial expression recognition with pre-trained convolutional neural network (CNN) models. It is composed of an Android mobile application, a web server, and a pre-trained AI-server. Both the food and the environment are supposed to be rated. Currently, three expressions (satisfied, neutral and disappointed) are provided by the scoring system.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"332 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128165628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Smart navigation with AI Engine for Li-Fi based Medical Indoor Environment
Ho Kyung Yu, J. Kim
{"title":"Smart navigation with AI Engine for Li-Fi based Medical Indoor Environment","authors":"Ho Kyung Yu, J. Kim","doi":"10.1109/ICAIIC.2019.8669041","DOIUrl":"https://doi.org/10.1109/ICAIIC.2019.8669041","url":null,"abstract":"Fourth industrial revolution is developing worldwide. Internet of Things(IoT), Big Data, and Artificial Intelligence(AI) technology, which are the core of the 4th industry, provide the navigation service in the indoor space. In this paper, we use a Light Fidelity(Li-Fi) Access Point(AP) installed in a dense network to collect accurate location information and create a traffic map. The AI engine using the hospital integrated information database suggests a smart indoor navigation system that uses Joint Named Entity Recognition and Disambiguation(J-NERD) to conveniently enter the symptoms and guide the user to the desired location by matching diseases.","PeriodicalId":273383,"journal":{"name":"2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132100130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2