{"title":"[Title page iii]","authors":"","doi":"10.1109/ictai.2019.00002","DOIUrl":"https://doi.org/10.1109/ictai.2019.00002","url":null,"abstract":"","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130290810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information-Theoretic Ensemble Learning for DDoS Detection with Adaptive Boosting","authors":"M. Bhuyan, M. Ma, Y. Kadobayashi, E. Elmroth","doi":"10.1109/ICTAI.2019.00140","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00140","url":null,"abstract":"DDoS (Distributed Denial of Service) attacks pose a serious threat to the Internet as they use large numbers of zombie hosts to forward massive numbers of packets to the target host. Here, we present an adaptive boosting-based ensemble learning model for detecting low-and high-rate DDoS attacks by combining information divergence measures. Our model is trained against a baseline model that does not use labeled traffic data and draws on multiple baseline models developed in parallel to improve its accuracy. Incoming traffic is sampled time-periodically to characterize the normal behavior of input traffic. The model's performance is evaluated using the UmU testbed, MIT legitimate, and CAIDA DDoS datasets. We demonstrate that our model offers superior accuracy to established alternatives, reducing the incidence of false alarms and achieving an F1-score that is around 3% better than those of current state-of-the-art DDoS detection models.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129303338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SARC: Split-and-Recombine Networks for Knowledge-Based Recommendation","authors":"Weifeng Zhang, Yi Cao, Congfu Xu","doi":"10.1109/ICTAI.2019.00096","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00096","url":null,"abstract":"Utilizing knowledge graphs (KGs) to improve the performance of recommender systems has attracted increasing attention recently. Existing path-based methods rely heavily on manually designed meta-paths, while embedding-based methods focus on incorporating the knowledge graph embeddings (KGE) into recommender systems, but rarely model user-entity interactions, which can be used to enhance the performance of recommendation. To overcome the shortcomings of previous works, we propose SARC, an embedding-based model that utilizes a novel Split-And-ReCombine strategy for knowledge-based recommendation. Firstly, SARC splits the user-item-entity interactions into three 2-way interactions, i.e., the user-item, user-entity and item-entity interactions. Each of the 2-way interactions can be cast as a graph, and we use Graph Neural Networks (GNN) and KGE to model them. Secondly, SARC recombines the representation of users and items learned from the first step to generates recommendation. In order to distinguish the informative part and meaningless part of the representations, we utilize a gated fusion mechanism. The advantage of our SARC model is that through splitting, we can easily handle and make full use of the 2-way interactions, especially the user-entity interactions, and through recombining, we can extract the most useful information for recommendation. Extensive experiments on three real-world datasets demonstrate that SARC outperforms several state-of-the-art baselines.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125529539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feature-Selected and -Preserved Sampling for High-Dimensional Stream Data Summary","authors":"Ling Lin, Qian Yu, Wen Ji, Yang Gao","doi":"10.1109/ICTAI.2019.00198","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00198","url":null,"abstract":"Along with the prosperity of the Mobile Internet, a large amount of stream data has emerged. Stream data cannot be completely stored in memory because of its massive volume and continuous arrival. Moreover, it should be accessed only once and handled in time due to the high cost of multiple accesses. Therefore, the intrinsic nature of stream data calls facilitates the development of a summary in the main memory to enable fast incremental learning and to allow working in limited time and memory. Sampling techniques are one of the commonly used methods for constructing data stream summaries. Given that the traditional random sampling algorithm deviates from the real data distribution and does not consider the true distribution of the stream data attributes, we propose a novel sampling algorithm based on feature-selected and -preserved algorithm. We first use matrix approximation to select important features in stream data. Then, the feature-preserved sampling algorithm is used to generate high-quality representative samples over a sliding window. The sampling quality of our algorithm could guarantee a high degree of consistency between the distribution of attribute values in the population (the entire data) and that in the sample. Experiments on real datasets show that the proposed algorithm can select a representative sample with high efficiency.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126830373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Swarm Filter - A Simple Deep Learning Component Inspired by Swarm Concept","authors":"Nguyen Ha Thanh, Le-Minh Nguyen","doi":"10.1109/ICTAI.2019.00221","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00221","url":null,"abstract":"Swarm is a research topic not only of biologists but also for computer scientists for years. With the idea of swarm intelligence in nature, optimal algorithms are proposed to solve different problems. In addition to the proactive aspect, a swarm can provide useful hints for identification problems. There are features that only exist when an individual belongs to a swarm. An idea came to us, deep learning networks have the ability to automatically select features, so they can extract the characteristics of a swarm for identification problems. This is a new idea in the combination of swarm characteristic with deep learning model. The previous studies combined swarm intelligence with neural networks to find the optimal parameters and architecture for the model. When performing our experiments, we were surprised that this simple architecture got a state-of-the-art result. This interesting discovery can be applied to other tasks using deep learning.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123208205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-graph Convolution Network with Jump Connection for Event Detection","authors":"Xiangbin Meng, Pengfei Wang, Haoran Yan, Liutong Xu, Jiafeng Guo, Yixing Fan","doi":"10.1109/ICTAI.2019.00108","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00108","url":null,"abstract":"Event detection is an important information extraction task in nature language processing. Recently, the method based on syntactic information and graph convolution network has been wildly used in event detection task and achieved good performance. For event detection, graph convolution network (GCN) based on dependency arcs can capture the sentence syntactic representations and the syntactic information, which is from candidate triggers to arguments. However, existing methods based on GCN with dependency arcs suffer from imbalance and redundant information in graph. To capture important and refined information in graph, we propose Multi-graph Convolution Network with Jump Connection (MGJ-ED). The multi-graph convolution network module adds a core subgraph splitted from dependency graph which selects important one-hop neighbors' syntactic information in breadth via GCN. Also the jump connection architecture aggregate GCN layers' representation with different attention score, which learns the importance of neighbors' syntactic information of different hops away in depth. The experimental results on the widely used ACE 2005 dataset shows the superiority of the other state-of-the-art methods.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123219932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-task Learning for Relation Extraction","authors":"Kai Zhou, Xiangfeng Luo, Hongya Wang, R. Xu","doi":"10.1109/ICTAI.2019.00210","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00210","url":null,"abstract":"Distantly supervised relation extraction leverages knowledge bases to label training data automatically. However, distant supervision may introduce incorrect labels, which harm the performance. Many efforts have been devoted to tackling this problem, but most of them treat relation extraction as a simple classification task. As a result, they ignore useful information that comes from related tasks, i.e., dependency parsing and entity type classification. In this paper, we first propose a novel Multi-Task learning framework for Relation Extraction (MTRE). We employ dependency parsing and entity type classification as auxiliary tasks and relation extraction as the target task. We learn these tasks simultaneously from training instances to take advantage of inductive transfer between auxiliary tasks and the target task. Then we construct a hierarchical neural network, which incorporates dependency and entity representations from auxiliary tasks into a more robust relation representation against the noisy labels. The experimental results demonstrate that our model improves the predictive performance substantially over single-task learning baselines.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123081134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Effective Neural Nets for Outcome Prediction from Partially Labelled Log Data","authors":"Francesco Folino, G. Folino, M. Guarascio, L. Pontieri","doi":"10.1109/ICTAI.2019.00196","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00196","url":null,"abstract":"The problem of inducing a model for forecasting the outcome of an ongoing process instance from historical log traces has attracted notable attention in the field of Process Mining. Approaches based on deep neural networks have become popular in this context, as a more effective alternative to previous feature-based outcome-prediction methods. However, these approaches rely on a pure supervised learning scheme, and unfit many real-life scenarios where the outcome of (fully unfolded) training traces must be provided by experts. Indeed, since in such a scenario only a small amount of labeled traces are usually given, there is a risk that an inaccurate or overfitting model is discovered. To overcome these issues, a novel outcome-discovery approach is proposed here, which leverages a fine-tuning strategy that learns general-enough trace representations from unlabelled log traces, which are then reused (and adapted) in the discovery of the outcome predictor. Results on real-life data confirmed that our proposal makes a more effective and robust solution for label-scarcity scenarios than current outcome-prediction methods.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126254546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fine-Grained Image Classification Combined with Label Description","authors":"Xiruo Shi, Liutong Xu, Pengfei Wang","doi":"10.1109/ICTAI.2019.00148","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00148","url":null,"abstract":"Fine-grained image classification faces huge challenges because fine-grained images are similar overall, and the distinguishable regions are difficult to find. Generally, in this task, label descriptions contain valuable semantic information that is accurately compatible with discriminative features of images (i.e., the description of the \"Rusty Black Bird\" corresponding to the morphological characteristics of its image). Bringing these descriptions into consideration is benefit to discern these similar images. Previous works, however, usually ignore label descriptions and just mine informative features from images, thus the performance may be limited. In this paper, we try to take both label descriptions and images into consideration, and we formalize the classification task into a matching task to address this issue. Specifically, Our model is based on a combination of Convolutional Neural Networks (CNN) over images and Graph Convolutional Networks(GCN) over label descriptions. We map the resulting image representations and text representations to the same dimension for matching and achieve the purpose of classification through the matching operation. Experimental results demonstrate that our approach can achieve the best performance compared with the state-of-the-art methods on the datasets of Stanford dogs and CUB-200-2011.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130127662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Expansion Convolution Method Based on Local Region Parameter Sharing","authors":"Qimao Yang, Jun Guo","doi":"10.1109/ICTAI.2019.00204","DOIUrl":"https://doi.org/10.1109/ICTAI.2019.00204","url":null,"abstract":"In this paper, a new convolution method for convolutional neural networks (CNNs) is proposed to improve the accuracy of image classification. To contain more efficient context, some of the parameters in the kernel are selectively expanded so as to be shared by the surrounding pixels. Thus, the convolution filter is enlarged meanwhile the number of the parameters is not increased. Compared to the traditional methods, the proposed method can restrain the over-fitting problem well. The experimental results on benchmarks show that the proposed method can achieve higher accuracies closed to the deeper networks, and get better accuracies in the case of the same network depth.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122591807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}