{"title":"Goal-Driven Changes in Argumentation: A Theoretical Framework and a Tool","authors":"Pierre Bisquert, C. Cayrol, Florence Dupin de Saint-Cyr -- Bannay, M. Lagasquie-Schiex","doi":"10.1109/ICTAI.2013.96","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.96","url":null,"abstract":"This paper defines a new framework for dynamics in argumentation. In this framework, an agent can change an argumentation system (the target system) in order to achieve some desired goal. Changes consist of the addition/removal of arguments or of attacks between arguments, and are constrained by the agent's knowledge, encoded by another argumentation system. We present a software tool that computes the possible change operations for a given agent on a given target argumentation system in order to achieve some given goal.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121791258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Classification Accuracy with the Help of Feature Maximization Metric","authors":"Jean-Charles Lamirel","doi":"10.1109/ICTAI.2013.90","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.90","url":null,"abstract":"This paper deals with a new feature selection and feature contrasting approach for enhancing the classification of both numerical and textual data. The method is evaluated on different types of reference datasets. The paper shows that the proposed approach provides a very significant performance increase in all the studied cases, clearly demonstrating its generic character.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121858257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Symmetry-Based Pruning in Itemset Mining","authors":"Saïd Jabbour, Mehdi Khiari, L. Sais, Y. Salhi, Karim Tabia","doi":"10.1109/ICTAI.2013.78","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.78","url":null,"abstract":"In this paper, we show how symmetries, a fundamental structural property, can be used to prune the search space in itemset mining problems. Our approach is based on a dynamic integration of symmetries in APRIORI-like algorithms to prune the set of possible candidate patterns. More precisely, for a given itemset, symmetry can be applied to deduce other itemsets while preserving their properties. We also show that our symmetry-based pruning approach can be extended to the general Mannila and Toivonen pattern mining framework. Experimental results highlight the usefulness and the efficiency of our symmetry-based pruning approach.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126910338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Combination of Reasoners for Ontology Classification","authors":"Changlong Wang, Zhiyong Feng","doi":"10.1109/ICTAI.2013.75","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.75","url":null,"abstract":"Large scale ontology applications require efficient reasoning services, of which ontology classification is the fundamental reasoning task. Specialized EL reasoners are efficient, but they cannot classify ontologies with axioms outside the OWL 2 EL profile. The general-purpose OWL 2 reasoners for expressive Description Logics are less efficient when classifying OWL 2 EL ontologies. In this work, we propose a novel technique that combines an OWL 2 reasoner with an EL reasoner for the classification of ontologies expressed in the DL SROIQ. We develop an efficient task decomposition algorithm for identifying the minimal non-EL module that is assigned to the OWL 2 reasoner, while the bulk of the workload is assigned to the EL reasoner. Furthermore, this paper reports on the implementation of our approach in the ComR system, which integrates the two types of reasoners in a black-box manner. The experimental results show that our method leads to a reasonable task assignment and can offer a substantial speedup (over 50%) in ontology classification.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129054532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Imbalanced Hypergraph Partitioning and Improvements for Consensus Clustering","authors":"John Robert Yaros, T. Imielinski","doi":"10.1109/ICTAI.2013.61","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.61","url":null,"abstract":"Hypergraph partitioning is typically defined as an optimization problem wherein vertices are placed in separate parts (of a partition) such that the fewest number of hyperedges will span multiple parts. To ensure that parts have sizes satisfying user requirements, constraints are typically imposed. Under such constraints, the problem is known to be NP-Hard, so heuristic methods are needed to find approximate solutions in reasonable time. Circuit layout has historically been one of the most prominent application areas and has seen a proliferation of tools designed to satisfy its needs. Constraints in these tools typically focus on equal size parts, allowing the user to specify a maximum tolerance for deviation from that equal size. A more generalized constraint allows the user to define fixed sizes and tolerances for each part. More recently, other domains have mapped problems to hypergraph partitioning and, perhaps due to their availability, have used existing tools to perform partitioning. In particular, consensus clustering easily fits a hypergraph representation where each cluster of each input clustering is represented by a hyperedge. Authors of such research have reported that partitioning tends to have good results only when clusters can be expected to be roughly the same size, an unsurprising result given the tools' focus on equal sized parts. Thus, even though many datasets have \"natural\" part sizes that are mixed, the current toolset is ill-suited to find good solutions unless those part sizes are known a priori. We argue that the main issue rests in the current constraint definitions and their focus on measuring imbalance relative to the largest/smallest part. We further argue that, due to its holistic nature, entropy best measures imbalance and can best guide the partition method to the natural part sizes with lowest cut for a given level of imbalance. We provide a method that finds good approximate solutions under an entropy constraint and further introduce the notion of a discount cut, which helps overcome the local optima that frequently plague k-way partitioning algorithms. In comparison to today's popular tools, we show our method returns sizable improvements in cut size as the level of imbalance grows. In consensus clustering, we demonstrate that good solutions are more easily achieved even when part sizes are not roughly equal.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126770921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ESmodels: An Inference Engine of Epistemic Specifications","authors":"Zhizheng Zhang, Kaikai Zhao, Rongcun Cui","doi":"10.1109/ICTAI.2013.118","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.118","url":null,"abstract":"Epistemic specification (ES for short) is an extension of answer set programming (ASP for short). The extension is built around the introduction of the modalities K and M, and is thus capable of representing incomplete information in the presence of multiple belief sets. Although both the syntax and semantics of ES are still unsettled, the need for this extension has been illustrated with several examples in the literature. In this paper, we present a new ES version with only the modality K, and the design of its inference engine ESmodels, which aims to be efficient enough to promote both the theoretical research and the practical use of ES. We first introduce the syntax and semantics of the new version of ES and show that it is succinct but flexible by comparing it with existing ES versions. Then, we focus on the description of the algorithm and optimization approaches of the inference engine. Finally, we conclude with perspectives.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127770280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generation of Implied Constraints for Automaton-Induced Decompositions","authors":"María Andreína Francisco Rodríguez, P. Flener, J. Pearson","doi":"10.1109/ICTAI.2013.160","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.160","url":null,"abstract":"Automata, possibly with counters, allow many constraints to be expressed in a simple and high-level way. An automaton induces a decomposition into a conjunction of already implemented constraints. Generalised arc consistency is not generally maintained on decompositions induced by counter automata with more than one state or counter. To improve propagation of automaton-induced constraint decompositions, we use automated tools to derive loop invariants from the constraint checker corresponding to the given automaton. These loop invariants correspond to implied constraints, which can be added to the decomposition. We consider two global constraints and derive implied constraints to improve propagation even to the point of maintaining generalised arc consistency.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"82 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132679696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ontology Learning from Incomplete Semantic Web Data by BelNet","authors":"Man Zhu, Zhiqiang Gao, Jeff Z. Pan, Yuting Zhao, Ying Xu, Zhibin Quan","doi":"10.1109/ICTAI.2013.117","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.117","url":null,"abstract":"Recent years have seen a dramatic growth of the semantic web on the data level, but unfortunately not on the schema level, which contains mostly concept hierarchies. The shortage of schemas makes semantic web data difficult to use in many semantic web applications, so schema learning from semantic web data becomes an increasingly pressing issue. In this paper we propose a novel schema learning approach, BelNet, which combines description logics (DLs) with Bayesian networks. In this way BelNet is capable of understanding and capturing the semantics of the data on the one hand, and of handling incompleteness during the learning procedure on the other. The main contributions of this work are: (i) we introduce the architecture of BelNet and correspondingly propose the ontology learning techniques in it, and (ii) we compare the experimental results of our approach with those of state-of-the-art ontology learning approaches, and provide discussions from different aspects.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133510477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Probabilistic Query Suggestion Approach without Using Query Logs","authors":"M. T. Shaikh, M. S. Pera, Yiu-Kai Ng","doi":"10.1109/ICTAI.2013.99","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.99","url":null,"abstract":"Commercial web search engines include a query suggestion module so that, given a user's keyword query, alternative suggestions are offered and serve as a guide to assist the user in formulating queries which capture his/her intended information need in a quick and simple manner. The majority of these modules, however, perform an in-depth analysis of large query logs, and thus (i) their suggestions are mostly based on queries frequently posted by users, and (ii) their design methodologies cannot be applied to make suggestions on customized search applications for enterprises whose query logs are not large enough or are non-existent. To address these design issues, we have developed PQS, a probabilistic query suggestion module. Unlike its counterparts, PQS is not constrained by the existence of query logs, since it solely relies on the availability of user-generated content freely accessible online, such as the Wikipedia.org document collection, and applies simple, yet effective, probabilistic- and information retrieval-based models, i.e., the Multinomial, Bigram Language, and Vector Space Models, to provide useful and diverse query suggestions. Empirical studies conducted using a set of test queries and the feedback provided by Mechanical Turk appraisers have verified that PQS makes more useful suggestions than Yahoo! and is almost as good as Google and Bing, based on the relatively small difference in performance measures achieved by Google and Bing over PQS.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131704651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Evolution Strategies to Reduce Emergency Services Arrival Time in Case of Accident","authors":"Javier Barrachina, Piedad Garrido, Manuel Fogué, F. Martinez, Juan-Carlos Cano, C. Calafate, P. Manzoni","doi":"10.1109/ICTAI.2013.127","DOIUrl":"https://doi.org/10.1109/ICTAI.2013.127","url":null,"abstract":"A critical issue, especially in urban areas, is the occurrence of traffic accidents, since they can generate traffic jams. These traffic jams negatively affect the rescue process by increasing the emergency services arrival time, which can mean the difference between life and death for people injured in the accident. In this paper, we propose four different approaches to the traffic congestion problem and compare them to obtain the best solution. Using V2I communications, we are able to accurately estimate the traffic density in a given area, which is a key parameter for efficient traffic redirection, thereby reducing the emergency services arrival time and avoiding traffic jams when an accident occurs. Specifically, we propose two approaches based on the Dijkstra algorithm, and two approaches based on Evolution Strategies. Results indicate that the Density-Based Evolution Strategy system is the best among all the proposed solutions, since it offers the lowest emergency services travel times.","PeriodicalId":140309,"journal":{"name":"2013 IEEE 25th International Conference on Tools with Artificial Intelligence","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131242418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}