{"title":"Granular Computing","authors":"Georg Peters","doi":"10.4018/978-1-59904-849-9.ch115","DOIUrl":"https://doi.org/10.4018/978-1-59904-849-9.ch115","url":null,"abstract":"It is well accepted that in many real-life situations information is not certain and precise but rather uncertain or imprecise. To describe uncertainty, probability theory emerged in the 17th and 18th centuries; Bernoulli, Laplace and Pascal are considered to be the fathers of probability theory. Today probability can still be considered the prevalent theory for describing uncertainty. However, in 1965 Zadeh seemed to challenge probability theory by introducing fuzzy sets as a theory dealing with uncertainty (Zadeh, 1965). Since then it has been discussed whether probability and fuzzy set theory are complementary or rather competitive (Zadeh, 1995). Sometimes fuzzy set theory is even considered a subset of probability theory and therefore dispensable. Although the discussion on the relationship of probability and fuzziness seems to have lost the intensity of its early years, it is still continuing today. Nevertheless, fuzzy set theory has established itself as a central approach to tackling uncertainty. For a discussion on the relationship of probability and fuzziness the reader is referred to, e.g., Dubois and Prade (1993), Ross et al. (2002) or Zadeh (1995). In the meantime further ideas on how to deal with uncertainty have been suggested. For example, Pawlak introduced rough sets in the early 1980s (Pawlak, 1982), a theory that has attracted increasing attention in recent years. For a comparison of probability, fuzzy sets and rough sets the reader is referred to Lin (2002). Presently, research is being conducted to develop a Generalized Theory of Uncertainty (GTU) as a framework for any kind of uncertainty, whether based on probability, fuzziness or other concepts (Zadeh, 2005). 
Cornerstones of this theory are the concepts of information granularity (Zadeh, 1979) and generalized constraints (Zadeh, 1986). In this context the term Granular Computing was first suggested by Lin (1998a, 1998b); however, it still lacks a unique and well-accepted definition. For example, Zadeh (2006a) colorfully calls granular computing “ballpark computing” or, more precisely, “a mode of computation in which the objects of computation are generalized constraints”.","PeriodicalId":320314,"journal":{"name":"Encyclopedia of Artificial Intelligence","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121365194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI and Ideas by Statistical Mechanics","authors":"L. Ingber","doi":"10.4018/9781599048499.ch009","DOIUrl":"https://doi.org/10.4018/9781599048499.ch009","url":null,"abstract":"A briefing (Allen, 2004) demonstrates the breadth and depth of the complexity required to address real diplomatic, information, military, and economic (DIME) factors for the propagation/evolution of ideas through defined populations. An open mind would conclude that multiple approaches may be required for multiple decision makers in multiple scenarios. However, it is in the interest of multiple decision makers to rely as much as possible on the same generic model for actual computations. Many users would have to trust that the coded model faithfully processes their inputs. Similar to DIME scenarios, sophisticated competitive marketing requires assessments of the responses of populations to new products. Many large financial institutions now trade at speeds barely limited by the speed of light. They colocate their servers close to exchange floors to be able to turn quotes into orders to be executed within milliseconds. Clearly, trading at these speeds requires automated algorithms for processing and making decisions. These algorithms are based on \"technical\" information derived from price, volume and quote (Level II) information. 
The next big hurdle for automated trading is to turn \"fundamental\" information into technical indicators, e.g., to incorporate new political and economic news into such algorithms.","PeriodicalId":320314,"journal":{"name":"Encyclopedia of Artificial Intelligence","volume":"174 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122317446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Growing Self-Organizing Maps for Data Analysis","authors":"S. Delgado, C. Gonzalo, E. Martínez, A. Arquero","doi":"10.4018/978-1-59904-849-9.CH116","DOIUrl":"https://doi.org/10.4018/978-1-59904-849-9.CH116","url":null,"abstract":"Currently, many research areas produce large multivariate datasets that are difficult to visualize in order to extract useful information. Kohonen self-organizing maps have been used successfully in the visualization and analysis of multidimensional data. In this work, a projection technique that compresses multidimensional datasets into two-dimensional space using growing self-organizing maps is described. With this embedding scheme, traditional Kohonen visualization methods have been implemented using growing cell structures networks. The new graphical map displays have been compared with Kohonen graphs using two groups of simulated data and one group of real multidimensional data selected from a satellite scene.","PeriodicalId":320314,"journal":{"name":"Encyclopedia of Artificial Intelligence","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125488827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intelligent Query Answering Mechanism in Multi Agent Systems","authors":"S. Turgay, Fahrettin Yaman","doi":"10.4018/978-1-59904-849-9.CH136","DOIUrl":"https://doi.org/10.4018/978-1-59904-849-9.CH136","url":null,"abstract":"The query answering system realizes the data selection, preparation, pattern discovery, and pattern development processes in an agent-based structure within the multi-agent system, and it is designed to ensure communication between agents and an effective operation of agents within the multi-agent system. The system is designed to process and evaluate fuzzy incomplete information by means of the fuzzy SQL query method. The modelled system gains its intelligent features from the fuzzy approach and makes predictions about the future through its learning process. The operation mechanism of the system is a process in which the agents within the multi-agent system filter and evaluate, according to certain criteria, both the knowledge in databases and the knowledge received externally by the agents. The system uses two types of knowledge: the first is the data existing in the agent databases within the system, and the second is the data the agents receive from the outer world that is not covered by the evaluation criteria. Upon receiving data from the outer world, the agent first evaluates it in its knowledge base, then evaluates it for use in the rule base, and finally applies a certain evaluation process to the rule bases in order to store the knowledge in the task base. Meanwhile, the agent also completes the learning process. This paper presents such an intelligent query answering mechanism. The following sections include the necessary literature review and the query answering approach, followed by future trends and the conclusion. 
BACKGROUND","PeriodicalId":320314,"journal":{"name":"Encyclopedia of Artificial Intelligence","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128751494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Swarm Robotics","authors":"Amanda J. C. Sharkey","doi":"10.4018/978-1-59904-849-9.ch225","DOIUrl":"https://doi.org/10.4018/978-1-59904-849-9.ch225","url":null,"abstract":"","PeriodicalId":320314,"journal":{"name":"Encyclopedia of Artificial Intelligence","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132662222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Dempster-Shafer Theory","authors":"M. Beynon","doi":"10.4018/978-1-59904-849-9.CH068","DOIUrl":"https://doi.org/10.4018/978-1-59904-849-9.CH068","url":null,"abstract":"The initial work introducing Dempster-Shafer (D-S) theory is found in Dempster (1967) and Shafer (1976). Since its introduction the very name has caused confusion; a more general term often used is belief functions (both are used interchangeably here). Nguyen (1978) pointed out, soon after its introduction, that the rudiments of D-S theory can be considered through distributions of random sets. A closer comparison has been with traditional Bayesian theory, where D-S theory has been considered a generalisation of it (Schubert, 1994). Cobb and Shenoy (2003) direct their attention to the comparison of D-S theory and the Bayesian formulisation. Their conclusion is that the two have the same expressive power, but that one technique cannot simply take the role of the other. The association with artificial intelligence (AI) is clearly outlined in Smets (1990), who, at the time, acknowledged that the AI community had started to show interest in what it calls the Dempster-Shafer model. It is of interest that even then, he highlighted the confusion over which version of D-S theory is being considered. D-S theory was employed in an event-driven integration reasoning scheme in Xia et al. (1997), associated with automated route planning, which they view as a very important branch of AI applications. Liu (1999) investigated Gaussian belief functions and specifically considered their proposed computation scheme and its potential usage in AI and statistics. Huang and Lees (2005) applied a D-S theory model to natural-resource classification, comparing it with two other AI models. Wadsworth and Hall (2007) considered D-S theory in combination with other techniques to investigate site-specific critical loads for conservation agencies. 
Pertinently, they outline its positioning with respect to AI (p. 400): the approach was developed in the AI (artificial intelligence) community in an attempt to develop systems that could reason in a more human manner, and in particular to capture the ability of human experts to “diagnose” situations with limited information. This statement is pertinent here, since the emphasis in the examples given later is on the general human decision-making problem and the handling of ignorance in AI. Dempster and Kong (1988) investigated how D-S theory serves as an artificial analogy for human reasoning under uncertainty. An example problem is considered, the murder of Mr. White, where witness evidence is used to classify the belief in the identification of an assassin from among the considered suspects. The numerical analyses presented illustrate the role played by D-S theory, including the different ways it can act on incomplete knowledge.","PeriodicalId":320314,"journal":{"name":"Encyclopedia of Artificial Intelligence","volume":"5 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134003897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mapping Ontologies by Utilising Their Semantic Structure","authors":"Yi Zhao, W. Halang","doi":"10.4018/978-1-59904-849-9.CH155","DOIUrl":"https://doi.org/10.4018/978-1-59904-849-9.CH155","url":null,"abstract":"As a key factor in enabling interoperability in the Semantic Web (Berners-Lee, Hendler & Lassila, 2001), ontologies are developed by different organisations on a large scale, also in overlapping areas. Therefore, ontology mapping has come to the fore to achieve knowledge sharing and semantic integration in an environment where knowledge and information are represented by different underlying ontologies. The ontology mapping problem can be defined as acquiring the relationships that hold between the entities of two ontologies. Mapping results can be used for various purposes such as schema/ontology integration, information retrieval, query mediation, or web service mapping. In this article, a method to map concepts and properties between ontologies is presented. First, syntactic analysis is applied based on token strings; then, semantic analysis is executed according to WordNet (Fellbaum, 1999) and tree-like graphs representing the structures of the ontologies. The experimental results show that our algorithm finds mappings with high precision.","PeriodicalId":320314,"journal":{"name":"Encyclopedia of Artificial Intelligence","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122211677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Learning Algorithm with LTS Error Function","authors":"A. Rusiecki","doi":"10.4018/978-1-59904-849-9.CH204","DOIUrl":"https://doi.org/10.4018/978-1-59904-849-9.CH204","url":null,"abstract":"Feedforward neural networks (FFNs) are often considered universal tools and find applications in areas such as function approximation, pattern recognition, or signal and image processing. One of the main advantages of using FFNs is that they usually do not require, in the learning process, exact mathematical knowledge about input-output dependencies. In other words, they may be regarded as model-free approximators (Hornik, 1989). They learn by minimizing some kind of error function to fit the training data as closely as possible. Such a learning scheme does not take into account the quality of the training data, so its performance depends strongly on whether the assumption that the data are reliable and trustworthy holds. This is why, when the data are corrupted by large noise, or when outliers and gross errors appear, the network builds a model that can be very inaccurate. In most real-world cases the assumption that errors are normally distributed and i.i.d. simply does not hold. The data obtained from the environment are very often affected by noise of unknown form or by outliers suspected to be gross errors. The quantity of outliers in routine data ranges from 1 to 10% (Hampel, 1986). They usually enter data sets while the information is being obtained and pre-processed, for instance through measurement errors, long-tailed noise, or human mistakes. Intuitively we can define an outlier as an observation that significantly deviates from the bulk of the data. Nevertheless, this definition does not help in classifying an outlier as a gross error or as a meaningful and important observation. To deal with the problem of outliers a separate branch of statistics, called robust statistics (Hampel, 1986; Huber, 1981), was developed. 
Robust statistical methods are designed to perform well when the true underlying model deviates from the assumed parametric model. Ideally, they should be efficient and reliable both for observations that are very close to the assumed model and for observations containing larger deviations and outliers. An alternative is to detect and remove outliers before the model building process begins. Such methods are more universal, but they do not take into account the specific type of modeling philosophy (e.g. modeling by FFNs). In this article we propose a new robust FFN learning algorithm based on the least trimmed squares estimator.","PeriodicalId":320314,"journal":{"name":"Encyclopedia of Artificial Intelligence","volume":"48 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130476105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial Intelligence for Information Retrieval","authors":"Thomas Mandl","doi":"10.4018/978-1-59904-849-9.CH023","DOIUrl":"https://doi.org/10.4018/978-1-59904-849-9.CH023","url":null,"abstract":"This article describes the most prominent approaches to applying artificial intelligence technologies to information retrieval (IR). Information retrieval is a key technology for knowledge management. It deals with the search for information and the representation, storage and organization of knowledge. Information retrieval is concerned with search processes in which a user needs to identify a subset of information that is relevant to his information need within a large amount of knowledge. The information seeker formulates a query trying to describe his information need. The query is compared to document representations that were extracted during an indexing phase. The representations of documents and queries are typically matched by a similarity function such as the cosine measure. The most similar documents are presented to the user, who can evaluate their relevance with respect to his problem (Belkin, 2000). The difficulty of properly representing documents and of matching imprecise representations soon led to the application of techniques developed within artificial intelligence to information retrieval.","PeriodicalId":320314,"journal":{"name":"Encyclopedia of Artificial Intelligence","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116698626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}