Hans Akkermans, Frank van Harmelen, Guus Schreiber, Bob Wielinga
{"title":"A Formalization of Knowledge‐Level Models for Knowledge Acquisition","authors":"Hans Akkermans, Frank van Harmelen, Guus Schreiber, Bob Wielinga","doi":"10.1002/j.1098-111x.1993.tb00003.x","DOIUrl":"https://doi.org/10.1002/j.1098-111x.1993.tb00003.x","url":null,"abstract":"This article defines second‐generation knowledge acquisition as a modeling activity that is knowledge‐level oriented. Knowledge‐level models of expert reasoning represent an important output of the knowledge‐acquisition process, since they describe, in a conceptual and implementation‐independent fashion, the different roles and types of knowledge required for a problem‐solving task. We argue that a formalization of such models enhances knowledge acquisition, and in particular the conceptualization phase, by rendering currently informal concepts and intuitions more precise, thus also contributing to a more solid basis for KBS design, validation, and maintenance. A framework is constructed for the formal specification of knowledge‐level models. The proposed formalism, called <jats:sc>ml<jats:sup>2</jats:sup></jats:sc>, has been inspired by the <jats:sc>kads</jats:sc> methodology for KBS development, and aims at expressing different roles and types of knowledge components through employing an order‐sorted logic, a modular structuring of theories, and a meta‐level organization of knowledge, comprising “enlarged” reflection rules and a “meaningful” naming relation. An application of the formal specification method to heuristic classification is given. Issues relating to the epistemological adequacy and the computational tractability of formalized knowledge‐level models are discussed.","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"48 1","pages":""},"PeriodicalIF":7.0,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144546958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brian R. Gaines, Mildred L. G. Shaw, J. Brian Woodward
{"title":"Modeling as Framework for Knowledge Acquisition Methodologies and Tools","authors":"Brian R. Gaines, Mildred L. G. Shaw, J. Brian Woodward","doi":"10.1002/j.1098-111x.1993.tb00002.x","DOIUrl":"https://doi.org/10.1002/j.1098-111x.1993.tb00002.x","url":null,"abstract":"This article develops a classification of the sources and types of models developed in knowledge engineering, and uses it to provide a framework within which knowledge acquisition methodologies and tools can be discussed and analyzed. Much of the early work on knowledge acquisition assumed that human expertise is based on “mental models” of domains and problem‐solving techniques, and that these can be elicited and transferred to an expert system. The approach taken here is to focus instead on the knowledge engineer's modeling process, his or her conceptual models of systems associated with the expert's skill, and their sources and types. This leads to a comprehensive account of knowledge‐based system development encompassing classical systems analysis, cognitive processes, linguistic representations, and the formalization of knowledge for computer application.","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"8 1","pages":""},"PeriodicalIF":7.0,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144547051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Container-Based Cloud Broker for Effective Service Provisioning in Multicloud Environment","authors":"Vinothiyalakshmi P., Rajganesh Nagarajan, Ramkumar Thirunavukarasu, Arun Pandian J., Evans Kotei","doi":"10.1155/int/1009713","DOIUrl":"https://doi.org/10.1155/int/1009713","url":null,"abstract":"<div>\u0000 <p>Container-based cloud brokers are third-party services that act as an intermediate entity between users and multiple cloud providers. The cloud brokers intended to perform discovery and provisioning of cloud services with an affordable pricing scheme. As cloud services can be provisioned on-demand basis for multiple users, the cloud brokers are unable to provide the most suited services to the users on time. To address this issue, the proposed work introduces a novel approach for efficient cloud service provisioning by utilizing container-based cloud service brokerage and implementing service arbitrage across various cloud providers. A microservice architecture-based service discovery mechanism is developed which incorporates a service registry for tracking newly available services from the providers. Docker containers are employed to orchestrate the services, which ensures streamlined management and deployment of offered services. Further, the proposed system recommends and evaluates the services to the cloud users based on probability matrices, mapping matrices, and user feedback. The performance of the proposed model is compared with existing techniques, namely, rough multidimensional matrix (RMDM) and similarity-enhanced hybrid group recommendation approach (HGRA). Experimental results show that the proposed model outperforms the existing models in terms of clustering accuracy and execution time.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/1009713","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144519761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Muhammad Qiyas, Muhammad Naeem, Zahid Khan, Samuel Okyer
{"title":"Interval-Valued Probabilistic Dual Hesitant Fuzzy Muirhead Mean Aggregation Operators and Their Applications in Regenerative Energy Source Selection","authors":"Muhammad Qiyas, Muhammad Naeem, Zahid Khan, Samuel Okyer","doi":"10.1155/int/8892299","DOIUrl":"https://doi.org/10.1155/int/8892299","url":null,"abstract":"<div>\u0000 <p>As an effective addition to the hesitant fuzzy set (HFS), a probabilistic dual hesitant fuzzy set (PDHFS) has been designed in this paper. PDHFS would be an improved version of the dual hesitant fuzzy set (DHFS) where both membership and nonmembership hesitant quality is considered for all its probability of existence. Additional information on the degree of acceptance or rejection contains such allocated probabilities. More conveniently, we create a comprehensive type of PDHFS called interval-valued PDHFS (IVPDHFS) to interpret the probability data that exist in the hesitancy. This study describes several basic operating laws by stressing the advantages and enriching the utility of IVPDHFS in MAGDM. To aggregate IVPDHF information in MAGDM problems and extend its applications, we present the Muirhead mean (MM) operator of IVPDHFSs and study some attractive properties of the suggested operator. Besides that, in order to compute attribute weights, a new organizational framework is designed by using partial knowledge of the decision makers (DMs). Subsequently, a standardized technique with the suggested operator for MAGDM is introduced, and the realistic usage of the operator is illustrated by the use of a problem of regenerative energy source selection. We discuss the influence of the parameter vector on the ranking results. Finally, to address the benefits and limitations of the recommended MAGDM approach, the findings of the proposal are contrasted with other approaches.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/8892299","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144519630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Elnaz Radmand, Jamshid Pirgazi, Ali Ghanbari Sorkhi
{"title":"A Hybrid TLBO–XGBoost Model With Novel Labeling for Bitcoin Price Prediction","authors":"Elnaz Radmand, Jamshid Pirgazi, Ali Ghanbari Sorkhi","doi":"10.1155/int/6674437","DOIUrl":"https://doi.org/10.1155/int/6674437","url":null,"abstract":"<div>\u0000 <p>In the digital currency market, including Bitcoin, price prediction using artificial intelligence (AI) and machine learning (ML) is critical but challenging. Conventional methods such as technical analysis (based on historical market data) and fundamental analysis (based on economic variables) suffer from data noise, processing delays, and insufficient data. To make predictions more accurate, faster, and able to handle more data, the suggested method combines several steps: extracting important information, labeling it, choosing the best features, merging different models, and fine-tuning the model settings. Based on the price data, this approach initially generates 5 labels with a new labeling method based on the percentage of average price changes in several days and generates signals (hold, buy, sell, strong sell, and strong buy). Thereafter, it extracts 768 features from technical studies using the TA-Lib library and from an authoritative site. The TLBOA algorithm, which does not get stuck in the local optimum with two updates, was used to select and reduce features to 15 to avoid overfitting. A variety of ML models, including support vector machine and Naive Bayes, use these selected features for training. By using the evolutionary DE algorithm to optimize the XGBoost meta-parameters, we increased the accuracy by 1%–4%. The proposed strategy has performed better than other models, such as XGBoost with 85.66% and gradient boosting with 84.15%, and has achieved an accuracy of 91%–92%.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/6674437","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144519631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Framework for Quantum-Based Deepfake Video Detection (Without Audio)","authors":"Atul Pandey, Bhawana Rudra, Rajesh Kumar Krishnan","doi":"10.1155/int/3990069","DOIUrl":"https://doi.org/10.1155/int/3990069","url":null,"abstract":"<div>\u0000 <p>Artificial intelligence (AI) has made human tasks easier compared to earlier days. It has revolutionized various domains, from paper drafting to video editing. However, some individuals exploit AI to create deceptive content, such as fake videos, audios, and images, to mislead others. To address this, researchers and large corporations have proposed solutions for detecting fake content using classical deep learning models. However, these models often suffer from a large number of trainable parameters, which leads to large model sizes and, consequently, computational intensive. To overcome these limitations, we propose various hybrid classical–quantum models that use a classical pre-trained model as a front-end feature extractor, followed by a quantum-based LSTM network, that is, QLSTM. These pre-trained models are based on the ResNet architecture, such as ResNet34, 50, and 101. We have compared the performance of the proposed models with their classical counterparts. These proposed models combine the strengths of classical and quantum systems for the detection of deepfake video (without audio). Our results indicate that the proposed models significantly reduce the number of trainable parameters, as well as quantum long short-term memory (QLSTM) parameters, which leads to a smaller model size than the classical models. Despite the reduced parameter, the performance of the proposed models is either superior to or comparable with that of their classical equivalent. The proposed hybrid quantum models, that is, ResNet34-QLSTM, ResNet50-QLSTM, and ResNet101-QLSTM, achieve a reduction of approximately 1.50%, 4.59%, and 5.24% in total trainable parameters compared to their equivalent classical models, respectively. Additionally, QLSTM linked with the proposed models reduces its trainable parameters by 99.02%, 99.16%, and 99.55%, respectively, compared to equivalent classical LSTM. This significant reduction highlights the efficiency of the quantum-based network in terms of resource usage. The trained model sizes of the proposed models are 81.35, 88.06, and 162.79, and their equivalent classical models are 82.59, 92.28, and 171.76 in MB, respectively.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/3990069","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Julián Muñoz-Ordóñez, Carlos Cobos, Juan C. Vidal-Rojas, Francisco Herrera
{"title":"A Maturity Model for Practical Explainability in Artificial Intelligence-Based Applications: Integrating Analysis and Evaluation (MM4XAI-AE) Models","authors":"Julián Muñoz-Ordóñez, Carlos Cobos, Juan C. Vidal-Rojas, Francisco Herrera","doi":"10.1155/int/4934696","DOIUrl":"https://doi.org/10.1155/int/4934696","url":null,"abstract":"<div>\u0000 <p>The increasing adoption of artificial intelligence (AI) in critical domains such as healthcare, law, and defense demands robust mechanisms to ensure transparency and explainability in decision-making processes. While machine learning and deep learning algorithms have advanced significantly, their growing complexity presents persistent interpretability challenges. Existing maturity frameworks, such as Capability Maturity Model Integration, fall short in addressing the distinct requirements of explainability in AI systems, particularly where ethical compliance and public trust are paramount. To address this gap, we propose the Maturity Model for eXplainable Artificial Intelligence: Analysis and Evaluation (MM4XAI-AE), a domain-agnostic maturity model tailored to assess and guide the practical deployment of explainability in AI-based applications. The model integrates two complementary components: an analysis model and an evaluation model, structured across four maturity levels—operational, justified, formalized, and managed. It evaluates explainability across three critical dimensions: technical foundations, structured design, and human-centered explainability. MM4XAI-AE is grounded in the PAG-XAI framework, emphasizing the interrelated dimensions of practicality, auditability, and governance, thereby aligning with current reflections on responsible and trustworthy AI. The MM4XAI-AE model is empirically validated through a structured evaluation of thirteen published AI applications from diverse sectors, analyzing their design and deployment practices. The results show a wide distribution across maturity levels, underscoring the model’s capacity to identify strengths, gaps, and actionable pathways for improving explainability. This work offers a structured and scalable framework to standardize explainability practices and supports researchers, developers, and policymakers in fostering more transparent, ethical, and trustworthy AI systems.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/4934696","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144367476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Privacy-Preserving Federated Learning Framework for Ambient Temperature Estimation With Crowdsensing and Exponential Mechanism","authors":"Saeid Zareie, Rasool Esmaeilyfard, Pirooz Shamsinejadbabaki","doi":"10.1155/int/5531568","DOIUrl":"https://doi.org/10.1155/int/5531568","url":null,"abstract":"<div>\u0000 <p>Ambient temperature estimation plays a vital role in various domains, including environmental monitoring, smart cities, and energy-efficient systems. Traditional sensor-based methods suffer from high deployment costs and limited scalability, while centralized machine learning approaches raise significant privacy concerns. Recent crowdsensing-based systems leverage smartphone sensor data but face two major challenges: user privacy protection and unreliable participant contributions. To address these issues, this study proposes a privacy-preserving federated learning framework that integrates differential privacy with the exponential mechanism to ensure user anonymity during decentralized training. Furthermore, a novel utility-based filtering mechanism is employed to detect and exclude low-quality or adversarial data, enhancing model reliability. Advanced deep learning models, including long short–term memory (LSTM) and ensemble learning, are integrated to improve prediction accuracy in temporal and noisy environments. The dataset consists of mobile sensor data, including battery temperature, CPU usage, and environmental temperature measurements, collected from participants in real-world settings. The framework achieved high accuracy, with the LSTM model outperforming others (federated MAE: 1.292, MAPE: 0.0511) and performing comparably to centralized models (MAE: 1.179, MAPE: 0.0462) while ensuring privacy. The proposed framework showed comparable performance to centralized models while ensuring strong privacy guarantees. The integration of privacy-preserving mechanisms and robust data filtering enables a scalable and reliable solution suitable for practical deployment in large-scale ambient temperature estimation tasks.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/5531568","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144367218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acquiring Tactical and Strategic Knowledge with a Generalized Method for Chunking of Game Pieces","authors":"Steven Walczak, Douglas Dankei","doi":"10.1002/j.1098-111x.1993.tb00001.x","DOIUrl":"https://doi.org/10.1002/j.1098-111x.1993.tb00001.x","url":null,"abstract":"The physical configuration of playing pieces on a game board contains a plethora of information which can be used by the game player. Current computer game programs deal well with some positional and tactical information that is built into the program, but are incapable of acquiring and using strategic information. We present a technique for capturing strategic and tactical chunks or patterns of pieces in game domains. The chunking technique models the cognitive method employed by expert level human game players and acquires knowledge that is mostly domain independent. Induction is performed on the collection of chunks captured for a particular adversary to identify the playing style of that adversary.","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"229 1","pages":""},"PeriodicalIF":7.0,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144304464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}