Title: Asymptotic efficiency of inferential models and a possibilistic Bernstein–von Mises theorem
Authors: Ryan Martin, Jonathan P. Williams
DOI: 10.1016/j.ijar.2025.109389
International Journal of Approximate Reasoning, Vol. 180, Article 109389. Published 2025-02-20.

Abstract: The inferential model (IM) framework offers an alternative to classical probabilistic (e.g., Bayesian and fiducial) uncertainty quantification in statistical inference. A key distinction is that classical uncertainty quantification takes the form of precise probabilities and offers only limited large-sample validity guarantees, whereas the IM's uncertainty quantification is imprecise in such a way that exact, finite-sample-valid inference is possible. But are the IM's imprecision and finite-sample validity compatible with statistical efficiency? That is, can IMs be both finite-sample valid and asymptotically efficient? This paper gives an affirmative answer via a new possibilistic Bernstein–von Mises theorem that parallels the fundamental Bayesian result. Among other things, the result shows that the IM solution is efficient in the sense that, asymptotically, its credal set is the smallest that contains the Gaussian distribution with variance equal to the Cramér–Rao lower bound. Moreover, a corresponding version of the theorem is presented for problems that involve the elimination of nuisance parameters, settling an open question concerning the relative efficiency of profiling-based versus extension-based marginalization strategies.
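For context, the fundamental Bayesian result the abstract parallels is the classical Bernstein–von Mises theorem, which in a textbook (informal) form states:

```latex
% Classical Bernstein-von Mises: under regularity conditions, the posterior
% is asymptotically Gaussian, centred at an efficient estimator, with
% variance equal to the Cramér-Rao lower bound.
\left\| \Pi_n(\cdot \mid X_{1:n})
      - \mathrm{N}\!\left(\hat{\theta}_n,\; n^{-1} I(\theta_0)^{-1}\right)
\right\|_{\mathrm{TV}} \longrightarrow 0
\quad \text{in } P_{\theta_0}\text{-probability},
```

where \(\Pi_n\) is the posterior distribution, \(\hat{\theta}_n\) an efficient estimator (e.g., the maximum likelihood estimator), and \(I(\theta_0)\) the Fisher information; \(n^{-1} I(\theta_0)^{-1}\) is exactly the Cramér–Rao lower bound referenced in the abstract.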
Title: Maximal hypercliques search based on concept-cognitive learning
Authors: Jiawei Wang, Fei Hao, Jie Gao, Li Zou, Zheng Pei
DOI: 10.1016/j.ijar.2025.109386
International Journal of Approximate Reasoning, Vol. 180, Article 109386. Published 2025-02-20.

Abstract: Maximal hyperclique search, which seeks the largest hypernode subsets of a hypergraph such that every combination of r nodes in these subsets forms a hyperedge, is a fundamental problem in hypergraph mining. However, compared to traditional graphs, the combinatorial explosion of hyperedges significantly increases the complexity of enumeration; as r and the number of hypernodes grow, the search space expands rapidly. Moreover, overlapping hyperedges in dense hypergraphs lead to substantial redundant checks, further degrading search efficiency and making traditional methods inadequate for large-scale hypergraphs. To tackle these challenges, this paper proposes MHSC, a novel approach to maximal hyperclique search in r-uniform hypergraphs based on concept-cognitive learning, i.e., the process of understanding and structuring knowledge through the formation of concepts and their interrelationships. Technically, the hypernode-neighbor structure of the hypergraph is first expressed as a formal context, and the required concepts are generated using a concept lattice algorithm. Based on the shared relationships between hypernodes represented by the hyperedges, a series of theorems is proposed to prune hypernodes that cannot form maximal hypercliques within the sets of 1-intent and 2-intent concepts, thereby narrowing the search space and reducing redundant computation. Furthermore, an optimized variant termed MHSC+ is introduced. Extensive experiments on both test and real-world datasets demonstrate the effectiveness, efficiency, and applicability of the proposed algorithms.
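To make the problem definition concrete, here is a naive brute-force enumerator for maximal hypercliques in an r-uniform hypergraph. It is exponential in the number of hypernodes and purely illustrates the search space that MHSC prunes; it is not the proposed algorithm:

```python
from itertools import combinations

def is_hyperclique(nodes, edges, r):
    """True iff every r-subset of `nodes` is a hyperedge."""
    return all(frozenset(c) in edges for c in combinations(sorted(nodes), r))

def maximal_hypercliques(vertices, edges, r):
    """Naive enumeration of maximal hypercliques in an r-uniform hypergraph.

    edges: iterable of size-r node collections. Exponential in |vertices|.
    """
    edges = {frozenset(e) for e in edges}
    cliques = []
    # Enumerate candidate subsets largest-first, so maximality reduces to
    # "not contained in an already-found clique".
    for k in range(len(vertices), r - 1, -1):
        for cand in combinations(sorted(vertices), k):
            s = set(cand)
            if is_hyperclique(s, edges, r) and not any(s < c for c in cliques):
                cliques.append(s)
    return cliques
```

For example, if all four 3-subsets of {1, 2, 3, 4} are hyperedges, the single maximal hyperclique is {1, 2, 3, 4}; with only {1, 2, 3} and {2, 3, 4} as hyperedges, those two sets are themselves the maximal hypercliques.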
Title: Assessing inference to the best explanation posteriors for the estimation of economic agent-based models
Authors: Francesco De Pretis, Aldo Glielmo, Jürgen Landes
DOI: 10.1016/j.ijar.2025.109388
International Journal of Approximate Reasoning, Vol. 180, Article 109388. Published 2025-02-20.

Abstract: Explanatory relationships between data and hypotheses have been suggested to play a role in the formation of posterior probabilities, a suggestion tested in a toy environment and supported by simulations by David H. Glass. Here we put forward a variety of inference to the best explanation (IBE) approaches for determining posterior probabilities by intertwining Bayesian and IBE reasoning. We then simulate their performance in estimating the parameters of the Brock and Hommes agent-based model for asset pricing in finance. We find that performance depends on circumstances and on the evaluation metric, but most of the time our suggested approaches outperform the Bayesian approach.
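One simple way the two ingredients could be intertwined is sketched below. The additive combination, the use of Good's explanatory-power measure log P(e|h)/P(e), and the mixing weight c are illustrative assumptions, not the paper's definitions:

```python
import math

def ibe_posterior_score(prior, likelihood, evidence_prob, c=0.1):
    """Blend Bayesian updating with an explanatory-power bonus (toy sketch).

    posterior = prior * P(e|h) / P(e); the bonus is Good's explanatory-power
    measure log(P(e|h) / P(e)), rewarding hypotheses that make the evidence
    more expected than it is unconditionally.
    """
    posterior = prior * likelihood / evidence_prob
    bonus = c * math.log(likelihood / evidence_prob)
    return posterior + bonus
```

A hypothesis whose likelihood exceeds the marginal evidence probability scores above its plain posterior, and below it otherwise, so ranking by this score can diverge from purely Bayesian ranking.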
Title: A dialectical formalisation of preferred subtheories reasoning under resource bounds
Authors: Kees van Berkel, Marcello D'Agostino, Sanjay Modgil
DOI: 10.1016/j.ijar.2025.109385
International Journal of Approximate Reasoning, Vol. 180, Article 109385. Published 2025-02-17.

Abstract: Dialectical Classical Argumentation (Dialectical Cl-Arg) has been shown to satisfy rationality postulates under resource bounds. In particular, the consistency and non-contamination postulates are satisfied despite dropping the assumption of logical omniscience and the consistency and subset-minimality checks on arguments' premises that standard approaches to Cl-Arg deploy. This paper studies Dialectical Cl-Arg's formalisation of Preferred Subtheories (PS) non-monotonic reasoning under resource bounds. The contribution is twofold. First, we establish soundness and completeness for Dialectical Cl-Arg's credulous consequence relation under the preferred semantics with respect to credulous PS consequences. This result paves the way for argument game proof theories and dialogues that establish membership of arguments in admissible (and so preferred) extensions, and hence the credulous PS consequences of a belief base. Second, we refine the non-standard characteristic function for Dialectical Cl-Arg and use the refined function to show soundness of Dialectical Cl-Arg consequences under the grounded semantics with respect to resource-bounded sceptical PS consequence. We provide a counterexample showing that completeness does not hold. However, we also show that the grounded consequences defined by Dialectical Cl-Arg strictly subsume those defined by standard Cl-Arg formalisations of PS, so that we recover the sceptical PS consequences one would intuitively expect to hold.
Title: An unsupervised feature extraction and fusion framework for multi-source data based on copula theory
Authors: Xiuwei Chen, Li Lai, Maokang Luo
DOI: 10.1016/j.ijar.2025.109384
International Journal of Approximate Reasoning, Vol. 180, Article 109384. Published 2025-02-13.

Abstract: With the development of big data technology, people increasingly face the challenge of dealing with massive amounts of multi-source or multi-sensor data, so extracting valuable information from such data becomes crucial. Information fusion techniques provide effective solutions for handling multi-source data and operate at three levels: data-level, feature-level, and decision-level fusion. Feature-level fusion combines features from multiple sources into a consolidated feature, enhancing information richness. This paper proposes an unsupervised feature extraction and fusion method for multi-source data based on the R-vine copula, denoted CF. The method starts by performing kernel density estimation to obtain each data source's marginal density and distribution. Next, a maximum spanning tree is used to select a vine structure for each attribute, and the corresponding copulas are chosen by maximum likelihood estimation and the AIC criterion. The joint probability density of each attribute across all information sources, obtained from the fitted vine structure and copulas, serves as the final fused feature. Finally, the proposed method is evaluated on eighteen simulated and six real datasets. The results indicate that, compared with several state-of-the-art fusion methods, CF significantly enhances the classification accuracy of popular classifiers such as KNN, SVM, and logistic regression.
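The Sklar-style construction behind such methods can be illustrated in miniature. The sketch below replaces the R-vine with a single Gaussian copula over all sources (a simplifying assumption; function and variable names are illustrative): KDE marginals, probability-integral transform, copula fit, then joint density as the fused feature.

```python
import numpy as np
from scipy import stats

def gaussian_copula_fused_feature(sources):
    """Fuse one attribute observed by several sources into a joint-density feature.

    sources: (n_samples, n_sources) array; column j holds source j's values.
    Simplified stand-in for an R-vine construction: KDE marginals plus one
    Gaussian copula tying all sources together.
    """
    X = np.asarray(sources, dtype=float)
    n, d = X.shape
    # 1. KDE estimates of each marginal density and CDF.
    kdes = [stats.gaussian_kde(X[:, j]) for j in range(d)]
    U = np.column_stack([
        [kdes[j].integrate_box_1d(-np.inf, x) for x in X[:, j]] for j in range(d)
    ])
    U = np.clip(U, 1e-6, 1 - 1e-6)
    # 2. Gaussian copula: correlation matrix of the normal scores.
    Z = stats.norm.ppf(U)
    R = np.corrcoef(Z, rowvar=False)
    # 3. Joint density = copula density x product of marginals (Sklar's theorem).
    cop = stats.multivariate_normal(mean=np.zeros(d), cov=R).pdf(Z)
    cop /= np.prod(stats.norm.pdf(Z), axis=1)
    marg = np.prod([kdes[j](X[:, j]) for j in range(d)], axis=0)
    return cop * marg  # one fused joint-density value per sample
```

The returned vector, one joint-density value per observation, is the "fused feature" that a downstream classifier such as KNN or SVM would consume.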
Title: Efficient parameter-free adaptive hashing for large-scale cross-modal retrieval
Authors: Bo Li, You Wu, Zhixin Li
DOI: 10.1016/j.ijar.2025.109383
International Journal of Approximate Reasoning, Vol. 180, Article 109383. Published 2025-02-10.

Abstract: Deep cross-modal hashing retrieval (DCMHR) aims to explore the connections between multimedia data, but most methods apply to only a few modalities and cannot be extended to other scenarios. Many methods also fail to train the classification loss and the hash loss jointly, which reduces the robustness and effectiveness of the model. To address these two issues, this paper designs Efficient Parameter-free Adaptive Hashing for Large-Scale Cross-Modal Retrieval (EPAH), which adaptively extracts modality variations and collects the corresponding semantics of cross-modal features into the generated hash codes. EPAH learns cross-modal data without hyper-parameters, weight vectors, auxiliary matrices, or similar structures, yet its parameter-free adaptive hashing still supports multi-modal retrieval tasks. Specifically, the proposal is a two-stage strategy, feature extraction followed by unified training, with both stages using parameter-free adaptive learning. The article also simplifies the model training settings, selects a more stable gradient descent method, and designs a unified hash-code generation function. Comprehensive experiments show that EPAH outperforms state-of-the-art DCMHR methods. In addition, analyses of out-of-modality extension and parameter anti-interference demonstrate its generalization. The code is available at https://github.com/libo-02/EPAH.
Title: A logical formalisation of a hypothesis in weighted abduction: Towards user-feedback dialogues
Authors: Shota Motoura, Ayako Hoshino, Itaru Hosomi, Kunihiko Sadamasa
DOI: 10.1016/j.ijar.2025.109382
International Journal of Approximate Reasoning, Vol. 179, Article 109382. Published 2025-02-07.

Abstract: Weighted abduction computes hypotheses that explain input observations. A weighted-abduction reasoner first generates possible hypotheses and then selects the most plausible one. Since the reasoner's plausibility evaluation function is controlled by parameters called weights, it can output the hypothesis that is most plausible for a specific application by using application-specific weights. This versatility makes weighted abduction applicable to domains ranging from plant operation to cybersecurity and discourse analysis. However, predetermined application-specific weights do not fit every case of an application, so the hypothesis selected by the reasoner does not necessarily seem the most plausible to the user. To resolve this problem, this article proposes two types of user-feedback dialogue protocols, in which the user comments, positively, negatively, or neutrally, on properties of the hypotheses presented by the reasoner, and the reasoner regenerates hypotheses that satisfy the user's feedback. We then prove two properties required of user-feedback dialogue protocols: (i) our protocols necessarily terminate under certain reasonable conditions; and (ii) they converge on hypotheses that share exactly the properties shared by fixed target hypotheses, provided the user judges the positivity, negativity, or neutrality of each pointed-out property according to whether the target hypotheses have that property.
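As a minimal illustration of the weight-controlled plausibility evaluation described above (a toy cost model in the spirit of classical weighted abduction; the data structures and names are assumptions, not the authors' formalisation):

```python
def hypothesis_cost(hypothesis, weights):
    """Cost of a hypothesis = sum of weighted costs of its assumed literals.

    hypothesis: dict mapping an assumed literal to its base cost.
    weights: dict mapping literals to application-specific weights; literals
    without an entry default to weight 1.0. Lower cost = more plausible.
    """
    return sum(weights.get(lit, 1.0) * cost for lit, cost in hypothesis.items())

def most_plausible(hypotheses, weights):
    """Select the minimum-cost (most plausible) hypothesis."""
    return min(hypotheses, key=lambda h: hypothesis_cost(h, weights))
```

Changing the weights dict changes which hypothesis wins, which is exactly why predetermined application-specific weights can disagree with a particular user's judgment, motivating the paper's feedback protocols.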
Title: Trend-pattern unlimited fuzzy information granule-based LSTM model for long-term time-series forecasting
Authors: Yanan Jiang, Fusheng Yu, Yuqing Tang, Chenxi Ouyang, Fangyi Li
DOI: 10.1016/j.ijar.2025.109381
International Journal of Approximate Reasoning, Vol. 180, Article 109381. Published 2025-02-05.

Abstract: Trend fuzzy information granulation has shown promising results in long-term time-series forecasting and has attracted increasing attention. In forecasting models based on trend fuzzy information granulation, the representation of trend granules plays a crucial role, so research has focused on developing trend granules and trend granular time series that effectively represent trend information and improve forecasting performance. However, existing trend fuzzy information granulation methods presuppose the trend pattern of granules (e.g., linear or specific nonlinear trends). Fuzzy information granules with presupposed trend patterns have limited expressive ability and struggle to capture complex nonlinear trends and temporal dependencies, limiting their forecasting performance. To address this issue, this paper proposes a novel kind of trend fuzzy information granule, named Trend-Pattern Unlimited Fuzzy Information Granules (TPUFIGs), constructed by a recurrent autoencoder with automatic feature learning and nonlinear modeling capabilities. Compared with existing trend fuzzy information granules, TPUFIGs better characterize potential trend patterns and temporal dependencies and exhibit stronger robustness. With TPUFIGs and a Long Short-Term Memory (LSTM) neural network, we design the TPUFIG-LSTM forecasting model, which effectively alleviates error accumulation and improves forecasting capability. Experimental results on six heterogeneous time-series datasets demonstrate the superior performance of the proposed model. By combining deep learning and granular computing, this fuzzy information granulation method characterizes intricate dynamic features of time series more effectively, providing a novel solution for long-term time-series forecasting with improved accuracy and generalization.
Title: The multi-criteria ranking method for criterion-oriented regret three-way decision
Authors: Weidong Wan, Kai Zhang, Ligang Zhou
DOI: 10.1016/j.ijar.2025.109374
International Journal of Approximate Reasoning, Vol. 179, Article 109374. Published 2025-01-31.

Abstract: The criterion-oriented three-way decision has recently garnered widespread attention because it considers decision-makers' preferences in multi-criteria decision-making problems. However, because some criterion-oriented three-way decision models do not accurately account for the deviation between an object's evaluation value and the criterion preference value when calculating the loss function, some objects suffer from ranking failure. To eliminate this weakness, this paper interprets this deviation as the decision-maker's regret, combines regret theory, proposes a new loss function, and constructs a new criterion-oriented regret three-way decision model. First, an approach for determining the loss function is introduced, integrating the decision-maker's basic demands with regret theory. Second, thresholds are derived by combining the decision-maker's basic demands with two optimization models. Third, the k-means++ clustering algorithm is employed to derive the objects' fuzzy depictions. The paper then proposes a practical method for calculating conditional probabilities by combining the concept of closeness with the objects' fuzzy depictions. Next, a multi-criteria ranking method founded on criterion-oriented regret three-way decision is proposed. Finally, the applicability of the new ranking method is verified through parametric and comparative analyses on a computer hardware selection problem. Additionally, the proposed method is further validated on datasets with known ranking results and datasets with ordered classifications.
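For concreteness, the classical decision-theoretic thresholds that criterion-oriented three-way decision models build on can be computed from a loss function as follows. These are the standard decision-theoretic rough set formulas; the paper's regret-adjusted losses would replace the plain losses in this sketch:

```python
def three_way_thresholds(l_pp, l_bp, l_np, l_nn, l_bn, l_pn):
    """Thresholds (alpha, beta) of a classical three-way decision model.

    l_xy = loss of action x (P=accept, B=defer/boundary, N=reject) when the
    object truly is (y=P) or is not (y=N) in the target set. Assumes the
    usual ordering l_pp <= l_bp < l_np and l_nn <= l_bn < l_pn.
    """
    alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
    beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
    return alpha, beta

def decide(prob, alpha, beta):
    """Accept / defer / reject given Pr(object belongs to the target set)."""
    if prob >= alpha:
        return "accept"
    if prob <= beta:
        return "reject"
    return "defer"
```

With symmetric losses (0, 2, 10) for correct, deferred, and wrong actions, the thresholds come out to alpha = 0.8 and beta = 0.2, splitting the probability scale into the three decision regions.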
Title: Global sensitivity analysis of uncertain parameters in Bayesian networks
Authors: Rafael Ballester-Ripoll, Manuele Leonelli
DOI: 10.1016/j.ijar.2025.109368
International Journal of Approximate Reasoning, Vol. 180, Article 109368. Published 2025-01-31.

Abstract: Traditionally, sensitivity analysis of a Bayesian network studies the impact of modifying the entries of its conditional probability tables one at a time (OAT). However, this approach fails to give a comprehensive account of each input's relevance, since simultaneous perturbations of two or more parameters often entail higher-order effects that an OAT analysis cannot capture. We propose instead to conduct global variance-based sensitivity analysis, whereby n parameters are viewed as uncertain at once and their importance is assessed jointly. Our method encodes the uncertainties as n additional variables of the network. To prevent the curse of dimensionality while adding these dimensions, we use low-rank tensor decomposition to break the new potentials into smaller factors. Last, we apply the method of Sobol to the resulting network to obtain n global sensitivity indices, one per parameter of interest. Using a benchmark array of both expert-elicited and learned Bayesian networks, we demonstrate that the Sobol indices can differ significantly from the OAT indices, revealing the true influence of uncertain parameters and their interactions.
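The contrast between OAT and variance-based indices can be illustrated with a generic Monte Carlo estimator. This is a Saltelli/Jansen pick-freeze sketch on a toy function with independent uniform inputs, not the paper's tensor-decomposition construction:

```python
import numpy as np

def sobol_indices(f, d, n=20000, seed=0):
    """First-order (S1) and total-order (ST) Sobol indices by Monte Carlo.

    Pick-freeze scheme with independent U(0,1) inputs: S1 uses the
    Saltelli (2010) estimator, ST the Jansen estimator.
    """
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # resample only coordinate i
        fABi = f(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var        # first-order effect
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total effect
    return S1, ST

# Toy model with a pure interaction: perturbing x0 or x1 alone around the
# centre (as OAT does) barely moves f, yet jointly they drive most of the
# variance, so S1[0] and S1[1] are near zero while ST[0] and ST[1] are large.
f = lambda X: (X[:, 0] - 0.5) * (X[:, 1] - 0.5) + 0.1 * X[:, 2]
S1, ST = sobol_indices(f, 3)
```

The gap between S1 and ST for x0 and x1 is exactly the higher-order effect that a one-at-a-time analysis misses.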