{"title":"Cut-elimination theorems for some logics associated with double Stone algebras","authors":"Martín Figallo, Juan S. Slagter","doi":"10.1016/j.ijar.2025.109526","DOIUrl":"10.1016/j.ijar.2025.109526","url":null,"abstract":"<div><div>A <em>double Stone algebra</em> is a Stone algebra whose dual lattice is also a Stone algebra. Logics that may be associated with double Stone algebras are based on bounded distributive lattices which are endowed with two negations: a Heyting negation (the pseudocomplement) and a Brouwer negation (the dual pseudocomplement) possibly satisfying some constraints. Different authors have studied the order-preserving logic associated with double Stone algebras. Recently, the four-valued character of this logic was exploited by providing a rough set semantics for it.</div><div>In this paper, we explore the proof-theoretical aspect of two logics associated with double Stone algebras, namely, the truth-preserving and the order-preserving logic, respectively. We provide sequent systems sound and complete for these logics and prove the cut-elimination theorem for both systems.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109526"},"PeriodicalIF":3.2,"publicationDate":"2025-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144597360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stable structure learning with HC-Stable and Tabu-Stable algorithms","authors":"Neville K. Kitson, Anthony C. Constantinou","doi":"10.1016/j.ijar.2025.109522","DOIUrl":"10.1016/j.ijar.2025.109522","url":null,"abstract":"<div><div>Many Bayesian Network structure learning algorithms are unstable, with the learned graph sensitive to arbitrary dataset artifacts, such as the ordering of columns (i.e., variable order). PC-Stable <span><span>[1]</span></span> attempts to address this issue for the widely-used PC algorithm, prompting researchers to use the ‘stable’ version instead. However, this problem seems to have been overlooked for score-based algorithms. In this study, we show that some widely-used score-based algorithms, as well as hybrid and constraint-based algorithms, including PC-Stable, suffer from the same issue. We propose a novel solution for score-based greedy hill-climbing that eliminates instability by determining a stable node order, leading to consistent results regardless of variable ordering. The new Tabu-Stable algorithms achieve the highest overall performance in terms of mean BIC score, log-likelihood, and structural accuracy across networks. These results highlight the importance of addressing instability in structure learning and provide a robust and practical approach for future applications. This paper extends the scope and impact of our previous work presented at Probabilistic Graphical Models 2024 <span><span>[2]</span></span> by incorporating continuous variables, implementing new stable orders that improve performance further, and demonstrating that the approach remains effective in the presence of sampling noise. The implementations, along with usage instructions, are freely available on GitHub at <span><span>https://github.com/causal-iq/discovery</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109522"},"PeriodicalIF":3.2,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144588287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Chain graphs structure learning given local background knowledge","authors":"Shujing Yang , Fuyuan Cao , Kui Yu , Jiye Liang","doi":"10.1016/j.ijar.2025.109524","DOIUrl":"10.1016/j.ijar.2025.109524","url":null,"abstract":"<div><div>Chain graphs structure learning aims to identify and infer causal relations and symmetric association relations between variables in data. However, existing chain graphs structure learning algorithms cannot uniquely determine the causal relations from data among some variables due to the independence of these variables corresponding to multiple structures, making them only learn Markov equivalence classes of chain graphs. To alleviate this issue, we propose a <strong>C</strong>hain <strong>G</strong>raphs structure <strong>L</strong>earning algorithm <strong>G</strong>iven local background <strong>K</strong>nowledge (CGLGK). CGLGK initially learns the adjacencies and spouses of variables, constructs the skeleton of chain graphs using the adjacencies, corrects the connections between variables in the skeleton guided by local background knowledge, and orients the edges using the adjacencies and spouses to obtain the Markov equivalence classes of chain graphs. Next, CGLGK fuses local background knowledge with the learned Markov equivalence classes to obtain new knowledge. Finally, it utilizes the local valid orientation rule to orient edges within the Markov equivalence classes based on the updated knowledge, resulting in the final chain graphs structure. Meanwhile, we conducted the theoretical analysis to prove the correctness of CGLGK, and its effectiveness is verified by comparison with the classical and state-of-the-art algorithms on synthetic and real data.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109524"},"PeriodicalIF":3.2,"publicationDate":"2025-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144572380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Divide and conquer for causal computation","authors":"Anna Rodum Bjøru , Rafael Cabañas , Helge Langseth , Antonio Salmerón","doi":"10.1016/j.ijar.2025.109520","DOIUrl":"10.1016/j.ijar.2025.109520","url":null,"abstract":"<div><div>Structural causal models are a powerful framework for causal and counterfactual inference, extending the capabilities of traditional Bayesian networks. These models comprise endogenous and exogenous variables, where the exogenous variables frequently lack clear semantic interpretation. Exogenous variables are typically unobservable, rendering certain counterfactual queries unidentifiable. In such cases, standard inference algorithms for Bayesian networks are insufficient. Recent methods attempt to bound unidentifiable queries through imprecise estimation of exogenous probabilities. However, these methods become computationally infeasible as the cardinality of the exogenous variables increases, thereby constraining the complexity of applicable models. In this paper we study a divide-and-conquer approach that decomposes a general causal model into a set of submodels with low-cardinality exogenous variables, enabling exact calculation of any query within these submodels. By aggregating results from the submodels, efficient approximations of bounds for queries in the original model are obtained. Our proposal is able to handle models with variables of any cardinality assuming that there are no unobserved confounders. We show that the method is theoretically robust, and experimental results demonstrate that it achieves more accurate bounds with lower computational costs compared to existing techniques.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109520"},"PeriodicalIF":3.2,"publicationDate":"2025-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144572379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FDACNet: Enhancing time-series classification with fuzzy feature and integrated self-attention and temporal convolution","authors":"Xiuwei Chen, Li Lai, Maokang Luo","doi":"10.1016/j.ijar.2025.109521","DOIUrl":"10.1016/j.ijar.2025.109521","url":null,"abstract":"<div><div>Time-series classification is crucial in time series analysis and holds significant importance in real-world scenarios. Applying self-attention and temporal convolution techniques is paramount when dealing with time series data. The self-attention mechanism enables the capture of correlations between different time steps in a sequence, thereby facilitating the handling of long-term dependencies. Meanwhile, temporal convolution is designed explicitly for processing time series data, effectively capturing temporal dependencies through convolutional layers. The integration of the two technologies plays a pivotal role in time series analysis, enabling accurate temporal classification. This paper proposes a novel net with fuzzy features and integrated self-attention and temporal convolution, denoted as FDACNet. The proposed net introduces two key components: FD-FE for fuzzy dominated feature extraction, and ATCmix for integrating self-attention and temporal convolution. FD-FE captures trend information by defining gradient relationship between time points within a time series sample. On the other hand, ATCmix combines convolution and self-attention to reduce parameters and enhance efficiency in handling time-series data. Finally, the proposed method is evaluated on twenty datasets and compared against twelve other state-of-the-art approaches. Experimental results demonstrate the superior classification accuracy of the proposed model, showcasing a 5.2% and 7.1% enhancement in average accuracy compared to the state-of-the-art convolution-based and transformer-based methods ModernTCN and iTransformer.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109521"},"PeriodicalIF":3.2,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144534999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PAC learning of concept inclusions for ontology-mediated query answering","authors":"Sergei Obiedkov , Barış Sertkaya","doi":"10.1016/j.ijar.2025.109523","DOIUrl":"10.1016/j.ijar.2025.109523","url":null,"abstract":"<div><div>We present a probably approximately correct algorithm for learning the terminological part of a description-logic knowledge base via subsumption queries. The axioms we learn are concept inclusions between conjunctions of concepts from a specified set of concept descriptions. By varying the distribution of queries posed to the oracle, we adapt the algorithm to improve the recall when using the resulting TBox for ontology-mediated query answering. Experimental evaluation on OWL 2 EL ontologies suggests that our approach helps significantly improve recall while maintaining a high precision of query answering.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109523"},"PeriodicalIF":3.2,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144579276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Double Boolean algebras: Constructions, sub-structures and morphisms","authors":"Gael Tenkeu Kembang , Yannick Léa Tenkeu Jeufack , Etienne Romuald Temgoua Alomo , Leonard Kwuida","doi":"10.1016/j.ijar.2025.109519","DOIUrl":"10.1016/j.ijar.2025.109519","url":null,"abstract":"<div><div>Double Boolean algebras are algebras <span><math><munder><mrow><mi>D</mi></mrow><mo>_</mo></munder><mo>:</mo><mo>=</mo><mo>(</mo><mi>D</mi><mo>;</mo><mo>⊓</mo><mo>,</mo><mo>⊔</mo><mo>,</mo><mo>¬</mo><mo>,</mo><mo>⌟</mo><mo>,</mo><mo>⊥</mo><mo>,</mo><mo>⊤</mo><mo>)</mo></math></span> of type <span><math><mo>(</mo><mn>2</mn><mo>,</mo><mn>2</mn><mo>,</mo><mn>1</mn><mo>,</mo><mn>1</mn><mo>,</mo><mn>0</mn><mo>,</mo><mn>0</mn><mo>)</mo></math></span> introduced by Rudolf Wille to capture the equational theory of the algebra of protoconcepts. Every double Boolean algebra <span><math><munder><mrow><mi>D</mi></mrow><mo>_</mo></munder></math></span> contains two Boolean algebras: <span><math><msub><mrow><munder><mrow><mi>D</mi></mrow><mo>_</mo></munder></mrow><mrow><mo>⊓</mo></mrow></msub></math></span> and <span><math><msub><mrow><munder><mrow><mi>D</mi></mrow><mo>_</mo></munder></mrow><mrow><mo>⊔</mo></mrow></msub></math></span>. Three main goals are achieved in this paper. First we characterize sub-algebras of a double Boolean algebra <span><math><munder><mrow><mi>D</mi></mrow><mo>_</mo></munder></math></span> as join sets of sub-algebras of the Boolean algebras <span><math><msub><mrow><munder><mrow><mi>D</mi></mrow><mo>_</mo></munder></mrow><mrow><mo>⊓</mo></mrow></msub></math></span> and <span><math><msub><mrow><munder><mrow><mi>D</mi></mrow><mo>_</mo></munder></mrow><mrow><mo>⊔</mo></mrow></msub></math></span> and a subset of <span><math><mi>D</mi><mo>﹨</mo><msub><mrow><mi>D</mi></mrow><mrow><mi>p</mi></mrow></msub></math></span> (where <span><math><msub><mrow><mi>D</mi></mrow><mrow><mi>p</mi></mrow></msub><mo>=</mo><msub><mrow><mi>D</mi></mrow><mrow><mo>⊓</mo></mrow></msub><mo>∪</mo><msub><mrow><mi>D</mi></mrow><mrow><mo>⊔</mo></mrow></msub></math></span>) satisfying certain conditions. Second, we characterize homomorphisms between two double Boolean algebras <span><math><munder><mrow><mi>D</mi></mrow><mo>_</mo></munder></math></span> and <span><math><munder><mrow><mi>E</mi></mrow><mo>_</mo></munder></math></span> by homomorphisms between the Boolean algebras <span><math><msub><mrow><munder><mrow><mi>D</mi></mrow><mo>_</mo></munder></mrow><mrow><mo>⊓</mo></mrow></msub></math></span> and <span><math><msub><mrow><munder><mrow><mi>E</mi></mrow><mo>_</mo></munder></mrow><mrow><mo>⊓</mo></mrow></msub></math></span>, <span><math><msub><mrow><munder><mrow><mi>D</mi></mrow><mo>_</mo></munder></mrow><mrow><mo>⊔</mo></mrow></msub></math></span> and <span><math><msub><mrow><munder><mrow><mi>E</mi></mrow><mo>_</mo></munder></mrow><mrow><mo>⊔</mo></mrow></msub></math></span> and maps between <span><math><mi>D</mi><mo>﹨</mo><msub><mrow><mi>D</mi></mrow><mrow><mi>p</mi></mrow></msub></math></span> and <em>E</em> satisfying certain conditions. 
Third, we give some tools to construct some classes of pure double Boolean algebras.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109519"},"PeriodicalIF":3.2,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144524340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
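The notation D̲_⊓, D̲_⊔ and D_p in this abstract follows Wille's work on protoconcept algebras. The recap below states the standard definitions for orientation only; it is background from that literature, not a result of the paper.

```latex
% Standard background definitions (Wille's protoconcept-algebra setting), not results of the paper:
\[
  D_{\sqcap} = \{\, x \in D : x \sqcap x = x \,\}, \qquad
  D_{\sqcup} = \{\, x \in D : x \sqcup x = x \,\}, \qquad
  D_{p} = D_{\sqcap} \cup D_{\sqcup}.
\]
% D_\sqcap and D_\sqcup carry Boolean-algebra structures (the two Boolean algebras
% mentioned in the abstract), and a double Boolean algebra is called pure when D = D_p.
```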
{"title":"Joining copulas of extreme implicit dependence copulas","authors":"Noppawit Yanpaisan, Tippawan Santiwipanont, Songkiat Sumetkijakan","doi":"10.1016/j.ijar.2025.109518","DOIUrl":"10.1016/j.ijar.2025.109518","url":null,"abstract":"<div><div>Copulas of uniform-<span><math><mo>(</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>)</mo></math></span> random variables <em>U</em> and <em>V</em> satisfying <span><math><mi>α</mi><mo>(</mo><mi>U</mi><mo>)</mo><mo>=</mo><mi>β</mi><mo>(</mo><mi>V</mi><mo>)</mo></math></span> almost surely for some measure-preserving transformations <em>α</em> and <em>β</em> are called <em>implicit dependence copulas</em>. They were recently shown to coincide with the generalized Markov products of <span><math><msub><mrow><mi>C</mi></mrow><mrow><mi>e</mi><mo>,</mo><mi>α</mi></mrow></msub></math></span> and <span><math><msub><mrow><mi>C</mi></mrow><mrow><mi>β</mi><mo>,</mo><mi>e</mi></mrow></msub></math></span> with respect to a class of joining copulas <span><math><msub><mrow><mo>(</mo><msub><mrow><mi>A</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>)</mo></mrow><mrow><mi>t</mi><mo>∈</mo><mo>[</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>]</mo></mrow></msub></math></span>. If <span><math><msub><mrow><mi>C</mi></mrow><mrow><mi>e</mi><mo>,</mo><mi>α</mi></mrow></msub></math></span> and <span><math><msub><mrow><mi>C</mi></mrow><mrow><mi>β</mi><mo>,</mo><mi>e</mi></mrow></msub></math></span> are not two-sided invertible, then most implicit dependence copulas, especially when <span><math><msub><mrow><mi>A</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>≡</mo><mi>Π</mi></math></span>, are not extreme points in the class of copulas. For a given pair of left and right invertible copulas <span><math><msub><mrow><mi>C</mi></mrow><mrow><mi>e</mi><mo>,</mo><mi>α</mi></mrow></msub></math></span> and <span><math><msub><mrow><mi>C</mi></mrow><mrow><mi>β</mi><mo>,</mo><mi>e</mi></mrow></msub></math></span>, we characterize extreme implicit dependence copulas in terms of the extremality of the joining copulas in the class of subcopulas on a domain involving the invertible copulas. This result is then extended to the multivariate case.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109518"},"PeriodicalIF":3.2,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144490346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cauchy Graph Convolutional Networks","authors":"Taurai Muvunza , Yang Li , Ercan Engin Kuruoglu","doi":"10.1016/j.ijar.2025.109517","DOIUrl":"10.1016/j.ijar.2025.109517","url":null,"abstract":"<div><div>A common approach to learning Bayesian networks involves specifying an appropriately chosen family of parameterized probability density such as Gaussian. However, the distribution of most real-life data is leptokurtic and may not necessarily be best described by a Gaussian process. In this work we introduce Cauchy Graphical Models (CGM), a class of multivariate Cauchy densities that can be represented as directed acyclic graphs with arbitrary network topologies, the edges of which encode linear dependencies between random variables. We develop CGLearn, the resultant algorithm for learning the structure and Cauchy parameters based on Minimum Dispersion Criterion (MDC). Experiments using simulated datasets on benchmark network topologies demonstrate the efficacy of our approach when compared to Gaussian Graphical Models (GGM). Most Graph Convolutional Neural Networks (GCN) process input graphs as ground-truth representations of node relationships, yet these graphs are constructed based on modeling assumptions and noisy data and their use may lead to suboptimal performance on downstream prediction tasks. We propose Cauchy GCN which leverages CGM to infer graph topology that depicts latent relationships between nodes. We evaluate the effectiveness and quality of the structural graphs learned by CGM, and demonstrate that Cauchy-GCN achieves superior performance compared to widely used graph construction methods.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109517"},"PeriodicalIF":3.2,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144472112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic concept reduction methods based on local information","authors":"Mei-Zheng Li , Lei-Jun Li , Ju-Sheng Mi , Qian Hu","doi":"10.1016/j.ijar.2025.109514","DOIUrl":"10.1016/j.ijar.2025.109514","url":null,"abstract":"<div><div>Knowledge reduction is one of the core research issues in formal concept analysis. As a new technique of knowledge reduction, concept reduction has received increasing attention recently. One typical method of calculating concept reducts is based on representative concept matrix (RC-matrix, for short), which can obtain all concept reducts. However, it is confronted with the following challenges: (1) before the construction of the RC-matrix, all concepts of the formal context need to be calculated, which is both time and space consuming; (2) there is a lot of redundant information in the constructed RC-matrix, which is not helpful to calculate the concept reducts; (3) when the data changes dynamically, the concept reducts need to be calculated for scratch. To address these issues, dynamic concept reduction methods based on local information are proposed in this paper. Firstly, the characteristics of the minimal elements (with respect to set inclusion) in the RC-matrix are analyzed, and all the minimal elements are directly labeled from the formal context; secondly, the advantage of local information is taken to construct each minimal elements of the RC-matrix, from which all the concept reducts can be obtained; besides, a new simplified version of RC-matrix, named as Type-I minimal RC-matrix, is further constructed to compute one concept reduct; and finally, when data dynamically changes, the connections between concept reducts of the original formal context and those of the new one are analyzed, consequently, two dynamic concept reduction algorithms are proposed.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109514"},"PeriodicalIF":3.2,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144472111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}