Data Mining and Knowledge Discovery: Latest Articles

Explainable decomposition of nested dense subgraphs
IF 4.8 | CAS Zone 3 | Computer Science
Data Mining and Knowledge Discovery | Pub Date: 2024-07-10 | DOI: 10.1007/s10618-024-01053-8
Nikolaj Tatti
{"title":"Explainable decomposition of nested dense subgraphs","authors":"Nikolaj Tatti","doi":"10.1007/s10618-024-01053-8","DOIUrl":"https://doi.org/10.1007/s10618-024-01053-8","url":null,"abstract":"<p>Discovering dense regions in a graph is a popular tool for analyzing graphs. While useful, analyzing such decompositions may be difficult without additional information. Fortunately, many real-world networks have additional information, namely node labels. In this paper we focus on finding decompositions that have dense inner subgraphs and that can be explained using labels. More formally, we construct a binary tree <i>T</i> with labels on non-leaves that we use to partition the nodes in the input graph. To measure the quality of the tree, we model the edges in the shell and the cross edges to the inner shells as a Bernoulli variable. We reward the decompositions with the dense regions by requiring that the model parameters are non-increasing. We show that our problem is <b>NP</b>-hard, even inapproximable if we constrain the size of the tree. Consequently, we propose a greedy algorithm that iteratively finds the best split and applies it to the current tree. We demonstrate how we can efficiently compute the best split by maintaining certain counters. Our experiments show that our algorithm can process networks with over million edges in few minutes. Moreover, we show that the algorithm can find the ground truth in synthetic data and produces interpretable decompositions when applied to real world networks.</p>","PeriodicalId":55183,"journal":{"name":"Data Mining and Knowledge Discovery","volume":"18 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141588412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
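A minimal sketch of the density condition behind this decomposition: given a nesting of node shells (innermost first), estimate each shell's Bernoulli edge parameter from its internal pairs plus the cross pairs to inner shells, and check that the parameters are non-increasing. The toy graph, the shell construction, and the plain-Python representation are assumptions for illustration; the paper's label-driven greedy split and counter-based speedup are not reproduced.

```python
import itertools

def shell_densities(shells, edges):
    """Bernoulli edge-parameter estimate per shell.

    shells: list of node sets, ordered from innermost to outermost.
    edges:  set of frozensets {u, v} for an undirected graph.
    A shell's parameter is estimated over its internal node pairs plus
    the cross pairs between the shell and all inner shells.
    """
    inner, params = set(), []
    for shell in shells:
        pairs = list(itertools.combinations(shell, 2))
        pairs += [(u, v) for u in shell for v in inner]
        observed = sum(frozenset(p) in edges for p in pairs)
        params.append(observed / max(len(pairs), 1))
        inner |= shell
    return params

# toy graph: dense core {0,1,2}, a medium shell {3}, a sparse outer shell {4}
edges = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2), (3, 0), (3, 1), (4, 2)]}
p = shell_densities([{0, 1, 2}, {3}, {4}], edges)
print(p, "non-increasing:", all(a >= b for a, b in zip(p, p[1:])))
```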
Negative-sample-free knowledge graph embedding
IF 4.8 | CAS Zone 3 | Computer Science
Data Mining and Knowledge Discovery | Pub Date: 2024-07-09 | DOI: 10.1007/s10618-024-01052-9
Adil Bahaj, Mounir Ghogho
{"title":"Negative-sample-free knowledge graph embedding","authors":"Adil Bahaj, Mounir Ghogho","doi":"10.1007/s10618-024-01052-9","DOIUrl":"https://doi.org/10.1007/s10618-024-01052-9","url":null,"abstract":"<p>Recently, knowledge graphs (KGs) have been shown to benefit many machine learning applications in multiple domains (e.g. self-driving, agriculture, bio-medicine, recommender systems, etc.). However, KGs suffer from incompleteness, which motivates the task of KG completion which consists of inferring new (unobserved) links between existing entities based on observed links. This task is achieved using either a probabilistic, rule-based, or embedding-based approach. The latter has been shown to consistently outperform the former approaches. It however relies on negative sampling, which supposes that every observed link is “true” and that every unobserved link is “false”. Negative sampling increases the computation complexity of the learning process and introduces noise in the learning. We propose NSF-KGE, a framework for KG embedding that does not require negative sampling, yet achieves performance comparable to that of the negative sampling-based approach. NSF-KGE employs objectives from the non-contrastive self-supervised literature to learn representations that are invariant to relation transformations (e.g. translation, scaling, rotation etc) while avoiding representation collapse.</p>","PeriodicalId":55183,"journal":{"name":"Data Mining and Knowledge Discovery","volume":"14 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141570090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
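The abstract names non-contrastive self-supervised objectives without spelling out the loss, so the sketch below substitutes a VICReg-style invariance/variance/covariance criterion (an assumption) applied to translation-style relation transformations: the transformed head embedding is pulled toward the tail embedding while collapse is penalized, with no negative samples involved.

```python
import numpy as np

def non_contrastive_loss(z_a, z_b, inv_w=1.0, var_w=1.0, cov_w=0.04, eps=1e-4):
    """VICReg-style sketch: invariance + variance hinge + covariance penalty."""
    inv = np.mean((z_a - z_b) ** 2)                              # pull pairs together
    var = sum(np.mean(np.maximum(0.0, 1.0 - np.sqrt(z.var(axis=0) + eps)))
              for z in (z_a, z_b))                               # keep dimensions alive
    def off_diag(z):
        c = np.cov(z, rowvar=False)
        return ((c ** 2).sum() - (np.diag(c) ** 2).sum()) / z.shape[1]
    cov = off_diag(z_a) + off_diag(z_b)                          # decorrelate dimensions
    return inv_w * inv + var_w * var + cov_w * cov

rng = np.random.default_rng(0)
n_ent, n_rel, dim, batch = 100, 5, 16, 32
ent = rng.normal(size=(n_ent, dim))
rel = rng.normal(size=(n_rel, dim))        # translation-style relation vectors (assumption)
h = rng.integers(0, n_ent, batch)
r = rng.integers(0, n_rel, batch)
t = rng.integers(0, n_ent, batch)
print(round(non_contrastive_loss(ent[h] + rel[r], ent[t]), 3))
```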
Knowledge graph embedding closed under composition
IF 4.8 | CAS Zone 3 | Computer Science
Data Mining and Knowledge Discovery | Pub Date: 2024-07-04 | DOI: 10.1007/s10618-024-01050-x
Zhuoxun Zheng, Baifan Zhou, Hui Yang, Zhipeng Tan, Zequn Sun, Chunnong Li, Arild Waaler, Evgeny Kharlamov, Ahmet Soylu
{"title":"Knowledge graph embedding closed under composition","authors":"Zhuoxun Zheng, Baifan Zhou, Hui Yang, Zhipeng Tan, Zequn Sun, Chunnong Li, Arild Waaler, Evgeny Kharlamov, Ahmet Soylu","doi":"10.1007/s10618-024-01050-x","DOIUrl":"https://doi.org/10.1007/s10618-024-01050-x","url":null,"abstract":"<p>Knowledge Graph Embedding (KGE) has attracted increasing attention. Relation patterns, such as symmetry and inversion, have received considerable focus. Among them, composition patterns are particularly important, as they involve nearly all relations in KGs. However, prior KGE approaches often consider relations to be compositional only if they are well-represented in the training data. Consequently, it can lead to performance degradation, especially for under-represented composition patterns. To this end, we propose HolmE, a general form of KGE with its relation embedding space closed under composition, namely that the composition of any two given relation embeddings remains within the embedding space. This property ensures that every relation embedding can compose, or be composed by other relation embeddings. It enhances HolmE’s capability to model under-represented (also called long-tail) composition patterns with limited learning instances. To our best knowledge, our work is pioneering in discussing KGE with this property of being closed under composition. We provide detailed theoretical proof and extensive experiments to demonstrate the notable advantages of HolmE in modelling composition patterns, particularly for long-tail patterns. Our results also highlight HolmE’s effectiveness in extrapolating to unseen relations through composition and its state-of-the-art performance on benchmark datasets.</p>","PeriodicalId":55183,"journal":{"name":"Data Mining and Knowledge Discovery","volume":"35 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141551621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
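HolmE's exact relation parameterization is not given in the abstract, so the sketch below illustrates the closure property with element-wise unit-modulus complex rotations (a RotatE-style assumption): the element-wise product of any two relation embeddings is again unit-modulus, i.e. the composed relation stays inside the same embedding space and acts like applying the two relations in sequence.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8

def random_rotation_relation():
    """Relation embedding as element-wise unit-modulus complex phases."""
    return np.exp(1j * rng.uniform(0, 2 * np.pi, dim))

def apply_relation(entity, relation):
    return entity * relation              # element-wise complex rotation

r1, r2 = random_rotation_relation(), random_rotation_relation()
composed = r1 * r2                        # composition of two relations

# closure: the composed relation is still unit-modulus, hence a valid relation
print(np.allclose(np.abs(composed), 1.0))

# applying r1 then r2 to any entity equals applying the composed relation once
e = rng.normal(size=dim) + 1j * rng.normal(size=dim)
print(np.allclose(apply_relation(apply_relation(e, r1), r2),
                  apply_relation(e, composed)))
```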
Towards effective urban region-of-interest demand modeling via graph representation learning
IF 4.8 | CAS Zone 3 | Computer Science
Data Mining and Knowledge Discovery | Pub Date: 2024-07-03 | DOI: 10.1007/s10618-024-01049-4
Pu Wang, Jingya Sun, Wei Chen, Lei Zhao
{"title":"Towards effective urban region-of-interest demand modeling via graph representation learning","authors":"Pu Wang, Jingya Sun, Wei Chen, Lei Zhao","doi":"10.1007/s10618-024-01049-4","DOIUrl":"https://doi.org/10.1007/s10618-024-01049-4","url":null,"abstract":"<p>Identifying the region’s functionalities and what the specific Point-of-Interest (POI) needs is essential for effective urban planning. However, due to the diversified and ambiguity nature of urban regions, there are still some significant challenges to be resolved in urban POI demand analysis. To this end, we propose a novel framework, in which Region-of-Interest Demand Modeling is enhanced through the graph representation learning, namely Variational Multi-graph Auto-encoding Fusion, aiming to effectively predict the ROI demand from both the POI level and category level. Specifically, we first divide the urban area into spatially differentiated neighborhood regions, extract the corresponding multi-dimensional natures, and then generate the Spatial-Attributed Region Graph (SARG). After that, we introduce an unsupervised multi-graph based variational auto-encoder to map regional profiles of SARG into latent space, and further retrieve the dynamic latent representations through probabilistic sampling and global fusing. Additionally, during the training process, a spatio-temporal constrained Bayesian algorithm is adopted to infer the destination POIs. Finally, extensive experiments are conducted on real-world dataset, which demonstrate our model significantly outperforms state-of-the-art baselines.</p>","PeriodicalId":55183,"journal":{"name":"Data Mining and Knowledge Discovery","volume":"73 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141515797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
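As a small illustration of the first step described above, the sketch below generates a toy Spatial-Attributed Region Graph: regions are nodes with attribute vectors (here random POI counts, an assumption) and edges connect each region to its k nearest neighbours by centroid distance. The variational multi-graph auto-encoder and the Bayesian inference step are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
n_regions, k = 20, 3

centroids = rng.uniform(0, 10, size=(n_regions, 2))              # (lon, lat) per region
attributes = rng.poisson(5, size=(n_regions, 4)).astype(float)   # toy POI counts per category

# k-nearest-neighbour spatial adjacency, symmetrized
dists = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)
adj = np.zeros((n_regions, n_regions))
for i in range(n_regions):
    for j in np.argsort(dists[i])[:k]:
        adj[i, j] = adj[j, i] = 1.0

print("SARG:", int(adj.sum() / 2), "edges,", attributes.shape[1], "attributes per region")
```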
RandomNet: clustering time series using untrained deep neural networks
IF 4.8 | CAS Zone 3 | Computer Science
Data Mining and Knowledge Discovery | Pub Date: 2024-06-22 | DOI: 10.1007/s10618-024-01048-5
Xiaosheng Li, Wenjie Xi, Jessica Lin
{"title":"Randomnet: clustering time series using untrained deep neural networks","authors":"Xiaosheng Li, Wenjie Xi, Jessica Lin","doi":"10.1007/s10618-024-01048-5","DOIUrl":"https://doi.org/10.1007/s10618-024-01048-5","url":null,"abstract":"<p>Neural networks are widely used in machine learning and data mining. Typically, these networks need to be trained, implying the adjustment of weights (parameters) within the network based on the input data. In this work, we propose a novel approach, RandomNet, that employs untrained deep neural networks to cluster time series. RandomNet uses different sets of random weights to extract diverse representations of time series and then ensembles the clustering relationships derived from these different representations to build the final clustering results. By extracting diverse representations, our model can effectively handle time series with different characteristics. Since all parameters are randomly generated, no training is required during the process. We provide a theoretical analysis of the effectiveness of the method. To validate its performance, we conduct extensive experiments on all of the 128 datasets in the well-known UCR time series archive and perform statistical analysis of the results. These datasets have different sizes, sequence lengths, and they are from diverse fields. The experimental results show that the proposed method is competitive compared with existing state-of-the-art methods.</p>","PeriodicalId":55183,"journal":{"name":"Data Mining and Knowledge Discovery","volume":"71 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2024-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141503028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
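A minimal sketch of the core recipe, under toy assumptions (random linear-ReLU projections as the untrained branches, k-means per branch, and a co-association matrix as the ensembling rule; RandomNet's actual architecture and selection mechanism are more involved):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

rng = np.random.default_rng(3)
n, length, k, n_branches = 60, 50, 2, 10

# toy data: noisy sine waves vs. pure noise
t = np.linspace(0, 4 * np.pi, length)
X = np.vstack([np.sin(t) + 0.3 * rng.normal(size=(n // 2, length)),
               0.3 * rng.normal(size=(n // 2, length))])

coassoc = np.zeros((n, n))
for _ in range(n_branches):
    W = rng.normal(size=(length, 16)) / np.sqrt(length)    # untrained random weights
    rep = np.maximum(X @ W, 0.0)                           # random ReLU representation
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(rep)
    coassoc += labels[:, None] == labels[None, :]          # ensemble of partitions

# final clustering from the averaged co-association (distance = 1 - agreement);
# note: scikit-learn >= 1.2 calls this parameter `metric`, older releases `affinity`
final = AgglomerativeClustering(n_clusters=k, metric="precomputed",
                                linkage="average").fit_predict(1.0 - coassoc / n_branches)
print(final)
```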
Robust explainer recommendation for time series classification
IF 4.8 | CAS Zone 3 | Computer Science
Data Mining and Knowledge Discovery | Pub Date: 2024-06-20 | DOI: 10.1007/s10618-024-01045-8
Thu Trang Nguyen, Thach Le Nguyen, Georgiana Ifrim
{"title":"Robust explainer recommendation for time series classification","authors":"Thu Trang Nguyen, Thach Le Nguyen, Georgiana Ifrim","doi":"10.1007/s10618-024-01045-8","DOIUrl":"https://doi.org/10.1007/s10618-024-01045-8","url":null,"abstract":"<p>Time series classification is a task which deals with temporal sequences, a prevalent data type common in domains such as human activity recognition, sports analytics and general sensing. In this area, interest in explanability has been growing as explanation is key to understand the data and the model better. Recently, a great variety of techniques (e.g., LIME, SHAP, CAM) have been proposed and adapted for time series to provide explanation in the form of <i>saliency maps</i>, where the importance of each data point in the time series is quantified with a numerical value. However, the saliency maps can and often disagree, so it is unclear which one to use. This paper provides a novel framework to <i>quantitatively evaluate and rank explanation methods for time series classification</i>. We show how to robustly evaluate the informativeness of a given explanation method (i.e., relevance for the classification task), and how to compare explanations side-by-side. <i>The goal is to recommend the best explainer for a given time series classification dataset.</i> We propose AMEE, a Model-Agnostic Explanation Evaluation framework, for recommending saliency-based explanations for time series classification. In this approach, data perturbation is added to the input time series guided by each explanation. Our results show that perturbing discriminative parts of the time series leads to significant changes in classification accuracy, which can be used to evaluate each explanation. To be robust to different types of perturbations and different types of classifiers, we aggregate the accuracy loss across perturbations and classifiers. This novel approach allows us to recommend the best explainer among a set of different explainers, including random and oracle explainers. We provide a quantitative and qualitative analysis for synthetic datasets, a variety of time-series datasets, as well as a real-world case study with known expert ground truth.</p>","PeriodicalId":55183,"journal":{"name":"Data Mining and Knowledge Discovery","volume":"44 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141503030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
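A minimal sketch of the evaluation loop described above: perturb the time steps an explainer marks as most salient, measure the resulting accuracy loss of a classifier, and treat a larger loss as evidence that the explanation highlights genuinely discriminative regions. The classifier, the Gaussian-noise perturbation, and the top-10 cut-off are assumptions; AMEE additionally aggregates across several perturbation types and classifiers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n, length = 200, 60
X = rng.normal(size=(n, length))
y = rng.integers(0, 2, n)
X[y == 1, 20:30] += 2.0                      # class 1 differs only in steps 20..29

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
base_acc = clf.score(Xte, yte)

def accuracy_loss(saliency, top=10):
    """Replace the top-k salient steps with noise; report the accuracy drop."""
    idx = np.argsort(saliency)[::-1][:top]
    Xp = Xte.copy()
    Xp[:, idx] = rng.normal(size=(len(Xte), top))
    return base_acc - clf.score(Xp, yte)

informative = np.zeros(length)
informative[20:30] = 1.0                     # explainer pointing at the real signal
random_expl = rng.uniform(size=length)       # uninformative explainer
print("informative explainer:", accuracy_loss(informative))
print("random explainer:     ", accuracy_loss(random_expl))
```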
Series2Vec: similarity-based self-supervised representation learning for time series classification
IF 4.8 | CAS Zone 3 | Computer Science
Data Mining and Knowledge Discovery | Pub Date: 2024-06-20 | DOI: 10.1007/s10618-024-01043-w
Navid Mohammadi Foumani, Chang Wei Tan, Geoffrey I. Webb, Hamid Rezatofighi, Mahsa Salehi
{"title":"Series2vec: similarity-based self-supervised representation learning for time series classification","authors":"Navid Mohammadi Foumani, Chang Wei Tan, Geoffrey I. Webb, Hamid Rezatofighi, Mahsa Salehi","doi":"10.1007/s10618-024-01043-w","DOIUrl":"https://doi.org/10.1007/s10618-024-01043-w","url":null,"abstract":"<p>We argue that time series analysis is fundamentally different in nature to either vision or natural language processing with respect to the forms of meaningful self-supervised learning tasks that can be defined. Motivated by this insight, we introduce a novel approach called <i>Series2Vec</i> for self-supervised representation learning. Unlike the state-of-the-art methods in time series which rely on hand-crafted data augmentation, Series2Vec is trained by predicting the similarity between two series in both temporal and spectral domains through a self-supervised task. By leveraging the similarity prediction task, which has inherent meaning for a wide range of time series analysis tasks, Series2Vec eliminates the need for hand-crafted data augmentation. To further enforce the network to learn similar representations for similar time series, we propose a novel approach that applies order-invariant attention to each representation within the batch during training. Our evaluation of Series2Vec on nine large real-world datasets, along with the UCR/UEA archive, shows enhanced performance compared to current state-of-the-art self-supervised techniques for time series. Additionally, our extensive experiments show that Series2Vec performs comparably with fully supervised training and offers high efficiency in datasets with limited-labeled data. Finally, we show that the fusion of Series2Vec with other representation learning models leads to enhanced performance for time series classification. Code and models are open-source at https://github.com/Navidfoumani/Series2Vec</p>","PeriodicalId":55183,"journal":{"name":"Data Mining and Knowledge Discovery","volume":"139 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141503029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
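A small sketch of the self-supervised targets described above: pairwise similarities computed in the temporal domain and in the spectral domain (FFT magnitudes), which the encoder would then be trained to predict. The distance measure and the rescaling are assumptions; the order-invariant attention and the encoder itself are not reproduced.

```python
import numpy as np

def pairwise_similarity(X):
    """Euclidean distances rescaled into [0, 1] similarities."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return 1.0 - d / (d.max() + 1e-12)

rng = np.random.default_rng(5)
t = np.linspace(0, 4 * np.pi, 64)
X = np.vstack([np.sin(t + rng.uniform(0, 0.3)) for _ in range(4)] +
              [rng.normal(size=64) for _ in range(4)])

sim_time = pairwise_similarity(X)                        # temporal-domain targets
sim_freq = pairwise_similarity(np.abs(np.fft.rfft(X)))   # spectral-domain targets

# an encoder f would be trained so that similarities of f(x_i), f(x_j) match these targets
print(np.round(sim_time, 2))
print(np.round(sim_freq, 2))
```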
GeoRF: a geospatial random forest
IF 4.8 | CAS Zone 3 | Computer Science
Data Mining and Knowledge Discovery | Pub Date: 2024-06-19 | DOI: 10.1007/s10618-024-01046-7
Margot Geerts, Seppe vanden Broucke, Jochen De Weerdt
{"title":"GeoRF: a geospatial random forest","authors":"Margot Geerts, Seppe vanden Broucke, Jochen De Weerdt","doi":"10.1007/s10618-024-01046-7","DOIUrl":"https://doi.org/10.1007/s10618-024-01046-7","url":null,"abstract":"<p>The geospatial domain increasingly relies on data-driven methodologies to extract actionable insights from the growing volume of available data. Despite the effectiveness of tree-based models in capturing complex relationships between features and targets, they fall short when it comes to considering spatial factors. This limitation arises from their reliance on univariate, axis-parallel splits that result in rectangular areas on a map. To address this issue and enhance both performance and interpretability, we propose a solution that introduces two novel bivariate splits: an oblique and Gaussian split designed specifically for geographic coordinates. Our innovation, called Geospatial Random Forest (geoRF), builds upon Geospatial Regression Trees (GeoTrees) to effectively incorporate geographic features and extract maximum spatial insights. Through an extensive benchmark, we show that our geoRF model outperforms traditional spatial statistical models, other spatial RF variations, machine learning and deep learning methods across a range of geospatial tasks. Furthermore, we contextualize our method’s computational time complexity relative to baseline approaches. Our prediction maps illustrate that geoRF produces more robust and intuitive decision boundaries compared to conventional tree-based models. Utilizing impurity-based feature importance measures, we validate geoRF’s effectiveness in highlighting the significance of geographic coordinates, especially in data sets exhibiting pronounced spatial patterns.</p>","PeriodicalId":55183,"journal":{"name":"Data Mining and Knowledge Discovery","volume":"22 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141528877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
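A minimal sketch of why a bivariate coordinate split can help: on a target that follows a diagonal spatial pattern, an oblique split (a linear combination of the two coordinates against a threshold) achieves a larger variance reduction than any single axis-parallel split. The candidate directions, the quantile thresholds, and the variance impurity are assumptions; GeoRF's Gaussian split and full tree construction are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
coords = rng.uniform(0, 1, size=(n, 2))                   # (longitude, latitude)
y = (coords @ np.array([1.0, 1.0]) > 1.0).astype(float)   # diagonal spatial pattern
y += 0.1 * rng.normal(size=n)

def variance_reduction(scores, y, threshold):
    left, right = y[scores <= threshold], y[scores > threshold]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    child = (len(left) * left.var() + len(right) * right.var()) / len(y)
    return y.var() - child

def best_split(scores, y):
    thresholds = np.quantile(scores, np.linspace(0.1, 0.9, 17))
    return max(variance_reduction(scores, y, t) for t in thresholds)

axis_parallel = max(best_split(coords[:, 0], y), best_split(coords[:, 1], y))
oblique = max(best_split(coords @ np.array([np.cos(a), np.sin(a)]), y)
              for a in np.linspace(0, np.pi, 18, endpoint=False))
print(f"axis-parallel: {axis_parallel:.3f}  oblique: {oblique:.3f}")
```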
Modelling event sequence data by type-wise neural point process
IF 4.8 | CAS Zone 3 | Computer Science
Data Mining and Knowledge Discovery | Pub Date: 2024-06-17 | DOI: 10.1007/s10618-024-01047-6
Bingqing Liu
{"title":"Modelling event sequence data by type-wise neural point process","authors":"Bingqing Liu","doi":"10.1007/s10618-024-01047-6","DOIUrl":"https://doi.org/10.1007/s10618-024-01047-6","url":null,"abstract":"<p>Event sequence data widely exists in real life, where each event is typically represented as a tuple, event type and occurrence time. Recently, neural point process (NPP), a probabilistic model that learns the next event distribution with events history given, has gained a lot of attention for event sequence modelling. Existing NPP models use one single vector to encode the whole events history. However, each type of event has its own historical events of concern, which should have led to a different encoding for events history. To this end, we propose Type-wise Neural Point Process (TNPP), with each type of event having a history vector to encode the historical events of its own interest. Type-wise encoding further leads to the realization of type-wise decoding, which together makes a more effective neural point process. Experimental results on six datasets show that TNPP outperforms existing models on the event type prediction task under both extrapolation and interpolation setting. Moreover, the results in terms of scalability and interpretability show that TNPP scales well to datasets with many event types and can provide high-quality event dependencies for interpretation. The code and data can be found at https://github.com/lbq8942/TNPP.</p>","PeriodicalId":55183,"journal":{"name":"Data Mining and Knowledge Discovery","volume":"30 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141528876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
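A small sketch of the type-wise encoding idea: keep one history vector per event type and update only the vector of the type that just occurred, so each type conditions its next-event prediction on its own history. The exponential time decay and tanh recurrence are toy assumptions; TNPP's actual update rule and type-wise decoder are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
n_types, dim = 3, 8
W = rng.normal(size=(n_types, dim, dim)) / np.sqrt(dim)   # per-type update matrices (toy)
embed = rng.normal(size=(n_types, dim))                    # event-type embeddings

history = np.zeros((n_types, dim))      # one history vector per event type

events = [(0.5, 0), (1.2, 2), (1.9, 0), (2.4, 1)]          # (occurrence time, event type)
prev_time = 0.0
for time, etype in events:
    decay = np.exp(-(time - prev_time))                    # toy time decay
    history[etype] = np.tanh(W[etype] @ (decay * history[etype]) + embed[etype])
    prev_time = time

print(np.round(history, 2))             # per-type encodings of the same sequence
```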
The impact of variable ordering on Bayesian network structure learning
IF 4.8 | CAS Zone 3 | Computer Science
Data Mining and Knowledge Discovery | Pub Date: 2024-06-08 | DOI: 10.1007/s10618-024-01044-9
Neville K. Kitson, Anthony C. Constantinou
{"title":"The impact of variable ordering on Bayesian network structure learning","authors":"Neville K. Kitson, Anthony C. Constantinou","doi":"10.1007/s10618-024-01044-9","DOIUrl":"https://doi.org/10.1007/s10618-024-01044-9","url":null,"abstract":"<p>Causal Bayesian Networks (CBNs) provide an important tool for reasoning under uncertainty with potential application to many complex causal systems. Structure learning algorithms that can tell us something about the causal structure of these systems are becoming increasingly important. In the literature, the validity of these algorithms is often tested for sensitivity over varying sample sizes, hyper-parameters, and occasionally objective functions, but the effect of the order in which the variables are read from data is rarely quantified. We show that many commonly-used algorithms, both established and state-of-the-art, are more sensitive to variable ordering than these other factors when learning CBNs from discrete variables. This effect is strongest in hill-climbing and its variants where we explain how it arises, but extends to hybrid, and to a lesser-extent, constraint-based algorithms. Because the variable ordering is arbitrary, any significant effect it has on learnt graph accuracy is concerning, and raises questions about the validity of both many older and more recent results produced by these algorithms in practical applications and their rankings in performance evaluations.</p>","PeriodicalId":55183,"journal":{"name":"Data Mining and Knowledge Discovery","volume":"44 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141503031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
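A small sketch of the kind of sensitivity check discussed above: run a score-based hill-climbing structure search twice on the same data, read with two different column orders, and compare the learned edge sets. The data-generating chain is synthetic, and the snippet assumes pgmpy's HillClimbSearch/BicScore API; any score-based structure-learning library could be substituted.

```python
import numpy as np
import pandas as pd
from pgmpy.estimators import BicScore, HillClimbSearch   # assumed pgmpy API

rng = np.random.default_rng(8)
n = 2000
a = rng.integers(0, 2, n)
b = np.where(rng.random(n) < 0.9, a, 1 - a)   # B depends on A
c = np.where(rng.random(n) < 0.9, b, 1 - b)   # C depends on B
data = pd.DataFrame({"A": a, "B": b, "C": c})

def learned_edges(df):
    """Hill-climbing structure search; returns the learned edge set."""
    model = HillClimbSearch(df).estimate(scoring_method=BicScore(df))
    return set(model.edges())

edges_abc = learned_edges(data)                   # variables read as A, B, C
edges_cba = learned_edges(data[["C", "B", "A"]])  # same data, reversed column order
print("order A,B,C:", sorted(edges_abc))
print("order C,B,A:", sorted(edges_cba))
print("identical structures:", edges_abc == edges_cba)
```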