2021 IEEE International Conference on Data Mining (ICDM) — Latest Publications

Multi-Objective Distributional Reinforcement Learning for Large-Scale Order Dispatching
2021 IEEE International Conference on Data Mining (ICDM) Pub Date : 2021-12-01 DOI: 10.1109/ICDM51629.2021.00202
Fan Zhou, Chenfan Lu, Xiaocheng Tang, Fan Zhang, Zhiwei Qin, Jieping Ye, Hongtu Zhu
The aim of this paper is to develop a multi-objective distributional reinforcement learning framework for improving order dispatching on large-scale ride-hailing platforms. Compared with traditional RL-based approaches that focus on drivers' income, the proposed framework also accounts for the spatiotemporal difference between the supply and demand networks. Specifically, we model the dispatching problem as a two-objective Semi-Markov Decision Process (SMDP) and estimate the relative importance of the two objectives under some unknown existing policy via Inverse Reinforcement Learning (IRL). Then, we combine Implicit Quantile Networks (IQN) with traditional Deep Q-Networks (DQN) to jointly learn the two return distributions and adjust their weights to refine the old policy through online planning, achieving higher supply-demand coherence on the platform. We conduct large-scale dispatching experiments to demonstrate the remarkable improvement the proposed approach brings to the platform's efficiency.
Citations: 3
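The abstract above scores dispatch decisions under two return distributions (learned with IQN) whose relative importance comes from IRL. A minimal sketch of that weighted two-objective scoring, with made-up quantile values and an assumed weight `w` standing in for the IRL-recovered importance:

```python
import numpy as np

def dispatch_value(quantiles_income, quantiles_coherence, w=0.7):
    """Toy sketch of two-objective scoring: each objective's return
    distribution is represented by quantile estimates (as in IQN), and
    an action is scored by a weighted mix of their means. The weight
    `w` stands in for the importance recovered via IRL."""
    return w * np.mean(quantiles_income) + (1 - w) * np.mean(quantiles_coherence)

# two candidate order assignments, each with illustrative return quantiles
# (first array: driver-income returns; second: supply-demand coherence)
orders = {
    "order_A": (np.array([4.0, 5.0, 6.0]), np.array([0.2, 0.3, 0.4])),
    "order_B": (np.array([3.0, 5.5, 8.0]), np.array([0.8, 0.9, 1.0])),
}
best = max(orders, key=lambda k: dispatch_value(*orders[k]))
print(best)
```

Here `order_B` wins because its higher coherence return outweighs the small income gap under the assumed weighting.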
Causal Discovery with Flow-based Conditional Density Estimation
2021 IEEE International Conference on Data Mining (ICDM) Pub Date : 2021-12-01 DOI: 10.1109/ICDM51629.2021.00161
Shaogang Ren, Haiyan Yin, Mingming Sun, Ping Li
Causal-effect discovery plays an essential role in many disciplines of science and real-world applications. In this paper, we introduce a new causal discovery method to solve the classic problem of inferring the causal direction in a bivariate setting. In particular, our proposed method first leverages a flow model to estimate the joint probability density of the variables. Then we formulate a novel evaluation metric that scores each potential causal direction based on the variance of the conditional density estimates. By leveraging this flow-based conditional density estimation metric, our approach alleviates the restrictive assumptions made by conventional methods, such as assuming a linear relationship between the two variables, and can therefore better capture complex causal relationships in data that come in arbitrary forms across various problem domains. We conduct extensive evaluations comparing our method with established causal discovery approaches. Empirical results show that our method outperforms the baseline methods by noticeable margins on both synthetic and real-world datasets.
Citations: 5
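The direction-scoring idea above — prefer the direction whose conditional distributions are tighter and more stable — can be sketched without a flow model. The toy below substitutes simple quantile binning for the flow-based conditional density estimate (an assumption for illustration only):

```python
import numpy as np

def direction_score(cause, effect, n_bins=10):
    """Score a candidate direction by the spread of the conditional
    distribution p(effect | cause), estimated with quantile binning.
    Lower score = tighter conditionals = more plausible direction."""
    bins = np.quantile(cause, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(cause, bins[1:-1]), 0, n_bins - 1)
    variances = [np.var(effect[idx == b])
                 for b in range(n_bins) if np.sum(idx == b) > 1]
    # normalize by the overall variance so the two directions are comparable
    return np.mean(variances) / np.var(effect)

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.tanh(2 * x) + 0.1 * rng.normal(size=2000)  # ground truth: x -> y

s_xy = direction_score(x, y)   # hypothesis x -> y
s_yx = direction_score(y, x)   # hypothesis y -> x
print("x->y" if s_xy < s_yx else "y->x")
```

On this nonlinear example the true direction yields tighter conditionals, so the lower score recovers x → y, which is the asymmetry the paper exploits with a far more expressive density estimator.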
Hypergraph Ego-networks and Their Temporal Evolution
2021 IEEE International Conference on Data Mining (ICDM) Pub Date : 2021-12-01 DOI: 10.1109/ICDM51629.2021.00019
Cazamere Comrie, J. Kleinberg
Interactions involving multiple objects simultaneously are ubiquitous across many domains. The systems these interactions inhabit can be modelled using hypergraphs, a generalization of traditional graphs in which each edge can connect any number of nodes. Analyzing the global and static properties of these hypergraphs has led to a plethora of novel findings about how the modelled systems are structured. However, less is known about the localized structure of these systems and how they evolve over time. In this paper, we propose the study of hypergraph ego-networks, a structure that can be used to model higher-order interactions involving a single node. We also propose the temporal reconstruction of hypergraph ego-networks as a benchmark problem for models that aim to predict the local temporal structure of hypergraphs. Combining a deep learning binary classifier with a hill-climbing algorithm, we present a model for reconstructing hypergraph ego-networks that incorporates structural patterns found across multiple domains.
Citations: 9
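A hypergraph ego-network as defined above — the higher-order interactions involving a single node — can be extracted in a few lines. The function and the tiny hypergraph `H` below are illustrative names, not from the paper:

```python
def hypergraph_ego(hyperedges, ego):
    """Extract the ego-network of `ego`: every hyperedge containing it,
    plus the set of alters (other nodes) those hyperedges introduce."""
    edges = [e for e in hyperedges if ego in e]
    alters = set().union(*edges) - {ego} if edges else set()
    return edges, alters

# a hypergraph as a list of hyperedges, each a set of any number of nodes
H = [{"a", "b", "c"}, {"a", "d"}, {"b", "d", "e"}]
edges, alters = hypergraph_ego(H, "a")
print(sorted(alters))  # ['b', 'c', 'd']
```

Tracking how `edges` changes across timestamped snapshots gives the temporal reconstruction target the paper proposes as a benchmark.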
Graph Transfer Learning
2021 IEEE International Conference on Data Mining (ICDM) Pub Date : 2021-12-01 DOI: 10.1007/s10115-022-01782-6
Andrey Gritsenko, Kimia Shayestehfard, Yuan Guo, Armin Moharrer, Jennifer G. Dy, Stratis Ioannidis
Citations: 6
Multimodal N-best List Rescoring with Weakly Supervised Pre-training in Hybrid Speech Recognition
2021 IEEE International Conference on Data Mining (ICDM) Pub Date : 2021-12-01 DOI: 10.1109/ICDM51629.2021.00167
Yuanfeng Song, Xiaoling Huang, Xuefang Zhao, Di Jiang, Raymond Chi-Wing Wong
N-best list rescoring, an essential step in hybrid automatic speech recognition (ASR), re-evaluates the N-best hypothesis list decoded by the acoustic model (AM) and language model (LM) and selects the top-ranked hypotheses as the final ASR results. This paper explores the performance of neural rescoring models in scenarios where large-scale training labels are not available. We propose a weakly supervised neural rescoring method, WSNeuRescore, in which a listwise multimodal neural rescoring model is pre-trained on labels obtained automatically, without human annotators. Specifically, we employ the output of an unsupervised rescoring model, a weighted linear combination of the AM score and the LM score, as a weak supervision signal to pre-train the neural rescoring model. Our experimental evaluations on a public dataset validate that the rescoring model pre-trained on weakly supervised data achieves impressive performance. In the extreme scenario without any high-quality labeled data, it achieves up to an 11.90% WER reduction and a 15.56% NDCG@10 improvement over the baseline method in Kaldi, a well-known open-source toolkit in the ASR community.
Citations: 0
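The weak-supervision signal described above — ranking the N-best list by a weighted linear combination of AM and LM scores — can be sketched directly. The `lm_weight` value and example hypotheses are invented for illustration; real systems tune the weight on held-out data:

```python
def rescore_nbest(hypotheses, lm_weight=0.8):
    """Rank an N-best list by a weighted linear combination of the
    acoustic-model and language-model log-scores (the unsupervised
    rescorer the paper uses as its weak-supervision signal).
    Each hypothesis is (text, am_logscore, lm_logscore)."""
    scored = [(text, am + lm_weight * lm) for text, am, lm in hypotheses]
    return sorted(scored, key=lambda t: t[1], reverse=True)

nbest = [
    ("recognize speech", -120.0, -8.0),
    ("wreck a nice beach", -118.0, -15.0),
    ("recognized speech", -125.0, -9.0),
]
best = rescore_nbest(nbest)[0][0]
print(best)  # the acoustically strong but implausible hypothesis loses
```

The resulting rankings, rather than human labels, supply the listwise pre-training targets for the neural rescorer.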
Learning Personal Human Biases and Representations for Subjective Tasks in Natural Language Processing
2021 IEEE International Conference on Data Mining (ICDM) Pub Date : 2021-12-01 DOI: 10.1109/ICDM51629.2021.00140
Jan Kocoń, Marcin Gruza, Julita Bielaniewicz, Damian Grimling, Kamil Kanclerz, P. Milkowski, Przemyslaw Kazienko
Many natural language processing tasks, such as offensive, toxic, or emotional text classification, are subjective by nature. Humans tend to perceive textual content in their own individual way. Existing methods commonly rely on agreed-upon output values that are the same for all consumers. Here, we propose personalized solutions to subjective tasks. Our four new deep learning models take into account not only the content but also the specificity of a given human. The models represent different approaches to learning the representation of, and processing data about, text readers. The experiments were carried out on four datasets: Wikipedia discussion texts labelled with attack, aggression, and toxicity, as well as opinions annotated with ten numerical emotional categories. The emotional data were treated as multivariate (multitask) regression, whereas the Wikipedia data were treated as independent classification tasks. All our models based on human biases and their representations significantly improve prediction quality on subjective tasks evaluated from the individual's perspective.
Citations: 19
Dictionary Pair-based Data-Free Fast Deep Neural Network Compression
2021 IEEE International Conference on Data Mining (ICDM) Pub Date : 2021-12-01 DOI: 10.1109/ICDM51629.2021.00022
Yangcheng Gao, Zhao Zhang, Haijun Zhang, Mingbo Zhao, Yi Yang, Meng Wang
Deep neural network (DNN) compression can effectively reduce the memory footprint of deep networks so that deep models can be deployed on portable devices. However, most existing model compression methods, e.g., vector quantization or pruning, are time-consuming, which makes them ill-suited to real-world applications that need fast online computation. In this paper, we therefore explore how to accelerate the model compression process by reducing its computation cost. We propose a new deep model compression method, termed Dictionary Pair-based Data-Free Fast DNN Compression, which reduces the memory consumption of DNNs without extra training and can greatly improve compression efficiency. Specifically, our method performs tensor decomposition on the DNN model with a fast dictionary pair learning-based reconstruction approach, which can be applied to different layers (e.g., convolutional and fully-connected layers). Given a pre-trained DNN model, we first divide the parameters (i.e., weights) of each layer into a series of partitions for dictionary pair-based fast reconstruction, which can potentially discover more fine-grained information and opens the possibility of parallel model compression. Then, dictionaries with a smaller memory occupation are learned to reconstruct the weights. Extensive experiments on popular DNNs (i.e., VGG-16, ResNet-18, and ResNet-50) show that our weight compression method significantly reduces the memory footprint and speeds up the compression process, with little performance loss.
Citations: 4
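As a rough illustration of the partition-then-reconstruct idea above, the sketch below splits a weight matrix into row partitions and stores each as a pair of low-rank factors. Note this uses a truncated SVD as a stand-in for the paper's dictionary pair learning:

```python
import numpy as np

def compress_layer(W, n_parts=4, rank=8):
    """Toy stand-in for dictionary-pair reconstruction: split a weight
    matrix into row partitions and represent each partition by a pair
    of rank-`rank` factors (a 'dictionary' and its coefficients)."""
    parts = np.array_split(W, n_parts, axis=0)
    factors = []
    for P in parts:
        U, s, Vt = np.linalg.svd(P, full_matrices=False)
        factors.append((U[:, :rank] * s[:rank], Vt[:rank]))
    return factors

def reconstruct(factors):
    """Rebuild the full weight matrix from the stored factor pairs."""
    return np.vstack([A @ B for A, B in factors])

rng = np.random.default_rng(1)
# a synthetic near-low-rank layer, as pre-trained weights often are
W = rng.normal(size=(256, 10)) @ rng.normal(size=(10, 128))
Wc = reconstruct(compress_layer(W, rank=12))
err = np.linalg.norm(W - Wc) / np.linalg.norm(W)
print(f"relative error: {err:.2e}")
```

Because each partition is factored independently, the partitions can be compressed in parallel, which is the efficiency angle the abstract emphasizes.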
Technological Knowledge Flow Forecasting through A Hierarchical Interactive Graph Neural Network
2021 IEEE International Conference on Data Mining (ICDM) Pub Date : 2021-12-01 DOI: 10.1109/ICDM51629.2021.00050
Huijie Liu, Han Wu, Le Zhang, Runlong Yu, Ye Liu, Chunli Liu, Qi Liu, Enhong Chen
With accelerating technology development, forecasting technological trends through patent mining has become a hot issue for high-tech companies. In particular, extensive attention has been drawn to forecasting technological knowledge flows (TKF), i.e., predicting the directional flows of knowledge from one technological field to another. However, existing studies either rely on labor-intensive empirical analysis or fail to consider the intrinsic characteristics of TKF, including the double-faced nature of technology nodes (each acts as both source and target), the multiple complex relationships among different technologies, and the dynamics of the TKF process. To this end, we propose a data-driven solution, a Hierarchical Interactive Graph Neural Network (HighTKF), to automatically find potential flow trends between technologies. Specifically, HighTKF makes its final predictions from two representations of each technology node (a diffusion vector and an absorption vector), realized by three components: a High-Order Interaction Module (HOI), a Hierarchical Delivery Module (HD), and a Technology Flow Tracing Module (TFT). HOI and HD model high-order network relationships and hierarchical relationships among technologies, while TFT captures the dynamic feature evolution of technologies with these relations involved. We also design a hybrid loss function and propose a new evaluation metric for better predicting unprecedented flows between technologies. Finally, we conduct extensive experiments on a real-world patent dataset; the results verify the effectiveness of our approach and reveal some interesting phenomena in technological knowledge flow trends.
Citations: 1
Attention-based Feature Interaction for Efficient Online Knowledge Distillation
2021 IEEE International Conference on Data Mining (ICDM) Pub Date : 2021-12-01 DOI: 10.1109/ICDM51629.2021.00069
Tongtong Su, Qiyu Liang, Jinsong Zhang, Zhaoyang Yu, Gang Wang, Xiaoguang Liu
Existing online knowledge distillation (KD) methods remove the dependency on a high-capacity teacher model via mutual learning and ensemble learning. However, they focus on exploiting logits information in the last few layers and fail to construct a strong teacher model that better supervises the student networks, leading to inefficient KD. In this work, we propose a simple but effective online knowledge distillation algorithm, called Attentive Feature Interaction Distillation (AFID). It applies interactive teaching, in which the teacher and the student can send, receive, and give feedback on an equal footing, ultimately promoting the generality of both. Specifically, we set up a Feature Interaction Module for two sub-networks to conduct low-level and mid-level feature learning. They alternately transfer attentive feature maps to exchange regions of interest and fuse the other party's map with self-extracted features for information enhancement. In addition, we employ a Feature Fusion Module, in which a Peer Fused Teacher fuses the output features of the two sub-networks to guide them, and a Peer Ensemble Teacher accomplishes mutual learning between the two teachers. Integrating the Feature Interaction Module and the Feature Fusion Module into a unified framework takes full advantage of the interactive teaching mechanism and lets the two sub-networks capture and transfer more fine-grained features to each other. Experimental results on the CIFAR-100 and ImageNet ILSVRC 2012 datasets show that AFID achieves significant performance improvements over existing online KD and classical teacher-guided methods.
Citations: 8
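One common way to realize an ensemble teacher like the Peer Ensemble Teacher above is to average the two sub-networks' temperature-softened predictions and distill against them with a KL loss. The sketch below shows that generic mechanism, not AFID's exact formulation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def peer_ensemble_teacher(logits_a, logits_b, T=3.0):
    """Soften each sub-network's logits with temperature T and average
    them into a single ensemble teacher target distribution."""
    return 0.5 * (softmax(logits_a / T) + softmax(logits_b / T))

def kd_loss(student_logits, teacher_probs, T=3.0):
    """KL divergence from the teacher targets to the student's softened
    predictions -- the standard distillation objective."""
    p_s = softmax(student_logits / T)
    return float(np.sum(teacher_probs * (np.log(teacher_probs) - np.log(p_s))))

la = np.array([2.0, 0.5, -1.0])   # sub-network A logits (illustrative)
lb = np.array([1.5, 1.0, -0.5])   # sub-network B logits (illustrative)
t = peer_ensemble_teacher(la, lb)
loss = kd_loss(np.array([1.8, 0.7, -0.8]), t)
print(loss)
```

Each sub-network would minimize its own `kd_loss` against the shared teacher, alongside the usual cross-entropy on ground-truth labels.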
GraphANGEL: Adaptive aNd Structure-Aware Sampling on Graph NEuraL Networks
2021 IEEE International Conference on Data Mining (ICDM) Pub Date : 2021-12-01 DOI: 10.1109/ICDM51629.2021.00059
Jingshu Peng, Yanyan Shen, Lei Chen
Graph neural networks (GNNs) have gained increasing attention for their great success in graph representation learning. In GNNs, the neighborhood of a target node is aggregated iteratively to capture and learn its local structure and neighbor information. Observing that different nodes often require a distinct number of iterations to learn good representations, we propose GraphANGEL, an adaptive and structure-aware graph sampling scheme for GNNs. The problem is challenging because both the suitable range of exploration and the important substructures in the neighborhood are difficult to determine. Exploiting random walk mixing time and various measures of node structural-role importance, we first propose a lightweight component that flexibly estimates the proper neighborhood exploration depth for each target node. We then investigate different importance metrics to identify and sample the most structurally critical subgraphs, which carry larger influence in message passing. Moreover, since different importance metrics unveil different aspects of the graph, we combine and ensemble the various importance measures with attention to boost final performance. In this manner, our method adaptively and explicitly embeds the structural importance of a node and its critical neighborhood for finer structure-aware graph representation learning. Evaluation on benchmark datasets shows that GraphANGEL is competitive with state-of-the-art approaches, demonstrating the effectiveness of our adaptive and structure-aware sampling.
Citations: 1