2020 IEEE International Conference on Data Mining (ICDM): Latest Publications

Dual-Side Auto-Encoder for High-Dimensional Time Series Segmentation
Pub Date: 2020-11-01 | DOI: 10.1109/ICDM50108.2020.00102
Yue Bai, Lichen Wang, Yunyu Liu, Yu Yin, Y. Fu
Abstract: High-dimensional time series segmentation aims to divide a long temporal sequence into several short, meaningful subsequences. High dimensionality makes this challenging due to the complicated correlations among sequential features. Existing supervised methods require large amounts of labeled data, while unsupervised methods mainly deploy clustering approaches, which are sensitive to outliers and struggle to guarantee high performance. Moreover, most existing methods rely on hand-crafted features for regular time series segmentation; although they achieve promising results there, they cannot effectively handle high-dimensional time series and incur high computational cost. In our work, we propose a novel unsupervised representation learning framework called the Dual-Side Auto-Encoder (DSAE), which targets high-dimensional time series segmentation by effectively capturing temporal correlative patterns. Specifically, a single-to-multiple auto-encoder is designed to capture local sequential information, and a long-short distance encoding strategy is proposed to explicitly guide the learning process toward distinctive representations for segmentation. The long-short distance strategy is also executed in the decoded feature space, which implicitly directs the representation learning. Extensive experiments on six datasets demonstrate the model's effectiveness. Code will be released at https://github.com/yueb17/HTSS.
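DSAE itself learns a single-to-multiple auto-encoder with a long-short distance strategy; as a rough illustration of the underlying idea (encode windows, place a segment boundary where consecutive codes jump apart), here is a minimal sketch that substitutes a truncated-SVD linear encoder for the learned network. The toy data, window size, and threshold rule are all assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional series: two regimes with different means.
X = np.concatenate([rng.normal(0, 1, (100, 20)),
                    rng.normal(5, 1, (100, 20))])

def window_representations(X, win=10, k=3):
    """Encode non-overlapping windows with a linear 'encoder' (truncated SVD),
    a stand-in for a learned auto-encoder."""
    wins = np.array([X[i:i + win].ravel() for i in range(0, len(X) - win + 1, win)])
    wins = wins - wins.mean(axis=0)
    U, S, Vt = np.linalg.svd(wins, full_matrices=False)
    return wins @ Vt[:k].T            # k-dim representation per window

def segment_boundaries(Z):
    """Place a boundary where consecutive window codes jump apart."""
    d = np.linalg.norm(np.diff(Z, axis=0), axis=1)
    thresh = d.mean() + 2 * d.std()
    return np.where(d > thresh)[0] + 1  # window index where a new segment starts

Z = window_representations(X)
bounds = segment_boundaries(Z)        # regime change at t=100 -> window 10
```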
Citations: 2

Cold Item Recommendations via Hierarchical Item2vec
Pub Date: 2020-11-01 | DOI: 10.1109/ICDM50108.2020.00101
Oren Barkan, Avi Caciularu, Idan Rejwan, Ori Katz, Jonathan Weill, Itzik Malkiel, Noam Koenigstein
Abstract: Learning item representations is a key building block in recommender systems research. However, representations often suffer from the cold start problem, in which rare items in the tail of the distribution have too little data to yield adequate representations. In this work, we present a novel hybrid recommender that uses hierarchical content-based information to mitigate the cold start problem. In particular, we assume a taxonomy of item tags in which every item is associated with several 'parent' tags, and the tags themselves can be associated with further 'parent' tags in a hierarchical manner. Our model learns item representations guided by each item's 'parent' tags, which allows propagating relevant information between items sharing the same hierarchy. In addition, the tags are modeled with tag representations that propagate information between any two tags sharing a common ancestor. Due to space limitations, we focus this work on a recommendation task; however, the same approach can be applied to general representation learning, e.g., language models.
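The mechanism can be sketched as follows: compose an item's representation from its own vector plus the vectors of its parent tags (which in turn inherit from their parents), so a cold item with no interaction data still lands near its taxonomy siblings. The taxonomy, dimensions, and additive composition below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8

# Hypothetical taxonomy: items -> parent tags -> grandparent tags.
parents = {"song_a": ["rock"], "song_b": ["rock"], "song_c": ["jazz"]}
tag_parents = {"rock": ["music"], "jazz": ["music"]}

tag_vec = {t: rng.normal(size=dim) for t in ["rock", "jazz", "music"]}
item_vec = {i: rng.normal(scale=0.1, size=dim) for i in parents}  # small own part

def represent(item):
    """Item representation = own vector + mean of its tag vectors,
    where each tag also inherits from its own parents (the hierarchy)."""
    tag_part = np.mean(
        [tag_vec[t] + np.mean([tag_vec[p] for p in tag_parents[t]], axis=0)
         for t in parents[item]], axis=0)
    return item_vec[item] + tag_part

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A cold item with no learned own vector still lands near its siblings.
parents["song_new"] = ["rock"]
item_vec["song_new"] = np.zeros(dim)
sim_rock = cos(represent("song_new"), represent("song_a"))
sim_jazz = cos(represent("song_new"), represent("song_c"))
```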
Citations: 14

A Generalized-Momentum-Accelerated Hessian-Vector Algorithm for High-Dimensional and Sparse Data
Pub Date: 2020-11-01 | DOI: 10.1109/ICDM50108.2020.00134
Weiling Li, Xin Luo
Abstract: Precisely understanding high-dimensional and sparse (HiDS) user-item interactions is a central issue in recommender systems. Latent factor analysis (LFA)-based models have proven efficient at addressing it, yet current models of this kind mostly rely on first-order optimizers. It is therefore vital to build an LFA-based model that can approach second-order stationary points efficiently, improving its representation learning ability. To this end, this work presents a Generalized-momentum-accelerated Hessian-vector Algorithm (GHA) for HiDS data. Its main ideas are: a) adopting a Hessian-vector-product-based method to avoid operating on the Hessian matrix directly, and b) incorporating a generalized momentum method into the parameter learning process to further accelerate convergence to a stationary point. Experimental results on two industrial datasets demonstrate that, compared with state-of-the-art LFA-based models, a GHA-based LFA model achieves gains in both accuracy and convergence rate. These outcomes also indicate that a generalized momentum method is compatible with algorithms that rely on gradients implicitly, such as second-order algorithms.
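The two ingredients can be illustrated on a toy quadratic: a Hessian-vector product computed from gradient differences (never forming the Hessian), fed to conjugate gradient for a Newton-like step, plus a momentum term on the step. The momentum form and the weight `beta` below are assumptions for illustration, not the authors' exact GHA update.

```python
import numpy as np

def hvp(grad, x, v, eps=1e-5):
    """Hessian-vector product without forming the Hessian:
    finite difference of gradients along direction v."""
    return (grad(x + eps * v) - grad(x)) / eps

def cg_solve(matvec, b, iters=20, tol=1e-10):
    """Conjugate gradient: solve H d = b using only Hessian-vector products."""
    d = np.zeros_like(b); r = b.copy(); p = r.copy(); rs = r @ r
    for _ in range(iters):
        Hp = matvec(p)
        a = rs / (p @ Hp)
        d += a * p; r -= a * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p; rs = rs_new
    return d

# Toy strongly convex problem: f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x = np.zeros(2); m = np.zeros(2); beta = 0.5  # momentum weight (assumed)
for _ in range(30):
    step = cg_solve(lambda v: hvp(grad, x, v), -grad(x))
    m = beta * m + step        # momentum accumulates past Newton directions
    x = x + m

x_star = np.linalg.solve(A, b)  # exact minimizer for comparison
```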
Citations: 3

Adversarial Label-Flipping Attack and Defense for Graph Neural Networks
Pub Date: 2020-11-01 | DOI: 10.1109/ICDM50108.2020.00088
Mengmei Zhang, Linmei Hu, C. Shi, Xiao Wang
Abstract: With the great popularity of Graph Neural Networks (GNNs), the robustness of GNNs to adversarial attacks has received increasing attention. However, existing works neglect adversarial label-flipping attacks, in which the attacker manipulates an unnoticeable fraction of training labels. Exploring the robustness of GNNs to label-flipping attacks is critical, especially when labels are collected from external sources where false labels are easy to inject (e.g., recommendation systems). In this work, we introduce the first study of adversarial label-flipping attacks on GNNs. We propose an effective attack model, LafAK, based on an approximated closed form of GNNs and a continuous surrogate of the non-differentiable attack objective, efficiently generating attacks via gradient-based optimizers. Furthermore, we show that one key reason for the vulnerability of GNNs to label-flipping attacks is overfitting to the flipped nodes. Based on this observation, we propose a defense framework that introduces a community-preserving self-supervised task as regularization to avoid such overfitting. We demonstrate the effectiveness of our attack model against GNNs on four real-world datasets. The effectiveness of our defense framework is likewise validated by the substantial improvements of defense-equipped GNNs and their variants under label-flipping attacks.
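To convey the threat model, here is a hedged sketch of a label-flipping attack on a plain logistic regression (LafAK targets GNNs and selects flips via gradient-based optimization; the greedy "flip the most confident labels" rule below is only a crude surrogate, and the data are synthetic).

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable toy data standing in for node features.
n = 200
X = np.vstack([rng.normal(-2, 1, (n // 2, 2)), rng.normal(2, 1, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

def train_logreg(X, y, lr=0.1, epochs=300):
    """Gradient descent on the logistic (cross-entropy) loss."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean(((X @ w + b) > 0) == (y == 1)))

w, b = train_logreg(X, y)
acc_clean = accuracy(w, b, X, y)

# Attack: flip the labels the clean model is most confident about
# (a greedy stand-in for LafAK's optimized flip selection).
budget = n // 10                       # "unnoticeable" 10% of labels
flip = np.argsort(-np.abs(X @ w + b))[:budget]
y_pois = y.copy(); y_pois[flip] = 1 - y_pois[flip]

w2, b2 = train_logreg(X, y_pois)
acc_pois = accuracy(w2, b2, X, y)      # evaluated against the true labels
```

Retraining on the poisoned labels shrinks the model's confidence margins on the clean data, which is the degradation the attack aims for.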
Citations: 22

FeatureNorm: L2 Feature Normalization for Dynamic Graph Embedding
Pub Date: 2020-11-01 | DOI: 10.1109/ICDM50108.2020.00082
Menglin Yang, Ziqiao Meng, Irwin King
Abstract: Dynamic graphs arise in many practical scenarios, such as social networks, communication networks, and financial transaction networks. Given a dynamic graph, it is fundamental to learn a representation that not only preserves structural proximity but also captures time-evolving patterns. Recently, the graph convolutional network (GCN) has been widely explored in non-Euclidean application domains. The main success of GCN, especially in handling dependencies and passing messages between nodes, lies in its approximation of Laplacian smoothing. This smoothing technique, however, not only encourages must-link node pairs to get closer but also pushes cannot-link pairs together, which can cause serious feature shrinking or oversmoothing, especially when graph convolutions are stacked over multiple layers or steps. For learning time-evolving patterns, a natural solution is to preserve historical state and combine it with current interactions to obtain the most recent representation; under prevalent methods, which stack graph convolutions explicitly or implicitly, this is exactly where feature shrinking or oversmoothing occurs, making nodes too similar to distinguish. To solve this problem in dynamic graph embedding, we first analyze the shrinking properties of the node embedding space, and then design a simple yet versatile method that applies an L2 feature normalization constraint to rescale all node embeddings onto the unit hypersphere, so that embeddings do not shrink together while similar nodes can still move closer. Extensive experiments on four real-world dynamic graph datasets against competitive baselines demonstrate the effectiveness of the proposed method.
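The core operation is simple enough to sketch directly: repeated smoothing collapses node embeddings toward a common point, while L2 normalization restores unit norms and, being a per-row rescaling, leaves the angular (cosine) structure untouched. The aggregator below is a deliberately extreme toy, not a real GCN layer.

```python
import numpy as np

def l2_normalize(Z, eps=1e-12):
    """FeatureNorm: rescale every node embedding onto the unit hypersphere."""
    return Z / (np.linalg.norm(Z, axis=1, keepdims=True) + eps)

rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 3))          # 5 node embeddings
A = np.full((5, 5), 1 / 5)           # extreme smoother: global mean aggregation

# Repeated smoothing shrinks all embeddings toward their common mean.
shrunk = Z.copy()
for _ in range(10):
    shrunk = 0.5 * shrunk + 0.5 * (A @ shrunk)

normed = l2_normalize(shrunk)

def cosine_matrix(M):
    """Pairwise cosine similarities (scale-invariant per row)."""
    U = M / np.linalg.norm(M, axis=1, keepdims=True)
    return U @ U.T
```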
Citations: 18

Multi-Attention 3D Residual Neural Network for Origin-Destination Crowd Flow Prediction
Pub Date: 2020-11-01 | DOI: 10.1109/ICDM50108.2020.00142
Jiaman Ma, Jeffrey Chan, S. Rajasegarar, G. Ristanoski, C. Leckie
Abstract: To provide effective services for intelligent transportation systems (ITS), such as optimizing ride services and recommending trips, it is important to predict the distributions of passenger flows from various origins to destinations. However, existing crowd flow prediction models have not sufficiently addressed this problem; most focus only on the in- and out-flows of individual regions. The main challenges of origin-destination (OD) crowd flow prediction are the diverse flow patterns across city networks and data sparsity. To solve these problems, we propose a Multi-Attention 3D Residual Network (MAThR) to predict city-wide OD crowd flows. In particular, we develop a multi-component 3D residual structure with a novel global self-attention mechanism that dynamically aggregates OD spatial-temporal dependencies by modeling three components: contextual information of the region, and long- and short-term periodic crowd flows. For each component, we design a tensor criss-cross self-attention block that simultaneously discovers global and local correlations of spatial (where), temporal (when), and contextual (which) information between all OD pairs. Evaluation on real-world crowd flow data demonstrates the accuracy advantages of MAThR over existing state-of-the-art methods.
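MAThR's tensor criss-cross attention is more elaborate than can be shown here, but its global building block is standard scaled dot-product self-attention, in which every OD-pair feature attends to all others. A minimal sketch with toy sizes and random weights:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Global self-attention: every OD-pair feature attends to all others,
    so correlations between distant origin-destination pairs are captured."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[1]))  # rows sum to 1
    return attn @ V, attn

rng = np.random.default_rng(0)
n_pairs, d = 6, 4                     # 6 OD pairs, 4-dim features (toy sizes)
X = rng.normal(size=(n_pairs, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```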
Citations: 4

PMLF: Prediction-Sampling-based Multilayer-Structured Latent Factor Analysis
Pub Date: 2020-11-01 | DOI: 10.1109/ICDM50108.2020.00076
Di Wu, Long Jin, Xin Luo
Abstract: A latent factor (LF) model can efficiently analyze a high-dimensional and sparse (HiDS) matrix from recommender systems (RSs). However, an LF model's representation learning ability on a target HiDS matrix depends heavily on the matrix's known data density, and an HiDS matrix's known data are limited by users' activity in RSs. Motivated by this observation, this paper proposes a Prediction-sampling-based Multilayer-structured Latent Factor (PMLF) model. Following the principle of Deep Forest [1], PMLF implements a loosely connected multilayered LF structure, where each layer generates synthetic ratings to enrich the input of the next layer. This injection process is carefully monitored through random sampling and nonlinear activations to avoid overfitting. PMLF's representation learning ability on an HiDS matrix is thus significantly enhanced by the carefully injected estimates and the generalized multilayer structure. Experimental results on four HiDS matrices from industrial RSs indicate that, compared with six state-of-the-art LF-based and deep neural network-based models, PMLF well balances prediction accuracy and computational efficiency, satisfying the demands of fast and accurate industrial applications.
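The layered idea can be sketched with two stacked plain latent factor models: layer 1 fits the sparse matrix, then a random sample of its predictions is injected as synthetic observations for layer 2. Sizes, ranks, and sampling rates are toy assumptions, and the sketch omits PMLF's nonlinear activations; no performance claim is made.

```python
import numpy as np

rng = np.random.default_rng(0)

def mf_train(R, mask, k=2, lr=0.05, reg=0.01, epochs=200):
    """A plain latent factor model fit by SGD on the observed entries."""
    n_u, n_i = R.shape
    P = rng.normal(scale=0.1, size=(n_u, k))
    Q = rng.normal(scale=0.1, size=(n_i, k))
    us, its = np.where(mask)
    for _ in range(epochs):
        for u, i in zip(us, its):
            e = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (e * Q[i] - reg * P[u])
            Q[i] += lr * (e * P[u] - reg * Q[i])
    return P, Q

# Ground-truth rank-2 matrix; only about half the entries are "known".
R = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))
mask = rng.random(R.shape) < 0.5

# Layer 1: ordinary LF model on the sparse data.
P1, Q1 = mf_train(R, mask)
rmse1 = np.sqrt(np.mean(((P1 @ Q1.T) - R)[~mask] ** 2))

# Layer 2 (PMLF idea): randomly sample layer-1 predictions and inject them
# as synthetic observations, densifying the next layer's input.
inject = (~mask) & (rng.random(R.shape) < 0.3)
R2 = np.where(mask, R, P1 @ Q1.T)
P2, Q2 = mf_train(R2, mask | inject)
rmse2 = np.sqrt(np.mean(((P2 @ Q2.T) - R)[~mask] ** 2))

rms_zero = np.sqrt(np.mean(R[~mask] ** 2))  # trivial all-zeros predictor
```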
Citations: 7

Hide and Mine in Strings: Hardness and Algorithms
Pub Date: 2020-11-01 | DOI: 10.1109/ICDM50108.2020.00103
G. Bernardini, A. Conte, Garance Gourdel, R. Grossi, G. Loukides, N. Pisanti, S. Pissis, G. Punzi, L. Stougie, Michelle Sweering
Abstract: We initiate a study of the fundamental relation between data sanitization (i.e., the process of hiding confidential information in a given dataset) and frequent pattern mining, in the context of sequential (string) data. Current methods for string sanitization hide confidential patterns but, in doing so, introduce spurious patterns that may harm the utility of frequent pattern mining. The main computational problem is to minimize this harm. Our contribution is twofold. First, we present several hardness results for different variants of this problem, essentially showing that these variants cannot be solved, or even approximated, in polynomial time. Second, we propose integer linear programming formulations for these variants, along with algorithms to solve them that run in polynomial time under certain realistic assumptions on the problem parameters.
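A tiny brute-force version makes the trade-off concrete: given a string whose confidential pattern occurrences were masked with '#', choose replacement letters so no sensitive k-gram reappears while the number of spurious k-grams (those absent from the original string) is minimized. This exhaustive search is only a stand-in for the paper's ILP formulations, and the example instance is invented.

```python
from itertools import product

def kgrams(s, k):
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def fill_holes(masked, original, alphabet, sensitive, k):
    """Brute-force stand-in for an ILP: assign a letter to every '#' hole so
    that no sensitive k-gram reappears, minimizing spurious k-grams."""
    holes = [i for i, c in enumerate(masked) if c == "#"]
    orig_grams = kgrams(original, k)
    best, best_cost = None, None
    for letters in product(alphabet, repeat=len(holes)):
        cand = list(masked)
        for pos, ch in zip(holes, letters):
            cand[pos] = ch
        cand = "".join(cand)
        grams = kgrams(cand, k)
        if grams & set(sensitive):
            continue                      # this assignment would leak a pattern
        cost = len(grams - orig_grams)    # spurious patterns harm mining utility
        if best_cost is None or cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

# 'aba' is confidential; it was masked out of "aabab" at position 2.
sanitized, spurious = fill_holes("aa#ab", "aabab", "ab", {"aba"}, 3)
```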
Citations: 6

Fast Sparse Connectivity Network Adaption via Meta-Learning
Pub Date: 2020-11-01 | DOI: 10.1109/ICDM50108.2020.00032
Bo Jin, Ke Cheng, Yue Qu, Liang Zhang, Keli Xiao, Xinjiang Lu, Xiaopeng Wei
Abstract: Partial correlation-based connectivity networks can describe the direct connectivity between features while avoiding spurious effects, so they can be used to diagnose complex dynamic multivariate systems. However, existing studies mainly focus on single systems and are ill-equipped for incremental learning. Moreover, related methods estimate temporal connectivity networks by imposing only sparse regularization, without integrating pattern priors (e.g., inter-system shared patterns and intra-system intrinsic patterns), which have proven effective in limiting noise interference. To this end, we develop an adaptive connectivity estimation model that incorporates prior patterns, the Sparse Adaptive Meta-Learning Connectivity Network (SAMCN). Specifically, our model extends ideas from gradient-based meta-learning to capture inter-system shared prior information by generating fast-adapting initialization parameters for the connectivity matrix. A sparse variational autoencoder then generates a weight matrix for the sparse regularization penalty in reweighted LASSO, which helps extract intra-system intrinsic patterns (local manifold structure). Experimental results on both synthetic and real-world datasets demonstrate that our method adequately captures the aforementioned pattern priors. Further experiments on corresponding classification tasks validate that the prior-pattern-aware connectivity network yields better classification performance.
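The meta-learned initialization idea (the inter-system shared prior) can be shown on a one-parameter toy: tasks share a common structure, and a first-order meta-learning loop (Reptile-style, used here as a simpler stand-in for SAMCN's gradient-based meta-learning over connectivity matrices) finds an init from which a new task adapts in a single gradient step. The task family, learning rates, and the omission of the sparse-VAE/reweighted-LASSO part are all simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(w, X, y):
    """Gradient of mean squared error for the 1-parameter model y ~ w * x."""
    return np.mean(2 * (w * X - y) * X)

def adapt(w0, X, y, lr=0.1, steps=1):
    """Inner loop: a few gradient steps from the (meta-learned) init."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w, X, y)
    return w

def sample_task():
    """A 'system' from a family with a shared prior: slopes cluster near 3."""
    w_true = 3.0 + rng.normal(scale=0.3)
    X = rng.normal(size=20)
    return X, w_true * X

# First-order meta-learning: nudge the shared init toward each task's
# adapted parameters, capturing the inter-system shared pattern.
w0 = 0.0
for _ in range(200):
    X, y = sample_task()
    w0 += 0.1 * (adapt(w0, X, y, steps=3) - w0)

# A new system adapts in one gradient step far better from the learned init.
Xn, yn = sample_task()
loss = lambda w: float(np.mean((w * Xn - yn) ** 2))
l_meta = loss(adapt(w0, Xn, yn, steps=1))
l_scratch = loss(adapt(0.0, Xn, yn, steps=1))
```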
Citations: 1

Exploiting Knowledge Hierarchy for Finding Similar Exercises in Online Education Systems
Pub Date: 2020-11-01 | DOI: 10.1109/ICDM50108.2020.00167
Wei Tong, Shiwei Tong, Wei Huang, Liyang He, Jianhui Ma, Qi Liu, Enhong Chen
Abstract: In education systems, Finding Similar Exercises (FSE) is the key step for both exercise retrieval and duplicate detection. Recently, this area has attracted increasing attention, and several works have been proposed that utilize exercise content (e.g., texts or images) or labeled knowledge concepts. Such approaches, however, fail to take the knowledge hierarchy into account. To this end, we propose a novel knowledge-aware multimodal network, KnowNet, for finding similar exercises in large-scale online education systems; it integrates the knowledge hierarchy into heterogeneous exercise data and learns a relation-aware semantic representation. Specifically, we first propose a Content Representation Layer (CRL) to learn a unified semantic representation of the heterogeneous exercise content. We then design a Hierarchy Fusion Layer (HFL) to exploit the knowledge hierarchy; by incorporating it, HFL not only retrieves the relation-aware semantic representation but also provides an interpretable view for investigating exercise similarity. Finally, a Similarity Score Layer (SSL) returns similar exercises. Extensive experiments demonstrate the effectiveness and interpretability of KnowNet.
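The interpretable role of the hierarchy can be sketched without any learned network: score exercise pairs by fusing a content similarity with an ancestor-overlap similarity over the knowledge hierarchy, so exercises on sibling concepts outrank exercises on distant branches. The taxonomy, the Jaccard overlap, and the fusion weight `alpha` are illustrative assumptions, not KnowNet's actual layers.

```python
# Hypothetical knowledge hierarchy: concept -> parent concept.
parent = {"quadratic_eq": "equations", "linear_eq": "equations",
          "equations": "algebra", "triangles": "geometry",
          "algebra": "math", "geometry": "math"}

def ancestors(concept):
    """Walk up the knowledge hierarchy from a concept to the root."""
    path = [concept]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return set(path)

def hierarchy_sim(c1, c2):
    """Jaccard overlap of ancestor sets: an interpretable similarity that
    grows with the depth of the shared branch."""
    a1, a2 = ancestors(c1), ancestors(c2)
    return len(a1 & a2) / len(a1 | a2)

def exercise_sim(content_sim, c1, c2, alpha=0.5):
    """Fuse content similarity with hierarchy similarity (alpha assumed)."""
    return alpha * content_sim + (1 - alpha) * hierarchy_sim(c1, c2)

# Same content similarity, different positions in the hierarchy.
s_close = exercise_sim(0.6, "quadratic_eq", "linear_eq")
s_far = exercise_sim(0.6, "quadratic_eq", "triangles")
```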
Citations: 3