2018 IEEE International Conference on Data Mining (ICDM): Latest Publications

Accelerating Experimental Design by Incorporating Experimenter Hunches
2018 IEEE International Conference on Data Mining (ICDM) | Pub Date: 2018-11-01 | DOI: 10.1109/ICDM.2018.00041
Cheng Li, Santu Rana, Sunil Gupta, Vu Nguyen, S. Venkatesh, A. Sutti, D. R. Leal, Teo Slezak, Murray Height, M. Mohammed, I. Gibson
Abstract: Experimental design is the process of obtaining a product with a target property via experimentation. Bayesian optimization offers a sample-efficient tool for experimental design when experiments are expensive. Often, expert experimenters have 'hunches' about the behavior of the experimental system, offering the potential to further improve efficiency. In this paper, we consider a per-variable monotonic trend in the underlying property, which results in a unimodal trend in those variables for target-value optimization. For example, the sweetness of a candy is monotonic in its sugar content. However, to obtain a target sweetness, the utility of the sugar content becomes a unimodal function, which peaks at the value giving the target sweetness and falls off in both directions. We propose a novel method to solve such problems that achieves two main objectives: a) the monotonicity information is used to the fullest extent possible, whilst ensuring that b) the convergence guarantee remains intact. This is achieved by two-stage Gaussian process modeling, where the first stage uses the monotonicity trend to model the underlying property, and the second stage uses 'virtual' samples, drawn from the first, to model the target-value optimization function. The process is made theoretically consistent by adding an appropriate adjustment factor to the posterior computation, necessitated by the use of the 'virtual' samples. The proposed method is evaluated through both simulations and real-world experimental design problems: a) designing a new short polymer fiber with a target length, and b) designing a new three-dimensional porous scaffold with a target porosity. In all scenarios, our method demonstrates faster convergence than the basic Bayesian optimization approach that does not use such 'hunches'.
Citations: 23
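For readers who want to connect the two-stage idea in the abstract above to code, below is a minimal sketch assuming a toy one-dimensional design problem and scikit-learn's Gaussian process regressor. It only illustrates the flow (property GP, virtual samples, utility GP, acquisition); the paper's monotonicity modeling and posterior adjustment factor are not reproduced, and all names and values are made up.

```python
# Minimal sketch: two-stage GP modeling for target-value optimization.
# Stage 1 models the underlying property; stage 2 models the utility built
# from 'virtual' samples drawn from stage 1. Illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

target = 0.7                                   # desired property value (illustrative)

# Observed experiments: design variable x -> measured property y (monotonic in x).
X_obs = np.array([[0.1], [0.3], [0.5], [0.9]])
y_obs = np.array([0.15, 0.35, 0.55, 0.95])     # toy monotonic responses

# Stage 1: GP over the underlying property.
property_gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-4)
property_gp.fit(X_obs, y_obs)

# 'Virtual' samples: draw property values on a candidate grid from the stage-1 GP
# and convert them into a target-value utility that peaks where the property hits target.
X_cand = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
virtual_y = property_gp.sample_y(X_cand, n_samples=1, random_state=0).ravel()
virtual_utility = -np.abs(virtual_y - target)

# Stage 2: GP over the unimodal utility built from the virtual samples.
utility_gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-4)
utility_gp.fit(X_cand, virtual_utility)

# Suggest the next experiment with a simple upper-confidence-bound acquisition.
mu, sd = utility_gp.predict(X_cand, return_std=True)
x_next = X_cand[np.argmax(mu + 2.0 * sd)]
print("next experiment at x =", float(x_next[0]))
```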
Highly Parallel Sequential Pattern Mining on a Heterogeneous Platform
2018 IEEE International Conference on Data Mining (ICDM) | Pub Date: 2018-11-01 | DOI: 10.1109/ICDM.2018.00131
Yu-Heng Hsieh, Chun-Chieh Chen, Hong-Han Shuai, Ming-Syan Chen
Abstract: Sequential pattern mining can be applied to various fields such as disease prediction and stock analysis. Many algorithms have been proposed for sequential pattern mining, together with acceleration methods. In this paper, we show that a heterogeneous platform with a CPU and a GPU is more suitable for sequential pattern mining than traditional CPU-based approaches, since the support counting process is inherently succinct and repetitive. We therefore propose the PArallel SequenTial pAttern mining algorithm, referred to as PASTA, to accelerate sequential pattern mining by combining the merits of CPU and GPU computing. Explicitly, PASTA adopts a vertical bitmap representation of the database to exploit GPU parallelism. In addition, a pipeline strategy is proposed to ensure that both the CPU and the GPU on the heterogeneous platform operate concurrently, fully utilizing the computing power of the platform. Furthermore, we develop a swapping scheme to mitigate the limited-memory problem of GPU hardware without degrading performance. Finally, comprehensive experiments are conducted to compare PASTA with different baselines. The experiments show that PASTA outperforms the state-of-the-art algorithms by orders of magnitude on both real and synthetic datasets.
Citations: 1
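The "succinct and repetitive" support counting that makes GPUs attractive here is easiest to see with a SPAM-style vertical bitmap. The sketch below is a toy CPU-only illustration with made-up data showing how the support of a sequence extension reduces to bitwise masking and counting; PASTA's CPU/GPU pipeline and swapping scheme are not shown.

```python
# Minimal sketch of SPAM-style vertical-bitmap support counting, the kind of
# kernel the PASTA abstract targets for GPU acceleration.
import numpy as np

# Toy database: 3 sequences, up to 4 itemset positions; bitmaps[item][s, p] = True
# if the item occurs at position p of sequence s.
bitmaps = {
    "a": np.array([[1, 0, 1, 0],
                   [1, 1, 0, 0],
                   [0, 0, 0, 0]], dtype=bool),
    "b": np.array([[0, 1, 0, 1],
                   [0, 0, 1, 0],
                   [0, 1, 0, 0]], dtype=bool),
}

def after_first_occurrence(prefix_bitmap):
    """Mask of positions strictly after the first set bit of each row."""
    mask = np.zeros_like(prefix_bitmap)
    for s, row in enumerate(prefix_bitmap):
        hits = np.flatnonzero(row)
        if hits.size:
            mask[s, hits[0] + 1:] = True
    return mask

def sequence_extension_support(prefix_bitmap, item_bitmap):
    """Support of <prefix, item>: sequences where the item occurs after the prefix."""
    extended = after_first_occurrence(prefix_bitmap) & item_bitmap
    return int(extended.any(axis=1).sum()), extended

support, ab_bitmap = sequence_extension_support(bitmaps["a"], bitmaps["b"])
print("support of <a, b> =", support)   # sequences 0 and 1 contain 'a' followed by 'b'
```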
DeepDiffuse: Predicting the 'Who' and 'When' in Cascades
2018 IEEE International Conference on Data Mining (ICDM) | Pub Date: 2018-11-01 | DOI: 10.1109/ICDM.2018.00134
Mohammad Raihanul Islam, S. Muthiah, B. Adhikari, B. Prakash, Naren Ramakrishnan
Abstract: Cascades are an accepted model for capturing how information diffuses across social network platforms. A large body of research has focused on dissecting the anatomy of such cascades and forecasting their progression. One recurring theme involves predicting the next stage(s) of cascades using pertinent information such as the underlying social network, structural properties of nodes (e.g., degree), and (partial) histories of cascade propagation. However, such granular information is rarely available in practice. In this paper, we study the problem of cascade prediction using only two types of (coarse) information: which node is infected and its corresponding infection time. We first construct several simple baselines for this cascade prediction problem. Then we describe the shortcomings of these methods and propose a new solution leveraging recent progress in embeddings and attention models from representation learning. We also perform an exhaustive analysis of our methods on several real-world datasets. Our proposed model outperforms the baselines and several other state-of-the-art methods.
Citations: 68
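To make the coarse-information setting concrete, here is a minimal, non-faithful sketch of scoring the next infected node with node embeddings and attention over the cascade history. The embeddings and scoring vector are random placeholders that would be learned in practice, and this sketch does not use infection times as DeepDiffuse does.

```python
# Minimal sketch: predict the next infected node from an observed cascade
# using node embeddings and soft attention over the infection history.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 10, 8
node_emb = rng.normal(size=(n_nodes, dim))      # embedding per node (would be learned)
attn_query = rng.normal(size=dim)               # attention scoring vector (would be learned)

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def predict_next(cascade_nodes):
    """Attend over the infected-node history and score every node as the next infection."""
    hist = node_emb[cascade_nodes]               # (len(cascade), dim)
    weights = softmax(hist @ attn_query)         # attention over history positions
    context = weights @ hist                     # weighted summary of the cascade so far
    scores = node_emb @ context                  # affinity of each node to the context
    scores[cascade_nodes] = -np.inf              # already-infected nodes cannot reappear
    return softmax(scores)

probs = predict_next([2, 5, 7])
print("most likely next node:", int(np.argmax(probs)))
```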
ASTM: An Attentional Segmentation Based Topic Model for Short Texts
2018 IEEE International Conference on Data Mining (ICDM) | Pub Date: 2018-11-01 | DOI: 10.1109/ICDM.2018.00073
Jiamiao Wang, Ling Chen, Lu Qin, Xindong Wu
Abstract: To address the data sparsity problem in short text understanding, various alternative topic models leveraging word embeddings as background knowledge have been developed recently. However, existing models combine auxiliary information and topic modeling in a straightforward way, without considering human reading habits. In contrast, extensive studies have shown that taking human attention into account holds great potential for textual analysis. Therefore, we propose a novel model, the Attentional Segmentation based Topic Model (ASTM), which integrates word embeddings as supplementary information with an attention mechanism that segments short text documents into fragments of adjacent words receiving similar attention. Each segment is assigned to a topic, and each document can have multiple topics. We evaluate the performance of our model on three real-world short text datasets. The experimental results demonstrate that our model outperforms the state of the art in terms of both topic coherence and text classification.
Citations: 10
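A rough sketch of the attention-driven segmentation step is shown below, assuming pretrained word embeddings (randomly generated here as placeholders) and a made-up attention score based on similarity to the document centroid; the topic-assignment side of ASTM is not modeled.

```python
# Minimal sketch: split a short text into fragments of adjacent words
# that receive similar attention scores.
import numpy as np

rng = np.random.default_rng(1)
words = ["cheap", "flight", "deals", "great", "weather", "today"]
emb = {w: rng.normal(size=5) for w in words}     # stand-in for pretrained embeddings

vecs = np.stack([emb[w] for w in words])
doc_vec = vecs.mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

attention = np.array([cosine(v, doc_vec) for v in vecs])

def segment(words, attention, threshold=0.2):
    """Group adjacent words whose attention scores differ by less than the threshold."""
    segments, current = [], [words[0]]
    for i in range(1, len(words)):
        if abs(attention[i] - attention[i - 1]) <= threshold:
            current.append(words[i])
        else:
            segments.append(current)
            current = [words[i]]
    segments.append(current)
    return segments

print(segment(words, attention))
```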
Doc2Cube: Allocating Documents to Text Cube Without Labeled Data
2018 IEEE International Conference on Data Mining (ICDM) | Pub Date: 2018-11-01 | DOI: 10.1109/ICDM.2018.00169
Fangbo Tao, Chao Zhang, Xiusi Chen, Meng Jiang, T. Hanratty, Lance M. Kaplan, Jiawei Han
Abstract: The data cube is a cornerstone architecture for multidimensional analysis of structured datasets. It is highly desirable to conduct multidimensional analysis on text corpora with cube structures for various text-intensive applications in healthcare, business intelligence, and social media analysis. However, one bottleneck in constructing a text cube is automatically putting millions of documents into the right cube cells so that quality multidimensional analysis can be conducted afterwards; it is too expensive to allocate documents manually or to rely on massive amounts of labeled data. We propose Doc2Cube, a method that constructs a text cube from a given text corpus in an unsupervised way. Initially, only the label names (e.g., USA, China) of each dimension (e.g., location) are provided, instead of any labeled data. Doc2Cube leverages label names as weak supervision signals and iteratively performs joint embedding of labels, terms, and documents to uncover their semantic similarities. To generate joint embeddings that are discriminative for cube construction, Doc2Cube learns dimension-tailored document representations by selectively focusing on terms that are highly label-indicative in each dimension. Furthermore, Doc2Cube alleviates label sparsity by propagating information from label names to other terms and enriching the labeled term set. Our experiments on real data demonstrate the superiority of Doc2Cube over existing methods.
Citations: 19
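The sketch below illustrates the allocation idea with only label names as supervision: weight a document's terms by how label-indicative they are, and assign the document to the most similar label in each dimension. It uses a tiny made-up vocabulary and random vectors, and omits Doc2Cube's iterative joint embedding and label-term propagation.

```python
# Minimal sketch: allocate a document to a cube cell using only label-name
# embeddings as weak supervision.
import numpy as np

rng = np.random.default_rng(2)
vocab = ["usa", "china", "election", "trade", "beijing", "washington"]
word_vec = {w: rng.normal(size=6) for w in vocab}   # stand-in for learned term embeddings

dimensions = {"location": ["usa", "china"]}          # label names are the only supervision

def doc_vector(tokens, labels):
    """Dimension-tailored doc vector: weight terms by how label-indicative they are."""
    label_mat = np.stack([word_vec[l] for l in labels])
    vecs, weights = [], []
    for t in tokens:
        v = word_vec[t]
        sims = label_mat @ v / (np.linalg.norm(label_mat, axis=1) * np.linalg.norm(v))
        vecs.append(v)
        weights.append(sims.max())                   # focus on label-indicative terms
    weights = np.maximum(np.array(weights), 0)
    if weights.sum() == 0:
        weights = np.ones(len(vecs))
    weights = weights / weights.sum()
    return weights @ np.stack(vecs)

def allocate(tokens):
    """Assign the document to the most similar label in each cube dimension."""
    cell = {}
    for dim, labels in dimensions.items():
        d = doc_vector(tokens, labels)
        sims = [d @ word_vec[l] / (np.linalg.norm(d) * np.linalg.norm(word_vec[l]))
                for l in labels]
        cell[dim] = labels[int(np.argmax(sims))]
    return cell

print(allocate(["washington", "election", "trade"]))
```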
Similarity-Based Active Learning for Image Classification Under Class Imbalance
2018 IEEE International Conference on Data Mining (ICDM) | Pub Date: 2018-11-01 | DOI: 10.1109/ICDM.2018.00196
Chuanhai Zhang, Wallapak Tavanapong, Gavin Kijkul, J. Wong, P. C. Groen, Jung-Hwan Oh
Abstract: Many image classification tasks (e.g., medical image classification) have a severe class imbalance problem. The convolutional neural network (CNN) is currently a state-of-the-art method for image classification, but it relies on a large training dataset to achieve high classification performance. However, manual labeling is costly and may not even be feasible in the medical domain. In this paper, we propose a novel similarity-based active deep learning framework (SAL) that deals with class imbalance. SAL actively learns a similarity model to recommend unlabeled rare-class samples for experts' manual labeling. Based on similarity ranking, SAL recommends high-confidence unlabeled common-class samples for automatic pseudo-labeling without experts' labeling effort. To the best of our knowledge, SAL is the first active deep learning framework that deals with significant class imbalance. Our experiments show that SAL consistently outperforms two other recent active deep learning methods on two challenging datasets. Moreover, SAL obtains nearly the upper-bound classification performance (that of using all the images in the training dataset) while the domain experts labeled only 5.6% and 7.5% of all images in the Endoscopy dataset and the Caltech-256 dataset, respectively. SAL significantly reduces the experts' manual labeling effort while achieving near-optimal classification performance.
Citations: 21
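As a rough illustration of the selection logic, the sketch below ranks unlabeled samples by similarity to the labeled rare class (for expert labeling) and to the common class (for pseudo-labeling). The features stand in for CNN features, the thresholds and counts are arbitrary, and SAL's learned similarity model is not reproduced.

```python
# Minimal sketch: similarity-ranked selection under class imbalance.
import numpy as np

rng = np.random.default_rng(3)
feat_dim = 16
labeled_rare = rng.normal(loc=1.0, size=(5, feat_dim))      # few labeled rare-class samples
labeled_common = rng.normal(loc=-1.0, size=(50, feat_dim))  # plenty of common-class samples
unlabeled = np.vstack([rng.normal(loc=-1.0, size=(150, feat_dim)),   # mostly common-like
                       rng.normal(loc=1.0, size=(20, feat_dim)),     # a few rare-like
                       rng.normal(size=(30, feat_dim))])             # ambiguous

def max_cosine(x, ref):
    """Similarity of each row of x to its closest row in ref."""
    x_n = x / np.linalg.norm(x, axis=1, keepdims=True)
    ref_n = ref / np.linalg.norm(ref, axis=1, keepdims=True)
    return (x_n @ ref_n.T).max(axis=1)

rare_sim = max_cosine(unlabeled, labeled_rare)
common_sim = max_cosine(unlabeled, labeled_common)

# The most rare-like unlabeled samples go to the expert for manual labeling.
to_label = np.argsort(-rare_sim)[:10]

# High-confidence common-like samples are pseudo-labeled without expert effort.
pseudo_common = np.flatnonzero((common_sim > 0.7) & (rare_sim < 0.3))

print("ask expert about:", sorted(to_label.tolist()))
print("number pseudo-labeled as common:", pseudo_common.size)
```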
MuVAN: A Multi-view Attention Network for Multivariate Temporal Data
2018 IEEE International Conference on Data Mining (ICDM) | Pub Date: 2018-11-01 | DOI: 10.1109/ICDM.2018.00087
Ye Yuan, Guangxu Xun, Fenglong Ma, Yaqing Wang, Nan Du, Ke-bin Jia, Lu Su, Aidong Zhang
Abstract: Recent advances in attention networks have attracted enormous interest in time series data mining. Various attention mechanisms have been proposed to soft-select relevant timestamps from temporal data by assigning learnable attention scores. However, many real-world tasks involve complex multivariate time series that continuously measure a target from multiple views. Different views may provide information of varying quality over time, and thus should also be assigned different attention scores. Unfortunately, existing attention-based architectures cannot be directly used to jointly learn attention scores in both the time and view domains, due to the complexity of the data structure. To this end, we propose a novel multi-view attention network, MuVAN, to learn fine-grained attentional representations from multivariate temporal data. MuVAN is a unified deep learning model that jointly calculates two-dimensional attention scores to estimate the quality of information contributed by each view at different timestamps. By constructing a hybrid focus procedure, we are able to bring more diversity to attention in order to fully utilize the multi-view information. To evaluate the performance of our model, we carry out experiments on three real-world benchmark datasets. Experimental results show that the proposed MuVAN model outperforms state-of-the-art deep representation approaches on different real-world tasks. Analytical results from a case study demonstrate that MuVAN can discover discriminative and meaningful attention scores across views over time, which improves the feature representation of multivariate temporal data.
Citations: 53
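A minimal sketch of two-dimensional (view by time) attention is given below: score every (view, timestamp) cell, normalize jointly over the grid, and pool the hidden states into a context vector. The hidden states and scoring vector are random placeholders, and MuVAN's hybrid focus procedure and learned encoders are not included.

```python
# Minimal sketch: joint attention over views and timestamps.
import numpy as np

rng = np.random.default_rng(4)
n_views, n_steps, dim = 3, 6, 8
hidden = rng.normal(size=(n_views, n_steps, dim))   # per-view, per-step hidden states
score_vec = rng.normal(size=dim)                    # scoring vector (would be learned)

energies = hidden @ score_vec                       # (n_views, n_steps) attention energies
flat = energies.ravel()
weights = np.exp(flat - flat.max())
weights = (weights / weights.sum()).reshape(n_views, n_steps)

# Context vector: attention-weighted sum over both views and timestamps.
context = np.tensordot(weights, hidden, axes=([0, 1], [0, 1]))

print("attention grid shape:", weights.shape)       # (3, 6), sums to 1 over the grid
print("context vector shape:", context.shape)       # (8,)
```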
Partial Multi-view Clustering via Consistent GAN
2018 IEEE International Conference on Data Mining (ICDM) | Pub Date: 2018-11-01 | DOI: 10.1109/ICDM.2018.00174
Qianqian Wang, Zhengming Ding, Zhiqiang Tao, Quanxue Gao, Y. Fu
Abstract: Multi-view clustering, as one of the most important methods for analyzing multi-view data, has been widely used in many real-world applications. Most existing multi-view clustering methods perform well under the assumption that each sample appears in all views. Nevertheless, in real-world applications, each view may face missing data due to noise or malfunction. In this paper, a new consistent generative adversarial network is proposed for partial multi-view clustering. We learn a common low-dimensional representation that can both generate the missing view data and capture a better common structure from partial multi-view data for clustering. Unlike most existing methods, we use the common representation encoded from one view to generate the missing data of the corresponding view via generative adversarial networks, and then apply the encoder and clustering networks. This is intuitive and meaningful because, in our model, encoding the common representation and generating the missing data mutually reinforce each other. Experimental results on three different multi-view databases illustrate the superiority of the proposed method.
Citations: 81
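The sketch below only traces the workflow under heavy simplification: encode the always-present view into a common representation, impute the missing view from it, and cluster. PCA and a least-squares map stand in for the learned encoder and the GAN generator, so no adversarial training is shown; all data and dimensions are made up.

```python
# Minimal sketch: partial multi-view pipeline with imputation from a common
# representation, then clustering.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n, d1, d2, k = 100, 10, 8, 3
view1 = rng.normal(size=(n, d1))                        # observed for every sample
view2 = rng.normal(size=(n, d2))
observed2 = rng.random(n) > 0.4                         # only ~60% of samples have view 2

# Encoder stand-in: common low-dimensional representation from view 1.
encoder = PCA(n_components=4).fit(view1)
z = encoder.transform(view1)

# Generator stand-in: learn a z -> view2 map on complete samples, impute the rest.
W, *_ = np.linalg.lstsq(z[observed2], view2[observed2], rcond=None)
view2_full = view2.copy()
view2_full[~observed2] = z[~observed2] @ W

# Cluster using the common representation together with the (partly imputed) second view.
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(np.hstack([z, view2_full]))
print("cluster sizes:", np.bincount(labels))
```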
Deep Structure Learning for Fraud Detection
2018 IEEE International Conference on Data Mining (ICDM) | Pub Date: 2018-11-01 | DOI: 10.1109/ICDM.2018.00072
Haibo Wang, Chuan Zhou, Jia Wu, Weizhen Dang, Xingquan Zhu, Jilong Wang
Abstract: Fraud detection is of great importance because fraudulent behaviors may mislead consumers or bring huge losses to enterprises. Due to the lockstep nature of fraudulent behaviors, the fraud detection problem can be viewed as finding suspicious dense blocks in an attributed bipartite graph. In reality, existing attribute-based methods are not adversarially robust, because fraudsters can take camouflage actions to make their behavior attributes appear normal. More importantly, existing methods based on structural information only consider shallow topology, making their effectiveness sensitive to the density of suspicious blocks. In this paper, we propose a novel deep structure learning model named DeepFD to differentiate normal users from suspicious users. DeepFD preserves the non-linear graph structure and user behavior information simultaneously. Experimental results on different types of datasets demonstrate that DeepFD outperforms state-of-the-art baselines.
Citations: 43
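As a loose illustration of the dense-block intuition, the sketch below embeds users from the user-item adjacency matrix and flags users with many near-duplicate neighbors. Truncated SVD is a stand-in for DeepFD's deep structure-preserving encoder, and the lockstep block, thresholds, and data are synthetic; this is not the paper's method.

```python
# Minimal sketch: embed users from a bipartite user-item graph and flag
# lockstep (near-identical) groups as suspicious.
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(6)
n_users, n_items = 200, 50

# Sparse "normal" behavior, plus a lockstep block: users 0-19 hit exactly items 0-4.
adj = (rng.random((n_users, n_items)) < 0.1).astype(float)
adj[:20] = 0.0
adj[:20, :5] = 1.0

# Encoder stand-in: low-dimensional user embeddings from the bipartite adjacency.
emb = TruncatedSVD(n_components=8, random_state=0).fit_transform(adj)

# Count near-duplicate neighbours in the embedding space; lockstep users have many.
norms = np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12
sim = (emb / norms) @ (emb / norms).T
near_duplicates = (sim > 0.95).sum(axis=1) - 1        # exclude self-similarity
suspicious = np.flatnonzero(near_duplicates >= 15)
print("flagged users:", suspicious.tolist())          # should contain users 0-19
```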
Privacy-Preserving Temporal Record Linkage
2018 IEEE International Conference on Data Mining (ICDM) | Pub Date: 2018-11-01 | DOI: 10.1109/ICDM.2018.00053
Thilina Ranbaduge, P. Christen
Abstract: Record linkage (RL) is the process of identifying matching records from different databases that refer to the same entity. It is common for the attribute values of records that belong to the same entity to evolve over time; for example, people can change their surname or address. Therefore, to identify the records that refer to the same entity over time, RL should make use of temporal information such as the timestamp of when a record was created and/or last updated. However, if RL needs to be conducted on information about people, organizations are often, due to privacy and confidentiality concerns, not willing or allowed to share sensitive data in their databases, such as personal medical records or location and financial details, with other organizations. This paper is the first to propose a privacy-preserving temporal record linkage (PPTRL) protocol that can link records across different databases while ensuring the privacy of the sensitive data in these databases. We propose a novel protocol based on Bloom filter encoding that incorporates the temporal information available in records during the linkage process. Our approach uses homomorphic encryption to securely calculate the probabilities of entities changing attribute values in their records over a period of time. Based on these probabilities, we generate a set of masking Bloom filters to adjust the similarities between record pairs. We provide a theoretical analysis of the complexity and privacy of our technique and conduct an empirical study on large real databases containing several million records. The experimental results show that our approach achieves better linkage quality than non-temporal PPRL while providing privacy to the individuals in the databases being linked.
Citations: 6
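Below is a small sketch of the Bloom filter building block that such protocols rest on: encode a string's q-grams into a Bloom filter and compare two encodings with the Dice coefficient. The temporal masking filters and the homomorphic computation of change probabilities described in the abstract are not shown, and the filter length, hash count, and q are arbitrary choices for illustration.

```python
# Minimal sketch: Bloom-filter encoding of q-grams and Dice similarity,
# the standard building block of privacy-preserving record linkage.
import hashlib

BF_LEN, NUM_HASHES, Q = 1000, 20, 2

def qgrams(value, q=Q):
    padded = f"_{value.lower()}_"
    return {padded[i:i + q] for i in range(len(padded) - q + 1)}

def bloom_encode(value):
    """Set of bit positions switched on by hashing each q-gram NUM_HASHES times."""
    bits = set()
    for gram in qgrams(value):
        for seed in range(NUM_HASHES):
            digest = hashlib.sha256(f"{seed}:{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % BF_LEN)
    return bits

def dice(bf1, bf2):
    """Dice coefficient between two Bloom filters (sets of set bit positions)."""
    return 2 * len(bf1 & bf2) / (len(bf1) + len(bf2))

# Each party encodes locally and only the hard-to-invert bit patterns are compared.
print(dice(bloom_encode("smith"), bloom_encode("smyth")))   # high: likely a match
print(dice(bloom_encode("smith"), bloom_encode("jones")))   # low: non-match
```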