Proceedings. IEEE International Conference on Computer Vision: Latest Publications

WATERSHED MERGE FOREST CLASSIFICATION FOR ELECTRON MICROSCOPY IMAGE STACK SEGMENTATION.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2013-09-01 DOI: 10.1109/ICIP.2013.6738838
Ting Liu, Mojtaba Seyedhosseini, Mark Ellisman, Tolga Tasdizen
{"title":"WATERSHED MERGE FOREST CLASSIFICATION FOR ELECTRON MICROSCOPY IMAGE STACK SEGMENTATION.","authors":"Ting Liu,&nbsp;Mojtaba Seyedhosseini,&nbsp;Mark Ellisman,&nbsp;Tolga Tasdizen","doi":"10.1109/ICIP.2013.6738838","DOIUrl":"https://doi.org/10.1109/ICIP.2013.6738838","url":null,"abstract":"<p><p>Automated electron microscopy (EM) image analysis techniques can be tremendously helpful for connectomics research. In this paper, we extend our previous work [1] and propose a fully automatic method to utilize inter-section information for intra-section neuron segmentation of EM image stacks. A watershed merge forest is built via the watershed transform with each tree representing the region merging hierarchy of one 2D section in the stack. A section classifier is learned to identify the most likely region correspondence between adjacent sections. The inter-section information from such correspondence is incorporated to update the potentials of tree nodes. We resolve the merge forest using these potentials together with consistency constraints to acquire the final segmentation of the whole stack. We demonstrate that our method leads to notable segmentation accuracy improvement by experimenting with two types of EM image data sets.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2013 ","pages":"4069-4073"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ICIP.2013.6738838","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32887793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 14
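The pipeline described in this entry starts from a per-section watershed over-segmentation and builds a merge tree over it. The following is a minimal, hypothetical sketch of that first stage only, using mean boundary gradient as a stand-in for the classifier-learned merge potentials; the seeding rule, parameters, and greedy merging criterion are illustrative assumptions, and the paper's inter-section linking is not reproduced here.

```python
# Minimal sketch (not the authors' implementation): watershed over-segmentation
# of one 2D section, followed by a greedy merge hierarchy ("merge tree").
# Boundary strength here is mean gradient magnitude along the shared boundary,
# a stand-in for the learned merge potentials used in the paper.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def initial_oversegmentation(section):
    """Watershed of the gradient image, seeded from low-gradient blobs."""
    gradient = sobel(section.astype(float))
    markers, _ = ndi.label(gradient < np.percentile(gradient, 20))  # hypothetical seeding rule
    return watershed(gradient, markers), gradient

def boundary_strengths(labels, gradient):
    """Mean gradient along the boundary of every 4-adjacent region pair."""
    sums, counts = {}, {}
    for a, b, g in ((labels[:-1, :], labels[1:, :], gradient[1:, :]),
                    (labels[:, :-1], labels[:, 1:], gradient[:, 1:])):
        mask = a != b
        for la, lb, gv in zip(a[mask], b[mask], g[mask]):
            key = (min(la, lb), max(la, lb))
            sums[key] = sums.get(key, 0.0) + gv
            counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

def merge_tree(labels, gradient, n_merges=100):
    """Repeatedly merge the weakest-boundary region pair, recording the hierarchy."""
    tree, next_label = [], labels.max() + 1
    for _ in range(n_merges):
        strengths = boundary_strengths(labels, gradient)   # naive full recomputation
        if not strengths:
            break
        la, lb = min(strengths, key=strengths.get)
        labels = np.where((labels == la) | (labels == lb), next_label, labels)
        tree.append((la, lb, next_label))   # node `next_label` has children la, lb
        next_label += 1
    return labels, tree

# Usage on one EM section (a 2D numpy array `section`):
#   labels, gradient = initial_oversegmentation(section)
#   merged, tree = merge_tree(labels, gradient)
```

In the method described above, such (child, child, parent) triples would form one tree of the merge forest, whose node potentials are then updated from the predicted region correspondences with adjacent sections.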
Facial Action Unit Event Detection by Cascade of Tasks.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2013-01-01 DOI: 10.1109/ICCV.2013.298
Xiaoyu Ding, Wen-Sheng Chu, Fernando De la Torre, Jeffrey F Cohn, Qiao Wang
{"title":"Facial Action Unit Event Detection by Cascade of Tasks.","authors":"Xiaoyu Ding, Wen-Sheng Chu, Fernando De la Torre, Jeffery F Cohn, Qiao Wang","doi":"10.1109/ICCV.2013.298","DOIUrl":"10.1109/ICCV.2013.298","url":null,"abstract":"<p><p>Automatic facial Action Unit (AU) detection from video is a long-standing problem in facial expression analysis. AU detection is typically posed as a classification problem between frames or segments of positive examples and negative ones, where existing work emphasizes the use of different features or classifiers. In this paper, we propose a method called Cascade of Tasks (CoT) that combines the use of different tasks (i.e., frame, segment and transition) for AU event detection. We train CoT in a sequential manner embracing diversity, which ensures robustness and generalization to unseen data. In addition to conventional frame-based metrics that evaluate frames independently, we propose a new event-based metric to evaluate detection performance at event-level. We show how the CoT method consistently outperforms state-of-the-art approaches in both frame-based and event-based metrics, across three public datasets that differ in complexity: CK+, FERA and RU-FACS.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2013 ","pages":"2400-2407"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4174346/pdf/nihms-555617.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32703899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
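The entry above evaluates AU detection with an event-based metric in addition to per-frame scores. The sketch below shows one common way to score detections at the event level, matching predicted and ground-truth temporal segments by intersection-over-union; the threshold and greedy matching rule are assumptions for illustration, not necessarily the paper's exact definition.

```python
# Sketch of an event-level precision/recall/F1, with AU events given as
# (start_frame, end_frame) intervals. A predicted event counts as a true
# positive if its temporal IoU with an unmatched ground-truth event exceeds
# `iou_thresh`. Illustrative formulation only.
def temporal_iou(a, b):
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    union = (a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter
    return inter / union if union > 0 else 0.0

def event_f1(predicted, ground_truth, iou_thresh=0.5):
    matched, tp = set(), 0
    for p in predicted:
        best, best_iou = None, 0.0
        for i, g in enumerate(ground_truth):
            iou = temporal_iou(p, g)
            if i not in matched and iou > best_iou:
                best, best_iou = i, iou
        if best is not None and best_iou >= iou_thresh:
            matched.add(best)
            tp += 1
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Two predicted AU events vs. two annotated events: one good overlap, one miss.
print(event_f1([(10, 30), (50, 55)], [(12, 32), (70, 90)]))   # 0.5
```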
Stacked Predictive Sparse Coding for Classification of Distinct Regions of Tumor Histopathology.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2013-01-01 DOI: 10.1109/ICCV.2013.28
Hang Chang, Yin Zhou, Paul Spellman, Bahram Parvin
{"title":"Stacked Predictive Sparse Coding for Classification of Distinct Regions of Tumor Histopathology.","authors":"Hang Chang,&nbsp;Yin Zhou,&nbsp;Paul Spellman,&nbsp;Bahram Parvin","doi":"10.1109/ICCV.2013.28","DOIUrl":"https://doi.org/10.1109/ICCV.2013.28","url":null,"abstract":"<p><p>Image-based classification of tissue histology, in terms of distinct histopathology (e.g., tumor or necrosis regions), provides a series of indices for tumor composition. Furthermore, aggregation of these indices from each whole slide image (WSI) in a large cohort can provide predictive models of clinical outcome. However, the performance of the existing techniques is hindered as a result of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state) that are always present in a large cohort. We suggest that, compared with human engineered features widely adopted in existing systems, unsupervised feature learning is more tolerant to batch effect (e.g., technical variations associated with sample preparation) and pertinent features can be learned without user intervention. This leads to a novel approach for classification of tissue histology based on unsupervised feature learning and spatial pyramid matching (SPM), which utilize sparse tissue morphometric signatures at various locations and scales. This approach has been evaluated on two distinct datasets consisting of different tumor types collected from The Cancer Genome Atlas (TCGA), and the experimental results indicate that the proposed approach is (i) extensible to different tumor types; (ii) robust in the presence of wide technical variations and biological heterogeneities; and (iii) scalable with varying training sample sizes.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":" ","pages":"169-176"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ICCV.2013.28","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32293888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 24
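The approach above pools learned sparse codes with spatial pyramid matching (SPM). Below is a minimal sketch of just the SPM pooling step, assuming the per-patch sparse codes have already been produced by the feature-learning stage; the pyramid levels, max pooling, and toy dimensions are illustrative choices, not the paper's configuration.

```python
# Sketch of the SPM pooling step only: given sparse codes for image patches and
# their centre coordinates, max-pool the codes over 1x1, 2x2 and 4x4 grids and
# concatenate into a single tile-level descriptor.
import numpy as np

def spm_pool(codes, coords, image_shape, levels=(1, 2, 4)):
    """codes: (n_patches, dict_size); coords: (n_patches, 2) row/col centres."""
    h, w = image_shape
    pooled = []
    for g in levels:
        rows = np.minimum(coords[:, 0] * g // h, g - 1)
        cols = np.minimum(coords[:, 1] * g // w, g - 1)
        cells = rows * g + cols
        for c in range(g * g):
            in_cell = codes[cells == c]
            pooled.append(in_cell.max(axis=0) if len(in_cell)
                          else np.zeros(codes.shape[1]))
    return np.concatenate(pooled)   # length = dict_size * (1 + 4 + 16)

# Toy usage: 200 patches, a 64-atom dictionary, one 512x512 tile.
rng = np.random.default_rng(0)
codes = np.abs(rng.standard_normal((200, 64)))
coords = rng.integers(0, 512, size=(200, 2))
print(spm_pool(codes, coords, (512, 512)).shape)   # (1344,)
```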
Active Geodesics: Region-based Active Contour Segmentation with a Global Edge-based Constraint.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2011-11-01 DOI: 10.1109/ICCV.2011.6126468
Vikram Appia, Anthony Yezzi
{"title":"Active Geodesics: Region-based Active Contour Segmentation with a Global Edge-based Constraint.","authors":"Vikram Appia,&nbsp;Anthony Yezzi","doi":"10.1109/ICCV.2011.6126468","DOIUrl":"https://doi.org/10.1109/ICCV.2011.6126468","url":null,"abstract":"<p><p>We present an <i>active geodesic</i> contour model in which we constrain the evolving active contour to be a geodesic with respect to a weighted edge-based energy through its entire evolution rather than just at its final state (as in the traditional <i>geodesic active contour</i> models). Since the contour is always a geodesic throughout the evolution, we automatically get local optimality with respect to an edge fitting criterion. This enables us to construct a purely region-based energy minimization model without having to devise arbitrary weights in the combination of our energy function to balance edge-based terms with the region-based terms. We show that this novel approach of combining edge information as the <i>geodesic constraint</i> in optimizing a purely region-based energy yields a new class of active contours which exhibit both local and global behaviors that are naturally responsive to intuitive types of user interaction. We also show the relationship of this new class of globally constrained active contours with traditional minimal path methods, which seek global minimizers of purely edge-based energies without incorporating region-based criteria. Finally, we present some numerical examples to illustrate the benefits of this approach over traditional active contour models.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2011 ","pages":"1975-1980"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ICCV.2011.6126468","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32786559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 40
Sparse Multi-Task Regression and Feature Selection to Identify Brain Imaging Predictors for Memory Performance.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2011-01-01 DOI: 10.1109/ICCV.2011.6126288
Hua Wang, Feiping Nie, Heng Huang, Shannon Risacher, Chris Ding, Andrew J Saykin, Li Shen
{"title":"Sparse Multi-Task Regression and Feature Selection to Identify Brain Imaging Predictors for Memory Performance.","authors":"Hua Wang,&nbsp;Feiping Nie,&nbsp;Heng Huang,&nbsp;Shannon Risacher,&nbsp;Chris Ding,&nbsp;Andrew J Saykin,&nbsp;Li Shen","doi":"10.1109/ICCV.2011.6126288","DOIUrl":"https://doi.org/10.1109/ICCV.2011.6126288","url":null,"abstract":"<p><p>Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive impairment of memory and other cognitive functions, which makes regression analysis a suitable model to study whether neuroimaging measures can help predict memory performance and track the progression of AD. Existing memory performance prediction methods via regression, however, do not take into account either the interconnected structures within imaging data or those among memory scores, which inevitably restricts their predictive capabilities. To bridge this gap, we propose a novel Sparse Multi-tAsk Regression and feaTure selection (SMART) method to jointly analyze all the imaging and clinical data under a single regression framework and with shared underlying sparse representations. Two convex regularizations are combined and used in the model to enable sparsity as well as facilitate multi-task learning. The effectiveness of the proposed method is demonstrated by both clearly improved prediction performances in all empirical test cases and a compact set of selected RAVLT-relevant MRI predictors that accord with prior studies.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":" ","pages":"557-562"},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ICCV.2011.6126288","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32720873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 139
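The SMART model above couples the memory-score prediction tasks through shared sparse representations built from convex regularizers. The abstract does not spell out which regularizers are used, so the sketch below falls back on a generic L2,1 row-sparsity penalty (a common choice for joint feature selection across tasks) solved by proximal gradient descent; treat it as an illustration of sparse multi-task regression in general, not as the paper's exact objective.

```python
# Sketch of multi-task regression with an L2,1 (row-wise group sparsity)
# penalty, solved by proximal gradient descent (ISTA). The L2,1 term couples
# the memory-score tasks so that whole imaging features are kept or discarded
# jointly. Generic illustration; the paper's two regularizers are not
# reproduced here.
import numpy as np

def prox_l21(W, t):
    """Row-wise soft thresholding: shrink each row's L2 norm by t."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

def multitask_l21(X, Y, lam=1.0, n_iter=500):
    """min_W 0.5 * ||X W - Y||_F^2 + lam * sum_j ||W_j,:||_2"""
    d, k = X.shape[1], Y.shape[1]
    W = np.zeros((d, k))
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y)
        W = prox_l21(W - step * grad, step * lam)
    return W

# Toy usage: 100 subjects, 50 imaging features, 5 memory scores,
# with only the first 8 features truly predictive.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
W_true = np.zeros((50, 5))
W_true[:8] = rng.standard_normal((8, 5))
Y = X @ W_true + 0.1 * rng.standard_normal((100, 5))
W_hat = multitask_l21(X, Y, lam=5.0)
print(np.flatnonzero(np.linalg.norm(W_hat, axis=1) > 1e-6))   # indices of selected features
```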
Kernel Non-Rigid Structure from Motion.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2011-01-01 DOI: 10.1109/ICCV.2011.6126319
Paulo F U Gotardo, Aleix M Martinez
{"title":"Kernel Non-Rigid Structure from Motion.","authors":"Paulo F U Gotardo, Aleix M Martinez","doi":"10.1109/ICCV.2011.6126319","DOIUrl":"10.1109/ICCV.2011.6126319","url":null,"abstract":"<p><p>Non-rigid structure from motion (NRSFM) is a difficult, underconstrained problem in computer vision. The standard approach in NRSFM constrains 3D shape deformation using a linear combination of K basis shapes; the solution is then obtained as the low-rank factorization of an input observation matrix. An important but overlooked problem with this approach is that non-linear deformations are often observed; these deformations lead to a weakened low-rank constraint due to the need to use additional basis shapes to linearly model points that move along curves. Here, we demonstrate how the kernel trick can be applied in standard NRSFM. As a result, we model complex, deformable 3D shapes as the outputs of a non-linear mapping whose inputs are points within a low-dimensional shape space. This approach is flexible and can use different kernels to build different non-linear models. Using the kernel trick, our model complements the low-rank constraint by capturing non-linear relationships in the shape coefficients of the linear model. The net effect can be seen as using non-linear dimensionality reduction to further compress the (shape) space of possible solutions.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":" ","pages":"802-809"},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3758879/pdf/nihms482972.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"31705935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
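The abstract above argues that non-linear deformations weaken the low-rank constraint because points moving along curves need extra linear basis shapes. The synthetic experiment below illustrates that rank-inflation effect numerically; the data generation and random orthographic cameras are illustrative assumptions and do not reproduce the paper's kernel-based solution.

```python
# Synthetic illustration of the rank argument: when points deform along
# distinct nonlinear trajectories of a single 1-D parameter, the stacked
# observation matrix needs far more linear basis shapes (higher effective rank)
# than the intrinsic dimensionality suggests.
import numpy as np

rng = np.random.default_rng(0)
P, F = 60, 150                                   # points per frame, frames
base = rng.standard_normal((3, P))               # rest shape
mode = rng.standard_normal((3, P))               # one deformation mode
freqs = rng.uniform(1.0, 8.0, size=P)            # per-point frequencies (nonlinear case)

def stack_projections(shapes):
    """Stack orthographic projections R_f S_f into a 2F x P observation matrix."""
    rows = []
    for S in shapes:
        R = np.linalg.qr(rng.standard_normal((3, 3)))[0][:2]   # top 2 rows of a rotation
        rows.append(R @ S)
    return np.vstack(rows)

def rank_99(W):
    """Number of singular values needed to capture 99% of the energy."""
    s = np.linalg.svd(W, compute_uv=False)
    return int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1)

t = np.linspace(0, 2 * np.pi, F)
linear = [base + np.sin(a) * mode for a in t]           # K = 2 basis shapes -> rank(W) <= 6
curved = [base + np.sin(freqs * a) * mode for a in t]   # points move along distinct curves

print("linear deformation :", rank_99(stack_projections(linear)))
print("curved deformation :", rank_99(stack_projections(curved)))
```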
Efficient Segmentation Using Feature-based Graph Partitioning Active Contours.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2009-09-29 DOI: 10.1109/iccv.2009.5459320
Filiz Bunyak, Kannappan Palaniappan
{"title":"Efficient Segmentation Using Feature-based Graph Partitioning Active Contours.","authors":"Filiz Bunyak,&nbsp;Kannappan Palaniappan","doi":"10.1109/iccv.2009.5459320","DOIUrl":"https://doi.org/10.1109/iccv.2009.5459320","url":null,"abstract":"<p><p>Graph partitioning active contours (GPAC) is a recently introduced approach that elegantly embeds the graph-based image segmentation problem within a continuous optimization framework. GPAC can be used within parametric snake-based or implicit level set-based active contour continuous paradigms for image partitioning. However, GPAC similar to many other graph-based approaches has quadratic memory requirements which severely limits the scalability of the algorithm to practical problem domains. An N xN image requires O(N(4)) computation and memory to create and store the full graph of pixel inter-relationships even before the start of the contour optimization process. For example, an 1024x1024 grayscale image needs over one terabyte of memory. Approximations using tile/block-based or superpixel-based multiscale grouping of the pixels reduces this complexity by trading off accuracy. This paper describes a new algorithm that implements the exact GPAC algorithm using a constant memory requirement of a few kilobytes, independent of image size.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2009 ","pages":"873-880"},"PeriodicalIF":0.0,"publicationDate":"2009-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/iccv.2009.5459320","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"28987743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 16
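The memory figures quoted above follow directly from storing one weight per pixel pair: an N×N image has N^2 pixels, hence (N^2)^2 = N^4 pairwise entries. The short check below reproduces that back-of-the-envelope calculation; it is not part of the GPAC algorithm itself.

```python
# Back-of-the-envelope check of the memory figures quoted above: a full
# pairwise-pixel graph for an N x N image stores one weight for every pixel
# pair, i.e. (N^2)^2 = N^4 entries.
for n in (256, 512, 1024):
    pairs = (n * n) ** 2
    print(f"{n}x{n}: {pairs:.3e} pairs, "
          f"{pairs / 2**40:.2f} TiB at 1 byte/pair, "
          f"{4 * pairs / 2**40:.2f} TiB at 4 bytes/pair")
# 1024x1024 -> about 1.1e12 pairs, i.e. 1 TiB even at a single byte per weight,
# consistent with the "over one terabyte" figure in the abstract.
```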
Feature Preserving Image Smoothing Using a Continuous Mixture of Tensors.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2007-10-14 DOI: 10.1109/ICCV.2007.4408918
Ozlem Subakan, Bing Jian, Baba C Vemuri, C Eduardo Vallejos
{"title":"Feature Preserving Image Smoothing Using a Continuous Mixture of Tensors.","authors":"Ozlem Subakan,&nbsp;Bing Jian,&nbsp;Baba C Vemuri,&nbsp;C Eduardo Vallejos","doi":"10.1109/ICCV.2007.4408918","DOIUrl":"https://doi.org/10.1109/ICCV.2007.4408918","url":null,"abstract":"<p><p>Many computer vision and image processing tasks require the preservation of local discontinuities, terminations and bifurcations. Denoising with feature preservation is a challenging task and in this paper, we present a novel technique for preserving complex oriented structures such as junctions and corners present in images. This is achieved in a two stage process namely, (1) All image data are pre-processed to extract local orientation information using a steerable Gabor filter bank. The orientation distribution at each lattice point is then represented by a continuous mixture of Gaussians. The continuous mixture representation can be cast as the Laplace transform of the mixing density over the space of positive definite (covariance) matrices. This mixing density is assumed to be a parameterized distribution, namely, a mixture of Wisharts whose Laplace transform is evaluated in a closed form expression called the Rigaut type function, a scalar-valued function of the parameters of the Wishart distribution. Computation of the weights in the mixture Wisharts is formulated as a sparse deconvolution problem. (2) The feature preserving denoising is then achieved via iterative convolution of the given image data with the Rigaut type function. We present experimental results on noisy data, real 2D images and 3D MRI data acquired from plant roots depicting bifurcating roots. Superior performance of our technique is depicted via comparison to the state-of-the-art anisotropic diffusion filter.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"11 ","pages":"nihpa163297"},"PeriodicalIF":0.0,"publicationDate":"2007-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ICCV.2007.4408918","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"28645836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 19
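Stage (1) above extracts local orientation with a steerable Gabor filter bank before the mixture-of-Wisharts modelling. The sketch below shows only a simplified version of that pre-processing step (a small bank of even/odd Gabor kernels and a per-pixel dominant-orientation estimate); the kernel parameters and bank size are arbitrary illustrative choices, and stage (2) of the method is not reproduced.

```python
# Sketch of the stage (1) pre-processing only: filter the image with a small
# bank of even/odd Gabor kernels and take, at each pixel, the orientation of
# maximum Gabor energy.
import numpy as np
from scipy.signal import fftconvolve

def gabor_pair(theta, sigma=3.0, freq=0.15, size=21):
    """Even (cosine) and odd (sine) Gabor kernels oscillating along direction theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))      # isotropic Gaussian envelope
    return env * np.cos(2 * np.pi * freq * xr), env * np.sin(2 * np.pi * freq * xr)

def dominant_orientation(image, n_orientations=8):
    """Per-pixel orientation of maximum Gabor energy, plus that energy."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    energies = []
    for th in thetas:
        even, odd = gabor_pair(th)
        e = fftconvolve(image, even, mode="same")
        o = fftconvolve(image, odd, mode="same")
        energies.append(np.hypot(e, o))
    energies = np.stack(energies)                      # (n_orientations, H, W)
    return thetas[np.argmax(energies, axis=0)], energies.max(axis=0)

# Toy usage: stripes oriented at pi/4 with the same carrier frequency as the bank.
yy, xx = np.mgrid[0:128, 0:128]
stripes = np.sin(2 * np.pi * 0.15 * (xx * np.cos(np.pi / 4) + yy * np.sin(np.pi / 4)))
theta_map, energy = dominant_orientation(stripes)
print(round(float(np.median(theta_map[energy > energy.mean()])), 2))   # ~0.79 (= pi/4)
```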
What Data to Co-register for Computing Atlases.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2007-10-01 DOI: 10.1109/ICCV.2007.4409157
B T Thomas Yeo, Mert Sabuncu, Hartmut Mohlberg, Katrin Amunts, Karl Zilles, Polina Golland, Bruce Fischl
{"title":"What Data to Co-register for Computing Atlases.","authors":"B T Thomas Yeo,&nbsp;Mert Sabuncu,&nbsp;Hartmut Mohlberg,&nbsp;Katrin Amunts,&nbsp;Karl Zilles,&nbsp;Polina Golland,&nbsp;Bruce Fischl","doi":"10.1109/ICCV.2007.4409157","DOIUrl":"https://doi.org/10.1109/ICCV.2007.4409157","url":null,"abstract":"<p><p>We argue that registration should be thought of as a means to an end, and not as a goal by itself. In particular, we consider the problem of predicting the locations of hidden labels of a test image using observable features, given a training set with both the hidden labels and observable features. For example, the hidden labels could be segmentation labels or activation regions in fMRI, while the observable features could be sulcal geometry or MR intensity. We analyze a probabilistic framework for computing an optimal atlas, and the subsequent registration of a new subject using only the observable features to optimize the hidden label alignment to the training set. We compare two approaches for co-registering training images for the atlas construction: the traditional approach of only using observable features and a novel approach of only using hidden labels. We argue that the alternative approach is superior particularly when the relationship between the hidden labels and observable features is complex and unknown. As an application, we consider the task of registering cortical folds to optimize Brodmann area localization. We show that the alignment of the Brodmann areas improves by up to 25% when using the alternative atlas compared with the traditional atlas. To the best of our knowledge, these are the most accurate Brodmann area localization results (achieved via cortical fold registration) reported to date.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2007 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/ICCV.2007.4409157","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33393769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
Cortical Folding Development Study based on Over-Complete Spherical Wavelets.
Proceedings. IEEE International Conference on Computer Vision Pub Date : 2007-10-01 DOI: 10.1109/ICCV.2007.4409137
Peng Yu, Boon Thye Thomas Yeo, P Ellen Grant, Bruce Fischl, Polina Golland
{"title":"Cortical Folding Development Study based on Over-Complete Spherical Wavelets.","authors":"Peng Yu, Boon Thye Thomas Yeo, P Ellen Grant, Bruce Fischl, Polina Golland","doi":"10.1109/ICCV.2007.4409137","DOIUrl":"10.1109/ICCV.2007.4409137","url":null,"abstract":"<p><p>We introduce the use of over-complete spherical wavelets for shape analysis of 2D closed surfaces. Bi-orthogonal spherical wavelets have been shown to be powerful tools in the segmentation and shape analysis of 2D closed surfaces, but unfortunately they suffer from aliasing problems and are therefore not invariant under rotations of the underlying surface parameterization. In this paper, we demonstrate the theoretical advantage of over-complete wavelets over bi-orthogonal wavelets and illustrate their utility on both synthetic and real data. In particular, we show that over-complete spherical wavelets allow us to build more stable cortical folding development models, and detect a wider array of regions of folding development in a newborn dataset.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2007 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4465965/pdf/nihms686956.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33393768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0