Conference on Computer Vision and Pattern Recognition Workshops. IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Workshops: Latest Publications

Autonomous detection of disruptions in the intensive care unit using deep mask RCNN.
Kumar Rohit Malhotra, Anis Davoudi, Scott Siegel, Azra Bihorac, Parisa Rashidi
DOI: 10.1109/CVPRW.2018.00241 · CVPR Workshops 2018, pp. 1944-1946

Abstract: Patients staying in the Intensive Care Unit (ICU) have a severely disrupted circadian rhythm. Because of patients' critical medical condition, ICU physicians and nurses have to provide round-the-clock clinical care, further disrupting patients' circadian rhythm. Mistimed family visits during rest time can also disrupt it. Currently, such effects are reported only on the basis of hospital visitation policies rather than the actual number of visitors and care providers in the room. To quantify visitation disruptions, we used a deep Mask R-CNN model, a deep learning framework for object instance segmentation, to detect and quantify the number of individuals in the ICU unit. This study represents the first effort to automatically quantify visitations in an ICU room, with implications for policy adjustment as well as circadian rhythm investigation. Our model achieved a precision of 0.97 and a recall of 0.67, with an F1 score of 0.79, for detecting disruptions in ICU units.

Citations: 20
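
The counting step the abstract describes (detect people per frame with an instance-segmentation model) can be sketched with an off-the-shelf Mask R-CNN. This is a minimal illustration, not the authors' trained model; the score threshold and the occupancy cut-off in the usage comment are assumptions.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf Mask R-CNN pretrained on COCO (person = label 1).
model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def count_people(image_path, score_thresh=0.5):
    """Return the number of detected persons in one ICU video frame."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]
    keep = (out["labels"] == 1) & (out["scores"] > score_thresh)
    return int(keep.sum())

# Hypothetical usage: flag a frame when occupancy exceeds an assumed cut-off.
# disrupted = count_people("icu_frame.png") > 2
```
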
FACSCaps: Pose-Independent Facial Action Coding with Capsules.
Itir Onal Ertugrul, László A Jeni, Jeffrey F Cohn
DOI: 10.1109/CVPRW.2018.00287 · CVPR Workshops 2018, pp. 2211-2220

Abstract: Most automated facial expression analysis methods treat the face as a 2D object, flat like a sheet of paper. That works well provided images are frontal or nearly so. In real-world conditions, moderate to large head rotation is common, and performance at recognizing expressions degrades. Multi-view Convolutional Neural Networks (CNNs) have been proposed to increase robustness to pose, but they require larger models and may generalize poorly across views not included in the training set. We propose the FACSCaps architecture to handle multi-view and multi-label facial action unit (AU) detection within a single model that can generalize to novel views. Additionally, FACSCaps's ability to synthesize faces enables insights into what is learned by the model. FACSCaps models video frames using matrix capsules, in which hierarchical pose relationships between face parts are built into the internal representations. The model is trained by jointly optimizing a multi-label loss and the reconstruction accuracy. FACSCaps was evaluated using the FERA 2017 facial expression dataset, which includes spontaneous facial expressions in a wide range of head orientations. FACSCaps outperformed both state-of-the-art CNNs and their temporal extensions.

Citations: 20
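
The joint objective the abstract mentions (a multi-label AU loss optimized together with reconstruction accuracy) can be written down compactly. A minimal sketch follows; the binary cross-entropy choice, the weight `alpha`, and all tensor shapes are assumptions, and the capsule layers themselves are omitted.

```python
import torch
import torch.nn.functional as F

def facs_caps_style_loss(au_logits, au_targets, recon, frame, alpha=0.0005):
    """au_logits, au_targets: (batch, n_AUs); recon, frame: (batch, C, H, W)."""
    # Multi-label term: one independent binary decision per action unit.
    multilabel = F.binary_cross_entropy_with_logits(au_logits, au_targets)
    # Reconstruction term, down-weighted (alpha is an assumed value) so it
    # regularizes rather than dominates.
    reconstruction = F.mse_loss(recon, frame)
    return multilabel + alpha * reconstruction
```
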
Resolution-Enhanced Lensless Color Shadow Imaging Microscopy Based on Large Field-of-View Submicron-Pixel Imaging Sensors
Yang Cheng, Xiaofeng Bu, Haowen Ma, Z. Limin, Cao Xu, Y. Tao, Hua Xia, Yan Feng
DOI: 10.1109/CVPRW.2018.00301 · CVPR Workshops 2018, pp. 2246-2253

Abstract: We report a resolution-enhanced lensless color shadow imaging microscopy (RELCSIM) system based on large field-of-view (FOV) submicron-pixel imaging sensors. The physical pixel size of our custom-made imaging chip is 0.95 µm × 0.95 µm, and the pixel count is about 25 million (5120H × 5120V). By directly recording the shadow of the samples without any post-processing, we have realized a microscope with a half-pitch resolution of ~1 µm and a FOV of ~25 mm² simultaneously. To verify the resolution of our system, we imaged grating samples coated on the surface of the chip. We further demonstrate monochromatic and color shadow imaging of muscle tissue specimens with the prototype, which shows the potential for applications such as diagnostic pathology.

Citations: 2
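
The quoted sensor geometry follows from a line of arithmetic; the snippet below is only a sanity check of the numbers in the abstract.

```python
# Sanity check of the sensor geometry quoted in the abstract.
pixel_um = 0.95                      # physical pixel pitch in micrometers
n_side = 5120                        # pixels per side (5120H x 5120V)
side_mm = n_side * pixel_um / 1000   # ~4.86 mm per side
fov_mm2 = side_mm ** 2               # ~23.7 mm^2, i.e. the quoted ~25 mm^2 FOV
pixel_count = n_side ** 2            # ~26.2 million pixels in the array
print(side_mm, fov_mm2, pixel_count)
```
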
An Efficient and Provable Approach for Mixture Proportion Estimation Using Linear Independence Assumption.
Xiyu Yu, Tongliang Liu, Mingming Gong, Kayhan Batmanghelich, Dacheng Tao
DOI: 10.1109/CVPR.2018.00471 · 2018, pp. 4480-4489

Abstract: In this paper, we study the mixture proportion estimation (MPE) problem in a new setting: given samples from the mixture and the component distributions, we identify the proportions of the components in the mixture distribution. To address this problem, we make use of a linear independence assumption, i.e., that the component distributions are linearly independent of each other, which is much weaker than the assumptions exploited in previous MPE methods. Based on this assumption, we propose a method (1) that uniquely identifies the mixture proportions, (2) whose output provably converges to the optimal solution, and (3) that is computationally efficient. We show the superiority of the proposed method over state-of-the-art methods in two applications, learning with label noise and semi-supervised learning, on both synthetic and real-world datasets.

Citations: 45
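
To make the identifiability idea concrete: if the mixture density is F = Σᵢ κᵢGᵢ and the Gᵢ are linearly independent, the proportions κ are the unique solution of a constrained least-squares problem. The sketch below illustrates that idea with simple histogram features; it is not the authors' algorithm, and the bin count and the non-negative least-squares solver are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_proportions(mix, components, bins=50):
    """Least-squares mixture proportions from histogram features of 1-D samples."""
    lo = min(mix.min(), *(c.min() for c in components))
    hi = max(mix.max(), *(c.max() for c in components))
    edges = np.linspace(lo, hi, bins + 1)
    f = np.histogram(mix, edges, density=True)[0]
    G = np.stack([np.histogram(c, edges, density=True)[0]
                  for c in components], axis=1)
    kappa, _ = nnls(G, f)          # non-negative solution; then renormalize
    return kappa / kappa.sum()

rng = np.random.default_rng(0)
comps = [rng.normal(0, 1, 5000), rng.normal(4, 1, 5000)]
mix = np.concatenate([rng.normal(0, 1, 3000), rng.normal(4, 1, 7000)])
print(estimate_proportions(mix, comps))   # approximately [0.3, 0.7]
```
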
Applying Faster R-CNN for Object Detection on Malaria Images.
Jane Hung, Stefanie C P Lopes, Odailton Amaral Nery, Francois Nosten, Marcelo U Ferreira, Manoj T Duraisingh, Matthias Marti, Deepali Ravel, Gabriel Rangel, Benoit Malleret, Marcus V G Lacerda, Laurent Rénia, Fabio T M Costa, Anne E Carpenter
DOI: 10.1109/cvprw.2017.112 · CVPR Workshops 2017, pp. 808-813

Abstract: Deep learning based models have had great success in object detection, but the state-of-the-art models have not yet been widely applied to biological image data. We apply, for the first time, an object detection model previously used on natural images to identify cells and recognize their stages in brightfield microscopy images of malaria-infected blood. Many microorganisms, like malaria parasites, are still studied by expert manual inspection and hand counting. This type of object detection task is challenging due to factors like variation in cell shape, density, and color, and uncertainty of some cell classes. In addition, annotated data useful for training is scarce, and the class distribution is inherently highly imbalanced due to the dominance of uninfected red blood cells. We use Faster Region-based Convolutional Neural Network (Faster R-CNN), one of the top-performing object detection models of recent years, pre-trained on ImageNet but fine-tuned with our data, and compare it to a baseline based on a traditional approach consisting of cell segmentation, extraction of several single-cell features, and classification using random forests. To conduct our initial study, we collect and label a dataset of 1300 fields of view consisting of around 100,000 individual cells. We demonstrate that Faster R-CNN outperforms our baseline and put the results in the context of human performance.

Citations: 145
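
A minimal sketch of the fine-tuning setup described in the abstract, using torchvision's Faster R-CNN with its box-predictor head replaced for a cell label set; the class list is illustrative, and the COCO-pretrained weights stand in for the paper's ImageNet initialization.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Background + an illustrative two-class cell label set (hypothetical).
num_classes = 1 + 2
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# In training mode, model(images, targets) returns the detection losses,
# so fine-tuning is one optimizer step per batch over the labeled fields.
```
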
Riemannian Variance Filtering: An Independent Filtering Scheme for Statistical Tests on Manifold-valued Data.
Ligang Zheng, Hyunwoo J Kim, Nagesh Adluru, Michael A Newton, Vikas Singh
DOI: 10.1109/CVPRW.2017.99 · CVPR Workshops 2017, pp. 699-708

Abstract: Performing large-scale hypothesis testing on brain imaging data to identify group-wise differences (e.g., between healthy and diseased subjects) typically leads to a large number of tests (one per voxel). Multiple testing adjustment (or correction) is necessary to control false positives, which may lead to lower detection power in detecting true positives. Motivated by the use of so-called "independent filtering" techniques in statistics (for genomics applications), this paper investigates the use of independent filtering for manifold-valued data (e.g., Diffusion Tensor Imaging, Cauchy Deformation Tensors), which are broadly used in neuroimaging studies. Inspired by the concept of variance of a Riemannian Gaussian distribution, a type of non-specific data-dependent Riemannian variance filter is proposed. In practice, the filter will select a subset of the full set of voxels for performing the statistical test, leading to a more appropriate multiple testing correction. Our experiments on synthetic/simulated manifold-valued data show that the detection power is improved when the statistical tests are performed on the voxel locations that "pass" the filter. Given the broadening scope of applications where manifold-valued data are utilized, the scheme can serve as a general feature selection scheme.

Citations: 0
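
One way to realize the voxel filter the abstract describes, sketched for SPD-valued data (e.g. diffusion tensors) under the log-Euclidean metric; the metric choice and the 50% keep-fraction are assumptions, not the paper's exact construction.

```python
import numpy as np

def spd_log(t):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, v = np.linalg.eigh(t)
    return (v * np.log(w)) @ v.T

def riemannian_variance(tensors):
    """tensors: (n_subjects, 3, 3) SPD matrices observed at one voxel."""
    logs = np.array([spd_log(t) for t in tensors])
    mean_log = logs.mean(axis=0)
    return np.mean([np.linalg.norm(l - mean_log, "fro") ** 2 for l in logs])

def passing_voxels(voxel_tensors, keep_frac=0.5):
    """voxel_tensors: (n_voxels, n_subjects, 3, 3); keep the high-variance part."""
    var = np.array([riemannian_variance(v) for v in voxel_tensors])
    cutoff = np.quantile(var, 1.0 - keep_frac)
    return np.where(var >= cutoff)[0]   # run tests (and correction) only here
```
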
A Riemannian Framework for Linear and Quadratic Discriminant Analysis on the Tangent Space of Shapes.
Susovan Pal, Roger P Woods, Suchit Panjiyar, Elizabeth Sowell, Katherine L Narr, Shantanu H Joshi
DOI: 10.1109/CVPRW.2017.102 · CVPR Workshops 2017, pp. 726-734

Abstract: We present a Riemannian framework for linear and quadratic discriminant classification on the tangent plane of the shape space of curves. The shape space is infinite dimensional and is constructed out of square root velocity functions of curves. We introduce the notion of mean and covariance of shape-valued random variables and samples from a tangent space to the pre-shapes (invariant to translation and scaling) and then extend it to the full shape space (rotational invariance). The shape observations from the population are approximated by coefficients of a Fourier basis of the tangent space. The algorithms for linear and quadratic discriminant analysis are then defined using reduced dimensional features obtained by projecting the original shape observations on to the truncated Fourier basis. We show classification results on synthetic data and shapes of cortical sulci, corpus callosum curves, as well as facial midline curve profiles from patients with fetal alcohol syndrome (FAS).

Citations: 2
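
The classification stage reduces to standard LDA/QDA once shapes are expressed as truncated Fourier coefficients of tangent vectors. A minimal sketch under assumed data shapes follows; the harmonic count is arbitrary, and the square-root-velocity and tangent-projection steps are omitted.

```python
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)

def fourier_features(tangent_vectors, n_harmonics=10):
    """tangent_vectors: (n_samples, n_points), functions sampled on [0, 1)."""
    coeffs = np.fft.rfft(tangent_vectors, axis=1)[:, :n_harmonics]
    return np.concatenate([coeffs.real, coeffs.imag], axis=1)

# X: tangent-space representations of curves, y: group labels (placeholders)
# clf = LinearDiscriminantAnalysis().fit(fourier_features(X), y)
# or: QuadraticDiscriminantAnalysis().fit(fourier_features(X), y)
```
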
UAV Sensor Fusion with Latent-Dynamic Conditional Random Fields in Coronal Plane Estimation
Amir M. Rahimi, Raphael Ruschel, B. S. Manjunath
DOI: 10.1109/CVPR.2016.490 · 2016, pp. 4527-4534

Abstract: We present real-time body orientation estimation in a micro-Unmanned Air Vehicle video stream. This work is part of a fully autonomous UAV system that can maneuver to face a single individual in challenging outdoor environments. Our body orientation estimation consists of the following steps: (a) obtaining a set of visual appearance models for each body orientation, where each model is tagged with a set of scene information (obtained from sensors); (b) exploiting the mutual information of on-board sensors using latent-dynamic conditional random fields (LDCRF); (c) characterizing each visual appearance model with the most discriminative sensor information; (d) fast estimation of body orientation during the test flights given the LDCRF parameters and the corresponding sensor readings. The key aspect of our approach is to add sparsity to the sensor readings with latent variables, followed by long-range dependency analysis. Experimental results obtained over real-time video streams demonstrate a significant improvement in both speed (15 fps) and accuracy (72%) compared to state-of-the-art techniques that rely only on visual data. Video demonstrations of our autonomous flights (from both the ground view and the aerial view) are included in the supplementary material.

Citations: 9
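
Step (c), scoring which on-board sensor channel is most discriminative for the appearance models, can be illustrated with a plain mutual-information estimate. The sketch below uses scikit-learn's estimator as a stand-in and does not reproduce the LDCRF model itself; the array shapes are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def most_discriminative_sensor(sensor_readings, orientation_labels):
    """sensor_readings: (n_frames, n_sensors); orientation_labels: (n_frames,)."""
    mi = mutual_info_classif(sensor_readings, orientation_labels)
    return int(np.argmax(mi)), mi   # channel index with the highest MI score
```
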
Effects of Resolution and Registration Algorithm on the Accuracy of EPI vNavs for Real Time Head Motion Correction in MRI.
Yingzhuo Zhang, Iman Aganj, André J W van der Kouwe, M Dylan Tisdall
DOI: 10.1109/CVPRW.2016.79 · CVPR Workshops 2016, pp. 583-591

Abstract: Low-resolution, EPI-based Volumetric Navigators (vNavs) have been used as a prospective motion-correction system in a variety of MRI neuroimaging pulse sequences. The use of low-resolution volumes represents a trade-off between motion tracking accuracy and acquisition time. However, this means that registration must be accurate on the order of 0.2 voxels or less to be effective for motion correction. While vNavs have shown promising results in clinical and research use, the choice of navigator and registration algorithm have not previously been systematically evaluated. In this work we experimentally evaluate the accuracy of vNavs, and possible design choices for future improvements to the system, using real human data. We acquired navigator volumes at three isotropic resolutions (6.4 mm, 8 mm, and 10 mm) with known rotations and translations. The vNavs were then rigidly registered using trilinear, tricubic, and cubic B-spline interpolation. We demonstrate a novel refactoring of the cubic B-spline algorithm that stores pre-computed coefficients to reduce the per-interpolation time to be identical to tricubic interpolation. Our results show that increasing vNav resolution improves registration accuracy, and that cubic B-splines provide the highest registration accuracy at all vNav resolutions. Our results also suggest that the time required by vNavs may be reduced by imaging at 10 mm resolution, without substantial cost in registration accuracy.

Citations: 2
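
The coefficient-caching idea in the abstract (pay the B-spline prefiltering cost once so that each subsequent resampling costs no more than tricubic interpolation) has a direct analogue in SciPy. The sketch below illustrates that pattern and is not the authors' implementation; the volume size is a placeholder.

```python
import numpy as np
from scipy import ndimage

moving = np.random.rand(32, 32, 32)  # placeholder for a low-resolution vNav

# Pay the cubic B-spline prefiltering cost once, up front.
coeffs = ndimage.spline_filter(moving, order=3)

def resample(coords):
    """coords: (3, N) voxel coordinates after a candidate rigid transform."""
    # prefilter=False reuses the cached coefficients, so each evaluation
    # inside the registration loop costs the same as tricubic interpolation.
    return ndimage.map_coordinates(coeffs, coords, order=3, prefilter=False)
```
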
Best-Buddies Similarity for robust template matching
Tali Dekel, Shaul Oron, Michael Rubinstein, S. Avidan, W. Freeman
DOI: 10.1109/CVPR.2015.7298813 · 2015, pp. 2021-2029

Abstract: We propose a novel method for template matching in unconstrained environments. Its essence is the Best-Buddies Similarity (BBS), a useful, robust, and parameter-free similarity measure between two sets of points. BBS is based on counting the number of Best-Buddies Pairs (BBPs): pairs of points in the source and target sets where each point is the nearest neighbor of the other. BBS has several key features that make it robust against complex geometric deformations and high levels of outliers, such as those arising from background clutter and occlusions. We study these properties, provide a statistical analysis that justifies them, and demonstrate the consistent success of BBS on a challenging real-world dataset.

Citations: 57
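
The BBS measure itself is short enough to write out: count mutual nearest neighbors between the two point sets and normalize by the smaller set. A minimal sketch with raw coordinates as the feature space follows; the paper's choice of per-patch features is not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import cdist

def bbs(P, Q):
    """Best-Buddies Similarity between point sets P: (n, d) and Q: (m, d)."""
    D = cdist(P, Q)                    # pairwise distances
    nn_pq = D.argmin(axis=1)           # nearest neighbor in Q of each p
    nn_qp = D.argmin(axis=0)           # nearest neighbor in P of each q
    mutual = nn_qp[nn_pq] == np.arange(len(P))   # p is also its buddy's buddy
    return mutual.sum() / min(len(P), len(Q))
```
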