2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI): Latest Articles

SIBGRAPI 2018 Program Committee
2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date : 2018-10-01 DOI: 10.1109/sibgrapi.2018.00006
Abel Gomes, A. Barbosa, Adriano Veloso, Afonso Paiva, Alexandre Chapiro, Alexandre Falcao, Andrew Nealen, A. Lacerda, António Coelho, Aristófanes Correa
Committee roster (as listed; truncated in the source): Abel Gomes, University of Beira Interior; Adín Ramírez Rivera, Unicamp; Adriano Barbosa, UFGD; Adriano Veloso, UFMG; Afonso Paiva, ICMC-USP; Alessandro Koerich, ETS-Montreal; Alex Laier, UFF; Alexandre Chapiro, Dolby Laboratories; Alexandre Falcao, IC-UNICAMP; Alexandre Zaghetto, University of Brasília; Aline Paes, Institute of Computing / Universidade Federal Fluminense; Alper Yilmaz, Ohio State University; Amilcar Soares Junior, Dalhousie University; Ana Serrano, Universidad de Zaragoza; Anderson Maciel, UFRGS; André Backes, Universidade Federal de Uberlândia; André Saúde, UFLA; Andrew Nealen, USC; Anisio Lacerda, CEFET-MG; Antonio Nazare, Federal University of Minas Gerais; Antonio Vieira, UNIMONTES; António Coelho, FEUP/INESC TEC; Aparecido Marana, UNESP; Aristófanes Correa, UFMA; Azael Sousa, Unicamp; Bernardo Henz, UFRGS / IFFar; Bruno Espinoza, UnB; Cai Minjie, University of Tokyo; Camilo Dorea, University of Brasilia; Carla Pagliari, Instituto Militar de Engenharia; Carlos Santos, UFABC; Carlos Thomaz, FEI; Christian Pagot, UFPB; Claudio Esperança, UFRJ; Claudio Jung, UFRGS; Creto Vidal, UFC; Cristina Vasconcelos, UFF; Cunjian Chen, Michigan State University; Daniel Pedronette, UNESP; Danilo Coimbra, Federal University of Bahia; David Menotti, Federal University of Paraná; Dibio Borges, UnB; Diogo Garcia, University of Brasilia; Edilson de Aguiar, UFES […]
Citations: 0
Decoupling Expressiveness and Body-Mechanics in Human Motion
2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00035
Gustavo Eggert Boehs, M. Vieira, Clovis Geyer Pereira
Abstract: Modern motion capture systems can store human motion with high precision. Editing this kind of data is troublesome due to its amount and complexity. In this paper, we present a method for decoupling the aspects of human motion that are strictly related to locomotion and balance from other movements that may convey expressiveness and intentionality. We then demonstrate how this decoupling can be useful in creating variations of the original motion, or in mixing different actions together.
Citations: 0
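The abstract does not detail the paper's decomposition. As a rough illustration of the general idea only, the sketch below (hypothetical `decouple_motion`/`recombine` helpers, not the authors' method) separates the root trajectory, which carries locomotion, from root-relative joint motion, the residual that can carry expressive content:

```python
def decouple_motion(frames):
    """Split each frame into a locomotion part (the root trajectory)
    and a residual, root-relative part.

    `frames` is a list of dicts mapping joint name -> (x, y, z),
    where the 'root' joint carries the locomotion component.
    """
    locomotion, residual = [], []
    for joints in frames:
        root = joints["root"]
        locomotion.append(root)
        # Express every joint relative to the root: what remains is
        # the motion not explained by global translation.
        residual.append({
            name: (p[0] - root[0], p[1] - root[1], p[2] - root[2])
            for name, p in joints.items()
        })
    return locomotion, residual


def recombine(locomotion, residual):
    """Re-attach a (possibly edited) residual to a locomotion track,
    e.g. to mix the expressive part of one clip with the walk of another."""
    return [
        {name: (p[0] + root[0], p[1] + root[1], p[2] + root[2])
         for name, p in joints.items()}
        for root, joints in zip(locomotion, residual)
    ]
```

Swapping the `residual` stream between two clips before `recombine` is the kind of "mixing different actions" the abstract mentions.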
Delaunay Triangulation Data Augmentation Guided by Visual Analytics for Deep Learning
2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00056
A. Peixinho, B. C. Benato, L. G. Nonato, A. Falcão
Abstract: It is well known that image classification problems can be effectively solved by Convolutional Neural Networks (CNNs). However, the number of supervised training examples from all categories must be high enough to avoid model overfitting. In this case, two key alternatives are usually presented: (a) the generation of artificial examples, known as data augmentation, and (b) reusing a CNN previously trained over a large supervised training set from another image classification problem, a strategy known as transfer learning. Deep learning approaches have rarely exploited the superior ability of humans for cognitive tasks during the machine learning loop. We advocate that expert intervention through visual analytics can improve machine learning. In this work, we demonstrate this claim by proposing a data augmentation framework based on Encoder-Decoder Neural Networks (EDNNs) and visual analytics for the design of more effective CNN-based image classifiers. An EDNN is initially trained such that its encoder extracts a feature vector from each training image. These samples are projected from the encoder feature space onto a 2D coordinate space. The expert adds points in the projection space, and the feature vectors of the new samples are obtained in the original feature space by interpolation. The decoder generates artificial images from the feature vectors of the new samples, and the augmented training set is used to improve the CNN-based classifier. We evaluate methods for the proposed framework and demonstrate its advantages using data from a real problem as a case study: the diagnosis of helminth eggs in humans. We also show that transfer learning and data augmentation by affine transformations can further improve the results.
Citations: 10
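The core augmentation step described above is interpolation between feature vectors of real samples, with the decoder later turning each synthetic vector into an artificial image. A minimal sketch of that step, assuming plain linear interpolation and omitting the 2D projection, the expert interaction, and the EDNN itself:

```python
def interpolate_features(fa, fb, t):
    """Linearly interpolate two encoder feature vectors.
    t=0 returns fa, t=1 returns fb."""
    assert len(fa) == len(fb)
    return [(1.0 - t) * a + t * b for a, b in zip(fa, fb)]


def augment(features, steps):
    """Generate synthetic feature vectors between every consecutive
    pair of real ones. In the paper's framework a decoder would then
    map each synthetic vector back to an artificial training image."""
    synthetic = []
    for fa, fb in zip(features, features[1:]):
        for k in range(1, steps + 1):
            synthetic.append(interpolate_features(fa, fb, k / (steps + 1)))
    return synthetic
```

In the actual framework the pairs are not consecutive samples but points the expert places in a visual-analytics projection; this sketch only shows why interpolated vectors stay inside the region spanned by real ones.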
Factors Influencing the Perception of Realism in Synthetic Facial Expressions
2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00045
R. L. Testa, Ariane Machado-Lima, Fátima L. S. Nunes
Abstract: One way to synthesize facial expressions is to change an image to represent the desired emotion, which is useful in entertainment, diagnosis, and therapy for psychiatric disorders. Despite several existing approaches, there is little discussion of the factors that help or hinder the perception of realism in synthetic facial expression images. After presenting an approach for facial expression synthesis through the deformation of facial features, this paper provides an evaluation by 155 volunteers of the realism of the synthesized images. The proposed facial expression synthesis generates new images using two source images (a neutral and an expressive face) and changing the expression in a target image (a neutral face). The results suggest that the assignment of realism depends on the type of image (real or synthetic). However, the synthesis produces images that can be considered realistic, especially for the expression of happiness. Finally, while factors such as color differences between neighboring regions and unnaturally sized facial features reduce realism, other factors, such as the presence of wrinkles, contribute to a greater assignment of realism to images.
Citations: 2
360 Stitching from Dual-Fisheye Cameras Based on Feature Cluster Matching
2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00047
Tancredo Souza, R. Roberto, J. P. Lima, V. Teichrieb, J. Quintino, F. Q. Silva, André L. M. Santos, Helder Pinho
Abstract: In the past years, captures made by dual-fisheye lens cameras have been used for virtual reality, 360 broadcasting, and many other applications. For these scenarios, to provide a good-quality experience, the alignment of the boundaries between the two images to be stitched must be done properly. However, due to the peculiar design of dual-fisheye cameras and the high variance between different captured scenes, the stitching process can be very challenging. In this work, we present a 360 stitching solution based on feature cluster matching. It is an adaptive stitching technique based on the extraction of feature cluster templates from the stitching region. We propose an alignment based on template matching of these clusters, successfully reducing the discontinuities in the full-view panorama. We evaluate our method on a dataset built from captures made with an existing camera of this kind, Samsung's Gear 360. We also describe how these concepts can be extended from image stitching to video stitching using the temporal information of the media. Finally, we show that our matching method outperforms a state-of-the-art matching technique for image and video stitching.
Citations: 2
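The alignment primitive underlying the approach is template matching: a template cut from one image's stitching region is searched for in the other image. The paper's feature-cluster pipeline is more elaborate; the sketch below only shows the bare matching primitive, using sum-of-squared-differences over 2D lists of floats (all names illustrative):

```python
def match_template(image, template):
    """Return the (row, col) offset where `template` best matches
    `image` under sum-of-squared-differences (lower is better)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    # Exhaustively slide the template over every valid offset.
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum(
                (image[r + i][c + j] - template[i][j]) ** 2
                for i in range(th) for j in range(tw)
            )
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

Production stitchers would use a normalized correlation score and restrict the search to the narrow overlap band between the two fisheye images rather than the full frame.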
Biometric Recognition in Surveillance Environments Using Master-Slave Architectures
2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00068
Hugo Proença, J. Neves
Abstract: The number of visual surveillance systems deployed worldwide has been growing astoundingly. As a result, attempts have been made to increase the levels of automated analysis of such systems, towards the reliable recognition of human beings in fully covert conditions. Among other possibilities, master-slave architectures can be used to acquire high-resolution data of subjects' heads from large distances, with enough resolution to perform face recognition. This paper/tutorial provides a comprehensive overview of the major phases behind the development of a recognition system working in outdoor surveillance scenarios, describing frameworks and methods to: 1) use coupled wide-view and Pan-Tilt-Zoom (PTZ) imaging devices in surveillance settings, with a wide-view camera covering the whole scene while a synchronized PTZ device collects high-resolution data from the head region; 2) use soft biometric information (e.g., body metrology and gait) to prune the set of potential identities for each query; and 3) faithfully balance ethics/privacy and safety/security concerns in such systems.
Citations: 0
A Divide-and-Conquer Clustering Approach Based on Optimum-Path Forest
2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00060
Adan Echemendia Montero, A. Falcão
Abstract: Data clustering is one of the main challenges in Data Science. Despite progress over almost a century of research, clustering algorithms still fail to identify groups naturally related to the semantics of the problem. Moreover, technological advances bring a considerable increase in data, which most techniques cannot handle. We address these issues by proposing a divide-and-conquer approach to a clustering technique that is unique in finding one group per dome of the probability density function of the data: the Optimum-Path Forest (OPF) clustering algorithm. Our approach can use all samples, or at least many samples, in the unsupervised learning process without affecting grouping performance, and is therefore less likely to lose relevant grouping information. We show that it obtains satisfactory results when segmenting natural images into superpixels.
Citations: 4
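OPF clustering's distinctive property is one cluster per dome of the estimated density. The full algorithm computes optimum paths on a graph and is not reproduced here; the sketch below is a simplified density-following assignment (closer in spirit to quick-shift) that shares the one-root-per-dome idea: each sample follows its nearest neighbor of higher density until a local density maximum is reached. The `bandwidth` and `max_link` parameters are assumptions of this sketch, not the paper's.

```python
import math

def cluster_by_density(points, bandwidth=1.0, max_link=1.0):
    """Label each 2D point with the index of the density dome it
    belongs to, by following nearest higher-density neighbors."""
    n = len(points)
    # Gaussian-kernel density estimate at each point.
    density = [
        sum(math.exp(-(math.dist(p, q) / bandwidth) ** 2) for q in points)
        for p in points
    ]
    # parent[i]: nearest point of strictly higher density, if it is
    # closer than max_link; otherwise i is a root (a density maximum).
    parent = list(range(n))
    for i in range(n):
        best_d, best_j = float("inf"), i
        for j in range(n):
            d = math.dist(points[i], points[j])
            if density[j] > density[i] and d < best_d:
                best_d, best_j = d, j
        if best_d < max_link:
            parent[i] = best_j

    def root(i):
        while parent[i] != i:
            i = parent[i]
        return i

    return [root(i) for i in range(n)]
```

The `max_link` cutoff is what keeps well-separated domes from chaining into a single cluster; OPF achieves the analogous effect through its path-cost formulation.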
Asynchronous Stroboscopic Structured Lighting Image Processing Using Low-Cost Cameras
2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00048
F. H. Borsato, C. Morimoto
Abstract: Structured lighting (SL) image processing relies on the generation of known illumination patterns synchronized with the camera frame rate and is commonly implemented using cameras capable of syncing. In general, such cameras employ global shutters, which expose the whole frame at once. However, most modern digital cameras use rolling shutters, which expose each line over different intervals, impairing most structured lighting applications. In this paper we introduce an asynchronous SL technique that can be used with any rolling-shutter digital camera. While the use of stroboscopic illumination partially compensates for the line exposure shift, the phase difference between the camera and lighting clocks results in stripe artifacts that move vertically in the video stream. These stripes are detected and tracked using a Kalman filter. Two asynchronous stroboscopic SL methods are proposed. The first, image differencing, minimizes the stripe artifacts. The second, image compositing, completely removes them. We demonstrate the use of the asynchronous differential lighting technique in a pupil detector using a low-cost high-speed camera with no synchronization means, with the lighting running independently at a higher frequency unknown to the application.
Citations: 1
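The stripe artifact drifts vertically at a roughly constant rate set by the camera/lighting clock phase difference, which is why a Kalman filter suits tracking it. A minimal 1D constant-velocity Kalman filter along those lines is sketched below; the state model, noise parameters `q` and `r`, and function name are illustrative, not the paper's implementation:

```python
def kalman_track(measurements, q=0.01, r=1.0):
    """Track a drifting stripe row position with a constant-velocity
    1D Kalman filter; returns the filtered positions."""
    # State: [position, velocity]; covariance P as a 2x2 nested list.
    x, v = measurements[0], 0.0
    p = [[1.0, 0.0], [0.0, 1.0]]
    out = []
    for z in measurements:
        # Predict: position advances by velocity; add process noise q.
        x, v = x + v, v
        p = [[p[0][0] + p[1][0] + p[0][1] + p[1][1] + q, p[0][1] + p[1][1]],
             [p[1][0] + p[1][1], p[1][1] + q]]
        # Update with the measured stripe row z.
        s = p[0][0] + r                    # innovation covariance
        k0, k1 = p[0][0] / s, p[1][0] / s  # Kalman gain
        y = z - x                          # innovation
        x, v = x + k0 * y, v + k1 * y
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        out.append(x)
    return out
```

Because the constant-velocity model matches a linearly drifting stripe, the filter's estimate converges to the true position even though it starts with zero velocity.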
Deep Instance Segmentation of Teeth in Panoramic X-Ray Images
2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00058
Gil Jader, Jefferson Fontineli, Marco Ruiz, Kalyf Abdalla, M. Pithon, Luciano Oliveira
Abstract: In dentistry, radiological examinations help specialists by showing the structure of the teeth and bones, with the goal of screening for embedded teeth, bone abnormalities, cysts, tumors, infections, fractures, and problems in the temporomandibular regions, to cite just a few. Relying solely on the specialist's opinion can sometimes lead to differences in diagnoses, which can ultimately hinder treatment. Although tools for fully automatic diagnosis are not yet expected, image pattern recognition has evolved towards decision support, starting mainly with the detection of teeth and their components in X-ray images. Tooth detection has been an object of research for at least the last two decades, relying mainly on threshold- and region-based methods. Following a different direction, this paper explores a deep learning method for instance segmentation of teeth. To the best of our knowledge, it is the first system that detects and segments each tooth in panoramic X-ray images. Notably, this image type is the most challenging one for isolating teeth, since it shows other parts of the patient's body (e.g., chin, spine, and jaws). We propose a segmentation system based on a mask region-based convolutional neural network to accomplish instance segmentation. Performance was thoroughly assessed on a challenging data set of 1500 images with high variation, containing 10 categories of different types of buccal images. By training the proposed system with only 193 images of mouths containing 32 teeth on average, using transfer learning strategies, we achieved 98% accuracy, 88% F1-score, 94% precision, 84% recall, and 99% specificity over 1224 unseen images, results far superior to those of 10 other unsupervised methods.
Citations: 118
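The figures reported above are the standard confusion-matrix metrics. For reference, a short sketch of how they follow from true/false positive and negative counts (the example counts below are arbitrary, not the paper's):

```python
def metrics(tp, fp, tn, fn):
    """Compute the detection metrics reported in the paper
    (accuracy, precision, recall, specificity, F1) from
    confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}
```

Note how accuracy and specificity can both be high while recall lags, as in the paper's 98%/99% versus 84%: the many true-negative background pixels dominate those two ratios.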
Active Learning Approaches for Deforested Area Classification
2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00013
F. B. J. R. Dallaqua, F. Faria, Á. Fazenda
Abstract: The conservation of tropical forests is a socially and ecologically relevant subject because of their important role in the global ecosystem. Forest monitoring is mostly done by extracting and analyzing remote sensing imagery (RSI) information. Many works in the literature have succeeded in remote sensing image classification through the use of machine learning techniques. Generally, traditional learning algorithms demand a representative and huge training set, which can be expensive to build, especially in RSI, where the imagery spectrum varies across seasons and forest coverage. A semi-supervised learning paradigm known as active learning (AL) is proposed to solve this problem, as it builds efficient training sets through iterative improvement of the model performance. In the training set construction process, unlabeled samples are evaluated by a user-defined heuristic and ranked, and then the most relevant samples are labeled by an expert user. In this work, two different AL approaches (Confidence Heuristics and Committee) are presented to classify remote sensing imagery. In the experiments, our AL approaches achieve excellent effectiveness compared with well-known approaches from the literature on two different datasets.
Citations: 7
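The abstract names a "Confidence Heuristics" approach without giving its exact criterion; a common form of such a heuristic is least-confidence ranking, sketched below as an assumption rather than the paper's definition. Samples whose top class probability is lowest are sent to the expert first:

```python
def confidence_heuristic_rank(probs):
    """Rank unlabeled samples from least to most confident, where
    `probs` holds each sample's predicted class-probability vector.
    Low top-class probability means the classifier is unsure, so the
    sample is informative to label."""
    confidences = [(max(p), i) for i, p in enumerate(probs)]
    return [i for _, i in sorted(confidences)]


def active_learning_round(probs, budget):
    """One AL iteration: pick the `budget` least-confident samples
    for expert labeling; they then join the training set."""
    return confidence_heuristic_rank(probs)[:budget]
```

The companion Committee approach would instead rank samples by disagreement among several classifiers, but the loop structure (rank, label, retrain) is the same.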