Anais Estendidos da Conference on Graphics, Patterns and Images (SIBGRAPI): Latest Publications

FASTensor: A tensor framework for spatiotemporal description
Anais Estendidos da Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date: 2019-10-09 DOI: 10.5753/sibgrapi.est.2019.8298
V. F. Mota, J. A. D. Santos, A. Araújo
Abstract: Spatiotemporal description is a research field with applications in areas such as video indexing, surveillance, and human-computer interfaces. Big Data problems on large databases are now being treated with Deep Learning tools; however, there is still room for improvement in handcrafted spatiotemporal description. Moreover, some problems involve small data, for which data augmentation and similar techniques are not applicable. The main contribution of this Ph.D. thesis is the development of a framework for spatiotemporal representation using orientation tensors, enabling dimensionality reduction and invariance. This multipurpose framework is called Features As Spatiotemporal Tensors (FASTensor). We evaluate the framework in three different applications: human action recognition, video pornography classification, and cancer cell classification. The latter is also a contribution of this work, since we introduce a new dataset, the Melanoma Cancer Cell (MCC) dataset. It is a small dataset that cannot be artificially augmented, due to the difficulty of extraction and the nature of the motion involved. The results were competitive, while also being fast and simple to implement. Finally, our results on the MCC dataset can be used in other analyses of cancer cell treatment.
Citations: 0
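As background for the entry above: an orientation tensor is conventionally built by accumulating outer products of gradient vectors over a region, so that coherent orientations concentrate energy in a rank-1 tensor. The sketch below illustrates that standard construction only; the function and variable names are ours, not from the thesis.

```python
# Minimal sketch of a 2D orientation tensor: accumulate the outer products
# g g^T of gradient vectors (gx, gy) over a region. Illustrative names only;
# this is the textbook construction, not the thesis implementation.

def orientation_tensor(gradients):
    """Accumulate a 2x2 orientation tensor from (gx, gy) gradient pairs."""
    t = [[0.0, 0.0], [0.0, 0.0]]
    for gx, gy in gradients:
        t[0][0] += gx * gx
        t[0][1] += gx * gy
        t[1][0] += gy * gx
        t[1][1] += gy * gy
    return t

# A coherent region (all gradients aligned) yields a rank-1 tensor;
# mixed orientations spread energy across the diagonal.
coherent = orientation_tensor([(1.0, 0.0)] * 4)
mixed = orientation_tensor([(1.0, 0.0), (0.0, 1.0)] * 2)
```

Descriptors built from such tensors can then be compared or reduced in dimension, which is the representation-level idea the framework generalizes.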
Multi-Lingual Text Localization via Language-Specific Convolutional Neural Networks
Anais Estendidos da Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date: 2019-10-09 DOI: 10.5753/sibgrapi.est.2019.8333
Jhonatas S. Conceição, A. Pinto, L. G. L. Decker, Jose Luis Flores Campana, Manuel Alberto Cordova Neira, Andrezza A. Dos Santos, H. Pedrini, R. Torres
Abstract: Scene text localization and recognition is a topic in computer vision that aims to delimit candidate regions of an input image containing incidental scene text elements. The challenge of this research consists of devising detectors capable of dealing with a wide range of variability, such as font size, font style, color, complex backgrounds, and text in different languages. This work presents a comparison between two strategies for building classification models, based on a Convolutional Neural Network, to detect textual elements in multiple languages in images: (i) a classification model built in a multi-lingual training scenario; and (ii) a classification model built in a language-specific training scenario. The experiments designed in this work indicate that the language-specific model outperforms the model trained in the multi-lingual scenario, with improvements of 14.79%, 8.94%, and 11.43% in precision, recall, and F-measure, respectively.
Citations: 0
Simulating Behavior Diversity in BioCrowds
Anais Estendidos da Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date: 2019-10-09 DOI: 10.5753/SIBGRAPI.EST.2019.8296
P. Knob, S. Musse
Abstract: Most of the crowd simulation techniques available nowadays focus on a specific situation, e.g., evacuation in hazardous events. Very few of them consider the cultural and personality aspects present in a society when determining the behavior of agents. Therefore, this work aims to build a framework able to take different cultural and personality traits as input and translate them into a group parametrization, which determines the behavior of groups and crowds in virtual environments. We also add to BioCrowds a comfort response for agents, in terms of the density and thermal characteristics of the environment. Results indicate that the cultural/psychological mappings are promising, since agents performed as intended. Additionally, agents reacted to thermal and density comfort, improving their ability to respond to environmental changes.
Citations: 0
Speckle Denoising With NL Filter and Stochastic Distances Under the Haar Wavelet Domain
Anais Estendidos da Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date: 2019-10-09 DOI: 10.5753/sibgrapi.est.2019.8307
Pedro A. A. Penna, N. Mascarenhas
Abstract: Synthetic aperture radar (SAR) imaging systems involve coherent processing that causes multiplicative speckle noise. This noise gives a granular appearance to the imaged terrestrial surface, impairing its interpretation. Patch-similarity approaches underpin the current state-of-the-art filters in remote sensing. The goal of this manuscript is to present a method that adapts the non-local means (NLM) algorithm to mitigate this noise. Our research considers single-look speckle and the NLM in the Haar wavelet domain, applied to intensity SAR images. To this end, we used the Exponential-Polynomial (EP) and Gamma distributions to describe the Haar coefficients. Stochastic distances based on these two distributions were then formulated and embedded in the original NLM technique. Finally, we present analyses and comparisons on real scenes to demonstrate the competitive performance of the proposed method against some recent filters from the literature.
Citations: 1
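For context, the NLM idea that the entry above builds on replaces each sample by a weighted average of all samples, with weights derived from patch similarity; the paper's contribution is swapping the usual Euclidean patch distance for stochastic distances in the Haar domain. The 1D sketch below shows only the baseline NLM weighting, with illustrative names and parameters.

```python
import math

# Baseline non-local means on a 1D signal: weight each sample j by the
# similarity of its patch to the patch around i, then average. The paper
# replaces this Euclidean patch distance with stochastic distances; here we
# keep the classic form for clarity. h controls the decay of the weights.

def nlm_1d(signal, patch_radius=1, h=0.5):
    n = len(signal)

    def patch(i):
        # Clamp indices at the borders so every patch has the same length.
        return [signal[max(0, min(n - 1, i + k))]
                for k in range(-patch_radius, patch_radius + 1)]

    out = []
    for i in range(n):
        pi = patch(i)
        weights, total = [], 0.0
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(pi, patch(j)))
            w = math.exp(-d2 / (h * h))
            weights.append(w)
            total += w
        out.append(sum(w * s for w, s in zip(weights, signal)) / total)
    return out

smoothed = nlm_1d([1.0, 1.1, 0.9, 5.0, 1.0, 1.05])
```

Because weights come from whole-patch similarity rather than spatial distance, flat regions are averaged together while dissimilar structures (like the outlier above) contribute almost nothing to their neighbors.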
Tecnologia assistiva para reconhecimento de cartas de baralho utilizando aprendizado profundo (Assistive technology for playing card recognition using deep learning)
Anais Estendidos da Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date: 2019-10-09 DOI: 10.5753/sibgrapi.est.2019.8331
S. Santos, Aline Maria Torres Santos, Junio Cesar Rodrigues Lima, F. Soares, Gabriel Silva Vieira
Abstract: We present an inclusive application that allows visually impaired people to interact socially with other individuals in leisure activities, such as card games. To this end, a generalizable model was built and trained to detect and recognize playing cards through deep learning. In this context, a system called Smart Assistant was designed and implemented, based on the TensorFlow Object Detection API. At a predefined location, the cards are placed within the field of view of a digital camera so that they can be detected and classified in real time. The SAPI text-to-speech (TTS) API is used to convert the labels of the detected cards (in text form) into audio output. Initial experiments show that, in real game situations, the application can identify and classify cards with high accuracy.
Citations: 0
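The pipeline the abstract describes (camera frame, card detector, text label, speech output) can be sketched at the architectural level. The real system uses the TensorFlow Object Detection API and SAPI TTS; in the sketch below both are stand-ins injected as plain callables, and all names are illustrative, not taken from the paper.

```python
# Control-flow sketch of the detect-then-speak pipeline. The detector and the
# TTS engine are stubbed so the structure is visible; in the real system they
# would be the TensorFlow Object Detection API and SAPI, respectively.

def detect_cards(frame):
    """Stand-in for the trained detector: returns (label, confidence) pairs."""
    return frame.get("cards", [])

def announce(labels, speak):
    """Send each detected card label to the injected TTS callable."""
    for label in labels:
        speak(label)

def process_frame(frame, speak, min_confidence=0.8):
    # Keep only confident detections before speaking them aloud.
    labels = [label for label, conf in detect_cards(frame)
              if conf >= min_confidence]
    announce(labels, speak)
    return labels

spoken = []
process_frame({"cards": [("ace of spades", 0.97), ("blur", 0.31)]},
              spoken.append)
```

Injecting the speech backend as a callable keeps the detection logic testable without audio hardware, which is why the sketch is structured this way.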
A visual approach for user-guided feature fusion
Anais Estendidos da Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date: 2019-10-09 DOI: 10.5753/sibgrapi.est.2019.8313
Gladys M. H. Hilasaca, F. Paulovich
Abstract: Dimensionality reduction transforms data from a high-dimensional space into a visual space while preserving the existing relationships. This abstract representation of complex data enables the exploration of data similarities, but raises challenges of analysis and interpretation when there is a mismatch between users' expectations and the visual representation. A possible way to model these expectations is via different feature extractors, since each feature set encodes characteristics in its own way. As there is no perfect feature extractor, the combination of multiple feature sets has been explored through a process called feature fusion. Feature fusion can be readily performed when machine learning or data mining algorithms have a cost function; when such a function does not exist, however, user support must be provided, otherwise the process is impractical. In this project, we present a novel feature fusion approach that employs data samples and visualization to allow users not only to effortlessly control the combination of different feature sets but also to understand the attained results. The effectiveness of our approach is confirmed by a comprehensive set of qualitative and quantitative experiments, opening up different possibilities for user-guided analytical scenarios. The ability of our approach to provide real-time feedback for feature fusion is exploited in the context of unsupervised clustering techniques, where users can perform an exploratory process to discover the combination of features that best reflects their individual perceptions of similarity. A traditional way to visualize data similarities is via scatter plots; however, these suffer from overlap, which hides data distributions, makes the relationships among data instances difficult to observe, and hampers data exploration. To tackle this issue, we developed a technique called Distance-preserving Grid (DGrid). DGrid employs a binary space partitioning process, combined with the dimensionality reduction output, to create orthogonal regular grid layouts, and it ensures non-overlapping instances because each data instance is assigned to exactly one grid cell. Our results show that DGrid outperforms the existing state-of-the-art techniques while requiring only a fraction of the running time and computational resources, rendering it a very attractive method for large datasets.
Citations: 1
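One common way to realize the user-controlled combination described above is weighted concatenation: each feature set is normalized, scaled by a user-chosen weight, and concatenated. The sketch below illustrates that generic scheme under our own naming; it is not claimed to be the exact formulation of the paper.

```python
# Weighted-concatenation feature fusion: a minimal sketch, assuming each
# feature set is L2-normalized before a user-set weight scales it. Names and
# the normalization choice are illustrative assumptions.

def normalize(vec):
    """L2-normalize a feature vector (left unchanged if it is all zeros)."""
    norm = sum(x * x for x in vec) ** 0.5
    return [x / norm for x in vec] if norm else vec

def fuse(feature_sets, weights):
    """Concatenate the normalized feature sets, each scaled by its weight."""
    fused = []
    for vec, w in zip(feature_sets, weights):
        fused.extend(w * x for x in normalize(vec))
    return fused

# Emphasize the first descriptor (e.g., color) over the second (e.g., texture).
v = fuse([[3.0, 4.0], [1.0, 0.0]], [0.8, 0.2])
```

Exposing the weights as interactive controls and re-projecting the fused vectors on every change is what gives a visual front-end its real-time feedback loop.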
Adaptive Face Tracking Based on Online Learning
Anais Estendidos da Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date: 2019-10-09 DOI: 10.5753/SIBGRAPI.EST.2019.8297
A. Khurshid, J. Scharcanski
Abstract: Object tracking can be used to localize objects in scenes and to locate changes in an object's appearance or shape over time. Most of the available object tracking methods perform satisfactorily in controlled environments but tend to fail when the object's appearance or shape changes, or even when the illumination changes (e.g., when tracking non-rigid objects such as a human face). Also, in many available tracking methods, the tracking error tends to grow indefinitely once the target is missed. Therefore, tracking target objects in long, uninterrupted video sequences is quite challenging for these methods. This work proposes a face tracking algorithm with two operating modes. Both operating modes are based on feature learning techniques that utilize the useful data accumulated during face tracking and implement an incremental learning framework. To accumulate the training data, the quality of each test sample is checked before it is used in the incremental, online training scheme. Furthermore, a novel error prediction scheme is proposed that is capable of estimating the tracking error during the execution of the tracking algorithm. In addition, an improvement to the Constrained Local Model (CLM), called the weighted CLM (W-CLM), is proposed, which utilizes the training data to assign weights to landmarks based on their consistency; these weights are used to improve the CLM search optimization process. The experimental results show that both variants of the proposed tracking method perform better than comparable state-of-the-art methods in terms of Root Mean Squared Error (RMSE) and Center Location Error (CLE). To demonstrate the efficiency of the proposed techniques, an application to yawning detection is presented.
Citations: 2
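The quality check described above, where a test sample is admitted to the online training set only if it passes a threshold so that bad frames do not corrupt the incremental model, can be sketched as follows. The class, the threshold value, and the use of a confidence field as the quality score are all illustrative assumptions, not details from the thesis.

```python
# Sketch of gated incremental learning: accept a tracking sample into the
# training buffer only when its quality score clears a threshold. The scoring
# rule and all names here are illustrative stand-ins.

class IncrementalTracker:
    def __init__(self, quality_threshold=0.6):
        self.quality_threshold = quality_threshold
        self.training_samples = []

    def quality(self, sample):
        # Stand-in score; the real method derives quality from the tracker's
        # own confidence measures rather than a stored field.
        return sample.get("confidence", 0.0)

    def maybe_learn(self, sample):
        """Add the sample to the training set only if it passes the gate."""
        if self.quality(sample) >= self.quality_threshold:
            self.training_samples.append(sample)
            return True
        return False

tracker = IncrementalTracker()
accepted = [tracker.maybe_learn(s) for s in
            [{"confidence": 0.9}, {"confidence": 0.3}, {"confidence": 0.7}]]
```

Gating the updates this way is what keeps the tracking error from compounding: a drifting frame that would poison the model is simply never learned.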
Improving the performance of a SVM+HOG classifier for detection and tracking of wagon components by using geometric constraints
Anais Estendidos da Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date: 2019-10-09 DOI: 10.5753/sibgrapi.est.2019.8336
Camilo Lélis A. Gonçalves, R. Zampolo, F. Barros, A. C. S. Gomes, E. Carvalho, Bruno V. Ferreira, Rafael L. Rocha, Rodrigo C. Rodrigues, Giovanni Dias, Diego A. Freitas
Abstract: The inspection of train and railway components that can cause derailment plays a key role in rail maintenance. To improve productivity and safety, service providers look for automatic and reliable inspection solutions. Although automatic inspection based on computer vision is a standard concept, such an application challenges the development community due to the environmental and logistic factors involved. Previous publications presented automatic classifiers to evaluate the integrity and placement of wagon components; despite the high classification accuracy reported, ineffective object detection affected the overall performance. Our object detector/tracker consists of a descriptor based on the histogram of oriented gradients, a support vector machine classifier, and a set of geometric constraints that take into account the ideal trajectory path of the wagon components of interest and the distances between them. We detail the training and validation procedures, together with the metrics used to assess the performance of the system. The presented results compare two other techniques with our approach, which exhibits a fair trade-off between reliability and computational complexity for the application of wagon component detection.
Citations: 1
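The role of the geometric constraints above can be illustrated with a small sketch: candidate detections from the classifier (stubbed here as scored boxes) are kept only if their centers fall near the expected trajectory of the component. The straight-line trajectory model, the tolerance, and all names are illustrative assumptions rather than the paper's exact constraint set.

```python
# Sketch of post-classification geometric filtering: keep an SVM+HOG
# detection only when its center lies near the component's expected path.
# Here the path is a horizontal line y = expected_y; the real constraints
# also use inter-component distances.

def center(box):
    """Center of an axis-aligned box given as (x, y, w, h)."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def on_trajectory(box, expected_y, tolerance):
    """Accept a detection only if its center is within tolerance of the path."""
    return abs(center(box)[1] - expected_y) <= tolerance

def filter_detections(detections, expected_y=100.0, tolerance=15.0):
    """detections: list of (box, svm_score); return the geometric inliers."""
    return [(box, score) for box, score in detections
            if on_trajectory(box, expected_y, tolerance)]

kept = filter_detections([((40, 90, 20, 20), 1.2),    # center y=100: inlier
                          ((60, 200, 20, 20), 2.0)])  # center y=210: outlier
```

Note that the outlier is discarded despite its higher classifier score: the geometric prior vetoes detections the appearance model alone would accept, which is exactly how the constraints suppress false positives.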
Multitemporal liver analysis for surgical plan and clinical follow-up, adapted for Sibgrapi 2019
Anais Estendidos da Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date: 2019-10-09 DOI: 10.5753/sibgrapi.est.2019.8328
R. B. Santos, Guilherme Fontes Dos Reis, G. Costa
Abstract: Computed tomography (CT) images have been widely used in the diagnosis of diseases, in surgical planning, and in transplantation. Both the targeting of the relevant structures and automatic volume measurement based on segmentation are critical steps for computer-aided diagnosis and surgical planning (POHLE, 2003) (F, 2002). In particular, the analysis of liver images obtained by CT presents some challenges: this organ's topographic anatomy differs from its functional anatomy, as described by Claude Couinaud (COUINAUD C.; DELMAS, 1957) in the anatomical studies that detailed the 8 functional units of the liver, known as hepatic segments. Currently, CT analysis is essentially visual, performed on 2D images (R, 1999). According to (F, 2002), this is a source of errors, since the expert is subjected to an exhaustive routine of analyzing a very large number of images, and fatigue can cause human errors of interpretation. To aid surgical planning and clinical follow-up, an application was developed that transforms the CT scan, previously available as 2D images, into a 3D model that can be used in the web application developed in this work, allowing a multitemporal analysis of the liver.
Citations: 0
Matching People Across Surveillance Cameras
Anais Estendidos da Conference on Graphics, Patterns and Images (SIBGRAPI) Pub Date: 2019-10-09 DOI: 10.5753/sibgrapi.est.2019.8306
Raphael C. Prates, W. R. Schwartz
Abstract: This work addresses the person re-identification problem, which consists of matching images of individuals captured by multiple, non-overlapping surveillance cameras. Works in the literature tackle this problem by proposing robust feature descriptors and matching functions; the latter, which are responsible for assigning the correct identity to individuals, are the focus of this work. Specifically, we propose two matching methods: Kernel MBPLS and Kernel X-CRC. Kernel MBPLS is a nonlinear regression model that is scalable with respect to the number of cameras and allows the inclusion of additional labelled information (e.g., attributes). Differently, Kernel X-CRC is a nonlinear, multitask matching function that can be used jointly with subspace learning approaches to boost the matching rates. We present an extensive experimental evaluation of both approaches on four datasets (VIPeR, PRID450S, WARD, and Market-1501). Experimental results demonstrate that Kernel MBPLS and Kernel X-CRC outperform approaches from the literature. Furthermore, we show that Kernel X-CRC can be successfully applied to large-scale, multi-camera datasets.
Citations: 2
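At its core, kernelized matching of the kind the entry above builds on scores a probe descriptor from one camera against gallery descriptors from another through a nonlinear kernel. The toy sketch below shows only that kernelized matching step with an RBF kernel; the actual Kernel MBPLS and Kernel X-CRC models learn regression/coding functions on top of such kernels, and the names and gamma value here are illustrative.

```python
import math

# Toy cross-camera matching: pick the gallery descriptor with the highest
# RBF-kernel similarity to the probe. This is only the kernelized scoring
# step, not the learned matching functions proposed in the paper.

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) kernel between two equal-length feature vectors."""
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-gamma * d2)

def match(probe, gallery, gamma=0.5):
    """Return the index of the most similar gallery descriptor."""
    scores = [rbf(probe, g, gamma) for g in gallery]
    return max(range(len(gallery)), key=scores.__getitem__)

gallery = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]]
best = match([0.9, 1.1], gallery)
```

Working in a kernel-induced space is what lets linear regression machinery such as PLS or collaborative representation capture the nonlinear appearance changes between cameras.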