International Journal of Image and Graphics: Latest Articles

Hybrid Segmentation Approach for Tumors Detection in Brain Using Machine Learning Algorithms
IF 1.6
International Journal of Image and Graphics, Pub Date: 2023-07-21, DOI: 10.1142/s0219467823400089
M. Praveena, M. Rao
{"title":"Hybrid Segmentation Approach for Tumors Detection in Brain Using Machine Learning Algorithms","authors":"M. Praveena, M. Rao","doi":"10.1142/s0219467823400089","DOIUrl":"https://doi.org/10.1142/s0219467823400089","url":null,"abstract":"Tumors are most dangerous to humans and cause death when patient not noticed it in the early stages. Edema is one type of brain swelling that consists of toxic particles in the human brain. Especially in the brain, the tumors are identified with magnetic resonance imaging (MRI) scanning. This scanning plays a major role in detecting the area of the affected area in the given input image. Tumors may contain cancer or non-cancerous cells. Many experts have used this MRI report as the primary confirmation of the tumors or edemas as cancer cells. Brain tumor segmentation is a significant task that is used to classify the normal and tumor tissues. In this paper, a hybrid segmentation approach (HSA) is introduced to detect the accurate regions of tumors and edemas to the given brain input image. HSA is the combination of an advanced segmentation model and edge detection technique used to find the state of the tumors or edemas. HSA is applied on the Kaggle brain image dataset consisting of MRI scanning images. Edge detection technique improves the detection of tumor or edema region. The performance of the HSA is compared with various algorithms such as Fully Automatic Heterogeneous Segmentation using support vector machine (FAHS-SVM), SVM with Normal Segmentation, etc. Performance of proposed work is calculated using mean square error (MSE), peak signal noise ratio (PSNR), and accuracy. The proposed approach achieved better performance by improving accuracy.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42142265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
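The abstract names only HSA's ingredients (an advanced segmentation model plus edge detection), so the sketch below is a hedged, hypothetical illustration of such a hybrid pipeline, with Otsu thresholding and Canny edges as stand-ins; `hybrid_segment` and its thresholds are illustrative, not the authors' algorithm.

```python
import cv2
import numpy as np

def hybrid_segment(mri_path: str) -> np.ndarray:
    """Segment bright abnormal regions in an MRI slice and refine with edges."""
    img = cv2.imread(mri_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.GaussianBlur(img, (5, 5), 0)               # suppress scanner noise
    # Otsu thresholding stands in for the paper's "advanced segmentation model"
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(img, 50, 150)                      # edge-detection stage
    # fuse the region mask with the edge map, then close gaps along boundaries
    fused = cv2.bitwise_or(mask, edges)
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(fused, cv2.MORPH_CLOSE, kernel)
```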
An Efficient Classification of Multiclass Brain Tumor Image Using Hybrid Artificial Intelligence with Honey Bee Optimization and Probabilistic U-RSNet
IF 1.6
International Journal of Image and Graphics, Pub Date: 2023-07-21, DOI: 10.1142/s0219467824500591
Hariharan Ramamoorthy, Mohan Ramasundaram, S. Raja, Krunal Randive
{"title":"An Efficient Classification of Multiclass Brain Tumor Image Using Hybrid Artificial Intelligence with Honey Bee Optimization and Probabilistic U-RSNet","authors":"Hariharan Ramamoorthy, Mohan Ramasundaram, S. Raja, Krunal Randive","doi":"10.1142/s0219467824500591","DOIUrl":"https://doi.org/10.1142/s0219467824500591","url":null,"abstract":"The life of the human beings are considered as the most precious and the average life time has reduced from 75 to 50 age over the past two decades. This reduction of average life time is due to various health hazards namely cancer and many more. The brain tumor ranks among the top ten most common source of demise. Although brain tumors are not the leading cause of death globally, 40% of other cancers (such as breast or lung cancers) metastasize to the brain and become brain tumors. Despite being the gold norm for tumor diagnosis, a biopsy has a number of drawbacks, including inferior sensitivity/specificity, and menace when performing the biopsy, and lengthy wait times for the results. This work employs artificial intelligence integrated with the honey bee optimization (HBO) in detecting the brain tumor with high level of execution in terms of accuracy, recall, precision, F1 score and Jaccard index when compared to the deep learning algorithms of long short term memory networks (LSTM), convolutional neural networks, generative adversarial networks, recurrent neural networks, and deep belief networks. In this work, to enhance the level of prediction, the image segmentation methodology is performed by the probabilistic U-RSNet. This work is analyzed employing the BraTS 2020, BraTS 2021, and OASIS dataset for the vital parameters like accuracy, precision, recall, F1 score, Jaccard index and PPV.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46186203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
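The abstract does not give the HBO update rules, so the following is a simplified artificial-bee-colony-style optimizer showing the general shape of bee-inspired search such a hybrid might use, e.g. for hyperparameter tuning; `bee_optimize` and its parameters are hypothetical, not the paper's method.

```python
import numpy as np

def bee_optimize(objective, dim, lo, hi, n_bees=20, iters=100, seed=0):
    """Minimize `objective` over a box [lo, hi]^dim with a bee-style search."""
    rng = np.random.default_rng(seed)
    food = rng.uniform(lo, hi, size=(n_bees, dim))       # candidate solutions
    fitness = np.array([objective(f) for f in food])
    for _ in range(iters):
        for i in range(n_bees):
            k = rng.integers(n_bees - 1)
            k = k + (k >= i)                             # random partner != i
            phi = rng.uniform(-1.0, 1.0, dim)
            trial = np.clip(food[i] + phi * (food[i] - food[k]), lo, hi)
            f_trial = objective(trial)
            if f_trial < fitness[i]:                     # greedy replacement
                food[i], fitness[i] = trial, f_trial
    best = int(fitness.argmin())
    return food[best], fitness[best]

# Example: minimize a 3D sphere function.
x_best, f_best = bee_optimize(lambda x: float(np.sum(x * x)), dim=3, lo=-5.0, hi=5.0)
```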
Improvement of Bounding Box and Instance Segmentation Accuracy Using ResNet-152 FPN with Modulated Deformable ConvNets v2 Backbone-based Mask Scoring R-CNN
IF 1.6
International Journal of Image and Graphics, Pub Date: 2023-07-21, DOI: 10.1142/s0219467824500542
Suresh Shanmugasundaram, Natarajan Palaniappan
{"title":"Improvement of Bounding Box and Instance Segmentation Accuracy Using ResNet-152 FPN with Modulated Deformable ConvNets v2 Backbone-based Mask Scoring R-CNN","authors":"Suresh Shanmugasundaram, Natarajan Palaniappan","doi":"10.1142/s0219467824500542","DOIUrl":"https://doi.org/10.1142/s0219467824500542","url":null,"abstract":"A challenging task is to make sure that the deep learning network learns prediction accuracy by itself. Intersection-over-Union (IoU) amidst ground truth and instance mask determines mask quality. There is no relationship between classification score and mask quality. The mission is to investigate this problem and learn the predicted instance mask’s accuracy. The proposed network regresses the MaskIoU by comparing the predicted mask and the respective instance feature. The mask scoring strategy determines the disorder among mask score and mask quality, then adjusts the parameters accordingly. Adaptation ability to the object’s geometric variations decides deformable convolutional network’s performance. Using increased modeling power and stronger training, focusing ability on pertinent image regions is improved by a reformulated Deformable ConvNets. The introduction of modulation technique, which broadens the deformation modeling scope, and the integration of deformable convolution comprehensively within the network enhance the modeling power. The features which resemble region-based convolutional neural network (R-CNN) feature’s classification capability and its object focus are learned by the network with the help of feature mimicking scheme of DCNv2. Feature mimicking scheme of DCNv2 guides the network training to efficiently control this enhanced modeling capability. The backbone of the proposed Mask Scoring R-CNN network is designed with ResNet-152 FPN and DCNv2 network. The proposed Mask Scoring R-CNN network with DCNv2 network is also tested with other backbones ResNet-50 and ResNet-101. Instance segmentation and object detection on COCO benchmark and Cityscapes dataset are achieved with top accuracy and improved performance using the proposed network.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48564995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
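As a concrete reference point for the modulated deformable convolution (DCNv2) operator the backbone relies on, here is a minimal module built on torchvision's `deform_conv2d`, in which a plain convolution predicts per-tap offsets and sigmoid modulation masks. This sketches the operator only, not the paper's ResNet-152 FPN backbone.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class ModulatedDeformConv(nn.Module):
    """DCNv2-style conv: learned offsets plus per-sample modulation scalars."""
    def __init__(self, in_ch, out_ch, k=3, padding=1):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, k, k))
        nn.init.kaiming_uniform_(self.weight, a=1)
        # 2*k*k offset channels (x and y per tap) + k*k modulation channels
        self.conv_offset = nn.Conv2d(in_ch, 3 * k * k, k, padding=padding)
        nn.init.zeros_(self.conv_offset.weight)
        nn.init.zeros_(self.conv_offset.bias)   # start as a regular conv
        self.padding = padding

    def forward(self, x):
        o1, o2, mask = torch.chunk(self.conv_offset(x), 3, dim=1)
        offset = torch.cat((o1, o2), dim=1)
        mask = torch.sigmoid(mask)              # modulation scalars in (0, 1)
        return deform_conv2d(x, offset, self.weight, mask=mask,
                             padding=self.padding)
```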
Detection of Fake Colorized Images based on Deep Learning
IF 1.6
International Journal of Image and Graphics, Pub Date: 2023-07-21, DOI: 10.1142/s0219467825500020
Khalid A. Salman, Khalid Shaker, Sufyan T. Faraj Al-Janabi
{"title":"Detection of Fake Colorized Images based on Deep Learning","authors":"Khalid A. Salman, Khalid Shaker, Sufyan T. Faraj Al-Janabi","doi":"10.1142/s0219467825500020","DOIUrl":"https://doi.org/10.1142/s0219467825500020","url":null,"abstract":"Image editing technologies have been advanced that can significantly enhance the image, but can also be used maliciously. Colorization is a new image editing technology that uses realistic colors to colorize grayscale photos. However, this strategy can be used on natural color images for a malicious purpose (e.g. to confuse object recognition systems that depend on the colors of objects for recognition). Image forensics is a well-developed field that examines photos of specified conditions to build confidence and authenticity. This work proposes a new fake colorized image detection approach based on the special Residual Network (ResNet) architecture. ResNets are a kind of Convolutional Neural Networks (CNNs) architecture that has been widely adopted and applied for various tasks. At first, the input image is reconstructed via a special image representation that combines color information from three separate color spaces (HSV, Lab, and Ycbcr); then, the new reconstructed images have been used for training the proposed ResNet model. Experimental results have demonstrated that our proposed method is highly generalized and significantly robust for revealing fake colorized images generated by various colorization methods.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44758172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
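A hedged sketch of the multi-color-space representation the abstract describes: channels from HSV, Lab, and YCbCr are stacked into a nine-channel input, and a ResNet's first convolution is widened to accept it. `resnet18` is a lightweight stand-in here; the paper's exact ResNet variant and channel layout may differ.

```python
import cv2
import numpy as np
import torch
import torchvision

def nine_channel(bgr: np.ndarray) -> torch.Tensor:
    """Stack HSV, Lab, and YCbCr channels of one BGR image into a 9xHxW tensor."""
    spaces = [cv2.COLOR_BGR2HSV, cv2.COLOR_BGR2LAB, cv2.COLOR_BGR2YCrCb]
    stacked = np.concatenate([cv2.cvtColor(bgr, c) for c in spaces], axis=2)
    return torch.from_numpy(stacked).permute(2, 0, 1).float() / 255.0

model = torchvision.models.resnet18(weights=None, num_classes=2)  # fake vs. real
model.conv1 = torch.nn.Conv2d(9, 64, kernel_size=7, stride=2, padding=3, bias=False)
# logits = model(nine_channel(cv2.imread("photo.png")).unsqueeze(0))
```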
Computer-Aided Classification of Cell Lung Cancer Via PET/CT Images Using Convolutional Neural Network
IF 1.6
International Journal of Image and Graphics, Pub Date: 2023-07-15, DOI: 10.1142/s0219467824500402
Dhekra El Hamdi, Ines Elouedi, I. Slim
{"title":"Computer-Aided Classification of Cell Lung Cancer Via PET/CT Images Using Convolutional Neural Network","authors":"Dhekra El Hamdi, Ines Elouedi, I. Slim","doi":"10.1142/s0219467824500402","DOIUrl":"https://doi.org/10.1142/s0219467824500402","url":null,"abstract":"Lung cancer is the leading cause of cancer-related death worldwide. Therefore, early diagnosis remains essential to allow access to appropriate curative treatment strategies. This paper presents a novel approach to assess the ability of Positron Emission Tomography/Computed Tomography (PET/CT) images for the classification of lung cancer in association with artificial intelligence techniques. We have built, in this work, a multi output Convolutional Neural Network (CNN) as a tool to assist the staging of patients with lung cancer. The TNM staging system as well as histologic subtypes classification were adopted as a reference. The VGG 16 network is applied to the PET/CT images to extract the most relevant features from images. The obtained features are then transmitted to a three-branch classifier to specify Nodal (N), Tumor (T) and histologic subtypes classification. Experimental results demonstrated that our CNN model achieves good results in TN staging and histology classification. The proposed architecture classified the tumor size with a high accuracy of 0.94 and the area under the curve (AUC) of 0.97 when tested on the Lung-PET-CT-Dx dataset. It also has yielded high performance for N staging with an accuracy of 0.98. Besides, our approach has achieved better accuracy than state-of-the-art methods in histologic classification.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47610541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
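The three-branch design maps naturally onto a multi-output network; the sketch below shows a shared VGG-16 trunk feeding separate T, N, and histology heads. The class counts are placeholders, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torchvision

class TNHistologyNet(nn.Module):
    """Shared VGG-16 features with three classification heads (illustrative)."""
    def __init__(self, n_t=4, n_n=3, n_hist=3):
        super().__init__()
        vgg = torchvision.models.vgg16(weights=None)
        self.features = vgg.features                  # shared convolutional trunk
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.head_t = nn.Linear(512 * 7 * 7, n_t)     # tumor (T) stage
        self.head_n = nn.Linear(512 * 7 * 7, n_n)     # nodal (N) stage
        self.head_h = nn.Linear(512 * 7 * 7, n_hist)  # histologic subtype

    def forward(self, x):
        z = torch.flatten(self.pool(self.features(x)), 1)
        return self.head_t(z), self.head_n(z), self.head_h(z)
```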
RDN-NET: A Deep Learning Framework for Asthma Prediction and Classification Using Recurrent Deep Neural Network
IF 1.6
International Journal of Image and Graphics, Pub Date: 2023-07-13, DOI: 10.1142/s0219467824500505
Md.ASIM Iqbal, K. Devarajan, S. M. Ahmed
{"title":"RDN-NET: A Deep Learning Framework for Asthma Prediction and Classification Using Recurrent Deep Neural Network","authors":"Md.ASIM Iqbal, K. Devarajan, S. M. Ahmed","doi":"10.1142/s0219467824500505","DOIUrl":"https://doi.org/10.1142/s0219467824500505","url":null,"abstract":"Asthma is the one of the crucial types of disease, which causes the huge deaths of all age groups around the world. So, early detection and prevention of asthma disease can save numerous lives and are also helpful to the medical field. But the conventional machine learning methods have failed to detect the asthma from the speech signals and resulted in low accuracy. Thus, this paper presented the advanced deep learning-based asthma prediction and classification using recurrent deep neural network (RDN-Net). Initially, speech signals are preprocessed by using minimum mean-square-error short-time spectral amplitude (MMSE-STSA) method, which is used to remove the noises and enhances the speech properties. Then, improved Ripplet-II Transform (IR2T) is used to extract disease-dependent and disease-specific features. Then, modified gray wolf optimization (MGWO)-based bio-optimization approach is used to select the optimal features by hunting process. Finally, RDN-Net is used to predict the asthma disease present from speech signal and classifies the type as either wheeze, crackle or normal. The simulations are carried out on real-time COSWARA dataset and the proposed method resulted in better performance for all metrics as compared to the state-of-the-art approaches.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46658873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
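The pipeline's stages (MMSE-STSA denoising, IR2T features, MGWO selection, recurrent classification) are only named in the abstract, not specified. As a hedged illustration of the final stage alone, here is a minimal recurrent classifier over per-frame speech features with the three output classes mentioned; the feature dimension and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class RecurrentAsthmaNet(nn.Module):
    """Bi-GRU over per-frame speech features -> wheeze / crackle / normal."""
    def __init__(self, n_feats=40, hidden=128, n_classes=3):
        super().__init__()
        self.rnn = nn.GRU(n_feats, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, frames, n_feats)
        out, _ = self.rnn(x)
        return self.fc(out.mean(dim=1))    # average over time, then classify
```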
Self-Attention-Based Convolutional GRU for Enhancement of Adversarial Speech Examples
IF 1.6
International Journal of Image and Graphics, Pub Date: 2023-07-08, DOI: 10.1142/s0219467824500530
Chaitanya Jannu, S. Vanambathina
{"title":"Self-Attention-Based Convolutional GRU for Enhancement of Adversarial Speech Examples","authors":"Chaitanya Jannu, S. Vanambathina","doi":"10.1142/s0219467824500530","DOIUrl":"https://doi.org/10.1142/s0219467824500530","url":null,"abstract":"Recent research has identified adversarial examples which are the challenges to DNN-based ASR systems. In this paper, we propose a new model based on Convolutional GRU and Self-attention U-Net called [Formula: see text] to improve adversarial speech signals. To represent the correlation between neighboring noisy speech frames, a two-Layer GRU is added in the bottleneck of U-Net and an attention gate is inserted in up-sampling units to increase the adversarial stability. The goal of using GRU is to combine the weights sharing technique with the use of gates to control the flow of data across multiple feature maps. As a result, it outperforms the original 1D convolution used in [Formula: see text]. Especially, the performance of the model is evaluated by explainable speech recognition metrics and its performance is analyzed by the improved adversarial training. We used adversarial audio attacks to perform experiments on automatic speech recognition (ASR). We saw (i) the robustness of ASR models which are based on DNN can be improved using the temporal features grasped by the attention-based GRU network; (ii) through adversarial training, including some additive adversarial data augmentation, we could improve the generalization power of Automatic Speech Recognition models which are based on DNN. The word-error-rate (WER) metric confirmed that the enhancement capabilities are better than the state-of-the-art [Formula: see text]. The reason for this enhancement is the ability of GRU units to extract global information within the feature maps. Based on the conducted experiments, the proposed [Formula: see text] increases the score of Speech Transmission Index (STI), Perceptual Evaluation of Speech Quality (PESQ), and the Short-term Objective Intelligibility (STOI) with adversarial speech examples in speech enhancement.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41729473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
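An additive attention gate of the kind used in Attention U-Net, shown here in 1D for speech feature maps, is one plausible reading of the gate the abstract inserts into the up-sampling units; the authors' exact design is not given, so treat this as a sketch under that assumption.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: decoder signal g gates encoder skip feature x."""
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv1d(g_ch, inter_ch, 1)   # gating-signal projection
        self.wx = nn.Conv1d(x_ch, inter_ch, 1)   # skip-connection projection
        self.psi = nn.Conv1d(inter_ch, 1, 1)     # scalar attention per frame

    def forward(self, g, x):
        # g, x: (batch, channels, frames) with matching frame counts
        a = torch.sigmoid(self.psi(torch.relu(self.wg(g) + self.wx(x))))
        return x * a                              # suppress irrelevant frames
```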
Two-Stream Spatial–Temporal Feature Extraction and Classification Model for Anomaly Event Detection Using Hybrid Deep Learning Architectures
IF 1.6
International Journal of Image and Graphics, Pub Date: 2023-07-08, DOI: 10.1142/s0219467824500529
P. Mangai, M. Geetha, G. Kumaravelan
{"title":"Two-Stream Spatial–Temporal Feature Extraction and Classification Model for Anomaly Event Detection Using Hybrid Deep Learning Architectures","authors":"P. Mangai, M. Geetha, G. Kumaravelan","doi":"10.1142/s0219467824500529","DOIUrl":"https://doi.org/10.1142/s0219467824500529","url":null,"abstract":"Identifying events using surveillance videos is a major source that reduces crimes and illegal activities. Specifically, abnormal event detection gains more attention so that immediate responses can be provided. Video processing using conventional techniques identifies the events but fails to categorize them. Recently deep learning-based video processing applications provide excellent performances however the architecture considers either spatial or temporal features for event detection. To enhance the detection rate and classification accuracy in abnormal event detection from video keyframes, it is essential to consider both spatial and temporal features. Earlier approaches consider any one of the features from keyframes to detect the anomalies from video frames. However, the results are not accurate and prone to errors sometimes due to video environmental and other factors. Thus, two-stream hybrid deep learning architecture is presented to handle spatial and temporal features in the video anomaly detection process to attain enhanced detection performances. The proposed hybrid models extract spatial features using YOLO-V4 with VGG-16, and temporal features using optical FlowNet with VGG-16. The extracted features are fused and classified using hybrid CNN-LSTM model. Experimentation using benchmark UCF crime dataset validates the proposed model performances over existing anomaly detection methods. The proposed model attains maximum accuracy of 95.6% which indicates better performance compared to state-of-the-art techniques.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42437675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
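Assuming per-keyframe spatial and temporal feature vectors have already been extracted by the two branches, the fusion-and-classification step could look like the LSTM head below; the dimensions and two-class output are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TwoStreamAnomalyHead(nn.Module):
    """Concatenate per-frame spatial/temporal features, classify with an LSTM."""
    def __init__(self, spat_dim=512, temp_dim=512, hidden=256, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(spat_dim + temp_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)   # anomalous vs. normal

    def forward(self, spatial, temporal):
        # spatial, temporal: (batch, frames, dim) per-keyframe features
        fused = torch.cat([spatial, temporal], dim=-1)
        out, _ = self.lstm(fused)
        return self.fc(out[:, -1])                # decision at the last keyframe
```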
Artistic Image Style Transfer Based on CycleGAN Network Model
IF 1.6
International Journal of Image and Graphics, Pub Date: 2023-07-07, DOI: 10.1142/s0219467824500499
Yanxi Wei
{"title":"Artistic Image Style Transfer Based on CycleGAN Network Model","authors":"Yanxi Wei","doi":"10.1142/s0219467824500499","DOIUrl":"https://doi.org/10.1142/s0219467824500499","url":null,"abstract":"With the development of computer technology, image stylization has become one of the hottest technologies in image processing. To optimize the effect of artistic image style conversion, a method of artistic image style conversion optimized by attention mechanism is proposed. The CycleGAN network model is introduced, and then the generator is optimized by the attention mechanism. Finally, the application effect of the improved model is tested and analyzed. The results show that the improved model tends to be stable after 40 iterations, the loss value remains at 0.3, and the PSNR value can reach up to 15. From the perspective of the generated image effect, the model has a better visual effect than the CycleGAN model. In the subjective evaluation, 63 people expressed satisfaction with the converted artistic image. As a result, the cyclic generative adversarial network model optimized by the attention mechanism improves the clarity of the generated image, enhances the effect of blurring the target boundary contour, retains the detailed information of the image, optimizes the image stylization effect, and improves the image quality of the method and application value of the processing field.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48180670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
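One common way to add attention to a CycleGAN generator is a SAGAN-style self-attention block over intermediate feature maps; the sketch below shows such a block as a hedged example, since the abstract does not specify the mechanism's form or placement.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over feature maps (requires ch >= 8)."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # start as an identity mapping

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.k(x).flatten(2)                   # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)        # (b, hw, hw) attention map
        v = self.v(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                # residual connection
```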
Detection and Classification of Objects in Video Content Analysis Using Ensemble Convolutional Neural Network Model
IF 1.6
International Journal of Image and Graphics, Pub Date: 2023-07-07, DOI: 10.1142/s0219467825500068
Sita M. Yadav, S. Chaware
{"title":"Detection and Classification of Objects in Video Content Analysis Using Ensemble Convolutional Neural Network Model","authors":"Sita M. Yadav, S. Chaware","doi":"10.1142/s0219467825500068","DOIUrl":"https://doi.org/10.1142/s0219467825500068","url":null,"abstract":"Video content analysis (VCA) is the process of analyzing the contents in the video for various applications. Video classification and content analysis are two of the most difficult challenges that computer vision researchers must solve. Object detection plays an important role in the VCA and is used for identification, detection and classification of objects in the images. The Chaser Prairie Wolf optimization-based deep Convolutional Neural Network classifier (CPW opt-deep CNN classifier) is used in this research to identify and classify the objects in the videos. The deep CNN classifier correctly detected the objects in the video, and the CPW optimization boosted the deep CNN classifier’s performance, where the decision-making behavior of the chasers is enhanced by the sharing nature of the prairie wolves. The classifier’s parameters were successfully tuned by the enabled optimization, which also aids in producing better results. The Ensemble model developed for the object detection adds value to the research and is initiated by the standard hybridization of the YOLOv4 and Resnet 101 model, which evaluated the research’s accuracy, sensitivity, and specificity, improving its efficacy. The proposed CPW opt-deep CNN classifier attained the values of 89.74%, 89.50%, and 89.19% while classifying objects in dataset 1, 91.66%, 86.01%, and 91.52% while classifying objects in dataset 2, compared to the preceding method that is efficient.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48347690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
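Score-level fusion is one simple way to realize the ensemble idea in the abstract; the sketch below averages the softmax outputs of two classifiers over the same input. The paper's YOLOv4 + ResNet-101 hybrid and CPW-tuned training are not reproduced here, and `ensemble_predict` with its weight `w_a` is hypothetical.

```python
import torch

@torch.no_grad()
def ensemble_predict(model_a, model_b, x, w_a=0.5):
    """Weighted average of two classifiers' class probabilities for input x."""
    pa = torch.softmax(model_a(x), dim=-1)
    pb = torch.softmax(model_b(x), dim=-1)
    probs = w_a * pa + (1.0 - w_a) * pb     # score-level fusion of the two models
    return probs.argmax(dim=-1)             # predicted class per sample
```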