International Journal of Imaging Systems and Technology: Latest Articles

Breast cancer: A hybrid method for feature selection and classification in digital mammography
IF 3.3 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2023-04-25 | DOI: 10.1002/ima.22889
Shankar Thawkar, Vijay Katta, Ajay Raj Parashar, Law Kumar Singh, Munish Khanna
{"title":"Breast cancer: A hybrid method for feature selection and classification in digital mammography","authors":"Shankar Thawkar,&nbsp;Vijay Katta,&nbsp;Ajay Raj Parashar,&nbsp;Law Kumar Singh,&nbsp;Munish Khanna","doi":"10.1002/ima.22889","DOIUrl":"https://doi.org/10.1002/ima.22889","url":null,"abstract":"<p>In this article, a hybrid approach based on the Whale optimization algorithm (WOA) and the Dragonfly algorithm (DA) is proposed for breast cancer diagnosis. The hybrid WOADA method selects features based on the fitness value. These features are used to predict the breast masses as benign or malignant using artificial neural networks (ANN) and adaptive neuro-fuzzy inference systems (ANFIS) as classifiers. The proposed solution is evaluated by using 651 mammograms. The results demonstrate that the WOADA technique outperforms the basic WOA and DA approaches. The accuracy of the suggested WOADA algorithm is 97.84%, with a Kappa value of 0.9477 and an AUC value of 0.972 ± 0.007 for the ANN classifier. Likewise, the ANFIS classifier achieved 98.00% accuracy with a Kappa value of 0.96 and an AUC value of 0.998 ± 0.001. In addition, the viability of the hybrid WOADA technique was evaluated on four benchmark datasets and then compared with four state-of-the-art algorithms and published approaches.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 5","pages":"1696-1712"},"PeriodicalIF":3.3,"publicationDate":"2023-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50143436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
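For readers unfamiliar with wrapper-based metaheuristic feature selection of the kind this abstract describes, the sketch below shows the sort of fitness function such hybrids typically minimize over binary feature masks. The k-NN surrogate classifier and the alpha/beta weights are illustrative assumptions, not the paper's WOADA formulation.

```python
# Hedged sketch: wrapper-style fitness used by metaheuristic feature selectors.
# The alpha/beta weights and the k-NN surrogate are assumptions for illustration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def feature_subset_fitness(mask, X, y, alpha=0.99, beta=0.01):
    """Return a fitness to minimize: weighted error plus a subset-size penalty."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():                      # empty subsets are invalid
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask], y, cv=5).mean()
    size_penalty = mask.sum() / mask.size   # fraction of features kept
    return alpha * (1.0 - acc) + beta * size_penalty

# Usage: a metaheuristic (WOA, DA, or a hybrid of the two) proposes binary masks
# and keeps the one with the lowest fitness.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))
    y = (X[:, 0] + X[:, 3] > 0).astype(int)
    candidate = rng.integers(0, 2, size=30)
    print(feature_subset_fitness(candidate, X, y))
```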
Breast lesion classification using features fusion and selection of ensemble ResNet method
IF 3.3 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2023-04-24 | DOI: 10.1002/ima.22894
Gülhan Kılıçarslan, Canan Koç, Fatih Özyurt, Yeliz Gül
{"title":"Breast lesion classification using features fusion and selection of ensemble ResNet method","authors":"Gülhan Kılıçarslan,&nbsp;Canan Koç,&nbsp;Fatih Özyurt,&nbsp;Yeliz Gül","doi":"10.1002/ima.22894","DOIUrl":"https://doi.org/10.1002/ima.22894","url":null,"abstract":"<p>Medical Imaging with Deep Learning has recently become the most prominent topic in the scientific world. Significant results have been obtained in the classification of medical images using deep learning methods, and there has been an increase in studies on malignant types. The main reason for choosing breast cancer is that breast cancer is one of the critical malignant types that increase the death rate in women. In this study, 1236 ultrasound images were collected from Elazig Fethi Sekin City Hospital, and three different ResNet CNN architectures were used for feature extraction. Data were trained with an SVM classifier. In addition, the three ResNet architectures were combined, and novel fused ResNet architecture was used in this study. In addition, these features were used with three different feature selection techniques, MR-MR, NCA, and Relieff. These results are 89.3% obtained from ALL-ResNet architecture and the feature selected with NCA in normal and lesion classification. Normal, malignant, and benign classification best accuracy is 84.9% with ALL-ResNet NCA. Experimental studies show that MR-MR, NCA, and Relieff feature selection algorithms reduce features and give more results that are successful. This indicates that the proposed method is more successful than classical deep learning methods.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 5","pages":"1779-1795"},"PeriodicalIF":3.3,"publicationDate":"2023-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50153815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
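The general recipe in this abstract (feature extraction from several ResNets, fusion by concatenation, then an SVM) can be sketched roughly as follows; the specific backbones, input size, and weight handling are assumptions (torchvision >= 0.13), and the mRMR/NCA/ReliefF selection step is only indicated by a comment.

```python
# Hedged sketch of ResNet feature fusion followed by an SVM; not the paper's code.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

def build_extractor(ctor):
    # weights=None keeps the sketch offline; pass weights="DEFAULT" for ImageNet
    # weights (torchvision >= 0.13).
    net = ctor(weights=None)
    net.fc = nn.Identity()          # drop the classification head, keep pooled features
    return net.eval()

backbones = [build_extractor(models.resnet18),
             build_extractor(models.resnet34),
             build_extractor(models.resnet50)]

@torch.no_grad()
def fused_features(images):
    """images: (N, 3, 224, 224) float tensor -> concatenated feature matrix."""
    return torch.cat([net(images) for net in backbones], dim=1).numpy()

if __name__ == "__main__":
    x = torch.randn(8, 3, 224, 224)
    y = [0, 1, 0, 1, 0, 1, 0, 1]
    X = fused_features(x)                    # an mRMR/NCA/ReliefF selector would
    clf = SVC(kernel="linear").fit(X, y)     # normally be applied to X before this
    print(X.shape, clf.score(X, y))
```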
Few-shot learning for dermatological conditions with Lesion Area Aware Swin Transformer
IF 3.3 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2023-04-22 | DOI: 10.1002/ima.22891
Yonggong Ren, Wenqiang Xu, Yuanxin Mao, Yuechu Wu, Bo Fu, Dang N. H. Thanh
{"title":"Few-shot learning for dermatological conditions with Lesion Area Aware Swin Transformer","authors":"Yonggong Ren,&nbsp;Wenqiang Xu,&nbsp;Yuanxin Mao,&nbsp;Yuechu Wu,&nbsp;Bo Fu,&nbsp;Dang N. H. Thanh","doi":"10.1002/ima.22891","DOIUrl":"https://doi.org/10.1002/ima.22891","url":null,"abstract":"<p>Skin is the largest organ of the human body and participates in the functional activities of the human body all the time. Therefore, human beings have a large risk of getting skin diseases. The diseased skin lesion image shows visually different characteristics from the normal skin image, and sometimes unusual skin color may indicate human viscera or autoimmune issues. However, the current recognition and classification of dermatological conditions still rely on expert visual diagnosis rather than a visual algorithm. This is because there are many kinds of lesion features of skin diseases, and the lesion accounts for a small proportion of the skin image, so it is difficult to learn the required lesion features; meanwhile, some dermatology images have too few samples to deal with the problem of small samples. In view of the above limitations, we propose a model named Lesion Area Aware Shifted windows Transformer for dermatological conditions classification rely on the powerful performance and excellent result of Swin transformer recently proposed. For brief notation, we use its abbreviation later. Our main contributions are as follows. First, we modify the Swin transformer and use it in the automatic classification of dermatological conditions. Using the self-attention mechanism of the transformer, our method can mine more long-distance correlations between diseased tissue image features. Using its shifting windows, we can fuse local features and global features, so it is possible to get better classification results with a flexible receptive field. Second, we use a skip connection to grasp and reinforce global features from the previous block and use Swin transformer to extract detailed local features, which will excavate and merge global features and local features further. Third, considering Swin transformer is a lightweight model compared with traditional transformers, our model is compact for deployment and more favorable to resource-strict medical devices.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 5","pages":"1549-1560"},"PeriodicalIF":3.3,"publicationDate":"2023-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50140368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
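A minimal sketch of the global-local fusion idea in the second contribution: a skip connection carries features from the previous block and is merged with locally extracted features. The convolutional stand-in for a Swin stage and the 1x1-conv fusion are assumptions for illustration, not the paper's block design.

```python
# Hedged sketch of fusing a global skip connection with local features.
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.local_branch = nn.Sequential(          # stand-in for a Swin stage
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GELU(),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        skip = x                                    # global features from the prior block
        local = self.local_branch(x)                # detailed local features
        return self.fuse(torch.cat([skip, local], dim=1))

if __name__ == "__main__":
    block = GlobalLocalFusion(64)
    print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```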
Comparison of the impacts of dermoscopy image augmentation methods on skin cancer classification and a new augmentation method with wavelet packets
IF 3.3 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2023-04-21 | DOI: 10.1002/ima.22890
Evgin Goceri
{"title":"Comparison of the impacts of dermoscopy image augmentation methods on skin cancer classification and a new augmentation method with wavelet packets","authors":"Evgin Goceri","doi":"10.1002/ima.22890","DOIUrl":"https://doi.org/10.1002/ima.22890","url":null,"abstract":"<p>This work aims to determine the most suitable technique for dermoscopy image augmentation to improve the performance of lesion classifications. Also, a new augmentation technique based on wavelet packet transformations has been developed. The contribution of this work is five-fold. First, a comprehensive review of the methods used for dermoscopy image augmentation has been presented. Second, a new augmentation method has been developed. Third, the augmentation methods have been implemented with the same images for meaningful comparisons. Fourth, three network architectures have been implemented to see the effects of the augmented images obtained from each augmentation method on classifications. Fifth, the results of the same classifier trained separately using expanded data sets have been compared with five different metrics. The proposed augmentation method increases the classification accuracy by at least 4.77% compared with the accuracy values obtained from the same classifier with other augmentation methods.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 5","pages":"1727-1744"},"PeriodicalIF":3.3,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50148121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
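Since the paper's own wavelet-packet augmentation is not detailed in the abstract, the sketch below only illustrates the underlying transform with pywt: decompose, perturb the detail sub-bands, and reconstruct. The wavelet, level, and perturbation scale are arbitrary assumptions, not the proposed method.

```python
# Hedged sketch of a generic wavelet-packet perturbation; NOT the paper's method.
import numpy as np
import pywt

def wavelet_packet_augment(img, wavelet="haar", level=2, scale=0.1, seed=0):
    """Decompose, perturb detail sub-bands, reconstruct (a generic illustration)."""
    rng = np.random.default_rng(seed)
    wp = pywt.WaveletPacket2D(data=img, wavelet=wavelet, maxlevel=level)
    for node in wp.get_level(level):
        if set(node.path) != {"a"}:          # leave the pure approximation band alone
            noise = 1.0 + scale * rng.standard_normal(node.data.shape)
            node.data = node.data * noise
    rec = wp.reconstruct(update=False)
    return rec[:img.shape[0], :img.shape[1]]  # trim any padding back to input size

if __name__ == "__main__":
    dermoscopy = np.random.rand(128, 128)     # placeholder grayscale image
    print(wavelet_packet_augment(dermoscopy).shape)
```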
Multi-branch sustainable convolutional neural network for disease classification
IF 3.3 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2023-04-13 | DOI: 10.1002/ima.22884
Maria Naz, Munam Ali Shah, Hasan Ali Khattak, Abdul Wahid, Muhammad Nabeel Asghar, Hafiz Tayyab Rauf, Muhammad Attique Khan, Zoobia Ameer
{"title":"Multi-branch sustainable convolutional neural network for disease classification","authors":"Maria Naz,&nbsp;Munam Ali Shah,&nbsp;Hasan Ali Khattak,&nbsp;Abdul Wahid,&nbsp;Muhammad Nabeel Asghar,&nbsp;Hafiz Tayyab Rauf,&nbsp;Muhammad Attique Khan,&nbsp;Zoobia Ameer","doi":"10.1002/ima.22884","DOIUrl":"https://doi.org/10.1002/ima.22884","url":null,"abstract":"<p>Pandemic and natural disasters are growing more often, imposing even more pressure on life care services and users. There are knowledge gaps regarding how to prevent disasters and pandemics. In recent years, after heart disease, corona virus disease-19 (COVID-19), brain stroke, and cancer are at their peak. Different machine learning and deep learning-based techniques are presented to detect these diseases. Existing technique uses two branches that have been used for detection and prediction of disease accurately such as brain hemorrhage. However, existing techniques have been focused on the detection of specific diseases with double-branches convolutional neural networks (CNNs). There is a need to develop a model to detect multiple diseases at the same time using computerized tomography (CT) scan images. We proposed a model that consists of 12 branches of CNN to detect the different types of diseases with their subtypes using CT scan images and classify them more accurately. We proposed multi-branch sustainable CNN model with deep learning architecture trained on the brain CT hemorrhage, COVID-19 lung CT scans and chest CT scans with subtypes of lung cancers. Feature extracted automatically from preprocessed input data and passed to classifiers for classification in the form of concatenated feature vectors. Six classifiers support vector machine (SVM), decision tree (DT), K-nearest neighbor (K-NN), artificial neural network (ANN), naïve Bayes (NB), linear regression (LR) classifiers, and three ensembles the random forest (RF), AdaBoost, gradient boosting ensembles were tested on our model for classification and prediction. Our model achieved the best results on RF on each dataset. Respectively, on brain CT hemorrhage achieved (99.79%) accuracy, on COVID-19 lung CT scans achieved (97.61%), and on chest CT scans dataset achieved (98.77%).</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 5","pages":"1621-1633"},"PeriodicalIF":3.3,"publicationDate":"2023-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50131101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
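The classification stage described here, per-branch feature vectors concatenated and fed to classical classifiers with random forest performing best, reduces to something like the sketch below. The branch count matches the abstract, but the feature dimension and the random stand-in features are placeholders, not the paper's extracted features.

```python
# Hedged sketch: concatenate per-branch feature vectors, then train a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_branches, feat_dim = 300, 12, 64

# One feature vector per branch per scan (random stand-ins for CNN branch outputs).
branch_features = [rng.normal(size=(n_samples, feat_dim)) for _ in range(n_branches)]
X = np.concatenate(branch_features, axis=1)        # concatenated feature vector
y = rng.integers(0, 3, size=n_samples)             # e.g., hemorrhage / COVID-19 / cancer

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("RF accuracy on held-out split:", clf.score(X_te, y_te))
```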
GR-Net: Gated axial attention ResNest network for polyp segmentation
IF 3.3 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2023-04-13 | DOI: 10.1002/ima.22887
Shen Jiang, Jinjiang Li, Zhen Hua
{"title":"GR-Net: Gated axial attention ResNest network for polyp segmentation","authors":"Shen Jiang,&nbsp;Jinjiang Li,&nbsp;Zhen Hua","doi":"10.1002/ima.22887","DOIUrl":"https://doi.org/10.1002/ima.22887","url":null,"abstract":"<p>Medical image segmentation is a key step in medical image analysis. The small differences in the background and foreground of medical images and the small size of most medical data sets make medical segmentation difficult. This paper uses a global-local training strategy to train the network. In the global structure, ResNest is used as the backbone of the network, and parallel decoders are added to aggregate features, as well as gated axial attention to adapt to small datasets. In the local structure, the extraction of image details is accomplished by dividing the images into equal patches of the same size. To evaluate the performance of the model, qualitative and quantitative comparisons were performed on five datasets, Kvasir-SEG, CVC-ColonDB, CVC-ClinicDB, CVC-300, and ETIS-LaribPolypDB, and the segmentation results were significantly better than the current mainstream polyp segmentation methods. The results show that the model has better segmentation performance and generalization ability.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 5","pages":"1531-1548"},"PeriodicalIF":3.3,"publicationDate":"2023-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50130928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
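The "local structure" step, dividing images into equal patches of the same size before local feature extraction, can be sketched as below; the patch size and tensor layout are assumptions for illustration.

```python
# Hedged sketch of splitting an image into equal, non-overlapping patches.
import torch

def to_patches(img, patch):
    """img: (C, H, W) with H, W divisible by `patch` -> (N, C, patch, patch)."""
    c, _, _ = img.shape
    patches = img.unfold(1, patch, patch).unfold(2, patch, patch)  # (C, H/p, W/p, p, p)
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, patch, patch)

if __name__ == "__main__":
    image = torch.randn(3, 256, 256)
    print(to_patches(image, 64).shape)    # torch.Size([16, 3, 64, 64])
```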
Optimized lung cancer detection by amended whale optimizer and rough set theory
IF 3.3 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2023-04-11 | DOI: 10.1002/ima.22888
Zuzheng Chang, Dragan Rodriguez
{"title":"Optimized lung cancer detection by amended whale optimizer and rough set theory","authors":"Zuzheng Chang,&nbsp;Dragan Rodriguez","doi":"10.1002/ima.22888","DOIUrl":"https://doi.org/10.1002/ima.22888","url":null,"abstract":"<p>The current paper proposes a new hierarchical procedure for efficient diagnosis of lung cancer computed tomography (CT) images. Here, after noise removal based on median filtering, a contrast enhancement based on general histogram equalization (GHE) has been utilized. Then, a modified version of K-means clustering has been used for the area of interest segmentation in the CT images. The major characteristics of the segmented images have been selected during an optimization technique and the outputs are injected into an optimized radial basis function (RBF) network for the final classification. Optimization in the classification stage and feature selection is by an improved metaheuristic technique, called Amended Whale Optimization Algorithm was proposed. The designed method is then applied to “The RIDER Lung CT” database and its achievements are validated by several latest techniques to show its higher efficacy.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 5","pages":"1713-1726"},"PeriodicalIF":3.3,"publicationDate":"2023-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50137899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
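The preprocessing and segmentation front end listed in this abstract, median filtering, general histogram equalization, then K-means clustering on the enhanced image, maps onto standard OpenCV and scikit-learn calls as sketched below. The kernel size and cluster count are assumptions, and the paper's modified K-means, feature selection, and RBF classifier are not reproduced.

```python
# Hedged sketch of the median-filter -> GHE -> K-means front end; not the paper's code.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def segment_ct_slice(gray_uint8, k=3, median_ksize=5, seed=0):
    denoised = cv2.medianBlur(gray_uint8, median_ksize)       # noise removal
    enhanced = cv2.equalizeHist(denoised)                     # global histogram equalization
    pixels = enhanced.reshape(-1, 1).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(pixels)
    return labels.reshape(gray_uint8.shape)                   # cluster map over the slice

if __name__ == "__main__":
    slice_ = (np.random.rand(256, 256) * 255).astype(np.uint8)  # placeholder CT slice
    print(np.unique(segment_ct_slice(slice_)))
```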
LMSA-Net: A lightweight multi-scale aware network for retinal vessel segmentation
IF 3.3 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2023-04-08 | DOI: 10.1002/ima.22881
Jian Chen, Jiaze Wan, Zhenghan Fang, Lifang Wei
{"title":"LMSA-Net: A lightweight multi-scale aware network for retinal vessel segmentation","authors":"Jian Chen,&nbsp;Jiaze Wan,&nbsp;Zhenghan Fang,&nbsp;Lifang Wei","doi":"10.1002/ima.22881","DOIUrl":"https://doi.org/10.1002/ima.22881","url":null,"abstract":"<p>Retinal vessel segmentation is an essential part of ocular disease diagnosis. However, due to complex vascular structure, large-scale variations of retinal vessels, as well as inefficiency of vessel segmentation speed, accurate and fast automatic vessel segmentation for retinal images is still technically challenging. To tackle these issues, we present a lightweight multi-scale-aware network (LMSA-Net) for retinal vessel segmentation. The network leverages the encoder-decoder structure that was used in U-Net. In the encoder, we propose a ghosted sandglass residual (GSR) block, aiming at greatly reducing the parameters and computational cost while obtaining richer semantic information. After that, a multi-scale feature-aware aggregation (MFA) module is designed to perceive multi-scale semantic information for effective information extraction. Then, a global adaptive upsampling (GAU) module is proposed to guide the effective fusion of high- and low-level semantic information in the decoder. Experiments are conducted on three public datasets, including DRIVE, CHASE_DB1, and STARE. The experimental results indicate the effectiveness of the LMSA-Net, which can achieve better segmentation performance than other state-of-the-art methods.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 5","pages":"1515-1530"},"PeriodicalIF":3.3,"publicationDate":"2023-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50138503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
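A rough sketch of the "ghost" convolution idea that the GSR block presumably builds on, following the generic GhostNet-style module rather than the paper's exact ghosted sandglass design: a cheap depthwise convolution generates extra ("ghost") feature maps from a smaller primary convolution, which is where the parameter savings come from.

```python
# Hedged sketch of a generic ghost convolution; not the paper's GSR block.
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        primary = out_ch // ratio
        ghost = out_ch - primary
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary, 1, bias=False),
            nn.BatchNorm2d(primary), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(                   # depthwise conv -> ghost maps
            nn.Conv2d(primary, ghost, dw_kernel, padding=dw_kernel // 2,
                      groups=primary, bias=False),
            nn.BatchNorm2d(ghost), nn.ReLU(inplace=True))

    def forward(self, x):
        p = self.primary(x)
        return torch.cat([p, self.cheap(p)], dim=1)

if __name__ == "__main__":
    print(GhostConv(32, 64)(torch.randn(1, 32, 64, 64)).shape)  # (1, 64, 64, 64)
```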
Fusing feature and output space for unsupervised domain adaptation on medical image segmentation
IF 3.3 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2023-04-08 | DOI: 10.1002/ima.22879
Shengsheng Wang, Zihao Fu, Bilin Wang, Yulong Hu
{"title":"Fusing feature and output space for unsupervised domain adaptation on medical image segmentation","authors":"Shengsheng Wang,&nbsp;Zihao Fu,&nbsp;Bilin Wang,&nbsp;Yulong Hu","doi":"10.1002/ima.22879","DOIUrl":"https://doi.org/10.1002/ima.22879","url":null,"abstract":"<p>Image segmentation requires large amounts of annotated data. However, collecting massive datasets with annotations is difficult since they are expensive and labor-intensive. The unsupervised domain adaptation (UDA) for image segmentation is a promising approach to address the label-scare problem on the target domain, which enables the trained model on the source labeled domain to be adaptive to the target domain. The adversarial-based methods encourage extracting the domain-invariant features by training a domain discriminator to mitigate the domain gap. Existing UDA segmentation methods fail to obtain satisfied segmentation results as they only consider the global knowledge of output space while neglecting the local information of feature space. In this paper, a fusing feature and output (FFO) space method is proposed for UDA, which in the context of medical image segmentation. The proposed model is learned by training a more powerful domain discriminator, which considers features extracted from both feature space and output space. Extensive experiments carried out on several medical image datasets show the adaptation effectiveness of our approach in improving the segmentation performance.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 5","pages":"1672-1681"},"PeriodicalIF":3.3,"publicationDate":"2023-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50138516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
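The core idea, a discriminator that judges domain membership from the feature space and the output space jointly rather than from the output space alone, can be sketched as below. The layer widths, the bilinear resizing of the segmentation probabilities, and the patch-level domain logits are assumptions, not the paper's architecture.

```python
# Hedged sketch of a domain discriminator over concatenated feature + output spaces.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedDomainDiscriminator(nn.Module):
    def __init__(self, feat_ch, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_ch + num_classes, 64, 3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 3, stride=2, padding=1))   # per-location domain logit

    def forward(self, features, seg_logits):
        # Resize segmentation probabilities to the feature map's spatial size, then fuse.
        probs = torch.softmax(seg_logits, dim=1)
        probs = F.interpolate(probs, size=features.shape[-2:],
                              mode="bilinear", align_corners=False)
        return self.net(torch.cat([features, probs], dim=1))

if __name__ == "__main__":
    d = FusedDomainDiscriminator(feat_ch=256, num_classes=4)
    f = torch.randn(2, 256, 32, 32)                 # encoder features
    logits = torch.randn(2, 4, 128, 128)            # segmentation output
    print(d(f, logits).shape)                       # torch.Size([2, 1, 8, 8])
```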
Predicting lung cancer treatment response from CT images using deep learning
IF 3.3 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology | Pub Date: 2023-04-06 | DOI: 10.1002/ima.22883
Shweta Tyagi, Sanjay N. Talbar
{"title":"Predicting lung cancer treatment response from CT images using deep learning","authors":"Shweta Tyagi,&nbsp;Sanjay N. Talbar","doi":"10.1002/ima.22883","DOIUrl":"https://doi.org/10.1002/ima.22883","url":null,"abstract":"<p>Lung cancer is the deadliest type of cancer and is one of the most frequently occurring cancers. It is primarily diagnosed in later stages when treatment becomes difficult. For better treatment and higher chances of survival, the treatment response of lung cancer patients needs to be analyzed to check whether the patients are responding to the treatment or not. This analysis can be done with the help of follow-up computed tomography (CT) imaging before and after the treatment. However, manually analyzing the baseline and post-treatment CT scan images of so many lung cancer patients is a tedious task. This study proposes an intuitive approach based on deep learning to analyze lung cancer through CT scan images before and after the treatment. In this approach, we utilized a segmentation network to segment the lung tumor in the follow-up CT images. The segmented tumor is then analyzed to check the treatment effect, as suggested by the Response Evaluation Criteria in Solid Tumors (RECIST) guidelines. The segmentation network combines a vision transformer and a convolutional neural network. The segmentation network is first trained on a public dataset and then fine-tuned on the local dataset to improve the segmentation performance. For this study, we have collected a lung cancer dataset from an Indian hospital. The dataset is divided into two parts dataset I and dataset II. Dataset I consists of 100 CT scans, which we use to fine-tune the proposed segmentation network. Dataset II comprises 220 CT scans of 110 patients, consisting of baseline and post-treatment scans. We use dataset II for testing. We achieved significant performance.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 5","pages":"1577-1592"},"PeriodicalIF":3.3,"publicationDate":"2023-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50132868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
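Since the response analysis follows the public RECIST guidelines, a simplified reading of the RECIST 1.1 target-lesion categories is sketched below, applied to tumor diameter sums measured from baseline and follow-up segmentations. Nadir tracking and non-target lesions are omitted; this is not the paper's evaluation code.

```python
# Hedged sketch of simplified RECIST 1.1 response categories.
def recist_response(baseline_sum_mm, followup_sum_mm):
    if followup_sum_mm == 0:
        return "CR"   # complete response: target lesions disappeared
    change = (followup_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if change <= -0.30:
        return "PR"   # partial response: at least 30% decrease in diameter sum
    if change >= 0.20 and (followup_sum_mm - baseline_sum_mm) >= 5.0:
        return "PD"   # progressive disease: at least 20% and 5 mm increase
    return "SD"       # stable disease

if __name__ == "__main__":
    for before, after in [(50.0, 30.0), (50.0, 65.0), (50.0, 48.0)]:
        print(f"{before} mm -> {after} mm: {recist_response(before, after)}")
```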