Latest Articles in Computational Intelligence

Privacy preserving support vector machine based on federated learning for distributed IoT-enabled data analysis
IF 2.8 | CAS Region 4 | Computer Science
Computational Intelligence Pub Date: 2024-04-03, DOI: 10.1111/coin.12636
Yu-Chi Chen, Song-Yi Hsu, Xin Xie, Saru Kumari, Sachin Kumar, Joel Rodrigues, Bander A. Alzahrani
Abstract: In a smart city, IoT devices are required to support monitoring of normal operations such as traffic, infrastructure, and crowds of people. IoT-enabled systems built from many such devices are expected to enable sustainable development based on the information the smart city collects. Artificial intelligence (AI) and machine learning (ML) are well-known methods for achieving this goal, provided the system framework and problem statement are well prepared. However, to make the best use of AI/ML, the training data should be as global as possible, which prevents the model from working only on local data. Such data can be obtained from different sources, but this raises a privacy issue when at least one party collects all the data in the clear. The main focus of this article is support vector machines (SVM). We aim to resolve the privacy issue and provide confidentiality to protect the data. We build a privacy-preserving scheme for SVM (SecretSVM) based on the framework of federated learning and distributed consensus, in which data providers self-organize and obtain the training parameters of the SVM without revealing their own models. Experiments with real data analysis show the feasibility of potential applications in smart cities. This article is an extended version of Hsu et al. (Proceedings of the 15th ACM Asia Conference on Computer and Communications Security, ACM; 2020:904-906).
Citations: 0
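The federated core of a scheme like SecretSVM can be illustrated with a plain FedAvg-style sketch: each client takes a hinge-loss subgradient step on its own data, and only the averaged weight vectors travel. This is a minimal sketch under our own assumptions; the actual SecretSVM protocol adds distributed consensus and cryptographic protection, which are omitted here.

```python
import numpy as np

def local_svm_step(w, X, y, lr=0.1, lam=0.01):
    # One subgradient step of the linear SVM hinge loss on a client's
    # local data (hypothetical client update; SecretSVM's privacy layer
    # is omitted in this sketch).
    margins = y * (X @ w)
    mask = margins < 1                      # samples violating the margin
    grad = lam * w
    if mask.any():
        grad = grad - (y[mask, None] * X[mask]).mean(axis=0)
    return w - lr * grad

def federated_round(w_global, clients):
    # Each client refines the global weights locally; the server only
    # averages the resulting weight vectors (FedAvg-style) and never
    # sees the raw training data.
    updates = [local_svm_step(w_global.copy(), X, y) for X, y in clients]
    return np.mean(updates, axis=0)
```

After a few rounds on linearly separable client data, the averaged weights classify both classes correctly even though no client ever shared its samples.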
Novel algorithm machine translation for language translation tool
IF 2.8 | CAS Region 4 | Computer Science
Computational Intelligence Pub Date: 2024-04-03, DOI: 10.1111/coin.12643
K. Jayasakthi Velmurugan, G. Sumathy, K. V. Pradeep
Abstract: Fuzzy matching techniques are the methods currently used for translating words, while neural machine translation (NMT) and statistical machine translation are the main approaches to MT. In a machine-translation tool, the translation strategy must handle large datasets, which can degrade the performance of retrieving the correct matching output. To improve the matching score of MT, advanced techniques can be introduced by modifying the existing fuzzy-based translator and neural machine translator. Conventional modification of architectures and encoding schemes is tedious, and preprocessing the datasets likewise consumes considerable time and memory. In this article, a new spider-web-based search-enhanced translation scheme is presented for use with a neural machine translator. The proposed scheme performs a deep search of the available dataset to detect the accurate matching result. In addition, translation quality is improved by an optimal selection scheme for using sentence matches in source augmentation: matches retrieved under various matching scores are fed to an optimization algorithm, and source augmentation with the optimal retrieved matches increases translation quality. Selecting an optimal match combination also reduces the time requirement, since it is not necessary to test all retrieved matches to find the target sentence. Translation performance is validated by measuring quality with BLEU and METEOR scores, which reach about 92% and 86%, respectively, for TA-EN language pairs in different configurations. The results are evaluated and compared with other available NMT methods to validate the work.
Citations: 0
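The translation-memory retrieval that this line of work improves on can be illustrated with a generic fuzzy-matching baseline built on `difflib`. The function name, the (source, target) memory format, and the 0.6 threshold are our assumptions; this is a plain baseline, not the paper's spider-web search.

```python
from difflib import SequenceMatcher

def fuzzy_matches(query, memory, threshold=0.6):
    # Score a source sentence against a translation memory of
    # (source, target) pairs and return matches above `threshold`,
    # best first. Ratio is difflib's 2*M/T similarity.
    scored = [(SequenceMatcher(None, query, src).ratio(), src, tgt)
              for src, tgt in memory]
    return sorted([m for m in scored if m[0] >= threshold], reverse=True)
```

A retrieved match (or an optimal combination of matches) would then feed source augmentation, as the abstract describes.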
Robust fine-grained visual recognition with images based on internet of things
IF 2.8 | CAS Region 4 | Computer Science
Computational Intelligence Pub Date: 2024-03-19, DOI: 10.1111/coin.12638
Zhenhuang Cai, Shuai Yan, Dan Huang
Abstract: Labeling fine-grained objects manually is extremely challenging, as it is not only labor-intensive but also requires professional knowledge. Accordingly, robust learning methods for fine-grained recognition with web images collected from the Internet of Things have drawn significant attention. However, training deep fine-grained models directly on untrusted web images faces two primary obstacles: (1) label noise in the web images and (2) domain variance between the online sources and test datasets. In this study, we focus on addressing these two pivotal problems. Specifically, we introduce an end-to-end network that collaboratively addresses both concerns while separating trusted data from untrusted web images. To validate the efficacy of the proposed model, untrusted web images are first collected using the text category labels found in fine-grained datasets. The designed deep model then eliminates label noise and ameliorates domain mismatch, and the chosen trusted web data are used for model training. Comprehensive experiments and ablation studies show that our method consistently surpasses other state-of-the-art approaches for fine-grained recognition in real-world scenarios, with a significant improvement margin (2.51% on CUB200-2011 and 2.92% on Stanford Dogs). The source code and models are available at: https://github.com/Codeczh/FGVC-IoT.
Citations: 0
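One common way to separate trusted from noisy web images, offered here only as an illustrative baseline, is the small-loss heuristic: keep the samples the current model fits most easily. The paper's end-to-end network is more elaborate and also handles domain mismatch, which this sketch does not.

```python
import numpy as np

def select_trusted(losses, keep_ratio=0.7):
    # Small-loss heuristic for noisy-label learning: return the indices
    # of the keep_ratio fraction of samples with the lowest loss, which
    # are treated as the "trusted" subset for further training.
    k = max(1, int(len(losses) * keep_ratio))
    return np.argsort(losses)[:k]
```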
A hybrid data fusion approach with twin CNN architecture for enhancing image source identification in IoT environment
IF 2.8 | CAS Region 4 | Computer Science
Computational Intelligence Pub Date: 2024-03-18, DOI: 10.1111/coin.12631
Surjeet Singh, Vivek Kumar Sehgal
Abstract: With the proliferation of digital devices with advanced visual capabilities in Internet of Things (IoT) environments, image source identification (ISI) has become increasingly vital for legal purposes: verifying image authenticity and integrity, and identifying the device that captured the original scene. Over the past few decades, researchers have employed both traditional and machine-learning methods to classify image sources. In the current landscape, data-driven approaches leveraging deep learning models have emerged as powerful tools for achieving higher accuracy and precision in source prediction. The primary focus of this research is to address the complexities arising from diverse image sources and variable quality in IoT-generated multimedia data. To achieve this, a hybrid data fusion approach is introduced that leverages multiple sources of information to bolster the accuracy and robustness of ISI. The fusion methodology integrates diverse data streams from IoT devices, including metadata, sensor information, and contextual data, into a comprehensive dataset for analysis. The study implements a twin convolutional neural network architecture (TCA) to enhance source classification. In the TCA, the first CNN, DnCNN, removes noise from the original dataset and generates 256 × 256 patches for both training and testing. The second CNN then classifies images based on features extracted from various convolutional layers using a 3 × 3 filter, improving prediction efficiency. The proposed model demonstrates exceptional accuracy in classifying image sources, showcasing its potential as a robust ISI solution.
Citations: 0
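The patch-preparation step between the two CNNs can be sketched as simple non-overlapping tiling of each (denoised) image into patch × patch blocks. The exact patching strategy (overlap, remainder handling) is our assumption, not a detail from the paper.

```python
import numpy as np

def extract_patches(img, patch=256):
    # Tile a 2D image into non-overlapping patch x patch blocks,
    # dropping any right/bottom remainder. Each block would then be fed
    # to the second (classification) CNN.
    h, w = img.shape[:2]
    return [img[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]
```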
Sentiment analysis incorporating convolutional neural network into hidden Markov model
IF 2.8 | CAS Region 4 | Computer Science
Computational Intelligence Pub Date: 2024-03-18, DOI: 10.1111/coin.12633
Maryam Khanian Najafabadi
Abstract: Sentiment analysis and opinion mining have become increasingly important in recent years with the development of social media technologies. Methods that use natural language processing and lexicon-based sentiment analysis to analyze people's opinions in text require proper extraction of sentiment words to ensure accuracy. This paper tackles the issue from a novel perspective by introducing a hybrid sentiment analysis technique that brings together a convolutional neural network (CNN) and hidden Markov models (HMMs) to accurately categorize text data and pinpoint sentiment. The proposed method uses a 1D convolutional-layer CNN to extract hidden features from comments and applies HMMs to a feature-sentence matrix, allowing word sequences to be used in extracting opinions. The method effectively captures diverse text patterns by extracting a range of features with the CNN; text patterns are then learned by the text HMM through calculating the probabilities between sequences of feature vectors and clustering feature vectors. The experimental evaluation employs benchmark datasets such as CR, MR, Subj, and SST2, demonstrating that the proposed method surpasses existing sentiment analysis techniques and traditional HMMs. One of its strengths is the ability to analyze a range of text patterns and identify crucial features that capture the emotion of different parts of a sentence. The findings also highlight the improved performance obtained through the strategic use of zero padding in conjunction with masking.
Citations: 0
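The CNN half of the pipeline, a 1D convolution sliding over token embeddings, can be sketched as follows. The kernel shape and ReLU activation are our assumptions, and the HMM half that models the resulting feature sequence is omitted.

```python
import numpy as np

def conv1d_features(embeddings, kernel):
    # Valid 1D convolution of a (seq_len, dim) embedding matrix with a
    # (width, dim) kernel, followed by ReLU. Each output is one hidden
    # feature per window; the HMM would then model this sequence.
    seq_len, _ = embeddings.shape
    width = kernel.shape[0]
    out = np.array([np.sum(embeddings[i:i + width] * kernel)
                    for i in range(seq_len - width + 1)])
    return np.maximum(out, 0.0)
```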
An effective graph embedded YOLOv5 model for forest fire detection
IF 2.8 | CAS Region 4 | Computer Science
Computational Intelligence Pub Date: 2024-03-18, DOI: 10.1111/coin.12640
Hui Yuan, Zhumao Lu, Ruizhe Zhang, Jinsong Li, Shuai Wang, Jingjing Fan
Abstract: The existing YOLOv5-based framework has achieved great success in target detection. However, in forest fire detection, few high-quality forest fire images are available, and the performance of the YOLO model declines seriously when detecting small-scale fires. Making full use of context information can effectively improve small-target detection. To this end, this paper proposes a new graph-embedded YOLOv5 forest fire detection framework that improves small-scale fire detection using context information at different scales. To mine local context information, we design a spatial graph convolution operation based on the message passing neural network (MPNN) mechanism; to exploit global context information, we introduce a multi-head self-attention (MSA) module before each YOLO head. Experimental results on FLAME and our self-built fire dataset show that the proposed model improves the accuracy of small-scale forest fire detection while retaining the real-time performance of the YOLOv5 framework.
Citations: 0
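A minimal message-passing step in the spirit of the MPNN-based spatial graph convolution might look like this. The sum aggregation over neighbours and the linear-plus-ReLU update are our assumptions, not the paper's exact message and update functions.

```python
import numpy as np

def mpnn_step(features, adj, W):
    # One message-passing step: each node sums its neighbours' feature
    # vectors via the adjacency matrix, then applies a linear map W and
    # a ReLU nonlinearity.
    messages = adj @ features          # (n_nodes, dim) neighbour sums
    return np.maximum(messages @ W, 0.0)
```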
Multiscale attention for few-shot image classification
IF 2.8 | CAS Region 4 | Computer Science
Computational Intelligence Pub Date: 2024-03-18, DOI: 10.1111/coin.12639
Tong Zhou, Changyin Dong, Junshu Song, Zhiqiang Zhang, Zhen Wang, Bo Chang, Dechun Chen
Abstract: In recent years, advances in deep learning have greatly facilitated agricultural applications of remote sensing, such as crop area and growth monitoring, crop classification, and agricultural disaster monitoring, in which the accuracy of image classification plays a crucial role. Although traditional deep learning methods have achieved significant success in remote sensing image classification, they often involve convolutional neural networks with a large number of parameters that must be optimized on numerous remote sensing training images. To address these challenges, we propose a multiscale attention network (MAN) for sample-based remote sensing image classification. The method consists primarily of feature extractors and attention modules that exploit features at different scales through multiscale feature training. We evaluate the proposed method on three datasets of agricultural remote sensing images and observe superior performance compared with existing approaches. We further validate its generalizability on an oil-well indicator diagram classification task.
Citations: 0
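The attention module's fusion of multiscale features can be illustrated as a softmax-weighted mix of per-scale feature vectors. In MAN the scores would be learned by the attention module; here they are simply passed in, so this is only a sketch of the fusion idea.

```python
import numpy as np

def fuse_scales(scale_feats, scores):
    # Softmax the per-scale scalar scores into attention weights, then
    # return the weighted sum of the per-scale feature vectors.
    scores = np.asarray(scores, dtype=float)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return (w[:, None] * np.asarray(scale_feats, dtype=float)).sum(axis=0)
```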
FVCNet: Detection obstacle method based on feature visual clustering network in power line inspection
IF 2.8 | CAS Region 4 | Computer Science
Computational Intelligence Pub Date: 2024-03-18, DOI: 10.1111/coin.12634
Qiu-Yu Wang, Xian-Long Lv, Shi-Kai Tang
Abstract: Power line inspection is an important means of eliminating hidden dangers in power lines. Overcoming the low accuracy of deep neural network (DNN)-based power line inspection, caused by multi-view shape variation and small object sizes, is a difficult research problem. In this paper, an automatic detection model for power line inspection based on a feature visual clustering network (FVCNet) is established. First, an unsupervised clustering method for power line inspection is proposed and applied to construct a detection model that can recognize multi-view-shape objects and enhance object features. Then, bilinear interpolation is used for feature enhancement, and the enhanced high-level and low-level semantics are fused to address the problems of small object sizes and single samples. FVCNet is applied to the MS-COCO 2017 dataset and a self-made power line inspection dataset, raising test accuracy to 61.2% and 82.0%, respectively. Compared with other models, test accuracy is improved significantly, especially for the categories most affected by multi-view shape.
Citations: 0
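The bilinear-interpolation feature enhancement mentioned above can be sketched as a plain align-corners bilinear resize of a 2D feature map, bringing low-resolution high-level semantics up to the size of low-level features before fusion. This is our reading of the upsampling step, not the paper's exact implementation.

```python
import numpy as np

def bilinear_resize(x, out_h, out_w):
    # Align-corners bilinear upsampling of a 2D feature map: sample the
    # output grid at fractional source coordinates and blend the four
    # surrounding values by their distances.
    in_h, in_w = x.shape
    rows = np.linspace(0, in_h - 1, out_h)
    cols = np.linspace(0, in_w - 1, out_w)
    r0 = np.floor(rows).astype(int); r1 = np.minimum(r0 + 1, in_h - 1)
    c0 = np.floor(cols).astype(int); c1 = np.minimum(c0 + 1, in_w - 1)
    fr = (rows - r0)[:, None]; fc = (cols - c0)[None, :]
    top = x[np.ix_(r0, c0)] * (1 - fc) + x[np.ix_(r0, c1)] * fc
    bot = x[np.ix_(r1, c0)] * (1 - fc) + x[np.ix_(r1, c1)] * fc
    return top * (1 - fr) + bot * fr
```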
Enhancing scene-text visual question answering with relational reasoning, attention and dynamic vocabulary integration
IF 2.8 | CAS Region 4 | Computer Science
Computational Intelligence Pub Date: 2024-02-20, DOI: 10.1111/coin.12635
Mayank Agrawal, Anand Singh Jalal, Himanshu Sharma
Abstract: Visual question answering (VQA) is a challenging task in computer vision. Recently, there has been growing interest in text-based VQA tasks, emphasizing the important role of textual information in understanding images. Effectively utilizing the text within an image is crucial to success in this task. However, existing approaches often overlook contextual information and neglect the relationships between scene-text tokens and image objects, simply feeding the scene-text tokens mined from the image into the VQA model. In this paper, the proposed model first analyzes the image to extract text and identify scene objects; it then comprehends the question, mines the relationships among the question, OCRed text, and scene objects, and generates an answer through relational reasoning with semantic and positional attention. Our decoder with attention-map loss enables prediction of complex answers and handles dynamic vocabularies, reducing the decoding space; it outperforms softmax-based cross-entropy loss in accuracy and efficiency by accommodating varying vocabulary sizes. On the TextVQA dataset, the model achieves an accuracy of 53.91% on the validation set and 53.98% on the test set. Moreover, on the ST-VQA dataset, it obtains ANLS scores of 0.699 on the validation set and 0.692 on the test set.
Citations: 0
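The dynamic-vocabulary idea, a fixed answer vocabulary extended per image with the OCR tokens found in it so that rare scene-text answers can be copied rather than predicted from a huge static vocabulary, can be sketched as follows. This is our reading of the decoder design, with hypothetical names.

```python
def build_dynamic_vocab(fixed_vocab, ocr_tokens):
    # Per-image answer space: the fixed vocabulary plus any OCR tokens
    # from this image that are not already present, in order of
    # appearance. The decoder scores over this combined list.
    vocab = list(fixed_vocab)
    for tok in ocr_tokens:
        if tok not in vocab:
            vocab.append(tok)
    return vocab
```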
Greedy-based user selection for federated graph neural networks with limited communication resources
IF 2.8 | CAS Region 4 | Computer Science
Computational Intelligence Pub Date: 2024-02-20, DOI: 10.1111/coin.12637
Hancong Huangfu, Zizhen Zhang
Abstract: Recently, graph neural networks (GNNs) have attracted much attention in machine learning due to their remarkable success in learning from graph-structured data. However, implementing GNNs in practice faces a critical bottleneck in the high complexity of communication and computation, which arises from the frequent exchange of graph data during model training, especially in limited-communication scenarios. To address this issue, we propose a novel framework of federated graph neural networks, in which multiple mobile users collaboratively train the global GNN model in a federated way. Incorporating federated learning into the training of graph neural networks reduces the communication overhead of the system, protects the data privacy of local users, and significantly reduces the system's computational complexity. We further introduce a greedy-based user selection scheme for the federated graph neural networks, in which the wireless bandwidth is dynamically allocated among users to encourage more users to join the federated training. We perform a convergence analysis of the federated training to gain insight into the impact of critical parameters on system design. Finally, simulations on the Coriolis Ocean for ReAnalysis (CORA) dataset show the advantages of the proposed method.
Citations: 0
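The greedy user selection can be illustrated as a knapsack-style heuristic that admits users by utility-per-bandwidth ratio until the wireless budget is spent, so that as many useful clients as possible join each federated round. The (name, utility, bandwidth-cost) tuple format and the ratio criterion are our assumptions, not the paper's exact selection rule.

```python
def greedy_user_selection(users, bandwidth):
    # Admit users in decreasing order of utility per unit of bandwidth,
    # skipping any user whose cost would exceed the remaining budget.
    # `users` is a list of (name, utility, bw_cost) tuples.
    chosen, used = [], 0.0
    for name, util, cost in sorted(users, key=lambda u: u[1] / u[2],
                                   reverse=True):
        if used + cost <= bandwidth:
            chosen.append(name)
            used += cost
    return chosen
```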