Neurocomputing | Pub Date: 2025-05-15 | DOI: 10.1016/j.neucom.2025.130418
Wei Meng, Fazheng Hou, Mengyuan Zhao, Jingjing Kong, Jiahao Wu, Jie Zuo, Quan Liu
{"title":"Emotion recognition via affective EEG signals: State of the art","authors":"Wei Meng, Fazheng Hou, Mengyuan Zhao, Jingjing Kong, Jiahao Wu, Jie Zuo, Quan Liu","doi":"10.1016/j.neucom.2025.130418","DOIUrl":"10.1016/j.neucom.2025.130418","url":null,"abstract":"<div><div>With advancements in brain–computer interface technology, research on emotion recognition based on electroencephalogram (EEG) signals has gained significant attention. This review systematically explores signal acquisition, feature extraction, classification methods, and applications related to emotion recognition. We begin by reviewing the acquisition of affective EEG signals, including emotion models, emotion induction methods, signal acquisition techniques, and popular public datasets. Next, we provide a detailed discussion of feature extraction methods for emotional EEG signals, including time-domain, frequency-domain, time–frequency domain, and spatial domain features, as well as feature fusion techniques. The classification methods section highlights recent developments in machine learning, deep learning, and multimodal learning, exploring their applications in emotion recognition tasks. Additionally, we assess practical applications of emotion recognition technologies in areas such as cognitive workload, fatigue estimation, neuropsychiatric condition assessment, and affective care. Finally, we summarize the major challenges currently faced and future development opportunities.
By synthesizing existing research, we provide valuable insights and guidance for further studies on EEG-based emotion recognition and its applications in various fields such as education, transportation, and healthcare.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"643 ","pages":"Article 130418"},"PeriodicalIF":5.5,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144084544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
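The frequency-domain features this review surveys (e.g. band power in the theta/alpha/beta ranges) can be sketched in a few lines. The sampling rate, band edges, and synthetic signal below are illustrative assumptions, not details taken from the review:

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean periodogram power of `signal` within `band` (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

# Synthetic single-channel "EEG": a 10 Hz (alpha-band) tone plus noise.
fs = 128
t = np.arange(fs * 4) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: band_power(x, fs, b) for name, b in bands.items()}
assert features["alpha"] > features["theta"]  # the 10 Hz source dominates alpha
```

A real pipeline would apply this per channel and per trial, typically after band-pass filtering and artifact rejection.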
Neurocomputing | Pub Date: 2025-05-14 | DOI: 10.1016/j.neucom.2025.130491
Guanghao Zhu , Jing Zhang , Juanxiu Liu , Xiaohui Du , Ruqian Hao , Yong Liu , Lin Liu
{"title":"AstMatch: Adversarial self-training consistency framework for semi-supervised medical image segmentation","authors":"Guanghao Zhu , Jing Zhang , Juanxiu Liu , Xiaohui Du , Ruqian Hao , Yong Liu , Lin Liu","doi":"10.1016/j.neucom.2025.130491","DOIUrl":"10.1016/j.neucom.2025.130491","url":null,"abstract":"<div><div>Semi-supervised learning (SSL) has demonstrated significant potential in medical image segmentation, primarily through consistency regularization and pseudo-labeling. However, many SSL approaches focus predominantly on low-level consistency, neglecting the significance of pseudo-label reliability. Therefore, in this work, we propose an adversarial self-training consistency framework (AstMatch). First, we design an adversarial consistency regularization (ACR) approach to enhance knowledge transfer and strengthen prediction consistency under varying intensity perturbations. Second, we incorporate a feature matching loss within adversarial training to achieve high-level consistency regularization. Furthermore, we present the pyramid channel attention (PCA) and efficient channel and spatial attention (ECSA) modules to enhance the discriminator’s effectiveness. Finally, we propose an adaptive self-training (AST) approach to ensure high-quality pseudo-labels. The proposed AstMatch has been extensively evaluated with state-of-the-art SSL methods on three publicly available datasets. Experimental results across different labeled ratios demonstrate that AstMatch outperforms other existing methods, achieving new state-of-the-art performance. 
Our code is publicly available at <span><span>http://github.com/GuanghaoZhu663/AstMatch</span></span>.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"643 ","pages":"Article 130491"},"PeriodicalIF":5.5,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144084614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
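Pseudo-label quality control of the kind AstMatch's adaptive self-training targets is commonly implemented as confidence thresholding. The sketch below shows only that generic idea; the threshold value, tensor shapes, and ignore-index are assumptions, not the paper's AST procedure:

```python
import numpy as np

def filter_pseudo_labels(probs, threshold=0.9):
    """Keep per-pixel pseudo-labels only where the teacher's predicted
    class probability exceeds `threshold`; uncertain pixels get -1,
    an index the segmentation loss would ignore.
    probs: (H, W, C) softmax output."""
    labels = probs.argmax(axis=-1)
    confidence = probs.max(axis=-1)
    labels[confidence < threshold] = -1
    return labels

probs = np.array([[[0.97, 0.02, 0.01],   # confident -> class 0
                   [0.50, 0.30, 0.20]],  # uncertain -> ignored
                  [[0.05, 0.93, 0.02],   # confident -> class 1
                   [0.40, 0.35, 0.25]]]) # uncertain -> ignored
pseudo = filter_pseudo_labels(probs, threshold=0.9)
assert pseudo.tolist() == [[0, -1], [1, -1]]
```

The filtered map then supervises the student model on unlabeled images, which is the pseudo-labeling half of the consistency-plus-pseudo-label recipe the abstract describes.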
Neurocomputing | Pub Date: 2025-05-14 | DOI: 10.1016/j.neucom.2025.130419
Liguo Fei , Tao Li , Weiping Ding
{"title":"A state-of-the-art survey on neural computation-enhanced Dempster–Shafer theory for safety accidents: Applications, challenges, and future directions","authors":"Liguo Fei , Tao Li , Weiping Ding","doi":"10.1016/j.neucom.2025.130419","DOIUrl":"10.1016/j.neucom.2025.130419","url":null,"abstract":"<div><div>Frequent workplace accidents associated with rapid industrialization have become a global concern. Such accidents not only pose a serious threat to the safety and health of employees but can also incur substantial economic losses and reputational damage for companies. Therefore, determining how to effectively reduce workplace accidents and their likelihood has become an urgent problem. In the field of safety management, numerous scholars and practitioners are committed to researching and applying various theories and methods to improve workplace safety. Recently, the Dempster–Shafer theory (DST) has garnered attention as an important uncertainty reasoning method in the field of safety management. By integrating information from multiple sources, the theory can reason effectively with uncertain or incomplete information and provide strong support for safety management decision-making. More recently, neural computation techniques have been explored to enhance the reasoning capabilities of DST, enabling more efficient fusion and analysis of complex and high-dimensional safety data. In view of this, this study systematically reviews relevant papers on DST in the field of safety, accident, and emergency management, forming a systematic literature review (SLR). To support and guide this work, this paper proposes three research questions based on the 4R crisis management theoretical framework.
On the same theoretical basis, the selected literature is sorted and analyzed according to the four dimensions of <em>Reduction</em>, <em>Readiness</em>, <em>Response</em>, and <em>Recovery</em>, covering all aspects of these dimensions, with the results obtained through the SLR. The final results show that DST enhanced with neural computation plays a vital role in the field of safety, accident, and emergency management. This coupling reduces the limitations of DST, expanding its scope in the process. However, several limitations remain. Accordingly, this paper analyzes theoretical extensions of DST, its practical application dimensions, and its remaining defects in the field of safety, accident, and emergency management, and draws corresponding conclusions.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"643 ","pages":"Article 130419"},"PeriodicalIF":5.5,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144084483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
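The multi-source fusion at the heart of DST is Dempster's rule of combination, on which the surveyed applications build. A minimal sketch follows; the accident-cause hypotheses and mass values are invented purely for illustration:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments, each a dict mapping frozenset hypotheses to mass."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass assigned to disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Normalize by 1 - K, redistributing the conflicting mass.
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Two information sources assessing an accident cause.
m1 = {frozenset({"fatigue"}): 0.6, frozenset({"fatigue", "equipment"}): 0.4}
m2 = {frozenset({"fatigue"}): 0.5, frozenset({"equipment"}): 0.3,
      frozenset({"fatigue", "equipment"}): 0.2}
m = dempster_combine(m1, m2)
assert abs(sum(m.values()) - 1.0) < 1e-9
assert m[frozenset({"fatigue"})] > m[frozenset({"equipment"})]
```

The neural-computation enhancements the survey covers typically replace or learn the mass assignments feeding this rule, rather than the rule itself.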
Neurocomputing | Pub Date: 2025-05-13 | DOI: 10.1016/j.neucom.2025.130360
Xiaotong Zhai , Shu Li , Guoqiang Zhong , Tao Li , Fuchang Zhang , Rachid Hedjam
{"title":"Generative neural architecture search","authors":"Xiaotong Zhai , Shu Li , Guoqiang Zhong , Tao Li , Fuchang Zhang , Rachid Hedjam","doi":"10.1016/j.neucom.2025.130360","DOIUrl":"10.1016/j.neucom.2025.130360","url":null,"abstract":"<div><div>Neural architecture search (NAS) is an important approach for automatic neural architecture design and has been applied to many tasks, such as image classification and object detection. However, most conventional NAS algorithms mainly focus on reducing the prohibitive computational cost, while choosing commonly used reinforcement learning (RL), evolutionary algorithms (EA), or gradient-based methods as their search strategy. In this paper, we propose a novel search strategy for NAS, called Generative NAS (GNAS). Specifically, we assume that high-performing convolutional neural networks follow a latent distribution, and design a generator to learn this distribution for generating neural architectures. Furthermore, to update the generator for better learning of the latent distribution, we use policy gradient optimization, with the performance of the generated CNNs on the validation datasets serving as the reward signal. To evaluate GNAS, we have conducted extensive experiments on the CIFAR-10, SVHN, MNIST, Fashion-MNIST and ImageNet datasets. The results demonstrate the effectiveness of GNAS compared to previous NAS strategies.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"642 ","pages":"Article 130360"},"PeriodicalIF":5.5,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143946676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
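The reward-driven generator update can be illustrated with a policy-gradient toy problem. For determinism, this sketch uses the exact expectation form of the gradient rather than the sampled REINFORCE estimate a real NAS run would use, and the operation set and reward values are stand-ins for measured validation accuracy:

```python
import numpy as np

OPS = ["conv3x3", "conv5x5", "maxpool"]   # one architectural decision
rewards = np.array([0.9, 0.7, 0.5])       # stand-in validation accuracies

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.zeros(len(OPS))               # the generator's parameters
lr = 1.0
for _ in range(100):
    p = softmax(logits)
    # Exact policy gradient of expected reward, E[r * grad log p(op)]:
    # grad wrt logits = p * r - (p . r) * p
    logits += lr * (p * rewards - (p @ rewards) * p)

p = softmax(logits)
assert int(np.argmax(p)) == 0             # mass concentrates on the best op
```

In GNAS-style search, each decision of the generator would be such a distribution, and the reward would come from actually training and validating the sampled CNN.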
Neurocomputing | Pub Date: 2025-05-13 | DOI: 10.1016/j.neucom.2025.130369
Zhicun Zhang , Yu Han , Xiaoqi Xi , Linlin Zhu , Chunhui Wang , Siyu Tan , Lei Li , Bin Yan
{"title":"Beneficial and flowing: Omni efficient feature aggregation network for image super-resolution","authors":"Zhicun Zhang , Yu Han , Xiaoqi Xi , Linlin Zhu , Chunhui Wang , Siyu Tan , Lei Li , Bin Yan","doi":"10.1016/j.neucom.2025.130369","DOIUrl":"10.1016/j.neucom.2025.130369","url":null,"abstract":"<div><div>Image super-resolution (SR) is a classic low-level vision task that reconstructs high-resolution (HR) images from low-resolution (LR) ones. Recent Transformer-based SR methods have achieved remarkable performance by modeling long-range dependencies. However, two critical challenges remain in existing approaches: (1) ineffective feature interaction and homogeneous aggregation schemes; (2) blocked feature propagation through the network. These challenges imply that the potential of Self-Attention (SA) and the Transformer architecture is still not fully exploited. To this end, we propose a novel Transformer model, the Omni Efficient Aggregation Transformer (OEAT), which boosts SR performance by mining and aggregating efficient information across all dimensions and all stages. Specifically, we first design an Omni Efficient Aggregation Self-Attention (OEASA) with a local–global-channel feature interaction scheme to aggregate heterogeneous features from multiple scales and dimensions while facilitating information flow. In particular, we design a global semantic SA based on content self-similarity in the spatial dimension and an adaptive sparse channel SA in the channel dimension, efficiently gathering the most useful features for accurate reconstruction. Furthermore, we design a simple yet effective Omni Feature Fusion (OFF) to enhance feature fusion across global groups and adaptive inter-layer feature aggregation, thus introducing more critical information for accurate reconstruction.
Extensive experiments demonstrate that our OEAT outperforms recent state-of-the-art methods both quantitatively and qualitatively.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"643 ","pages":"Article 130369"},"PeriodicalIF":5.5,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144071101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
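Channel-dimension self-attention of the general kind OEASA's channel SA builds on transposes the usual attention axes, so the attention map is C x C rather than N x N, which is what keeps it cheap for large images. A bare-bones sketch with identity projections (all shapes, and the omission of learned Q/K/V weights and sparsity, are simplifying assumptions):

```python
import numpy as np

def softmax(z, axis):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_self_attention(x):
    """Self-attention over the channel axis.  x: (N, C) flattened
    spatial tokens with C channels; the attention map is (C, C),
    so cost scales with C^2 rather than N^2."""
    q, k, v = x, x, x                                      # identity projections
    attn = softmax(q.T @ k / np.sqrt(x.shape[0]), axis=-1)  # (C, C)
    return (attn @ v.T).T                                   # back to (N, C)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))   # 16 "pixels", 8 channels
y = channel_self_attention(x)
assert y.shape == x.shape
```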
Neurocomputing | Pub Date: 2025-05-13 | DOI: 10.1016/j.neucom.2025.130336
Wenjie Chen , Xuemei Xie
{"title":"Hierarchical language description knowledge base for LLM-based human pose estimation","authors":"Wenjie Chen , Xuemei Xie","doi":"10.1016/j.neucom.2025.130336","DOIUrl":"10.1016/j.neucom.2025.130336","url":null,"abstract":"<div><div>Language plays an important role in human communication and knowledge representation, and human cognition involves numerous top-down and bottom-up processes that rely on different levels of knowledge guidance. To align with human cognition, hierarchical language descriptions can be used in human pose estimation as different levels of knowledge guidance, which is lacking in existing studies. We propose HLanD-Pose, the Hierarchical Language Description Knowledge Base for Human Pose Estimation using the Large Language Model (LLM). It describes human posture from the whole to the components and models the poses within a scene to construct a hierarchical knowledge base. When the relevant knowledge is activated by visual information, the matched hierarchical language description of the current human pose can serve as a guide for performing the keypoint localization task. With the powerful reasoning and language comprehension abilities of large language models, human poses in images can be effectively understood, which helps to recognize and accurately locate the target keypoints. Experiments show the remarkable performance of our method on standard keypoint localization benchmarks. 
Moreover, the designed hierarchical language description and external knowledge base enhance the model’s ability to understand the human body in scene-specific datasets, demonstrating strong generalization capability in cross-dataset keypoint localization.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"642 ","pages":"Article 130336"},"PeriodicalIF":5.5,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143946675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurocomputing | Pub Date: 2025-05-13 | DOI: 10.1016/j.neucom.2025.130421
Wesley F. Maia , António M. Lopes , Sergio A. David
{"title":"Automatic sign language to text translation using MediaPipe and transformer architectures","authors":"Wesley F. Maia , António M. Lopes , Sergio A. David","doi":"10.1016/j.neucom.2025.130421","DOIUrl":"10.1016/j.neucom.2025.130421","url":null,"abstract":"<div><div>This study presents a transformer-based architecture for translating Sign Language to spoken language text using embeddings of body keypoints, with the mediation of glosses. To the best of our knowledge, this work is the first to successfully leverage body keypoints for Sign Language-to-text translation, achieving performance comparable to baseline models without loss of translation quality. Our approach introduces extensive augmentation techniques for body keypoints and convolutional keypoint embeddings, and integrates Connectionist Temporal Classification (CTC) loss and positional encoding for Sign2Gloss translation. For the Gloss2Text stage, we employ fine-tuning of BART, a state-of-the-art transformer model. Evaluation on the Phoenix14T dataset demonstrates that our integrated Sign2Gloss2Text model achieves competitive performance, with BLEU-4 scores that show marginal differences compared to baseline models using pixel embeddings. On the How2Sign dataset, which lacks gloss annotations, direct Sign2Text translation posed challenges, as reflected in lower BLEU-4 scores, highlighting the limitations of gloss-free approaches. This work addresses the narrow domain of the datasets and the unidirectional nature of the translation process while demonstrating the potential of body keypoints for Sign Language Translation.
Future work will focus on enhancing the model’s ability to capture nuanced and complex contexts, thereby advancing accessibility and assistive technologies for bridging communication between individuals with hearing impairments and the hearing community.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"642 ","pages":"Article 130421"},"PeriodicalIF":5.5,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144069584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
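Keypoint-based pipelines like this one typically normalize landmark coordinates before embedding them, so the model is invariant to signer position and camera zoom. A minimal sketch of root-centering plus scale normalization (the root index, shapes, and the normalization scheme itself are assumptions, not the paper's preprocessing):

```python
import numpy as np

def normalize_keypoints(kps, root=0):
    """Center a keypoint sequence on a root joint and scale it to unit
    RMS radius.  kps: (T, J, 2) = frames x joints x (x, y)."""
    centered = kps - kps[:, root:root + 1, :]
    scale = np.sqrt((centered ** 2).sum(axis=-1).mean()) + 1e-8
    return centered / scale

rng = np.random.default_rng(0)
kps = rng.standard_normal((4, 5, 2))
norm = normalize_keypoints(kps)
shifted_zoomed = normalize_keypoints(kps * 3.0 + 7.0)
# Translation and uniform scaling of the input leave the output unchanged.
assert np.allclose(norm, shifted_zoomed)
```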
Neurocomputing | Pub Date: 2025-05-13 | DOI: 10.1016/j.neucom.2025.130367
Lei Shi , Yumao Ma , Yongcai Tao , Haowen Liu , Lin Wei , Yucheng Shi , Yufei Gao
{"title":"Bridging modal gaps: A Cross-Modal Feature Complementation and Feature Projection Network for visible-infrared person re-identification","authors":"Lei Shi , Yumao Ma , Yongcai Tao , Haowen Liu , Lin Wei , Yucheng Shi , Yufei Gao","doi":"10.1016/j.neucom.2025.130367","DOIUrl":"10.1016/j.neucom.2025.130367","url":null,"abstract":"<div><div>Visible-infrared person re-identification (VI-ReID) presents a significant challenge due to the substantial modal differences between infrared (IR) and visible (VIS) images, primarily resulting from their distinct color distributions and textural characteristics. One effective strategy for reducing this modal gap is to utilize feature projection to create a shared embedded space for the modal features. However, a key research question remains: how to effectively align cross-modal features during projection while minimizing the loss of information. To address this challenge, this paper proposes a Cross-Modal Feature Complementation and Feature Projection Network (FCFPN). Specifically, a modal complementation strategy is introduced to bridge the discrepancies between cross-modal features and facilitate their alignment. Additionally, a cross-modal feature projection mechanism is employed to embed modality-correlated features into the shared feature space, thereby mitigating feature loss caused by modality differences. Furthermore, multi-channel and multi-level features are extracted from the shared space to enhance the overall feature representation.
Extensive experimental results demonstrate that the proposed FCFPN model effectively mitigates the modal discrepancy, achieving 84.7% Rank-1 accuracy and 86.9% mAP in the indoor test mode of the SYSU-MM01 dataset, and 93.0% Rank-1 accuracy and 87.3% mAP in the VIS-to-IR test mode of the RegDB dataset, thereby outperforming several state-of-the-art methods.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"642 ","pages":"Article 130367"},"PeriodicalIF":5.5,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144069535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
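The Rank-1 and mAP figures quoted above are the standard re-ID metrics, computed from a query-gallery distance matrix. A minimal sketch of both (the tiny matrix and identity labels are invented for illustration):

```python
import numpy as np

def rank1_and_map(dist, q_ids, g_ids):
    """Rank-1 accuracy and mean average precision for re-ID.
    dist: (Q, G) query-to-gallery distances; q_ids/g_ids: identity labels."""
    rank1, aps = 0, []
    for i in range(len(q_ids)):
        order = np.argsort(dist[i])                 # gallery sorted by distance
        matches = (g_ids[order] == q_ids[i])
        rank1 += matches[0]                         # correct at rank 1?
        hits = np.flatnonzero(matches)
        precisions = (np.arange(len(hits)) + 1) / (hits + 1)
        aps.append(precisions.mean())               # AP for this query
    return rank1 / len(q_ids), float(np.mean(aps))

dist = np.array([[0.1, 0.9, 0.5],
                 [0.8, 0.2, 0.6]])
q_ids = np.array([0, 1])
g_ids = np.array([0, 1, 0])
r1, mAP = rank1_and_map(dist, q_ids, g_ids)
assert r1 == 1.0
```

Real evaluations on SYSU-MM01 and RegDB additionally handle camera-ID filtering and repeated gallery sampling, which this sketch omits.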
Neurocomputing | Pub Date: 2025-05-13 | DOI: 10.1016/j.neucom.2025.130374
Dan Luo , Kangfeng Zheng , Chunhua Wu , Xiujuan Wang
{"title":"MSLoRA: Meta-learned scaling for adaptive fine-tuning of LoRA","authors":"Dan Luo , Kangfeng Zheng , Chunhua Wu , Xiujuan Wang","doi":"10.1016/j.neucom.2025.130374","DOIUrl":"10.1016/j.neucom.2025.130374","url":null,"abstract":"<div><div>Low-rank adaptation (LoRA) methods have demonstrated strong capabilities in efficiently fine-tuning large models. However, existing LoRA-based approaches typically require manually setting the scaling factor, a process that involves extensive search efforts to find optimal values. To address this challenge, we first develop data-driven heuristic methods that automatically determine layer-wise scaling factors through either activation pattern analysis during forward propagation or gradient behavior monitoring during backward updates. However, their performance in practical applications remains unsatisfactory. Building upon these theoretical foundations, we present MSLoRA, a novel framework that reformulates scaling factor determination as a dynamic optimization problem in parameter-efficient fine-tuning. Our approach innovatively models scaling factors as self-adaptive meta-parameters whose optimal values emerge organically through the interplay between transformer architecture hierarchies and task-specific learning objectives. Extensive experiments conducted across both natural language understanding and generative tasks reveal that MSLoRA consistently outperforms baseline models.
This highlights the effectiveness of MSLoRA’s dynamic, layer-specific adjustment mechanism in capturing the complex nature of task-specific activation patterns, making it a more robust and scalable solution for parameter-efficient fine-tuning of large models.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"643 ","pages":"Article 130374"},"PeriodicalIF":5.5,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144084546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
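The scaling factor MSLoRA learns sits in the standard LoRA forward pass, y = Wx + (alpha/r)BAx. A minimal sketch of that pass; the dimensions, rank, and alpha value are illustrative assumptions, and MSLoRA would treat alpha as a learnable per-layer meta-parameter rather than the constant used here:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                          # feature dimension, LoRA rank
W = rng.standard_normal((d, d))      # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))                 # B starts at zero: adapter initially inert

def lora_forward(x, alpha):
    """y = W x + (alpha / r) * B A x, with alpha/r the LoRA scaling."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d)
# With B = 0 the low-rank branch contributes nothing, regardless of alpha.
assert np.allclose(lora_forward(x, alpha=16.0), W @ x)
B = rng.standard_normal((d, r))      # after training, B is nonzero
assert not np.allclose(lora_forward(x, alpha=16.0), W @ x)
```

Because alpha multiplies the entire low-rank branch, a per-layer choice of alpha directly controls how strongly each layer's adapter perturbs the frozen model, which is why the manual setting MSLoRA replaces matters.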
Neurocomputing | Pub Date: 2025-05-13 | DOI: 10.1016/j.neucom.2025.130371
Haitao Xiao , Linkun Ma , Qinyao Li , Shuo Ma , Hongxuan Guo , Wenjie Wang , Harutoshi Ogai
{"title":"A novel adaptive weighted fusion network based on pixel level feature importance for two-stage 6D pose estimation","authors":"Haitao Xiao , Linkun Ma , Qinyao Li , Shuo Ma , Hongxuan Guo , Wenjie Wang , Harutoshi Ogai","doi":"10.1016/j.neucom.2025.130371","DOIUrl":"10.1016/j.neucom.2025.130371","url":null,"abstract":"<div><div>In intelligent industry, accurate recognition and localization of objects in an image is the basis for robots to perform autonomous and intelligent operations. With the rapid development and application of deep learning data fusion technology in pose estimation, existing 6D pose estimation methods have made many achievements. However, most existing methods are not accurate enough to cope with scenes with cluttered backgrounds, inconspicuous textures, and occluded objects. In addition, existing methods ignore the effect of instance segmentation accuracy on pose estimation accuracy. To address the above issues, this paper proposes a two-stage 6D pose estimation method based on an adaptive pixel-importance weighted fusion network with lightweight instance segmentation, named TAPWFusion. In the instance segmentation stage, a lightweight instance segmentation network based on multiscale attention and boundary constraints, named CVi-BC-YOLO, is proposed to improve segmentation accuracy and efficiency. In the pose estimation stage, to eliminate the interference of lighting and occlusion and enhance the accuracy of the pose estimation, we propose an adaptive pixel-importance weighted fusion network, named APWFusion, which adaptively evaluates the importance of RGB color and the geometrical information of the point cloud.
Experiments on the LineMOD, YCB-Video and T-LESS datasets demonstrate the effectiveness and superiority of our proposed method.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"642 ","pages":"Article 130371"},"PeriodicalIF":5.5,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143946677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
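Pixel-wise adaptive weighting of RGB and geometric features, the general idea behind APWFusion, can be sketched with softmax-normalized importance scores. Here the scores are supplied by hand, whereas the paper's network would predict them from the features themselves:

```python
import numpy as np

def softmax(z, axis):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_weighted_fusion(rgb_feat, geo_feat, w_rgb, w_geo):
    """Fuse per-pixel RGB and point-cloud features with per-pixel
    importance weights.  *_feat: (N, C); w_*: (N,) raw scores."""
    w = softmax(np.stack([w_rgb, w_geo]), axis=0)  # (2, N), sums to 1 per pixel
    return w[0, :, None] * rgb_feat + w[1, :, None] * geo_feat

rgb = np.ones((3, 4))                  # toy features: RGB branch outputs 1s
geo = np.zeros((3, 4))                 # geometry branch outputs 0s
w_rgb = np.array([10.0, 0.0, -10.0])   # pixel 0 trusts RGB, pixel 2 geometry
w_geo = np.array([-10.0, 0.0, 10.0])
fused = adaptive_weighted_fusion(rgb, geo, w_rgb, w_geo)
assert fused[0, 0] > 0.99              # pixel 0 dominated by RGB
assert fused[2, 0] < 0.01              # pixel 2 dominated by geometry
```

The point of per-pixel (rather than global) weights is exactly the failure modes the abstract lists: a glare-washed pixel can lean on geometry while a well-lit textured pixel leans on color.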