Complex & Intelligent Systems: Latest Articles

Self-attention-based graph transformation learning for anomaly detection in multivariate time series
IF 5.8 · Q2 · Computer Science
Complex & Intelligent Systems Pub Date : 2025-03-17 DOI: 10.1007/s40747-025-01839-3
Qiushi Wang, Yueming Zhu, Zhicheng Sun, Dong Li, Yunbin Ma
Abstract: Multivariate time series anomaly detection has wide applications in fields such as finance, power, and industry. Recently, Graph Neural Networks (GNNs) have achieved great success in this task due to their powerful ability to model multivariate relationships. However, most existing methods employ shallow networks with only two layers, which restricts the range of node information transfer and limits the receptive field. In this paper, we propose a self-attention-based graph transformation learning (AT-GTL) method to address this problem. AT-GTL uses a global self-attention graph pooling (GATP) module to aggregate all node features into global features. A graph transformation learning pipeline is then built on neural transformation learning, and a triplet contrastive loss (TCL) optimizes the global feature-extraction networks using latent features from multiple viewpoints. Extensive experiments on three real-world datasets demonstrate that our method effectively aggregates global graph features and detects anomalies, providing a new transformation-learning solution for multivariate time series anomaly detection.

Citations: 0
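The abstract does not spell out the GATP module's internals; as a rough illustration of attention-weighted global pooling over node features (function and variable names are my own, not the paper's), a single learnable query vector can score each node and produce one global feature:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_global_pool(node_feats, w):
    """Aggregate all node features into one global vector via
    attention scores computed against a learnable query vector w."""
    scores = softmax(node_feats @ w)   # (N,) one attention weight per node
    return scores @ node_feats         # (D,) attention-weighted sum

# toy graph: 4 nodes, 3-dim features
X = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 1.]])
w = np.zeros(3)                        # all-zero query -> uniform attention -> mean pooling
g = attention_global_pool(X, w)
print(g)                               # [0.5 0.5 0.5]
```

With a trained, non-zero query the pooling would emphasize the nodes most relevant to the anomaly signal rather than averaging uniformly.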
A lightweight vision transformer with weighted global average pooling: implications for IoMT applications
IF 5.8 · Q2 · Computer Science
Complex & Intelligent Systems Pub Date : 2025-03-17 DOI: 10.1007/s40747-025-01842-8
Huiyao Dong, Igor Kotenko, Shimin Dong
Abstract: Vision Transformers (ViTs) have garnered significant interest for analysing medical images in Internet of Medical Things (IoMT) systems due to their ability to capture global context. However, deploying ViTs in resource-constrained IoMT environments requires adapting these computationally intensive models to device limitations while maintaining efficiency. To tackle this issue, we introduce LightAMViT, a lightweight attention-mechanism-enhanced ViT that incorporates K-means clustering layers to reduce the computational complexity of the self-attention matrix, along with an optimized global average pooling layer that leverages all stacked attention block outputs, each weighted by learnable parameters. It also employs an adaptive learning strategy that speeds convergence by dynamically adjusting the learning rate. We evaluate the proposed technique on two medical image datasets, BUSI and ISIC2020. Our model outperforms conventional CNNs and is competitive with the original ViTs, improving both accuracy and computational efficiency. These findings indicate the model's robustness and generalisation across medical image analysis tasks, enhancing the applicability of ViTs in resource-limited IoMT devices.

Citations: 0
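The weighted pooling idea, combining globally averaged features from every attention block with learnable per-block weights, can be sketched as follows (a minimal assumption-laden illustration, not the paper's implementation; here the weights are softmax-normalised):

```python
import numpy as np

def weighted_global_average_pool(block_outputs, alphas):
    """Combine the globally averaged features of every attention block,
    each weighted by a learnable scalar (softmax-normalised here)."""
    pooled = np.stack([b.mean(axis=0) for b in block_outputs])  # (L, D) one row per block
    w = np.exp(alphas - alphas.max())
    w = w / w.sum()
    return w @ pooled                                            # (D,) weighted combination

# two toy blocks, each with 2 tokens of 2-dim features
blocks = [np.array([[2., 0.], [0., 2.]]),   # block mean -> [1, 1]
          np.array([[4., 4.], [0., 0.]])]   # block mean -> [2, 2]
feat = weighted_global_average_pool(blocks, np.zeros(2))  # zero alphas -> equal weights
print(feat)                                 # [1.5 1.5]
```

In contrast, a conventional ViT head would use only the final block; letting the alphas be trained decides how much each depth contributes.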
TransRNetFuse: a highly accurate and precise boundary FCN-transformer feature integration for medical image segmentation
IF 5.8 · Q2 · Computer Science
Complex & Intelligent Systems Pub Date : 2025-03-17 DOI: 10.1007/s40747-025-01847-3
Baotian Li, Jing Zhou, Fangfang Gou, Jia Wu
Abstract: Imaging examinations are integral to the diagnosis and treatment of cancer. Nevertheless, the intricate nature of medical images frequently forces physicians through time-consuming and potentially fallible diagnostic procedures. In response, deep learning-based image segmentation has emerged as a potent instrument for extracting pivotal information from large sets of medical images. Nonetheless, most existing models prioritize overall accuracy, often overlooking sensitivity to local salient features and the precision of segmentation boundaries, which limits their practical utility in clinical settings. This study introduces a pathological image segmentation method, TransRNetFuse, which incorporates stepwise feature aggregation and a residual fully convolutional network architecture to address local key-feature extraction and accurate boundary delineation. The model merges a fully convolutional network branch with a Transformer branch and uses residual blocks with dense U-Net skip connections. It prevents attentional dispersion by emphasizing local features, and employs an automatic augmentation strategy to find the optimal data augmentation scheme, which is particularly advantageous for small-sample datasets. The paper further introduces an edge enhancement loss function to increase the model's sensitivity to tumor boundaries. A dataset of 2164 pathological images, provided by Hunan Medical University General Hospital, was used for training. Experimental results indicate that the proposed method outperforms existing techniques such as MedT in both accuracy and edge precision, demonstrating significant potential for medical application. Code: https://github.com/GFF1228/-TransRNetFuse.git

Citations: 0
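The edge enhancement loss is not defined in the abstract; one common formulation of the idea (an assumption on my part, not the paper's definition) up-weights per-pixel cross-entropy on the target mask's boundary pixels:

```python
import numpy as np

def edge_map(mask):
    """Binary boundary map: pixels where the label changes along rows or columns."""
    m = mask.astype(int)
    dy = np.abs(np.diff(m, axis=0, prepend=m[:1, :]))
    dx = np.abs(np.diff(m, axis=1, prepend=m[:, :1]))
    return ((dx + dy) > 0).astype(float)

def edge_enhanced_loss(pred, target, w_edge=2.0, eps=1e-7):
    """Per-pixel binary cross-entropy, up-weighted on target boundary pixels."""
    weights = 1.0 + w_edge * edge_map(target)
    p = np.clip(pred, eps, 1.0 - eps)
    bce = -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))
    return float((weights * bce).mean())

target = np.array([[0., 0., 1.],
                   [0., 1., 1.]])
good = edge_enhanced_loss(np.where(target > 0, 0.9, 0.1), target)
bad  = edge_enhanced_loss(np.where(target > 0, 0.1, 0.9), target)
print(good < bad)   # the boundary-weighted loss favours the accurate prediction
```

Scaling `w_edge` trades off overall region accuracy against boundary sharpness, which is the sensitivity the paper targets.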
A bi-objective optimization approach for scheduling electric ground-handling vehicles in an airport
IF 5.8 · Q2 · Computer Science
Complex & Intelligent Systems Pub Date : 2025-03-13 DOI: 10.1007/s40747-025-01815-x
Weigang Fu, Jiawei Li, Zhe Liao, Yaoming Fu
Abstract: To reduce airport operating costs and minimize environmental pollution, converting ground-handling vehicles from fuel-powered to electric is inevitable. However, this transition complicates scheduling by introducing additional factors such as battery capacities and charging requirements. This study models the electric ground-handling vehicle scheduling problem as a bi-objective integer program whose objectives are to minimize the total travel distance of vehicles serving flights and the standard deviation of each vehicle's total occupancy time. To solve this model and generate optimal scheduling solutions, the study combines the non-dominated sorting genetic algorithm II (NSGA-II) with the large neighborhood search (LNS) algorithm, proposing a novel NSGA2-LNS algorithm. A dynamic priority method constructs the initial population, speeding up convergence, and the LNS component compensates for the lack of clear search direction that metaheuristics often exhibit. In addition, a correlation-based destruction operator and a priority-based repair operator significantly enhance the algorithm's ability to find optimal solutions for this problem. The algorithm is verified on flight data from Chengdu Shuangliu International Airport and compared with manual scheduling methods and traditional multi-objective optimization algorithms. Experimental results demonstrate that NSGA2-LNS can rapidly allocate electric ground-handling vehicles to hundreds of flights and produce high-quality scheduling solutions.

Citations: 0
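At the heart of NSGA-II is non-dominated sorting of the bi-objective values. A minimal sketch of extracting the first Pareto front (illustrative only; the objective values below are invented, standing in for total distance and occupancy-time standard deviation):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset: NSGA-II's first front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# toy bi-objective values: (total travel distance, occupancy-time std-dev)
sols = [(10.0, 3.0), (12.0, 1.0), (11.0, 2.0), (13.0, 3.5)]
print(pareto_front(sols))   # [(10.0, 3.0), (12.0, 1.0), (11.0, 2.0)]
```

(13.0, 3.5) is dominated by (10.0, 3.0) and dropped; the remaining three trade one objective against the other, which is exactly the set of schedules NSGA2-LNS would present to a dispatcher.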
Discriminator guided visible-to-infrared image translation
IF 5.8 · Q2 · Computer Science
Complex & Intelligent Systems Pub Date : 2025-03-13 DOI: 10.1007/s40747-025-01827-7
Decao Ma, Juan Su, Yong Xian, Shaopeng Li
Abstract: This paper proposes a discriminator-guided visible-to-infrared image translation algorithm based on a generative adversarial network and designs a multi-scale fusion generative network. The generator sharpens its perception of fine-grained image features by fusing features of different scales along the channel dimension, while the discriminator performs an infrared image reconstruction task that supplies additional infrared information for training the generator. Soft labels produced through knowledge distillation guide training and improve the generator's convergence efficiency. Experimental results show that, compared with typical existing infrared image generation algorithms, the proposed method generates higher-quality infrared images, performs better in both subjective visual assessment and objective metric evaluation, and achieves better results on the downstream template matching and image fusion tasks.

Citations: 0
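The soft labels mentioned here come from knowledge distillation, where a teacher's logits are softened with a temperature before being used as targets. A generic sketch of that softening (standard distillation practice, not this paper's specific pipeline):

```python
import numpy as np

def soft_labels(logits, T=4.0):
    """Temperature-softened softmax distribution used as distillation targets."""
    z = logits / T
    e = np.exp(z - z.max())
    return e / e.sum()

teacher_logits = np.array([6.0, 2.0, 0.0])
hard = soft_labels(teacher_logits, T=1.0)   # close to one-hot
soft = soft_labels(teacher_logits, T=4.0)   # spreads probability over the other classes
print(hard.max() > soft.max())              # True: a higher temperature flattens the targets
```

The flattened distribution carries relative-similarity information that a one-hot label discards, which is what gives the student generator a richer training signal.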
Target pursuit for multi-AUV system: zero-sum stochastic game with WoLF-PHC assisted
IF 5.8 · Q2 · Computer Science
Complex & Intelligent Systems Pub Date : 2025-03-13 DOI: 10.1007/s40747-025-01788-x
Le Hong, Weicheng Cui
Abstract: Owing to the complexity of the underwater environment and the difficulty of underwater energy recharging, using multiple autonomous underwater vehicles (AUVs) to pursue an invading vehicle is a challenging task. This paper devises rational, energy-efficient pursuit motion for a multi-AUV system in an unknown three-dimensional environment. First, the pursuit system is modeled in a two-player zero-sum stochastic game (ZSSG) framework, which enables fictitious play on the behaviors of the invading AUV: players update their strategies by observing and inferring the actions of others under incomplete information. Under this framework, the pursuit system adopts a relay-pursuit mechanism to form the action set in an energy-efficient way. Then, to reflect the goals of capturing the invading vehicle as soon as possible and preventing it from reaching its point of attack, two corresponding pursuit factors are incorporated into the designed reward function. To enable the pursuing AUVs to navigate an unknown environment, the WoLF-PHC algorithm is introduced and applied to the proposed ZSSG-based model. Finally, simulations demonstrate the effectiveness, advantages, and robustness of the proposed approach.

Citations: 0
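WoLF-PHC ("Win or Learn Fast" policy hill-climbing) augments Q-learning with a policy that climbs toward the greedy action using a small step when winning and a larger step when losing. A generic single-step sketch (textbook-style, an assumption about the standard algorithm, not the paper's AUV-specific code):

```python
import numpy as np

def wolf_phc_step(Q, pi, avg_pi, counts, s, a, r, s_next,
                  alpha=0.1, gamma=0.9, d_win=0.01, d_lose=0.04):
    """One WoLF-PHC update: Q-learning, average-policy tracking, then
    policy hill-climbing with a 'win or learn fast' step size."""
    n_actions = Q.shape[1]
    # 1) ordinary Q-learning update
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    # 2) incremental estimate of the average policy for state s
    counts[s] += 1
    avg_pi[s] += (pi[s] - avg_pi[s]) / counts[s]
    # 3) 'winning' when the current policy beats the average policy in expectation
    delta = d_win if pi[s] @ Q[s] > avg_pi[s] @ Q[s] else d_lose
    # 4) shift probability mass toward the greedy action, stay on the simplex
    best = Q[s].argmax()
    pi[s] -= delta / (n_actions - 1)
    pi[s, best] += delta / (n_actions - 1) + delta
    pi[s] = np.clip(pi[s], 0.0, None)
    pi[s] /= pi[s].sum()

# toy run: 2 states, 2 actions; action 0 in state 0 is rewarded
Q = np.zeros((2, 2))
pi = np.full((2, 2), 0.5)
avg_pi = np.full((2, 2), 0.5)
counts = np.zeros(2, dtype=int)
for _ in range(50):
    wolf_phc_step(Q, pi, avg_pi, counts, s=0, a=0, r=1.0, s_next=1)
print(pi[0, 0] > 0.9)   # the policy concentrates on the rewarded action
```

The asymmetric step sizes (learn fast when losing, cautiously when winning) are what give WoLF-PHC its convergence behaviour in games against an adapting opponent such as the invading AUV.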
Modelling object mask interaction for compositional action recognition
IF 5.8 · Q2 · Computer Science
Complex & Intelligent Systems Pub Date : 2025-03-10 DOI: 10.1007/s40747-025-01823-x
Xinya Li, Zhongwei Shen, Benlian Xu, Rongchang Li, Mingli Lu, Jinliang Cong, Longxin Zhang
Abstract: Human actions can be abstracted as interactions between humans and objects. The recently proposed task of compositional action recognition emphasizes the independence and combinability of the verbs (actions) and nouns (humans or objects) that constitute human actions. Nonetheless, most traditional appearance-based action recognition methods extract spatial-temporal features from input videos concurrently, relying excessively on overall appearance features and lacking precise modelling of interactions between objects, often neglecting the actions themselves. Consequently, appearance biases prevent such models from generalizing to unseen combinations of actions and objects. To address this issue, we propose a method that explicitly models the object interaction path to capture interactions between humans and objects. Because these interactions are unaffected by object or environmental appearance bias, they provide additional clues for appearance-based action recognition methods. Our method can easily be combined with any appearance-based visual encoder, significantly improving the compositional generalization ability of action recognition algorithms. Extensive experimental results on the Something-Else and IKEA-Assembly datasets demonstrate the effectiveness of our approach.

Citations: 0
Mamba meets tracker: exploiting token aggregation and diffusion for robust unmanned aerial vehicles tracking
IF 5.8 · Q2 · Computer Science
Complex & Intelligent Systems Pub Date : 2025-03-10 DOI: 10.1007/s40747-025-01821-z
Guocai Du, Peiyong Zhou, Nurbiya Yadikar, Alimjan Aysa, Kurban Ubul
Abstract: Transformer-based approaches achieve excellent results in unmanned aerial vehicle (UAV) tracking tasks. However, existing tracking frameworks usually handle visual grounding and visual tracking separately. This independent design ignores the correlation between the two steps, namely that natural language descriptions can provide global semantic information, and it also prevents end-to-end training. As a remedy, we propose a joint natural-language Mamba-based tracking framework (TADMT). Specifically, we propose a token aggregator that condenses rich features into a small number of visual tokens through a coarse-to-fine strategy to improve subsequent tracking speed. We then design a Mamba module based on a serpentine scanning strategy to effectively relate natural language to visual images. In addition, a novel shift-add multilayer perceptron in the prediction head performs the final classification and localization with less computation. Extensive experiments show that TADMT achieves good tracking performance on six UAV tracking datasets and three general tracking datasets, at an average speed of 120 FPS. Results on an embedded platform further demonstrate TADMT's applicability to UAV platforms.

Citations: 0
AILDP: a research on ship number recognition technology for complex scenarios
IF 5.8 · Q2 · Computer Science
Complex & Intelligent Systems Pub Date : 2025-03-10 DOI: 10.1007/s40747-025-01820-0
Tianjiao Wei, Zhuhua Hu, Yaochi Zhao, Xiyu Fan
Abstract: With the rapid growth of global maritime trade and the increasingly urgent need for maritime surveillance and security management, fast and accurate identification of vessels has become crucial. Ship number recognition faces two main challenges. First, the ship number may appear on different parts of the hull, and because of shooting distance its size varies greatly across vessels, complicating automated recognition. Second, adverse weather and complex sea-surface environments can reduce the accuracy of visual recognition. To address these issues, we build a private dataset of 2436 ship images covering a variety of scenarios and propose AILDP, an algorithm for interactive feature learning and adaptive enhancement. In the detection phase, a module combining feature interaction and learned position encoding (AIFI_LPE) handles the varying size and position of ship numbers. To deal with blurring and occlusion caused by ship movement or bad weather, a module (C2f_IRMB_DRB) captures high-quality features while balancing computational cost when processing low-quality images. After detection, results are divided into clear and low-quality ship numbers; to save computational resources, only the low-quality images undergo preliminary enhancement. In the recognition stage, built on the PaddleOCRv4 framework, a Thin Plate Spline (TPS) transform combined with a feature extraction and enhancement module adjusts the spatial features of the images so that both types of ship number images can be accurately processed. Experimental results show that AILDP improves ship number recognition accuracy, raising detection precision, recall, and mAP@0.5 to 95.7%, 94.5%, and 94.8%, with a character accuracy of 95.23%.

Citations: 0
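For readers unfamiliar with the reported detection metrics, precision and recall follow directly from the true-positive, false-positive, and false-negative counts. A sketch with invented counts chosen only to mirror the reported 95.7% precision and 94.5% recall (not the paper's actual data):

```python
def precision_recall(tp, fp, fn):
    """Detection precision and recall from true/false positives and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# hypothetical counts: 957 correct detections, 43 false alarms, 56 misses
p, r = precision_recall(957, 43, 56)
print(round(p, 3), round(r, 3))   # 0.957 0.945
```

mAP@0.5 extends this by averaging precision over recall levels at an IoU threshold of 0.5.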
Decoupled pixel-wise correction for abdominal multi-organ segmentation
IF 5.8 · Q2 · Computer Science
Complex & Intelligent Systems Pub Date : 2025-03-08 DOI: 10.1007/s40747-025-01796-x
Xiangchun Yu, Longjun Ding, Dingwen Zhang, Jianqing Wu, Miaomiao Liang, Jian Zheng, Wei Pang
Abstract: The attention mechanism has become a crucial component in medical image segmentation. Attention-based deep neural networks (ADNNs) fundamentally perform iterative gradient computation for both input layers and weight parameters. Our research reveals a remarkable similarity between the optimization trajectory of ADNNs and non-negative matrix factorization (NMF), which alternately adjusts the basis and coefficient matrices. This similarity implies that the alternating optimization strategy, in which the attention mechanism adjusts input features while training adjusts network weights, is central to the efficacy of attention in ADNNs. Drawing on the NMF analogy, we advocate pixel-wise adjustment of the input layer within ADNNs. To reduce the computational burden, we develop a decoupled pixel-wise attention module (DPAM) and a self-attention module (DPSM), designed to counteract the high inter-class similarity among organs in multi-organ segmentation. Integrating DPAM and DPSM into traditional architectures yields an NMF-inspired ADNN framework, DPC-Net, in two variants: DPCA-Net (attention) and DPCS-Net (self-attention). Extensive experiments on the Synapse and FLARE22 datasets show that DPC-Net achieves satisfactory performance and visualization results at lower computational cost: a Dice score of 77.98% on Synapse and 87.04% on FLARE22, with only 14.991 million parameters. Notably, DPC-Net with convolutional attention surpasses networks using Transformer attention mechanisms on multi-organ segmentation tasks. Code: https://github.com/605671435/DPC-Net

Citations: 0
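The NMF analogy the authors draw rests on NMF's alternating update of basis and coefficient matrices. The classic Lee-Seung multiplicative updates illustrate that alternation (a standard NMF sketch, not the authors' analysis):

```python
import numpy as np

def nmf(V, k, iters=500, eps=1e-9, seed=0):
    """Factorise V ~ W @ H (all non-negative) by alternating the
    Lee-Seung multiplicative updates for the two factors."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # fix the basis, adjust coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # fix coefficients, adjust the basis
    return W, H

V = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])        # a rank-1 non-negative matrix
W, H = nmf(V, k=1)
err = np.linalg.norm(V - W @ H)
print(err < 1e-2)               # the alternating updates drive the residual down
```

In the paper's analogy, adjusting `H` with `W` fixed corresponds to the attention mechanism correcting input features, while adjusting `W` corresponds to updating network weights.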