MRCM-UCTransNet: Automatic and Accurate 3D Tooth Segmentation Network From Cone-Beam CT Images
Xinyang Wen, Zhuoxuan Liu, Yanbo Chu, Min Le, Liang Li
International Journal of Imaging Systems and Technology, 34(4). DOI: 10.1002/ima.23139. Published 2024-07-10.

Many scenarios in dental clinical diagnosis and treatment require the segmentation and identification of a specific tooth or the entire dentition in cone-beam computed tomography (CBCT) images, yet traditional segmentation methods struggle to ensure accuracy. In recent years, deep-learning-based segmentation algorithms have made significant progress and attracted considerable attention. Inspired by existing networks such as UCTransNet and DC-Unet, this study proposes MRCM-UCTransNet for accurate three-dimensional tooth segmentation from CBCT images. To enhance feature extraction while preserving the multi-head attention mechanism, a multi-scale residual convolution module (MRCM) is integrated into the UCTransNet architecture. Comparative experiments indicate that, for a fixed image size and a small data volume, the proposed method offers clear advantages in segmentation accuracy and precision: compared with traditional Unet approaches, the Dice score of MRCM-UCTransNet improves by 7% and its sensitivity by about 10%. The proposed algorithm integrates recent architectural advances in the Unet family and achieves effective segmentation of six tooth types within the dentition. It also proved efficient for segmentation on small datasets, requiring less training time and fewer parameters.
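The abstract names the multi-scale residual convolution module (MRCM) but gives no implementation details. As a rough illustration of the general idea, not the authors' code, the following NumPy sketch applies parallel convolutions (cross-correlation form) at several kernel sizes, each with a ReLU, and combines them with an identity shortcut; the function names and the single-channel simplification are assumptions.

```python
import numpy as np

def conv2d(x, k):
    """'Same'-padded single-channel 2D cross-correlation with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def mrcm(x, kernels):
    """Multi-scale residual convolution sketch: parallel convolutions at
    several kernel sizes, ReLU per branch, summed with a residual shortcut."""
    branches = [np.maximum(conv2d(x, k), 0.0) for k in kernels]
    return x + sum(branches)
```

The residual shortcut means the block can fall back to the identity when every branch outputs zero, which is what typically makes such blocks easy to train.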
{"title":"A Multi-Fusion Residual Attention U-Net Using Temporal Information for Segmentation of Left Ventricular Structures in 2D Echocardiographic Videos","authors":"Kai Wang, Hirotaka Hachiya, Haiyuan Wu","doi":"10.1002/ima.23141","DOIUrl":"10.1002/ima.23141","url":null,"abstract":"<div>\u0000 \u0000 <p>The interpretation of cardiac function using echocardiography requires a high level of diagnostic proficiency and years of experience. This study proposes a multi-fusion residual attention U-Net, MURAU-Net, to construct automatic segmentation for evaluating cardiac function from echocardiographic video. MURAU-Net has two benefits: (1) Multi-fusion network to strengthen the links between spatial features. (2) Inter-frame links can be established to augment the temporal coherence of sequential image data, thereby enhancing its continuity. To evaluate the effectiveness of the proposed method, we performed nine-fold cross-validation using CAMUS dataset. Among state-of-the-art methods, MURAU-Net achieves highly competitive score, for example, Dice similarity of 0.952 (ED phase) and 0.931 (ES phase) in <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <msub>\u0000 <mi>LV</mi>\u0000 <mtext>Endo</mtext>\u0000 </msub>\u0000 </mrow>\u0000 <annotation>$$ {mathrm{LV}}_{mathrm{Endo}} $$</annotation>\u0000 </semantics></math>, 0.966 (ED phase) and 0.957 (ES phase) in <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <msub>\u0000 <mi>LV</mi>\u0000 <mi>Epi</mi>\u0000 </msub>\u0000 </mrow>\u0000 <annotation>$$ {mathrm{LV}}_{mathrm{Epi}} $$</annotation>\u0000 </semantics></math>, and 0.901 (ED phase) and 0.917 (ES phase) in <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mi>LA</mi>\u0000 </mrow>\u0000 <annotation>$$ mathrm{LA} $$</annotation>\u0000 </semantics></math>, respectively. It also achieved the Dice similarity of 0.9313 in the EchoNet-Dynamic dataset for the overall left ventricle segmentation. 
In addition, we show MURAU-Net can accurately segment multiclass cardiac ultrasound videos and output the animation of segmentation results using the original two-chamber cardiac ultrasound dataset MUCO.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141572940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
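The abstract does not specify how the "inter-frame links" are built. One hypothetical stand-in for propagating temporal context through per-frame feature maps is an exponentially weighted running fusion, sketched below in NumPy; the function name and the mixing scheme are illustrative assumptions, not MURAU-Net's actual mechanism.

```python
import numpy as np

def temporal_fuse(frames, alpha=0.7):
    """Exponentially weighted fusion of per-frame feature maps: each fused
    map mixes the current frame with the running temporal context, so
    consecutive outputs change smoothly even when single frames are noisy."""
    fused, context = [], frames[0]
    for f in frames:
        context = alpha * f + (1 - alpha) * context
        fused.append(context)
    return fused
```

A scheme like this trades responsiveness (high `alpha`) against temporal smoothness (low `alpha`), which is exactly the continuity property the abstract highlights.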
{"title":"CFIFusion: Dual-Branch Complementary Feature Injection Network for Medical Image Fusion","authors":"Yiyuan Xie, Lei Yu, Cheng Ding","doi":"10.1002/ima.23144","DOIUrl":"https://doi.org/10.1002/ima.23144","url":null,"abstract":"<div>\u0000 \u0000 <p>The goal of fusing medical images is to integrate the diverse information that multimodal medical images hold. However, the challenges lie in the limitations of imaging sensors and the issue of incomplete modal information retention, which make it difficult to produce images encompassing both functional and anatomical information. To overcome these obstacles, several medical image fusion techniques based on CNN or transformer architectures have been presented. Nevertheless, CNN technique struggles to establish extensive dependencies between the fused and source images, and transformer architecture often overlooks shallow complementary features. To augment both the feature extraction capacity and the stability of the model, we introduce a framework, called dual-branch complementary feature injection fusion (CFIFusion) technique, a for multimodal medical image fusion framework that combines unsupervised models of CNN model and transformer techniques. Specifically, in our framework, the entire source image and segmented source image are input into an adaptive backbone network to learn global and local features, respectively. To further retain the source images' complementary information, we design a multi-scale complementary feature extraction framework as an auxiliary module, focusing on calculating feature differences at each level to capture the shallow complementary information. Then, we design a shallow information preservation module tailored for sliced image characteristics. 
Experimental results on the Harvard whole brain atlas dataset demonstrate that CFIFusion shows greater benefits than recent state-of-the-art algorithms in terms of both subjective and objective evaluations.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141565791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
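The abstract says the auxiliary module "computes feature differences at each level" and injects the result back. A minimal NumPy sketch of that idea, under the assumption that complementary information is modeled as the per-level absolute difference between the two modalities' feature maps (the function names and the additive injection with gain `gamma` are illustrative, not the paper's exact design):

```python
import numpy as np

def complementary_features(feats_a, feats_b):
    """Per-level complementary maps: absolute difference between the
    corresponding feature maps of two modalities at each pyramid level."""
    return [np.abs(a - b) for a, b in zip(feats_a, feats_b)]

def inject(fused, comps, gamma=0.5):
    """Additively inject the complementary maps into the fused features,
    scaled by a gain gamma."""
    return [f + gamma * c for f, c in zip(fused, comps)]
```

Where one modality carries structure the other lacks, the difference map is large, so the injection re-emphasizes exactly the information a naive fusion would wash out.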
{"title":"A Hybrid Deep Learning Framework Using Scaling-Basis Chirplet Transform for Motor Imagery EEG Recognition in Brain–Computer Interface Applications","authors":"Manvir Kaur, Rahul Upadhyay, Vinay Kumar","doi":"10.1002/ima.23127","DOIUrl":"https://doi.org/10.1002/ima.23127","url":null,"abstract":"<div>\u0000 \u0000 <p>The emerging field of brain–computer interface has significantly facilitated the analysis of electroencephalogram signals required for motor imagery classification tasks. However, the accuracy of EEG classification models has been restricted by the low signal-to-noise ratio, nonlinear nature of brain signals, and a lack of sufficient EEG data for training. To address these challenges, this study proposes a new approach that combines time-frequency analysis with a hybrid parallel–series attention-based deep learning network for EEG signal classification. The proposed framework comprises three main elements: first, a scaling-basis chirplet transform designed to effectively capture the characteristics of nonstationary EEG signals; second, a hybrid parallel–series attention-based deep learning network to extract features. The serial information flow continuously expands the receptive fields of output neurons, whereas parallel information flow extracts features based on different regions. Finally, machine learning classifiers are utilized to predict the corresponding motor imagery state. The developed EEG-based motor imagery classification framework is assessed by two open-source datasets, BCI competition III, dataset IIIa and BCI competition IV, dataset IIa and has achieved the average classification accuracy of 95.55% on BCI competition III, dataset IIIa and 90.18% on BCI competition IV, dataset IIa. 
The experimental findings illustrate that this study has attained promising motor imagery discrimination performance, surpassing existing techniques in terms of classification accuracy and kappa coefficient.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141536618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
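The scaling-basis variant used in the paper is not specified here, but the building block of any chirplet transform is the Gaussian-windowed linear chirplet atom: a Gaussian envelope whose instantaneous frequency sweeps linearly with chirp rate `c`. The NumPy sketch below computes one atom and the projection of a signal onto it (one time-frequency coefficient); parameter names and defaults are illustrative.

```python
import numpy as np

def chirplet(t, t0=0.0, f0=5.0, c=2.0, sigma=0.1):
    """Gaussian-windowed linear chirplet atom centered at time t0:
    instantaneous frequency f0 + c*(t - t0), envelope width sigma."""
    tau = t - t0
    return (np.exp(-0.5 * (tau / sigma) ** 2)
            * np.exp(1j * 2 * np.pi * (f0 * tau + 0.5 * c * tau ** 2)))

def chirplet_coeff(signal, t, **kw):
    """Inner product of the signal with one chirplet atom, approximating
    the integral with the sample spacing (one TF-plane coefficient)."""
    atom = chirplet(t, **kw)
    return np.vdot(atom, signal) * (t[1] - t[0])
```

Sweeping `t0`, `f0`, and `c` over a grid of atoms yields a chirplet-based time-frequency representation; a scaling-basis scheme additionally adapts the atom parameters to the signal.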
Res-MulFra: Multilevel and Multiscale Framework for Brain Tumor Segmentation
Dan Huang, Luyi Qiu, Zifeng Liu, Yi Ding, Mingsheng Cao
International Journal of Imaging Systems and Technology, 34(4). DOI: 10.1002/ima.23135. Published 2024-07-04.

Extracting brain tumors from magnetic resonance images (MRI) is very important in clinical diagnosis and surgical planning. Nevertheless, given the high variability and imbalance of brain tumor datasets, designing a deep neural network that segments tumors accurately remains challenging. Moreover, as the number of convolutional layers increases, deep feature maps lose the fine-grained spatial information that is useful for segmenting brain tumors from MRI. To address this problem, we propose Res-MulFra, a residual multilevel and multiscale framework for brain tumor segmentation. The multilevel aspect is realized by stacking the proposed RMFM-based segmentation network (RMFMSegNet), which leverages prior knowledge for better segmentation performance. The multiscale aspect is implemented within RMFMSegNet, which contains both parallel and serial multibranch structures designed to obtain multiscale feature information. A residual multiscale feature fusion module (RMFM) effectively combines contextual feature information from various receptive fields, and a channel attention module is also adopted to further improve performance. Extensive experiments on the BraTS dataset, with comparisons against other advanced methods, verify the effectiveness of Res-MulFra. On the BraTS2015 testing dataset, the proposed method achieves Dice values of 0.85 for the complete region, 0.72 for the core region, and 0.62 for the enhanced region.
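The RMFM is described as fusing context from several receptive fields with a residual connection. One common way to emulate different receptive fields without learned weights is average pooling at several scales followed by upsampling; the NumPy sketch below uses that as a stand-in (the pooling-based branches and equal-weight averaging are assumptions for illustration, not the module's actual convolutional design).

```python
import numpy as np

def pool_scale(x, s):
    """Average-pool x by factor s, then upsample back by nearest repeat,
    giving a coarse context map with an effective receptive field of s."""
    h, w = x.shape
    p = x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
    up = np.repeat(np.repeat(p, s, axis=0), s, axis=1)
    return np.pad(up, ((0, h - up.shape[0]), (0, w - up.shape[1])), mode="edge")

def rmfm(x, scales=(1, 2, 4)):
    """Residual multiscale fusion sketch: average the context maps gathered
    at several receptive-field sizes, then add them to the input."""
    ctx = sum(pool_scale(x, s) for s in scales) / len(scales)
    return x + ctx
```

Small scales preserve fine spatial detail while large scales contribute region-level context, which is the trade-off the multiscale design targets.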
ResTrans-Unet: A Residual-Aware Transformer-Based Approach to Medical Image Segmentation
Fengying Ma, Zhi Wang, Peng Ji, Chengcai Fu, Feng Wang
International Journal of Imaging Systems and Technology, 34(4). DOI: 10.1002/ima.23122. Published 2024-06-29.

Convolutional neural networks have significantly enhanced the efficacy of medical image segmentation. However, two challenges persist for deep-learning-based methods: (1) medical images, characterized by a vast spatial scale and complex structure, make accurate edge information extraction difficult; and (2) in decoding, the assumption that all channels are equally important contradicts their varying significance in practice. To address these issues, we introduce ResTrans-Unet (residual transformer medical image segmentation network), an automatic segmentation model based on a residual-aware transformer. The transformer is enhanced through the incorporation of ResMLP, improving edge information capture and network convergence speed. Additionally, Squeeze-and-Excitation networks, which model channel relationships, are integrated into the decoder to precisely highlight important features and suppress irrelevant ones. Experimental validation on two public datasets, comparing the model against advanced alternatives, demonstrates the superior performance of ResTrans-Unet in medical image segmentation tasks.
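The Squeeze-and-Excitation mechanism the decoder uses is a well-documented building block: globally average-pool each channel (squeeze), pass the channel vector through a two-layer bottleneck with ReLU then a sigmoid (excite), and rescale the channels by the resulting gates. A minimal NumPy forward pass, with the bottleneck weights passed in explicitly for simplicity:

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Squeeze-and-Excitation forward pass for x of shape (C, H, W):
    squeeze to per-channel statistics, excite through a bottleneck
    (w1: (C//r, C), w2: (C, C//r)), and rescale the channels."""
    z = x.mean(axis=(1, 2))                  # squeeze: (C,)
    h = np.maximum(w1 @ z, 0.0)              # bottleneck + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # sigmoid gates in (0, 1)
    return x * s[:, None, None]              # channel-wise rescaling
```

Because the gates depend on the whole feature map, the block lets the network emphasize informative channels and damp irrelevant ones per input, which is exactly the inter-channel weighting the abstract motivates.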
{"title":"University Rankings Are Hurting Academia in Developing Countries: An Urgent Call to Action","authors":"Mohamed L. Seghier, Habib Zaidi","doi":"10.1002/ima.23140","DOIUrl":"https://doi.org/10.1002/ima.23140","url":null,"abstract":"<p>Higher education institutions in developing countries are increasingly relying on university rankings in the decision-making process on how to improve reputation and impact [<span>1</span>]. Such ranking schemes, some being promoted by unaccountable for-profit agencies, have many well-documented limitations [<span>2</span>], such as the overly subjective and biased measurement of excellence and reputation for universities operating in diverse socio-economic and political contexts. Despite these limitations, these rankings are still being promoted as critical indicators of academic excellence [<span>3</span>], thereby influencing the higher education landscape at an unsustainable pace. Indeed, every year, in pursuing an elusive high rank, academics in emerging universities feel the pressure to make quick changes, sometimes by espousing short-sighted strategies that do not always align with long-term goals [<span>4</span>]. There are indeed stories from some universities in developing countries where research programmes and even whole departments were closed because they operated within domains with low citation dynamics. Such obsession with university rankings is hurting academia with dear consequences: talent deterred and income affected [<span>5</span>]. This race for top spots in the league table of universities has brought the worst of academia to emerging universities, for example, the publish-and-perish model and the conversion to numbers-centred instead of people-centred institutions.</p><p>As recently advocated by the United Nations University International Institute for Global Health [<span>6</span>], it is urgent to raise awareness about the damaging effects of university rankings in developing countries. 
An examination of current university rankings schemes shows that the whole process is affected by many fallacies at different degrees: the supremacy of quantitative measures (the McNamara fallacy), indicators taken as goals (Goodhart's law), indicators replacing the original dimensions they aim to measure (surrogation), and high susceptibility to corruption (Campbell's law) including process manipulation (gaming the system) and perverse incentives (the cobra effect). It is thus essential to take a more proactive instance by moving from ‘ranking takers’ to ‘ranking makers’ [<span>7</span>] and espouse a more responsible evaluation process [<span>8</span>]. By analogy to the San Francisco Declaration on Research Assessment (DORA), this call is a plea for a paradigm shift. Specifically, here we recommend the following measures:</p><p><i>Avoiding the McNamara fallacy</i>: Numbers are not everything. Quantitative indicators have inherent drawbacks and biases and cannot comprehensively measure complex constructs like academic reputation and excellence. These indicators also lack validity across different social and political contexts and tend to reverberate the same privileges top universities typically enjoy. It thus makes sense to adopt ","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23140","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141488982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Segmentation of Brain Gliomas in Multimodal MRI Data
Changxiong Xie, Jianming Ye, Xiaofei Ma, Leshui Dong, Guohua Zhao, Jingliang Cheng, Guang Yang, Xiaobo Lai
International Journal of Imaging Systems and Technology, 34(4). DOI: 10.1002/ima.23128. Published 2024-06-27.

Brain gliomas, common in adults, pose significant diagnostic challenges. Accurate segmentation from multimodal magnetic resonance imaging (MRI) scans is critical for effective treatment planning, but traditional manual segmentation is labor-intensive, error-prone, and often leads to inconsistent diagnoses. To overcome these limitations, our study presents a framework for the automated segmentation of brain gliomas from multimodal MRI images, consisting of three integral components: a 3D UNet, a classifier, and a Classifier Weight Transformer (CWT). The 3D UNet, acting as both encoder and decoder, extracts comprehensive features from MRI scans. The classifier, a streamlined 1 × 1 convolutional architecture, performs detailed pixel-wise classification. The CWT integrates self-attention through three linear layers, a multihead attention module, and layer normalization, dynamically refining the classifier's parameters based on the features extracted by the 3D UNet and thereby improving segmentation accuracy. The model was trained in two stages for maximum efficiency: first, supervised learning pre-trains the encoder and decoder, focusing on robust feature representation; second, meta-training fine-tunes the classifier with the encoder and decoder unchanged. Extensive evaluation on BraTS2019, BraTS2020, BraTS2021, and a specialized private dataset (ZZU) underscores the robustness and clinical potential of the framework, which outperforms several state-of-the-art approaches across various segmentation metrics on the training and validation sets.
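The abstract states only that the CWT refines the classifier's parameters via attention over the extracted features. As a heavily simplified, hypothetical sketch of that idea (not the paper's architecture: the residual update, the pooled-feature shape, and all names are assumptions), the classifier's 1x1-conv weight matrix can attend over pooled support features with standard scaled dot-product attention:

```python
import numpy as np

def softmax(a, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention: softmax(q k^T / sqrt(d)) v."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def refine_classifier(w, feats):
    """Hypothetical weight refinement: the (C_out, C_in) classifier weights
    act as queries over pooled features (N, C_in) and receive a residual
    attention update, adapting the classifier to the current input."""
    return w + attention(w, feats, feats)
```

The appeal of such a scheme is that the dense feature extractor stays frozen while only the lightweight classifier is adapted, matching the two-stage training the abstract describes.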
{"title":"Transformer Skip-Fusion Based SwinUNet for Liver Segmentation From CT Images","authors":"S. S. Kumar, R. S. Vinod Kumar","doi":"10.1002/ima.23126","DOIUrl":"https://doi.org/10.1002/ima.23126","url":null,"abstract":"<div>\u0000 \u0000 <p>Liver segmentation is a crucial step in medical image analysis and is essential for diagnosing and treating liver diseases. However, manual segmentation is time-consuming and subject to variability among observers. To address these challenges, a novel liver segmentation approach, SwinUNet with transformer skip-fusion is proposed. This method harnesses the Swin Transformer's capacity to model long-range dependencies efficiently, the U-Net's ability to preserve fine spatial details, and the transformer skip-fusion's effectiveness in enabling the decoder to learn intricate features from encoder feature maps. In experiments using the 3DIRCADb and CHAOS datasets, this technique outperformed traditional CNN-based methods, achieving a mean DICE coefficient of 0.988% and a mean Jaccard coefficient of 0.973% by aggregating the results obtained from each dataset, signifying outstanding agreement with ground truth. This remarkable accuracy in liver segmentation holds significant promise for improving liver disease diagnosis and enhancing healthcare outcomes for patients with liver conditions.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141488606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Tumor Segmentation Based on α-Expansion Graph Cut
Roaa Soloh, Hassan Alabboud, Ahmad Shahin, Adnan Yassine, Abdallah El Chakik
International Journal of Imaging Systems and Technology, 34(4). DOI: 10.1002/ima.23132. Published 2024-06-24. Open access.

In recent years, there has been increased interest in applying image processing, computer vision, and machine learning to biological and medical imaging research. One such area is the diagnosis of brain tumors, traditionally a difficult and time-consuming manual task. In this study, we present a method for tumor detection from magnetic resonance images (MRI) using the well-known graph-based Boykov–Kolmogorov algorithm and the α-expansion method. The approach pre-processes the MRIs, represents image positions as graph nodes, and computes edge weights from intensity differences between neighbouring positions. Segmentation is formulated as an energy minimization problem solved by finding a binary (0/1) labeling of the image, and post-processing further enhances the result. The proposed method is easy to implement and achieves high accuracy, precision, and efficiency. We believe this approach will benefit scientists and healthcare researchers in qualitative research and can be applied to various imaging modalities in future work.
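The energy formulation behind graph-cut segmentation is standard: a data term scoring each pixel's label against its intensity, plus a Potts smoothness term penalizing label changes between neighbours. The NumPy sketch below defines that energy and, purely for illustration, minimizes it exactly by enumeration on a tiny image; real systems use a max-flow solver such as Boykov–Kolmogorov (e.g., via a library like PyMaxflow) to find the same optimum efficiently.

```python
import itertools
import numpy as np

def energy(labels, unary, lam=1.0):
    """Binary segmentation energy: sum of per-pixel data costs
    unary[i, j, label] plus lam times the number of 4-neighbour pairs
    whose labels disagree (Potts smoothness term)."""
    h, w = labels.shape
    data = sum(unary[i, j, labels[i, j]] for i in range(h) for j in range(w))
    smooth = (np.sum(labels[1:, :] != labels[:-1, :])
              + np.sum(labels[:, 1:] != labels[:, :-1]))
    return data + lam * smooth

def brute_force_cut(unary, lam=1.0):
    """Exact minimizer by enumerating all 2^(h*w) labelings; feasible only
    for toy images, but it shows what the max-flow solver computes."""
    h, w, _ = unary.shape
    best, best_e = None, np.inf
    for bits in itertools.product((0, 1), repeat=h * w):
        lab = np.array(bits).reshape(h, w)
        e = energy(lab, unary, lam)
        if e < best_e:
            best, best_e = lab, e
    return best, best_e
```

For multi-label problems, α-expansion repeatedly solves binary cuts of this form, each asking "which pixels should switch to label α?", until no expansion lowers the energy.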