{"title":"Retinal Blood Vessels Segmentation With Improved SE-UNet Model","authors":"Yibo Wan, Gaofeng Wei, Renxing Li, Yifan Xiang, Dechao Yin, Minglei Yang, Deren Gong, Jiangang Chen","doi":"10.1002/ima.23145","DOIUrl":"https://doi.org/10.1002/ima.23145","url":null,"abstract":"<div>\u0000 \u0000 <p>Accurate segmentation of retinal vessels is crucial for the early diagnosis and treatment of eye diseases, for example, diabetic retinopathy, glaucoma, and macular degeneration. Due to the intricate structure of retinal vessels, it is essential to extract their features with precision for the semantic segmentation of medical images. In this study, an improved deep learning neural network was developed with a focus on feature extraction based on the U-Net structure. The enhanced U-Net combines the architecture of convolutional neural networks (CNNs) with squeeze-and-excitation (SE) blocks to adaptively extract image features after each U-Net encoder's convolution. This approach aids in suppressing nonvascular regions and highlighting features for specific segmentation tasks. The proposed method was trained and tested on the DRIVE, CHASE_DB1, and STARE datasets. As a result, the proposed model achieved an accuracy, sensitivity, specificity, Dice coefficient (Dc), and Matthews correlation coefficient (MCC) of 95.62/0.9853/0.9652, 0.7751/0.7976/0.7773, 0.9832/0.8567/0.9865, 82.53/87.23/83.42, and 0.7823/0.7987/0.8345 on the three datasets, respectively, outperforming previous methods, including UNet++, attention U-Net, and ResUNet. 
The experimental results demonstrated that the proposed method improved the retinal vessel segmentation performance.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141730269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
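The channel-recalibration idea behind the SE blocks described in the record above can be sketched in a few lines of NumPy. This is an illustrative, generic SE block (squeeze by global average pooling, excitation by two fully connected layers, channel-wise rescaling), not the authors' exact layer configuration; the weight shapes and reduction ratio below are assumptions.

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Generic squeeze-and-excitation over a (C, H, W) feature map.

    Squeeze:    global average pooling per channel  -> (C,)
    Excitation: FC + ReLU, then FC + sigmoid        -> per-channel gate
    Scale:      multiply each channel by its gate value
    """
    squeezed = feature_map.mean(axis=(1, 2))          # (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)           # ReLU, (C // r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))       # sigmoid, (C,)
    return feature_map * gate[:, None, None]

rng = np.random.default_rng(0)
channels, reduction = 8, 4
fmap = rng.standard_normal((channels, 16, 16))
w1 = rng.standard_normal((channels // reduction, channels))
w2 = rng.standard_normal((channels, channels // reduction))
out = se_block(fmap, w1, w2)
```

Each channel is scaled by a single gate value in (0, 1), which is how such a block can suppress uninformative (e.g., nonvascular) channels while highlighting task-relevant ones.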
{"title":"Multiscale Feature Fusion Method for Liver Cirrhosis Classification","authors":"Shanshan Wang, Ling Jian, Kaiyan Li, Pingping Zhou, Liang Zeng","doi":"10.1002/ima.23143","DOIUrl":"https://doi.org/10.1002/ima.23143","url":null,"abstract":"<div>\u0000 \u0000 <p>Liver cirrhosis is one of the most common liver diseases in the world, posing a threat to people's daily lives. In advanced stages, cirrhosis can lead to severe symptoms and complications, making early detection and treatment crucial. This study aims to address this critical healthcare challenge by improving the accuracy of liver cirrhosis classification using ultrasound imaging, thereby assisting medical professionals in early diagnosis and intervention. This article proposes a new multiscale feature fusion network model (MSFNet), which uses a feature extraction module to capture multiscale features from ultrasound images. This approach enables the neural network to utilize richer information to accurately classify the stage of cirrhosis. In addition, a new loss function is proposed to solve the class imbalance problem in medical datasets; it makes the model pay more attention to samples that are difficult to classify, improving the model's performance. The effectiveness of the proposed MSFNet was evaluated using ultrasound images from 61 subjects. Experimental results demonstrate that our method achieves high classification accuracy, with 98.08% on convex array datasets and 97.60% on linear array datasets. Our proposed method can classify early-, middle-, and late-stage cirrhosis very accurately. 
It provides valuable insights for the clinical treatment of liver cirrhosis and may be helpful for the rehabilitation of patients.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141639615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
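The record above does not give the form of its new class-imbalance loss, so as an illustrative stand-in, here is a minimal NumPy sketch of the well-known focal loss, which realizes the same idea: down-weighting easy samples so the hard-to-classify ones dominate training.

```python
import numpy as np

def focal_loss(p_true, gamma=2.0, eps=1e-12):
    """Focal-style loss over probabilities assigned to the true class.

    The (1 - p)^gamma factor shrinks the contribution of confidently
    correct (easy) samples, focusing the loss on hard samples, which
    in imbalanced datasets are often the minority class.
    """
    p = np.clip(np.asarray(p_true, dtype=float), eps, 1.0)
    return float(np.mean((1.0 - p) ** gamma * -np.log(p)))

easy = [0.95, 0.97, 0.99]   # confidently correct predictions
hard = [0.30, 0.40, 0.35]   # uncertain / misclassified samples
```

With gamma = 0 this reduces to ordinary cross-entropy; larger gamma shifts the loss mass further toward the hard samples.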
{"title":"Enhancing Skin Disease Diagnosis Through Deep Learning: A Comprehensive Study on Dermoscopic Image Preprocessing and Classification","authors":"Elif Nur Haner Kırğıl, Çağatay Berke Erdaş","doi":"10.1002/ima.23148","DOIUrl":"https://doi.org/10.1002/ima.23148","url":null,"abstract":"<p>Skin cancer occurs when abnormal cells in the top layer of the skin, known as the epidermis, undergo uncontrolled growth due to unrepaired DNA damage, leading to the development of mutations. These mutations lead to rapid cell growth and development of cancerous tumors. The type of cancerous tumor depends on the cells of origin. Overexposure to ultraviolet rays from the sun, tanning beds, or sunlamps is a primary factor in the occurrence of skin cancer. Since skin cancer is one of the most common types of cancer and has a high mortality, early diagnosis is extremely important. The dermatology literature has many studies of computer-aided diagnosis for early and highly accurate skin cancer detection. In this study, the classification of skin cancer was provided by Regnet x006, EfficientNetv2 B0, and InceptionResnetv2 deep learning methods. To increase the classification performance, hairs and black pixels in the corners due to the nature of dermoscopic images, which could create noise for deep learning, were eliminated in the preprocessing step. Preprocessing was done by hair removal, cropping, segmentation, and applying a median filter to dermoscopic images. To measure the performance of the proposed preprocessing technique, the results were obtained with both raw images and preprocessed images. The model developed to provide a solution to the classification problem is based on deep learning architectures. 
In the four experiments carried out within the scope of the study, classification was performed for the eight classes in the dataset; for squamous cell carcinoma versus basal cell carcinoma; for benign keratosis versus actinic keratosis; and, finally, for benign versus malignant disease. According to the results obtained, the best accuracy values of the four experiments were 0.858, 0.929, 0.917, and 0.906, respectively. The study underscores the significance of early and accurate diagnosis in addressing skin cancer, a prevalent and potentially fatal condition. The primary aim of the preprocessing procedures was to attain enhanced performance by concentrating solely on the area spanning the lesion instead of analyzing the complete image. Combining the suggested preprocessing strategy with deep learning techniques shows potential for enhancing skin cancer diagnosis, particularly in terms of sensitivity and specificity.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23148","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141639640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
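Two of the simplest components of the preprocessing pipeline described above can be illustrated directly. This is only a sketch: a plain 3 × 3 median filter and a fixed-margin border crop, not the study's full hair-removal and segmentation pipeline; the margin value is an assumption.

```python
import numpy as np

def median_filter3(img):
    """3 x 3 median filter (edge padding): suppresses thin or speckle
    noise such as isolated dark hair pixels."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    # Nine shifted views of the image, one per neighborhood position.
    shifted = [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(shifted), axis=0)

def crop_border(img, margin):
    """Drop a fixed border, discarding the dark corners typical of
    dermoscopic images."""
    return img[margin:-margin, margin:-margin]

lesion = np.zeros((5, 5))
lesion[2, 2] = 255.0          # a single noisy outlier pixel
cleaned = median_filter3(lesion)
```

The lone outlier at (2, 2) is replaced by the neighborhood median (0), while large uniform regions pass through unchanged.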
{"title":"Convolutional Neural Network-Based CT Image Segmentation of Kidney Tumours","authors":"Cong Hu, Wenwen Jiang, Tian Zhou, Chunting Wan, Aijun Zhu","doi":"10.1002/ima.23142","DOIUrl":"https://doi.org/10.1002/ima.23142","url":null,"abstract":"<div>\u0000 \u0000 <p>Kidney tumours are one of the most common tumours in humans, and the main current treatment is surgical removal. The CT images are usually manually segmented by a specialist for pre-operative planning, but this can be influenced by the surgeon's experience and skill and can be time-consuming. Because the complex lesions and varied morphologies of kidney tumours make segmentation difficult, this article proposes a convolutional neural network-based automatic segmentation method for CT images of kidney tumours to address the most common problems of boundary blurring and false positives in tumour segmentation images. The method is highly accurate and reliable and is used to assist doctors in surgical planning as well as diagnosis and treatment, relieving pressure on medical staff to a certain extent. The EfficientNetV2-UNet segmentation model proposed in this article includes three main parts: a feature extractor, a reconstruction network and a Bayesian decision algorithm. Firstly, to address tumour false positives, the EfficientNetV2 feature extractor, which has high training accuracy and efficiency, is selected as the backbone network; it extracts shallow features such as tumour location, morphology and texture in the CT image by downsampling. Secondly, on the basis of the backbone network, the reconstruction network is designed, consisting mainly of conversion, deconvolution, convolution and output blocks. Then, the up-sampling architecture is constructed to gradually recover the spatial resolution of the feature map, fully identify the contextual information and form a complete encoding–decoding structure. 
Multi-scale feature fusion is achieved by superimposing all levels of feature map channels on the left and right sides of the network, preventing the loss of details and enabling accurate tumour segmentation. Finally, a Bayesian decision algorithm is designed for the edge blurring of segmented tumours and cascaded over the output of the reconstruction network, combining the edge features of the original CT image and the segmented image for probability estimation, which improves the accuracy of the model's edge segmentation. Medical images in the NIfTI (NII) format were converted to NumPy matrix format using Python, and more than 2000 CT images containing only kidney tumours were then selected from the KiTS19 dataset as the dataset for the model, with dimensions standardised to 128 × 128. The experimental results show that the model outperforms many other advanced models with good segmentation performance.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141639641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
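The cascaded Bayesian decision step above is described only at a high level; one common way to realize such probability estimation is to fuse the network's per-pixel foreground probability with an edge-derived probability via Bayes' rule under a conditional-independence assumption. The sketch below shows that generic fusion, not the article's exact algorithm.

```python
import numpy as np

def bayes_fuse(p_net, p_edge, eps=1e-12):
    """Fuse two per-pixel foreground probabilities with Bayes' rule,
    assuming conditionally independent evidence and a uniform prior:
    P(fg | a, b) = a*b / (a*b + (1-a)*(1-b)).
    """
    num = p_net * p_edge
    den = num + (1.0 - p_net) * (1.0 - p_edge)
    return num / np.maximum(den, eps)

p_net = np.array([0.8, 0.6, 0.3])    # segmentation network output
p_edge = np.array([0.9, 0.5, 0.2])   # probability from edge features
fused = bayes_fuse(p_net, p_edge)
```

Agreeing evidence sharpens the decision (0.8 and 0.9 fuse to about 0.97), an uninformative 0.5 leaves the network output unchanged, and agreeing low probabilities push a pixel further toward background, which is the effect that tightens blurry tumour edges.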
{"title":"Infusing Weighted Average Ensemble Diversity for Advanced Breast Cancer Detection","authors":"Barsha Abhisheka, Saroj Kumar Biswas, Biswajit Purkayastha","doi":"10.1002/ima.23146","DOIUrl":"https://doi.org/10.1002/ima.23146","url":null,"abstract":"<div>\u0000 \u0000 <p>Breast cancer is a widespread health threat for women globally, often difficult to detect early due to its asymptomatic nature. As the disease advances, treatment becomes intricate and costly, ultimately resulting in elevated fatality rates. Currently, despite the widespread use of advanced machine learning (ML) and deep learning (DL) techniques, a comprehensive diagnosis of breast cancer remains elusive. Most of the existing methods primarily utilize either attention-based deep models or models based on handcrafted features to capture and gather local details. However, both of these approaches lack the capability to offer essential local information for precise tumor detection. Additionally, the available breast cancer datasets suffer from a class imbalance issue. Hence, this paper presents a novel weighted average ensemble network (WA-ENet) designed for early-stage breast cancer detection that leverages the advantage of ensemble techniques over single-classifier models for more robust and accurate prediction. The proposed model employs a weighted average-based ensemble technique, combining predictions from three diverse classifiers. The optimal combination of weights is determined using the hill climbing (HC) algorithm. Moreover, the proposed model enhances overall system performance by integrating deep features and handcrafted features through the use of HOG, thereby providing precise local information. Additionally, the proposed work addresses class imbalance by incorporating the borderline synthetic minority over-sampling technique (BSMOTE). 
It achieves 99.65% accuracy on BUSI and 97.48% on UDIAT datasets.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141631188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
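The hill climbing search for ensemble weights described above can be sketched as follows. This is a generic sketch under assumptions of my own (binary labels, probability outputs, Gaussian perturbation moves); the paper's exact neighborhood and scoring may differ.

```python
import numpy as np

def hill_climb_weights(preds, labels, steps=200, step_size=0.05, seed=0):
    """Hill climbing over ensemble weights.

    preds:  (n_models, n_samples) predicted probabilities
    labels: (n_samples,) binary ground truth
    Returns weights (summing to 1) that maximize the 0/1 accuracy of
    the weighted-average prediction, plus the accuracy reached.
    """
    rng = np.random.default_rng(seed)
    n = preds.shape[0]
    w = np.full(n, 1.0 / n)

    def accuracy(weights):
        avg = weights @ preds
        return np.mean((avg >= 0.5) == labels)

    best = accuracy(w)
    for _ in range(steps):
        cand = np.clip(w + rng.normal(0.0, step_size, size=n), 0.0, None)
        if cand.sum() == 0:
            continue                      # degenerate move, skip
        cand /= cand.sum()
        score = accuracy(cand)
        if score > best:                  # keep only improving moves
            w, best = cand, score
    return w, best

labels = np.array([0, 1, 0, 1, 1, 0])
good = np.where(labels == 1, 0.9, 0.1)    # accurate classifier
bad = 1.0 - good                          # anti-correlated classifier
preds = np.stack([good, bad])
w, best = hill_climb_weights(preds, labels)
```

Starting from equal weights (accuracy 0.5 here), the climb shifts weight onto the accurate classifier until the ensemble is perfect.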
{"title":"A New Herbal Source of Synthesizing Contrast Agents for Magnetic Resonance Imaging","authors":"Ali Yazdani, Ahmadreza Okhovat, Raheleh Doosti, Hamid Soltanian-Zadeh","doi":"10.1002/ima.23136","DOIUrl":"https://doi.org/10.1002/ima.23136","url":null,"abstract":"<div>\u0000 \u0000 <p>This study explores the potential of halophytes, plants adapted to saline environments, as a novel source for developing herbal MRI contrast agents. Halophytes naturally accumulate various metals within their tissues. These metal ions, potentially complexed with organic molecules, are released into aqueous solutions prepared from the plants. We investigated the ability of these compounds to generate contrast enhancement in MRI using a sequential approach. First, aqueous extracts were prepared from seven selected halophytes, and their capacity to induce contrast in MR images was evaluated. Based on these initial findings, sample halophytes were chosen for further investigations. Second, chemical analysis revealed aluminum as the primary potent metal which enhances the contrast. Third, the halophyte extract was fractionated based on polarity, and the most polar fraction exhibited the strongest contrast-generating effect. Finally, the relaxivity of this fraction, a key parameter for MRI contrast agents, was measured. 
We propose that aluminum, likely complexed with a polar molecule within the plant extract, is responsible for the observed contrast enhancement in MRI.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141596974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
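Relaxivity, the figure of merit measured in the final step above, is the slope of the relaxation rate versus contrast agent concentration. A minimal sketch of that fit, using synthetic numbers rather than the study's measurements:

```python
import numpy as np

def relaxivity(concentrations_mM, t1_seconds):
    """Estimate longitudinal relaxivity r1 (s^-1 mM^-1).

    Fits the standard linear model R1 = R1_0 + r1 * C, where
    R1 = 1/T1, by least squares over the measured concentrations.
    """
    r1_rates = 1.0 / np.asarray(t1_seconds)
    slope, intercept = np.polyfit(np.asarray(concentrations_mM), r1_rates, 1)
    return slope, intercept

# Synthetic example: R1_0 = 0.4 s^-1, r1 = 4.0 s^-1 mM^-1
conc = np.array([0.0, 0.5, 1.0, 2.0])
t1 = 1.0 / (0.4 + 4.0 * conc)
r1, r1_0 = relaxivity(conc, t1)
```

A higher r1 means a stronger T1-shortening (contrast-enhancing) effect per unit concentration, which is why relaxivity is the key parameter for comparing candidate agents.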
{"title":"Pythagorean Fuzzy Set for Enhancement of Low Contrast Mammogram Images","authors":"Tamalika Chaira, Arun Sarkar","doi":"10.1002/ima.23137","DOIUrl":"https://doi.org/10.1002/ima.23137","url":null,"abstract":"<div>\u0000 \u0000 <p>Breast masses are often one of the primary signs of breast cancer, and precise segmentation of these masses is essential for accurate diagnosis and treatment planning. Diagnosis may be complex depending on the size and visibility of the mass. When the mass is not clearly visible, precise segmentation becomes very difficult, and in that case enhancement is essential. Inadequate compression, patient movement, or paddle/breast movement during the exposure process might cause hazy mammogram images. Without enhancement, accurate segmentation and detection cannot be done. As uncertainties exist in different regions of the image, reducing uncertainty remains a main problem, and fuzzy methods may deal with these uncertainties in a better way. Though there are many fuzzy and advanced fuzzy methods, we consider the Pythagorean fuzzy set to be one that is powerful in dealing with uncertainty. This research proposes a new Pythagorean fuzzy methodology for mammography image enhancement. The image is first transformed into a fuzzy image, and the nonmembership function is then calculated using a newly created Pythagorean fuzzy generator. The membership function of the Pythagorean fuzzy image is computed from the nonmembership function. The plot between the membership value and the hesitation degree is used to calculate a constant term in the membership function. Next, an enhanced image is obtained by applying a fuzzy intensification operator to the Pythagorean fuzzy image. The proposed method is compared qualitatively and quantitatively with non-fuzzy, intuitionistic fuzzy, Type 2 fuzzy, and Pythagorean fuzzy methods; it is found that the suggested method outperforms the other methods. 
To show the usefulness of the proposed enhanced method, segmentation is carried out on the enhanced images.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141583834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
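The defining constraint of a Pythagorean fuzzy set is mu^2 + nu^2 <= 1, a weaker condition than the intuitionistic mu + nu <= 1. The sketch below uses one simple complement for the nonmembership function and the classic intensification (INT) operator; the article's newly proposed generator and constant-term construction are not reproduced here, so this is only an illustrative stand-in.

```python
import numpy as np

def pythagorean_fuzzify(img, eps=1e-12):
    """Map a grayscale image to (membership, nonmembership).

    mu is the normalized intensity; nu = sqrt(1 - mu^2) is one simple
    nonmembership choice satisfying mu^2 + nu^2 <= 1 (an illustrative
    stand-in for the article's Pythagorean fuzzy generator).
    """
    mu = (img - img.min()) / (img.max() - img.min() + eps)
    nu = np.sqrt(np.clip(1.0 - mu ** 2, 0.0, 1.0))
    return mu, nu

def intensify(mu):
    """Classic contrast intensification operator: pushes memberships
    below 0.5 down and those above 0.5 up, enhancing contrast."""
    mu = np.asarray(mu, dtype=float)
    return np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)

img = np.array([[10.0, 50.0], [120.0, 200.0]])
mu, nu = pythagorean_fuzzify(img)
enhanced = intensify(mu)
```

The intensification step is what stretches low-contrast mammogram regions apart: mid-gray values move toward the extremes while the ordering of intensities is preserved.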
{"title":"Cross-Species Segmentation of Animal Prostate Using a Human Prostate Dataset and Limited Preoperative Animal Images: A Sampled Experiment on Dog Prostate Tissue","authors":"Yang Yang, Seong Young Ko","doi":"10.1002/ima.23138","DOIUrl":"https://doi.org/10.1002/ima.23138","url":null,"abstract":"<div>\u0000 \u0000 <p>In the development of medical devices and surgical robot systems, animal models are often used for evaluation, necessitating accurate organ segmentation. Deep learning-based image segmentation provides a solution for automatic and precise organ segmentation. However, a significant challenge in this approach arises from the limited availability of training data for animal models. In contrast, human medical image datasets are readily available. To address this imbalance, this study proposes a fine-tuning approach that combines a limited set of animal model images with a comprehensive human image dataset. Various postprocessing algorithms were applied to ensure that the segmentation results met the positioning requirements for the evaluation of a medical robot under development. As one of the target applications, magnetic resonance images were used to determine the position of the dog's prostate, which was then used to determine the target location of the robot under development. The MSD TASK5 dataset was used as the human dataset for pretraining, which involved a modified U-Net network. Ninety-nine pretrained backbone networks were tested as encoders for U-Net. Cross-training validation was performed using the selected network backbone. The highest accuracy, with an IoU score of 0.949, was achieved using the independent validation set from the MSD TASK5 human dataset. Subsequently, fine-tuning was performed using a small set of dog prostate images, resulting in a highest IoU score of 0.961 across the different cross-validation groups. 
The processed results demonstrate the feasibility of the proposed approach for accurate prostate segmentation.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141583835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
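The IoU (Jaccard index) scores reported above are the standard overlap ratio between predicted and ground-truth masks; a minimal self-contained sketch:

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

a = np.zeros((4, 4)); a[1:3, 1:3] = 1   # 4-pixel predicted square
b = np.zeros((4, 4)); b[1:3, 1:4] = 1   # 6-pixel ground-truth rectangle
score = iou(a, b)                       # 4 / 6
```

An IoU of 0.961, as reported after fine-tuning, means predicted and true prostate masks share over 96% of their combined area.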
{"title":"MRCM-UCTransNet: Automatic and Accurate 3D Tooth Segmentation Network From Cone-Beam CT Images","authors":"Xinyang Wen, Zhuoxuan Liu, Yanbo Chu, Min Le, Liang Li","doi":"10.1002/ima.23139","DOIUrl":"https://doi.org/10.1002/ima.23139","url":null,"abstract":"<div>\u0000 \u0000 <p>Many scenarios in dental clinical diagnosis and treatment require the segmentation and identification of a specific tooth or the entire dentition in cone-beam computed tomography (CBCT) images. However, traditional segmentation methods struggle to ensure accuracy. In recent years, there has been significant progress in segmentation algorithms based on deep learning, garnering considerable attention. Inspired by present neural network models such as UCTransNet and DC-Unet, this study proposes MRCM-UCTransNet for accurate three-dimensional tooth segmentation from cone-beam CT images. To enhance feature extraction while preserving the multi-head attention mechanism, a multi-scale residual convolution module (MRCM) is integrated into the UCTransNet architecture. This modification addresses the limitations of traditional segmentation methods and aims to improve accuracy in tooth segmentation from CBCT images. Comparative experiments indicate that, given a specific image size and a small data volume, the proposed method exhibits certain advantages in segmentation accuracy and precision. Compared to traditional U-Net approaches, MRCM-UCTransNet's Dice accuracy is improved by 7%, while its sensitivity is improved by about 10%. These findings highlight the efficacy of the proposed approach, particularly in scenarios with specific image size constraints and limited data availability. The proposed MRCM-UCTransNet algorithm integrates the latest architectural advancements of the U-Net model and achieves effective segmentation of six types of teeth within the dentition. 
It was proved to be efficient for image segmentation on small datasets, requiring less training time and fewer parameters.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141596973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
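The Dice scores quoted above (and in several other records here) are closely related to IoU; for reference, a minimal implementation of the Dice similarity coefficient for binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-12):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1     # 4-pixel prediction
truth = np.zeros((4, 4)); truth[1:3, 1:4] = 1   # 6-pixel ground truth
score = dice(pred, truth)                       # 2 * 4 / (4 + 6) = 0.8
```

A Dice of 1.0 means perfect overlap between predicted and ground-truth tooth masks; the small eps term only guards against division by zero when both masks are empty.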
{"title":"A Multi-Fusion Residual Attention U-Net Using Temporal Information for Segmentation of Left Ventricular Structures in 2D Echocardiographic Videos","authors":"Kai Wang, Hirotaka Hachiya, Haiyuan Wu","doi":"10.1002/ima.23141","DOIUrl":"10.1002/ima.23141","url":null,"abstract":"<div>\u0000 \u0000 <p>The interpretation of cardiac function using echocardiography requires a high level of diagnostic proficiency and years of experience. This study proposes a multi-fusion residual attention U-Net, MURAU-Net, to perform automatic segmentation for evaluating cardiac function from echocardiographic video. MURAU-Net has two benefits: (1) a multi-fusion network to strengthen the links between spatial features, and (2) inter-frame links that augment the temporal coherence of sequential image data, thereby enhancing its continuity. To evaluate the effectiveness of the proposed method, we performed nine-fold cross-validation using the CAMUS dataset. Among state-of-the-art methods, MURAU-Net achieves highly competitive scores, for example, Dice similarity of 0.952 (ED phase) and 0.931 (ES phase) in <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <msub>\u0000 <mi>LV</mi>\u0000 <mtext>Endo</mtext>\u0000 </msub>\u0000 </mrow>\u0000 <annotation>$$ {\\mathrm{LV}}_{\\mathrm{Endo}} $$</annotation>\u0000 </semantics></math>, 0.966 (ED phase) and 0.957 (ES phase) in <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <msub>\u0000 <mi>LV</mi>\u0000 <mi>Epi</mi>\u0000 </msub>\u0000 </mrow>\u0000 <annotation>$$ {\\mathrm{LV}}_{\\mathrm{Epi}} $$</annotation>\u0000 </semantics></math>, and 0.901 (ED phase) and 0.917 (ES phase) in <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mi>LA</mi>\u0000 </mrow>\u0000 <annotation>$$ \\mathrm{LA} $$</annotation>\u0000 </semantics></math>, respectively. It also achieved a Dice similarity of 0.9313 on the EchoNet-Dynamic dataset for the overall left ventricle segmentation. 
In addition, we show that MURAU-Net can accurately segment multiclass cardiac ultrasound videos and output animations of the segmentation results on the original two-chamber cardiac ultrasound dataset MUCO.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141572940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}