Distance guided generative adversarial network for explainable medical image classifications
Xiangyu Xiong, Yue Sun, Xiaohong Liu, Wei Ke, Chan-Tong Lam, Jiangang Chen, Mingfeng Jiang, Mingwei Wang, Hui Xie, Tong Tong, Qinquan Gao, Hao Chen, Tao Tan
Computerized Medical Imaging and Graphics, Volume 118, Article 102444. Published 2024-10-15. DOI: 10.1016/j.compmedimag.2024.102444

Abstract: Despite the potential benefits of data augmentation for mitigating data insufficiency, traditional augmentation methods primarily rely on prior intra-domain knowledge, while advanced generative adversarial networks (GANs) generate inter-domain samples with limited variety. Both kinds of methods therefore contribute little to describing the decision boundaries of binary classification. In this paper, we propose a distance-guided GAN (DisGAN) that controls the degree of variation of generated samples in the hyperplane space. Specifically, we instantiate the idea of DisGAN in two ways. The first is vertical distance GAN (VerDisGAN), where inter-domain generation is conditioned on vertical distances; the second is horizontal distance GAN (HorDisGAN), where intra-domain generation is conditioned on horizontal distances. Furthermore, VerDisGAN can produce class-specific regions by mapping the source images to the hyperplane. Experimental results show that DisGAN consistently outperforms GAN-based augmentation methods while providing explainable binary classification. The proposed method can be applied to different classification architectures and has the potential to extend to multi-class classification. Code: https://github.com/yXiangXiong/DisGAN
An anthropomorphic diagnosis system of pulmonary nodules using weak annotation-based deep learning
Lipeng Xie, Yongrui Xu, Mingfeng Zheng, Yundi Chen, Min Sun, Michael A. Archer, Wenjun Mao, Yubing Tong, Yuan Wan
Computerized Medical Imaging and Graphics, Volume 118, Article 102438. Published 2024-10-10. DOI: 10.1016/j.compmedimag.2024.102438

Abstract: Accurate categorization of lung nodules in CT scans is essential for the prompt detection and diagnosis of lung cancer. Categorization of nodule grade and texture is particularly significant, since it can help radiologists and clinicians make better-informed decisions about nodule management. However, existing techniques perform only the single task of nodule classification and rely on extensive amounts of high-quality annotation data, which does not meet the requirements of clinical practice. To address this issue, we develop an anthropomorphic diagnosis system for pulmonary nodules (PNs) based on deep learning (DL) that is trained on weak annotation data and performs comparably to full-annotation-based diagnosis systems. The proposed system uses DL models to classify PNs (benign vs. malignant) from weak annotations, eliminating the need for time-consuming and labor-intensive manual annotation of PNs. Moreover, the PN classification networks, augmented with handcrafted shape features acquired through the ball-scale transform, can differentiate PNs with diverse labels, including pure ground-glass opacities, part-solid nodules, and solid nodules. Through 5-fold cross-validation on two datasets, the system achieved: (1) an area under the curve (AUC) of 0.938 for PN localization and 0.912 for PN differential diagnosis on the LIDC-IDRI dataset of 814 testing cases; (2) an AUC of 0.943 for PN localization and 0.815 for PN differential diagnosis on an in-house dataset of 822 testing cases. In summary, our system provides efficient localization and differential diagnosis of PNs in resource-limited environments and could be translated into clinical use in the future.
Corrigendum to "Development and evaluation of an integrated model based on a deep segmentation network and demography-added radiomics algorithm for segmentation and diagnosis of early lung adenocarcinoma" [Computerized Medical Imaging and Graphics 109 (2023) 102299]
Juyoung Lee, Jaehee Chun, Hojin Kim, Jin Sung Kim, Seong Yong Park
Computerized Medical Imaging and Graphics, Article 102428. Published 2024-10-02. DOI: 10.1016/j.compmedimag.2024.102428
MultiNet 2.0: A lightweight attention-based deep learning network for stenosis measurement in carotid ultrasound scans and cardiovascular risk assessment
Mainak Biswas, Luca Saba, Mannudeep Kalra, Rajesh Singh, J. Fernandes e Fernandes, Vijay Viswanathan, John R. Laird, Laura E. Mantella, Amer M. Johri, Mostafa M. Fouda, Jasjit S. Suri
Computerized Medical Imaging and Graphics, Volume 117, Article 102437. Published 2024-10-01. DOI: 10.1016/j.compmedimag.2024.102437

Abstract: Background: Cardiovascular diseases (CVD) cause 19 million fatalities each year and cost nations billions of dollars. Surrogate biomarkers are established methods for CVD risk stratification; however, manual inspection is costly, cumbersome, and error-prone. Contemporary artificial intelligence (AI) tools for segmentation and risk prediction, including older deep learning (DL) networks, employ simple merge connections, which may cause loss of semantic information and hence low accuracy. Methodology: We hypothesize that DL networks enhanced with attention mechanisms can segment better than older DL models, since attention lets a model concentrate on relevant features and thus better understand and interpret images. This study proposes MultiNet 2.0 (AtheroPoint, Roseville, CA, USA), in which two attention networks are used to segment the lumen from common carotid artery (CCA) ultrasound images and predict CVD risk. Results: The database consisted of 407 ultrasound CCA images, of both left and right sides, taken from 204 patients. Two experts delineated borders on the 407 images, generating two ground truths (GT1 and GT2). The results were far better than those of contemporary models. The lumen dimension (LD) errors for GT1 and GT2 were 0.13±0.08 mm and 0.16±0.07 mm, respectively, the best in the market. For GT1, the AUCs for detecting low-, moderate-, and high-risk patients from stenosis data were 0.88, 0.98, and 1.00, respectively; for GT2, the corresponding AUCs were 0.93, 0.97, and 1.00. The system can be fully adopted for clinical practice in the AtheroEdge™ model by AtheroPoint, Roseville, CA, USA.
Computational modeling of tumor invasion from limited and diverse data in Glioblastoma
Padmaja Jonnalagedda, Brent Weinberg, Taejin L. Min, Shiv Bhanu, Bir Bhanu
Computerized Medical Imaging and Graphics, Volume 117, Article 102436. Published 2024-10-01. DOI: 10.1016/j.compmedimag.2024.102436

Abstract: For diseases with high morbidity rates such as glioblastoma multiforme, the prognostic and treatment planning pipeline requires a comprehensive analysis of imaging, clinical, and molecular data. Many mutations have been shown to correlate strongly with patients' median survival rate and response to therapy, and studies have demonstrated that these mutations manifest as specific visual biomarkers in tumor imaging modalities such as MRI. To minimize the number of invasive procedures on a patient and to optimize resources in the prognostic and treatment planning process, the correlation of imaging and molecular features has garnered much interest. While the tumor mass is the most significant feature, the impacted tissue surrounding the tumor is also a significant biomarker contributing to the visual manifestation of mutations, one that has not been studied as extensively. The pattern of tumor growth impacts the surrounding tissue, which in turn reflects tumor properties; modeling this impact can reveal important information about patterns of tumor enhancement, which has significant diagnostic and prognostic value. This paper presents the first work to automate the computational modeling of the impacted tissue surrounding the tumor using generative deep learning. It isolates and quantifies the impact of tumor invasion (TI) on surrounding tissue based on change in mutation status, and subsequently assesses its prognostic value. Furthermore, a TI generative adversarial network (TI-GAN) is proposed to model tumor invasion properties. Extensive qualitative and quantitative analyses, cross-dataset testing, and radiologist blind tests demonstrate that TI-GAN can realistically model tumor invasion under the practical challenges of medical datasets, such as limited data and high intra-class heterogeneity.
{"title":"Detecting thyroid nodules along with surrounding tissues and tracking nodules using motion prior in ultrasound videos","authors":"Song Gao , Yueyang Li , Haichi Luo","doi":"10.1016/j.compmedimag.2024.102439","DOIUrl":"10.1016/j.compmedimag.2024.102439","url":null,"abstract":"<div><div>Ultrasound examination plays a crucial role in the clinical diagnosis of thyroid nodules. Although deep learning technology has been applied to thyroid nodule examinations, the existing methods all overlook the prior knowledge of nodules moving along a straight line in the video. We propose a new detection model, DiffusionVID-Line, and design a novel tracking algorithm, ByteTrack-Line, both of which fully leverage the prior knowledge of linear motion of nodules in thyroid ultrasound videos. Among them, ByteTrack-Line groups detected nodules, further reducing the workload of doctors and significantly improving their diagnostic speed and accuracy. In DiffusionVID-Line, we propose two new modules: Freq-FPN and Attn-Line. Freq-FPN module is used to extract frequency features, taking advantage of these features to reduce the impact of image blur in ultrasound videos. Based on the standard practice of segmented scanning by doctors, Attn-Line module enhances the attention on targets moving along a straight line, thus improving the accuracy of detection. In ByteTrack-Line, considering the characteristic of linear motion of nodules, we propose the Match-Line association module, which reduces the number of nodule ID switches. In the testing of the detection and tracking datasets, DiffusionVID-Line achieved a mean Average Precision (mAP50) of 74.2 for multiple tissues and 85.6 for nodules, while ByteTrack-Line achieved a Multiple Object Tracking Accuracy (MOTA) of 83.4. Both nodule detection and tracking have achieved state-of-the-art performance.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"117 ","pages":"Article 102439"},"PeriodicalIF":5.4,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142367305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RibFractureSys: A gem in the face of acute rib fracture diagnoses
Riel Castro-Zunti, Kaike Li, Aleti Vardhan, Younhee Choi, Gong Yong Jin, Seok-bum Ko
Computerized Medical Imaging and Graphics, Volume 117, Article 102429. Published 2024-10-01. DOI: 10.1016/j.compmedimag.2024.102429

Abstract: Rib fracture patients, common in trauma wards, have different mortality rates and comorbidities depending on how many and which ribs are fractured. This knowledge is therefore paramount for making accurate prognoses and prioritizing patient care. However, tracking 24 ribs over upwards of 200 frames in a patient's scan is time-consuming and error-prone for radiologists, especially depending on their experience. We propose an automated, modular, three-stage solution to assist radiologists. Using 9 fully annotated patient scans, we trained a multi-class U-Net to segment rib lesions and common anatomical clutter. To recognize rib fractures and mitigate false positives, we fine-tuned a ResNet-based model on 5698 false positives, 2037 acute fractures, 4786 healed fractures, and 14,904 unfractured rib lesions. Using almost 200 patient cases, we developed a highly task-customized multi-object rib lesion tracker that determines which lesions in a frame belong to which of the 12 ribs on either side; it combines bounding-box intersection-over-union and centroid-based tracking, a line-crossing methodology, and various heuristics. Our system accepts an axial CT scan and processes, labels, and color-codes it. Over an internal validation dataset of 1000 acute rib fracture and 1000 control patients, our system, assessed by a third-year radiology resident, achieved 96.1% and 97.3% correct fracture classification accuracy for rib fracture and control patients, respectively. However, 18.0% and 20.8% of these patients, respectively, had incorrect rib labeling; percentages remained consistent across sex and age demographics. Labeling issues include anatomical clutter being mislabeled as ribs and ribs going unlabeled.
Machine learning-based diagnostics of capsular invasion in thyroid nodules with wide-field second harmonic generation microscopy
Yaraslau Padrez, Lena Golubewa, Igor Timoshchenko, Adrian Enache, Lucian G. Eftimie, Radu Hristu, Danielis Rutkauskas
Computerized Medical Imaging and Graphics, Volume 117, Article 102440. Published 2024-10-01. DOI: 10.1016/j.compmedimag.2024.102440

Abstract: Papillary thyroid carcinoma (PTC) is one of the most common well-differentiated carcinomas of the thyroid gland. PTC nodules are often surrounded by a collagen capsule that prevents the spread of cancer cells. However, as the malignant tumor progresses, the integrity of this protective barrier is compromised and cancer cells invade the surroundings. Detecting capsular invasion is therefore crucial for diagnosis and the choice of treatment, and new approaches aimed at increasing diagnostic performance are of great importance. In the present study, we exploited wide-field second harmonic generation (SHG) microscopy in combination with texture analysis and unsupervised machine learning (ML) to explore quantitative characterization of the collagen structure in the capsule and the designation of capsule areas as intact, disrupted by invasion, or prone to invasion. Two-step k-means clustering showed that the collagen capsules in all analyzed tissue sections were highly heterogeneous and exhibited distinct segments described by characteristic ML parameter sets. The latter allowed a structural interpretation of the collagen fibers at sites of overt invasion as fragmented and curled fibers that rarely form distributed networks. Clustering analysis also distinguished areas of the PTC capsule that were not categorized as invasion sites by the initial histopathological analysis but could be recognized as prospective micro-invasions after additional inspection. The characteristic features of suspicious and invasive sites identified by the proposed unsupervised ML approach can become a reliable complement to existing methods for diagnosing encapsulated PTC, increase the reliability of diagnosis, simplify decision making, and prevent human-related diagnostic errors. In addition, the proposed automated ML-based selection of collagen capsule images and exclusion of non-informative regions can greatly accelerate and simplify the development of reliable methods for fully automated ML diagnosis that can be integrated into clinical practice.
{"title":"Dynamic MRI interpolation in temporal direction using an unsupervised generative model","authors":"Corbin Maciel , Qing Zou","doi":"10.1016/j.compmedimag.2024.102435","DOIUrl":"10.1016/j.compmedimag.2024.102435","url":null,"abstract":"<div><h3>Purpose</h3><div>Cardiac cine magnetic resonance imaging (MRI) is an important tool in assessing dynamic heart function. However, this technique requires long acquisition time and long breath holds, which presents difficulties. The aim of this study is to propose an unsupervised neural network framework that can perform cardiac cine interpolation in time, so that we can increase the temporal resolution of cardiac cine without increasing acquisition time.</div></div><div><h3>Methods</h3><div>In this study, a subject-specific unsupervised generative neural network is designed to perform temporal interpolation for cardiac cine MRI. The network takes in a 2D latent vector in which each element corresponds to one cardiac phase in the cardiac cycle and then the network outputs the cardiac cine images which are acquired on the scanner. After the training of the generative network, we can interpolate the 2D latent vector and input the interpolated latent vector into the network and the network will output the frame-interpolated cine images. The results of the proposed cine interpolation neural network (CINN) framework are compared quantitatively and qualitatively with other state-of-the-art methods, the ground truth training cine frames, and the ground truth frames removed from the original acquisition. Signal-to-noise ratio (SNR), structural similarity index measures (SSIM), peak signal-to-noise ratio (PSNR), strain analysis, as well as the sharpness calculated using the Tenengrad algorithm were used for image quality assessment.</div></div><div><h3>Results</h3><div>As shown quantitatively and qualitatively, the proposed framework learns the generative task well and hence performs the temporal interpolation task well. Furthermore, both quantitative and qualitative comparison studies show the effectiveness of the proposed framework in cardiac cine interpolation in time.</div></div><div><h3>Conclusion</h3><div>The proposed generative model can effectively learn the generative task and perform high quality cardiac cine interpolation in time.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"117 ","pages":"Article 102435"},"PeriodicalIF":5.4,"publicationDate":"2024-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142318453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BreasTDLUSeg: A coarse-to-fine framework for segmentation of breast terminal duct lobular units on histopathological whole-slide images
Zixiao Lu, Kai Tang, Yi Wu, Xiaoxuan Zhang, Ziqi An, Xiongfeng Zhu, Qianjin Feng, Yinghua Zhao
Computerized Medical Imaging and Graphics, Volume 118, Article 102432. Published 2024-09-19. DOI: 10.1016/j.compmedimag.2024.102432

Abstract: Automatic segmentation of breast terminal duct lobular units (TDLUs) on histopathological whole-slide images (WSIs) is crucial for the quantitative evaluation of TDLUs in the diagnostic and prognostic analysis of breast cancer. However, TDLU segmentation remains a great challenge due to TDLUs' highly heterogeneous sizes, structures, and morphologies as well as their small areas on WSIs. In this study, we propose BreasTDLUSeg, an efficient coarse-to-fine two-stage framework based on multi-scale attention that achieves localization and precise segmentation of TDLUs on hematoxylin and eosin (H&E)-stained WSIs. BreasTDLUSeg consists of two networks: a superpatch-based patch-level classification network (SPPC-Net) and a patch-based pixel-level segmentation network (PPS-Net). SPPC-Net takes a superpatch as input and adopts a sub-region classification head to classify each patch within the superpatch as TDLU-positive or TDLU-negative. PPS-Net takes the TDLU-positive patches derived from SPPC-Net as input; it deploys a multi-scale CNN-Transformer encoder to learn enhanced multi-scale morphological representations and an upsampler to generate pixel-wise segmentation masks for those patches. We also constructed two breast cancer TDLU datasets, containing a total of 530 superpatch images with patch-level annotations and 2322 patch images with pixel-level annotations, to enable the development of TDLU segmentation methods. Experiments on the two datasets demonstrate that BreasTDLUSeg outperforms other state-of-the-art methods, with the highest Dice similarity coefficients of 79.97% and 92.93%, respectively. The proposed method shows great potential to assist pathologists in the pathological analysis of breast cancer. An open-source implementation is available at https://github.com/Dian-kai/BreasTDLUSeg.