{"title":"Comparing synthetic mammograms based on wide-angle digital breast tomosynthesis with digital mammograms.","authors":"Magnus Dustler, Gustav Hellgren, Pontus Timberg","doi":"10.1117/1.JMI.12.S1.S13011","DOIUrl":"10.1117/1.JMI.12.S1.S13011","url":null,"abstract":"<p><strong>Purpose: </strong>We aim to investigate the characteristics and evaluate the performance of synthetic mammograms (SMs) based on wide-angle digital breast tomosynthesis (DBT) compared with digital mammography (DM).</p><p><strong>Approach: </strong>Fifty cases with both synthetic and digital mammograms were selected from the Malmö Breast Tomosynthesis Screening Trial. They were categorized into five groups consisting of normal cases and recalled cases with false-positive and true-positive findings from DM and DBT only. The DBT system used was a wide-angle (WA) system from Siemens, and the SM images were reconstructed from the DBT images. Visual grading, detection, and recall were evaluated by experienced breast radiologists in both SM and DM images.</p><p><strong>Results: </strong>Some image quality criteria of the SM images were rated as qualitatively inferior to DM images. However, reader-averaged diagnostic accuracy (0.57 versus 0.55), sensitivity (0.46 versus 0.50), and specificity (0.64 versus 0.58) were not significantly different between SM and DM, respectively.</p><p><strong>Conclusions: </strong>Synthetic mammography plays a promising role to complement or even replace DM. The study could not find any indications of substantial differences in the sensitivity or specificity of SM for WA DBT systems compared with DM. However, certain image quality criteria of SM fall slightly short compared with DM images. Next-generation DBT systems could address such limitations through improved reconstruction algorithms and system design, and their performance should be the focus of future research studies.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 Suppl 1","pages":"S13011"},"PeriodicalIF":1.9,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11745418/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143014059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OPHash: learning of organ and pathology context-sensitive hashing for medical image retrieval.","authors":"Asim Manna, Rakshith Sathish, Ramanathan Sethuraman, Debdoot Sheet","doi":"10.1117/1.JMI.12.1.017503","DOIUrl":"10.1117/1.JMI.12.1.017503","url":null,"abstract":"<p><strong>Purpose: </strong>Retrieving images of organs and their associated pathologies is essential for evidence-based clinical diagnosis. Deep neural hashing (DNH) has demonstrated the ability to retrieve images fast on large datasets. Conventional pairwise DNH methods can focus on semantic similarity between either organs or pathology of an image pair but not on both simultaneously.</p><p><strong>Approach: </strong>We propose an organ and pathology contextual-supervised hashing approach (OPHash) learned using three types of samples (called bags) to learn accurate hash representation. Because only semantic similarity is inadequate to incorporate with these bags, we introduce relational similarity to generate identical hash codes from most similar image pairs. OPHash is trained by minimizing classification loss, two retrieval losses implemented using Cauchy cross-entropy and maximizing discriminator loss over training samples.</p><p><strong>Results: </strong>Experiments are performed with two radiology datasets derived from the publicly available datasets. OPHash achieves 24% higher mean average precision than the state-of-the-art for top-100 retrieval.</p><p><strong>Conclusion: </strong>OPHash retrieves images with semantic similarity of organs and their associated pathology. It is agnostic to image size as well. This method improves retrieval efficiency across diverse medical imaging datasets, accommodating multiple organs and pathologies. The code is available at https://github.com/asimmanna17/OPHash.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 1","pages":"017503"},"PeriodicalIF":1.9,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11838790/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143469590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Micael Oliveira Diniz, Mohammad Khalil, Erika Fagman, Jenny Vikgren, Faiz Haj, Angelica Svalkvist, Magnus Båth, Åse Allansdotter Johnsson
{"title":"Lung nodule localization and size estimation on chest tomosynthesis.","authors":"Micael Oliveira Diniz, Mohammad Khalil, Erika Fagman, Jenny Vikgren, Faiz Haj, Angelica Svalkvist, Magnus Båth, Åse Allansdotter Johnsson","doi":"10.1117/1.JMI.12.S1.S13007","DOIUrl":"https://doi.org/10.1117/1.JMI.12.S1.S13007","url":null,"abstract":"<p><strong>Purpose: </strong>We aim to investigate the localization, visibility, and measurement of lung nodules in digital chest tomosynthesis (DTS).</p><p><strong>Approach: </strong>Computed tomography (CT), maximum intensity projections (CT-MIP) (transaxial versus coronal orientation), and computer-aided detection (CAD) were used as location reference, and inter- and intra-observer agreement regarding lung nodule size was assessed. Five radiologists analyzed DTS and CT images from 24 participants with lung <math><mrow><mtext>nodules</mtext> <mo>≥</mo> <mn>100</mn> <mtext> </mtext> <msup><mrow><mi>mm</mi></mrow> <mrow><mn>3</mn></mrow> </msup> </mrow> </math> , focusing on lung nodule localization, visibility, and measurement on DTS. Visual grading was used to compare if coronal or transaxial CT-MIP better facilitated the localization of lung nodules in DTS.</p><p><strong>Results: </strong>The majority of the lung nodules (79%) were rated as visible in DTS, although less clearly in comparison with CT. Coronal CT-MIP was the preferred orientation in the task of locating nodules on DTS. On DTS, area-based lung nodule size estimates resulted in significantly less measurement variability when compared with nodule size estimated based on mean diameter (mD) ( <math><mrow><mi>p</mi> <mo><</mo> <mn>0.05</mn></mrow> </math> ). Also, on DTS, area-based lung nodule size estimates were more accurate ( <math><mrow><mi>SEE</mi> <mo>=</mo> <mn>38.7</mn> <mtext> </mtext> <msup><mi>mm</mi> <mn>3</mn></msup> </mrow> </math> ) than lung nodule size estimates based on mean diameter ( <math><mrow><mi>SEE</mi> <mo>=</mo> <mn>42.7</mn> <mtext> </mtext> <msup><mi>mm</mi> <mn>3</mn></msup> </mrow> </math> ).</p><p><strong>Conclusions: </strong>Coronal CT-MIP images are superior to transaxial CT-MIP images in facilitating lung nodule localization in DTS. Most <math><mrow><mtext>nodules</mtext> <mo>≥</mo> <mn>100</mn> <mtext> </mtext> <msup><mrow><mi>mm</mi></mrow> <mrow><mn>3</mn></mrow> </msup> </mrow> </math> found on CT can be visualized, correctly localized, and measured in DTS, and area-based measurement may be the key to more precise and less variable nodule measurements on DTS.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 Suppl 1","pages":"S13007"},"PeriodicalIF":1.9,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11514701/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evolution of tomosynthesis.","authors":"Mitchell M Goodsitt, Andrew D A Maidment","doi":"10.1117/1.JMI.12.S1.S13012","DOIUrl":"10.1117/1.JMI.12.S1.S13012","url":null,"abstract":"<p><strong>Purpose: </strong>Tomosynthesis is a limited-angle multi-projection method that was conceived to address a significant limitation of conventional single-projection x-ray imaging: the overlap of structures in an image. We trace the historical evolution of tomosynthesis.</p><p><strong>Approach: </strong>Relevant papers are discussed including descriptions of technical advances and clinical applications.</p><p><strong>Results: </strong>We start with the invention of tomosynthesis by Ziedses des Plantes in the Netherlands and Kaufman in the United States in the mid-1930s and end with our predictions of future technical advances. Some of the other topics that are covered include a respiratory-gated chest tomosynthesis system of the late 1930s, film-based systems of the 1960s and 1970s, coded aperture tomosynthesis, fluoroscopy tomosynthesis, digital detector-based tomosynthesis for imaging the breast and body, orthopedic, dental and radiotherapy applications, optimization of acquisition parameters for breast and body tomosynthesis, reconstruction methods, characteristics of present-day tomosynthesis systems, x-ray tubes, and promising new applications including contrast-enhanced and multimodal breast imaging systems.</p><p><strong>Conclusion: </strong>Tomosynthesis has had an exciting history that continues today. This should serve as a foundation for other papers in the special issue \"Celebrating Digital Tomosynthesis: Past, Present and Future\" in the <i>Journal of Medical Imaging</i>.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 Suppl 1","pages":"S13012"},"PeriodicalIF":1.9,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11817815/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143415683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Breathing motion compensation in chest tomosynthesis: evaluation of the effect on image quality and presence of artifacts.","authors":"Maral Mirzai, Jenny Nilsson, Patrik Sund, Rauni Rossi Norrlund, Micael Oliveira Diniz, Bengt Gottfridsson, Ida Häggström, Åse A Johnsson, Magnus Båth, Angelica Svalkvist","doi":"10.1117/1.JMI.12.S1.S13004","DOIUrl":"https://doi.org/10.1117/1.JMI.12.S1.S13004","url":null,"abstract":"<p><strong>Purpose: </strong>Chest tomosynthesis (CTS) has a relatively longer acquisition time compared with chest X-ray, which may increase the risk of motion artifacts in the reconstructed images. Motion artifacts induced by breathing motion adversely impact the image quality. This study aims to reduce these artifacts by excluding projection images identified with breathing motion prior to the reconstruction of section images and to assess if motion compensation improves overall image quality.</p><p><strong>Approach: </strong>In this study, 2969 CTS examinations were analyzed to identify examinations where breathing motion has occurred using a method based on localizing the diaphragm border in each of the projection images. A trajectory over diaphragm positions was estimated from a second-order polynomial curve fit, and projection images where the diaphragm border deviated from the trajectory were removed before reconstruction. The image quality between motion-compensated and uncompensated examinations was evaluated using the image quality criteria for anatomical structures and image artifacts in a visual grading characteristic (VGC) study. The resulting rating data were statistically analyzed using the software VGC analyzer.</p><p><strong>Results: </strong>A total of 58 examinations were included in this study with breathing motion occurring either at the beginning or end ( <math><mrow><mi>n</mi> <mo>=</mo> <mn>17</mn></mrow> </math> ) or throughout the entire acquisition ( <math><mrow><mi>n</mi> <mo>=</mo> <mn>41</mn></mrow> </math> ). In general, no significant difference in image quality or presence of motion artifacts was shown between the motion-compensated and uncompensated examinations. However, motion compensation significantly improved the image quality and reduced the motion artifacts in cases where motion occurred at the beginning or end. In examinations where motion occurred throughout the acquisition, motion compensation led to a significant increase in ripple artifacts and noise.</p><p><strong>Conclusions: </strong>Compensation for respiratory motion in CTS by excluding projection images may improve the image quality if the motion occurs mainly at the beginning or end of the examination. However, the disadvantages of excluding projections may outweigh the benefits of motion compensation.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 Suppl 1","pages":"S13004"},"PeriodicalIF":1.9,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11399550/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142298677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dye amount quantification of Papanicolaou-stained cytological images by multispectral unmixing: spectral analysis of cytoplasmic mucin.","authors":"Saori Takeyama, Tomoaki Watanabe, Nanxin Gong, Masahiro Yamaguchi, Takumi Urata, Fumikazu Kimura, Keiko Ishii","doi":"10.1117/1.JMI.12.1.017501","DOIUrl":"10.1117/1.JMI.12.1.017501","url":null,"abstract":"<p><strong>Purpose: </strong>The color of Papanicolaou-stained specimens is a crucial feature in cytology diagnosis. However, the quantification of color using digital images is challenging due to the variations in the staining process and characteristics of imaging equipment. The dye amount estimation of stained specimens is helpful for quantitatively interpreting the color based on a physical model. It has been realized with color unmixing and applied to staining with three or fewer dyes. Nevertheless, the Papanicolaou stain comprises five dyes. Thus, we employ multispectral imaging with more channels for quantitative analysis of the Papanicolaou-stained cervical cytology samples.</p><p><strong>Approach: </strong>We estimate the dye amount map from a 14-band multispectral observation capturing a Papanicolaou-stained specimen using the actual measured spectral characteristics of the single-stained samples. The estimated dye amount maps were employed for the quantitative interpretation of the color of cytoplasmic mucin of lobular endocervical glandular hyperplasia (LEGH) and normal endocervical (EC) cells in a uterine cervical lesion.</p><p><strong>Results: </strong>We demonstrated the dye amount estimation performance of the proposed method using single-stain images and Papanicolaou-stain images. Moreover, the yellowish color in the LEGH cells is found to be interpreted with more orange G (OG) and less Eosin Y (EY) dye amounts. We also elucidated that LEGH and EC cells could be classified using linear classifiers from the dye amount.</p><p><strong>Conclusions: </strong>Multispectral imaging enables the quantitative analysis of dye amount maps of Papanicolaou-stained cytology specimens. The effectiveness is demonstrated in interpreting and classifying the cytoplasmic mucin of EC and LEGH cells in cervical cytology.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 1","pages":"017501"},"PeriodicalIF":1.9,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11681424/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142903877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving coronary artery segmentation with self-supervised learning and automated pericoronary adipose tissue segmentation: a multi-institutional study on coronary computed tomography angiography images.","authors":"Justin N Kim, Yingnan Song, Hao Wu, Ananya Subramaniam, Jihye Lee, Mohamed H E Makhlouf, Neda S Hassani, Sadeer Al-Kindi, David L Wilson, Juhwan Lee","doi":"10.1117/1.JMI.12.1.016002","DOIUrl":"10.1117/1.JMI.12.1.016002","url":null,"abstract":"<p><strong>Purpose: </strong>Coronary artery disease (CAD) is a leading cause of morbidity and mortality worldwide, with coronary computed tomography angiography (CCTA) playing a crucial role in its diagnosis. The mean Hounsfield unit (HU) of pericoronary adipose tissue (PCAT) is linked to cardiovascular risk. We utilized a self-supervised learning framework (SSL) to improve the accuracy and generalizability of coronary artery segmentation on CCTA volumes while addressing the limitations of small-annotated datasets.</p><p><strong>Approach: </strong>We utilized self-supervised pretraining followed by supervised fine-tuning to segment coronary arteries. To evaluate the data efficiency of SSL, we varied the number of CCTA volumes used during pretraining. In addition, we developed an automated PCAT segmentation algorithm utilizing centerline extraction, spatial-geometric coronary identification, and landmark detection. We evaluated our method on a multi-institutional dataset by assessing coronary artery and PCAT segmentation accuracy via Dice scores and comparing mean PCAT HU values with the ground truth.</p><p><strong>Results: </strong>Our approach significantly improved coronary artery segmentation, achieving Dice scores up to 0.787 after self-supervised pretraining. The automated PCAT segmentation achieved near-perfect performance, with <math><mrow><mi>R</mi></mrow> </math> -squared values of 0.9998 for both the left anterior descending artery and the right coronary artery indicating excellent agreement between predicted and actual mean PCAT HU values. Self-supervised pretraining notably enhanced model generalizability on external datasets, improving overall segmentation accuracy.</p><p><strong>Conclusions: </strong>We demonstrate the potential of SSL to advance CCTA image analysis, enabling more accurate CAD diagnostics. Our findings highlight the robustness of SSL for automated coronary artery and PCAT segmentation, offering promising advancements in cardiovascular care.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 1","pages":"016002"},"PeriodicalIF":1.9,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11831809/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Weakly supervised pathological differentiation of primary central nervous system lymphoma and glioblastoma on multi-site whole slide images.","authors":"Liping Wang, Lin Chen, Kaixi Wei, Huiyu Zhou, Reyer Zwiggelaar, Weiwei Fu, Yingchao Liu","doi":"10.1117/1.JMI.12.1.017502","DOIUrl":"10.1117/1.JMI.12.1.017502","url":null,"abstract":"<p><strong>Purpose: </strong>Differentiating primary central nervous system lymphoma (PCNSL) and glioblastoma (GBM) is crucial because their prognosis and treatment differ substantially. Manual examination of their histological characteristics is considered the golden standard in clinical diagnosis. However, this process is tedious and time-consuming and might lead to misdiagnosis caused by morphological similarity between their histology and tumor heterogeneity. Existing research focuses on radiological differentiation, which mostly uses multi-parametric magnetic resonance imaging. By contrast, we investigate the pathological differentiation between the two types of tumors using whole slide images (WSIs) of postoperative formalin-fixed paraffin-embedded samples.</p><p><strong>Approach: </strong>To learn the specific and intrinsic histological feature representations from the WSI patches, a self-supervised feature extractor is trained. Then, the patch representations are fused by feeding into a weakly supervised multiple-instance learning model for the WSI classification. We validate our approach on 134 PCNSL and 526 GBM cases collected from three hospitals. We also investigate the effect of feature extraction on the final prediction by comparing the performance of applying the feature extractors trained on the PCNSL/GBM slides from specific institutions, multi-site PCNSL/GBM slides, and large-scale histopathological images.</p><p><strong>Results: </strong>Different feature extractors perform comparably with the overall area under the receiver operating characteristic curve value exceeding 85% for each dataset and close to 95% for the combined multi-site dataset. Using the institution-specific feature extractors generally obtains the best overall prediction with both of the PCNSL and GBM classification accuracies reaching 80% for each dataset.</p><p><strong>Conclusions: </strong>The excellent classification performance suggests that our approach can be used as an assistant tool to reduce the pathologists' workload by providing an accurate and objective second diagnosis. Moreover, the discriminant regions indicated by the generated attention heatmap improve the model interpretability and provide additional diagnostic information.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 1","pages":"017502"},"PeriodicalIF":1.9,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11724367/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142972751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Breast cancer classification in point-of-care ultrasound imaging-the impact of training data.","authors":"Jennie Karlsson, Ida Arvidsson, Freja Sahlin, Kalle Åström, Niels Christian Overgaard, Kristina Lång, Anders Heyden","doi":"10.1117/1.JMI.12.1.014502","DOIUrl":"10.1117/1.JMI.12.1.014502","url":null,"abstract":"<p><strong>Purpose: </strong>The survival rate of breast cancer for women in low- and middle-income countries is poor compared with that in high-income countries. Point-of-care ultrasound (POCUS) combined with deep learning could potentially be a suitable solution enabling early detection of breast cancer. We aim to improve a classification network dedicated to classifying POCUS images by comparing different techniques for increasing the amount of training data.</p><p><strong>Approach: </strong>Two data sets consisting of breast tissue images were collected, one captured with POCUS and another with standard ultrasound (US). The data sets were expanded by using different techniques, including augmentation, histogram matching, histogram equalization, and cycle-consistent adversarial networks (CycleGANs). A classification network was trained on different combinations of the original and expanded data sets. Different types of augmentation were investigated and two different CycleGAN approaches were implemented.</p><p><strong>Results: </strong>Almost all methods for expanding the data sets significantly improved the classification results compared with solely using POCUS images during the training of the classification network. When training the classification network on POCUS and CycleGAN-generated POCUS images, it was possible to achieve an area under the receiver operating characteristic curve of 95.3% (95% confidence interval 93.4% to 97.0%).</p><p><strong>Conclusions: </strong>Applying augmentation during training showed to be important and increased the performance of the classification network. Adding more data also increased the performance, but using standard US images or CycleGAN-generated POCUS images gave similar results.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 1","pages":"014502"},"PeriodicalIF":1.9,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11740782/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143014090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scatter correction for contrast-enhanced digital breast tomosynthesis with a dual-layer detector.","authors":"Xiangyi Wu, Xiaoyu Duan, Hailiang Huang, Wei Zhao","doi":"10.1117/1.JMI.12.S1.S13008","DOIUrl":"10.1117/1.JMI.12.S1.S13008","url":null,"abstract":"<p><strong>Purpose: </strong>Contrast-enhanced digital breast tomosynthesis (CEDBT) highlights breast tumors with neo-angiogenesis. A recently proposed CEDBT system with a dual-layer (DL) flat-panel detector enables simultaneous acquisition of high-energy (HE) and low-energy (LE) projection images with a single exposure, which reduces acquisition time and eliminates motion artifacts. However, x-ray scatter degrades image quality and lesion detectability. We propose a practical method for accurate and robust scatter correction (SC) for DL-CEDBT.</p><p><strong>Approach: </strong>The proposed hybrid SC method combines the advantages of a two-kernel iterative convolution method and an empirical interpolation strategy, which accounts for the reduced scatter from the peripheral breast region due to thickness roll-off and the scatter contribution from the region outside the breast. Scatter point spread functions were generated using Monte Carlo simulations with different breast glandular fractions, compressed thicknesses, and projection angles. Projection images and ground truth scatter maps of anthropomorphic digital breast phantoms were simulated to evaluate the performance of the proposed SC method and three other kernel- and interpolation-based methods. The mean absolute relative error (MARE) between scatter estimates and ground truth was used as the metric for SC accuracy.</p><p><strong>Results: </strong>DL-CEDBT shows scatter characteristics different from dual-shot, primarily due to the two energy peaks of the incident spectrum and the structure of the DL detector. Compared with the other methods investigated, the proposed hybrid SC method showed superior accuracy and robustness, with MARE of <math><mrow><mo>∼</mo> <mn>3.1</mn> <mo>%</mo></mrow> </math> for all LE and HE projection images of different phantoms in both cranial-caudal and mediolateral-oblique views. After SC, cupping artifacts in the dual-energy image were removed, and the signal difference-to-noise ratio was improved by 82.0% for 8 mm iodine objects.</p><p><strong>Conclusions: </strong>A practical SC method was developed, which provided accurate and robust scatter estimates to improve image quality and lesion detectability for DL-CEDBT.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 Suppl 1","pages":"S13008"},"PeriodicalIF":1.9,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11615639/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142786642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}