{"title":"Learning-based multi-material CBCT image reconstruction with ultra-slow kV switching.","authors":"Chenchen Ma, Jiongtao Zhu, Xin Zhang, Han Cui, Yuhang Tan, Jinchuan Guo, Hairong Zheng, Dong Liang, Ting Su, Yi Sun, Yongshuai Ge","doi":"10.1177/08953996251331790","DOIUrl":"10.1177/08953996251331790","url":null,"abstract":"<p><p>Objective: The purpose of this study is to perform multiple (<math><mo>≥</mo><mn>3</mn></math>) material decomposition with a deep learning method for spectral cone-beam CT (CBCT) imaging based on ultra-slow kV switching. Approach: In this work, a novel deep neural network called SkV-Net is developed to reconstruct multiple material density images from the ultra-sparse spectral CBCT projections acquired using the ultra-slow kV switching technique. In particular, the SkV-Net has a U-Net backbone structure, and a multi-head axial attention module is adopted to enlarge the perceptual field. It takes the CT images reconstructed from each kV as input and automatically outputs the basis material images based on their energy-dependent attenuation characteristics. Numerical simulations and experimental studies are carried out to evaluate the performance of this new approach. Main Results: It is demonstrated that the SkV-Net is able to generate four different material density images, i.e., fat, muscle, bone and iodine, from five spans of kV-switched spectral projections. 
Physical experiments show that the decomposition errors of iodine and CaCl<math><msub><mrow></mrow><mn>2</mn></msub></math> are less than 6<math><mi>%</mi></math>, indicating high precision of this novel approach in distinguishing materials. Significance: SkV-Net provides a promising multi-material decomposition approach for spectral CBCT imaging systems implemented with the ultra-slow kV switching scheme.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"831-843"},"PeriodicalIF":1.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144056865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
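The image-domain side of basis-material decomposition described above can be illustrated with a toy example: if each pixel's attenuation at the five kV settings is modeled as a linear combination of basis-material densities, those densities can be recovered per pixel by least squares. This is a minimal sketch of the decomposition problem only, not the SkV-Net method; the 5×4 energy/material coefficient matrix here is randomly generated, not real attenuation data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical matrix of basis-material attenuation coefficients:
# rows = five kV channels, columns = four materials (fat, muscle, bone, iodine).
M = rng.uniform(0.1, 1.0, size=(5, 4))

# True material densities for 64 pixels, and the simulated 5-channel attenuation.
rho_true = rng.uniform(0.0, 1.0, size=(4, 64))
mu = M @ rho_true

# Least-squares decomposition for all pixels at once.
rho_hat, *_ = np.linalg.lstsq(M, mu, rcond=None)
print(np.max(np.abs(rho_hat - rho_true)))  # recovery error on noise-free data
```

With noise-free data and a full-rank matrix the recovery is essentially exact; the deep network in the paper is needed precisely because real ultra-sparse, noisy projections make this inversion ill-posed.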
{"title":"Erratum to \"Mask R-CNN assisted diagnosis of spinal tuberculosis\".","authors":"","doi":"10.1177/08953996251346352","DOIUrl":"10.1177/08953996251346352","url":null,"abstract":"","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1012"},"PeriodicalIF":1.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144133174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MHASegNet: A multi-scale hybrid aggregation network of segmenting coronary artery from CCTA images.","authors":"Shang Li, Yanan Wu, Bojun Jiang, Lingkai Liu, Tiande Zhang, Yu Sun, Jie Hou, Patrice Monkam, Wei Qian, Shouliang Qi","doi":"10.1177/08953996251346484","DOIUrl":"10.1177/08953996251346484","url":null,"abstract":"<p><strong>Background: </strong>Segmentation of coronary arteries in Coronary Computed Tomography Angiography (CCTA) images is crucial for diagnosing coronary artery disease (CAD), but remains challenging due to small artery size, uneven contrast distribution, and issues like over-segmentation or omission.</p><p><strong>Objective: </strong>The aim of this study is to improve coronary artery segmentation in CCTA images using both conventional and deep learning techniques.</p><p><strong>Methods: </strong>We propose MHASegNet, a lightweight network for coronary artery segmentation, combined with a tailored refinement method. MHASegNet employs multi-scale hybrid attention to capture global and local features, and integrates a 3D context anchor attention module to focus on key coronary artery structures while suppressing background noise. An iterative, region-growth-based refinement addresses crown breaks and reduces false alarms. We evaluated the method on an in-house dataset of 90 subjects and two public datasets with 1060 subjects.</p><p><strong>Results: </strong>MHASegNet, coupled with tailored refinement, outperforms state-of-the-art algorithms, achieving a Dice Similarity Coefficient (DSC) of 0.867 on the in-house dataset, 0.875 on the ASOCA dataset, and 0.827 on the ImageCAS dataset.</p><p><strong>Conclusion: </strong>The tailored refinement significantly reduces false positives and resolves most discontinuities, even for other networks. 
MHASegNet and the tailored refinement may aid in diagnosing and quantifying CAD following further validation.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"916-934"},"PeriodicalIF":1.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
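One ingredient of a region-growth-based refinement like the one above is removing small isolated components ("false alarms") from a binary segmentation mask. The sketch below grows a region from each unvisited foreground pixel and discards regions below a size threshold; it is a 2D, 4-connectivity illustration with an arbitrary threshold, not the paper's full refinement (which also bridges vessel discontinuities).

```python
from collections import deque

import numpy as np

def remove_small_components(mask, min_size):
    """Keep only connected components of `mask` with at least `min_size` pixels."""
    mask = mask.astype(bool)
    keep = np.zeros_like(mask)
    visited = np.zeros_like(mask)
    for seed in zip(*np.nonzero(mask)):
        if visited[seed]:
            continue
        # Grow a region from the seed (BFS, 4-connectivity).
        region, queue = [], deque([seed])
        visited[seed] = True
        while queue:
            y, x = queue.popleft()
            region.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not visited[ny, nx]):
                    visited[ny, nx] = True
                    queue.append((ny, nx))
        if len(region) >= min_size:
            for p in region:
                keep[p] = True
    return keep

m = np.zeros((8, 8), dtype=bool)
m[1:5, 1:3] = True   # an 8-pixel "vessel" segment
m[6, 6] = True       # a 1-pixel false alarm
cleaned = remove_small_components(m, min_size=4)
print(cleaned.sum())  # the isolated pixel is gone, the segment survives
```

In 3D coronary masks the same idea applies with 6- or 26-connectivity neighborhoods; libraries such as `scipy.ndimage.label` provide this without hand-rolled BFS.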
{"title":"Basic acceleration technique with theoretical analysis on iterative algorithms for image reconstruction.","authors":"Shuhua Ji, Boyan Ren, Xing Zhao, Xuying Zhao","doi":"10.1177/08953996251335119","DOIUrl":"10.1177/08953996251335119","url":null,"abstract":"<p><p>In image reconstruction and processing, incorporating prior information, particularly the nonnegativity of pixel values, is essential. Existing computed tomography (CT) iterative reconstruction algorithms, including the algebraic reconstruction technique (ART), simultaneous ART (SART), and the simultaneous iterative reconstruction technique (SIRT), typically address negative components during the iteration process by either setting them to zero, introducing regularization terms to prevent negativity, or leaving them unchanged. This paper establishes a general framework in which enforcing the nonnegativity prior accelerates the convergence of the reconstructed image toward the true solution. Within this framework, we propose two efficient and simple acceleration techniques: setting negative pixel values to their absolute values and updating them to the estimated values from the previous update. Experiments were conducted using ART, SIRT, and SART algorithms, integrated with the corresponding acceleration techniques, on full-angle, limited-angle, and noisy simulated data, as well as real data. The results validate the effectiveness of the proposed acceleration methods by evaluating image quality using the PSNR and SSIM metrics. 
Notably, the proposed technique that sets negative pixel values to their absolute values is strongly recommended, as it significantly outperforms the existing technique that sets them to zero, both in terms of image quality and iteration time.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"844-865"},"PeriodicalIF":1.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144056041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
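The two nonnegativity-handling techniques compared above are easy to prototype. The sketch below runs a toy Landweber/SIRT-style iteration on a small consistent system and applies either "set negatives to zero" or "set negatives to their absolute values" after each update; the system, step size, and iteration count are illustrative and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((20, 8))   # toy "projection" matrix
x_true = rng.random(8)    # nonnegative ground-truth image
b = A @ x_true            # noise-free projection data

def reconstruct(handle_negative, n_iter=5000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # convergent gradient step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (b - A @ x)  # gradient step on ||Ax - b||^2
        x = handle_negative(x)            # enforce the nonnegativity prior
    return x

x_clip = reconstruct(lambda x: np.maximum(x, 0.0))  # negatives -> zero
x_abs = reconstruct(np.abs)                          # negatives -> |value|

print(np.linalg.norm(x_clip - x_true), np.linalg.norm(x_abs - x_true))
```

Both operations are nonexpansive toward a nonnegative solution (for t ≥ 0, ||a| − t| ≤ |a − t|), so neither breaks convergence on consistent data; the paper's comparison of their speed and image quality on CT data is what this toy does not capture.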
{"title":"Proposal of a phantom for analyzing out-of-plane artifact in digital breast tomosynthesis.","authors":"Emu Yamamoto, Keisuke Kondo, Masato Imahana, Mayumi Otani, Ayako Yoshida, Miki Okazaki","doi":"10.1177/08953996251351621","DOIUrl":"10.1177/08953996251351621","url":null,"abstract":"<p><p>Background: Out-of-plane artifacts in digital breast tomosynthesis (DBT) can affect image quality, even subtly, and are influenced by the size and z-position of contrast-bearing features in clinical images. Objective: To propose a phantom and metric to further characterize out-of-plane artifacts in DBT. Methods: Phantoms with an inserted signal were manufactured, and the reconstructed planes were obtained using the DBT system. Normalized maximum contrast within the plane area was used to quantitatively evaluate out-of-plane artifacts. The spread of out-of-plane artifacts within the reconstructed plane was qualitatively evaluated by observing the profile within the plane area. Results: The larger the signal diameter, the stronger the effect of out-of-plane artifacts at z-positions far from the in-focus plane. When the signal's z-position was above the center of X-ray tube rotation, out-of-plane artifacts were stronger above and weaker below the signal. 
The spread of out-of-plane artifacts in the off-focus plane changed from monomodal to bimodal with increasing distance from the signal's z-location. Conclusions: This work proposes new phantoms and analysis methods to investigate the characteristics of out-of-plane artifacts, supplementing conventional methods.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"945-958"},"PeriodicalIF":1.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144499019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
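A contrast metric of the kind described in the abstract above can be prototyped on a synthetic plane stack. The exact definition used in the paper is not reproduced here; this sketch assumes contrast = (maximum in the signal ROI − background mean) / background mean, normalized by its value at the in-focus plane, and uses a toy disc signal that blurs away from focus.

```python
import numpy as np

def normalized_max_contrast(planes, roi, bg):
    """planes: (z, y, x) stack; roi/bg: boolean masks for signal and background."""
    contrast = np.array(
        [(p[roi].max() - p[bg].mean()) / p[bg].mean() for p in planes]
    )
    return contrast / contrast.max()  # 1.0 at the in-focus plane

# Toy stack: a bright disc on a uniform background, sharpest at plane z = 2.
yy, xx = np.mgrid[0:32, 0:32]
r2 = (yy - 16) ** 2 + (xx - 16) ** 2
planes = np.stack(
    [1.0 + np.exp(-abs(z - 2)) * (r2 < 16) for z in range(5)]
)
roi, bg = r2 < 16, r2 > 100
nmc = normalized_max_contrast(planes, roi, bg)
print(nmc)  # peaks at the in-focus plane, decaying above and below it
```

Plotting `nmc` against z for different signal diameters would reproduce the style of analysis the phantom is designed for.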
{"title":"Statistical cone-beam CT noise reduction with multiscale decomposition and penalized weighted least squares in the projection domain.","authors":"Shaojie Tang, Jin Liu, Guo Li, Zhiwei Qiao, Yang Chen, Xuanqin Mou","doi":"10.1177/08953996251337889","DOIUrl":"10.1177/08953996251337889","url":null,"abstract":"<p><strong>Purposes: </strong> Suppressing noise can effectively improve image quality and save radiation dose in clinical imaging with x-ray computed tomography (CT). To date, numerous statistical noise reduction approaches have been proposed in the image domain, the projection domain, or both. In particular, a multiscale decomposition strategy can be exploited to enhance the performance of noise suppression while preserving image sharpness. Recognizing the inherent advantage of noise suppression in the projection domain, we have previously proposed a projection domain multiscale penalized weighted least squares (PWLS) method for fan-beam CT imaging, wherein the sampling intervals are explicitly taken into account to accommodate possible variations in sampling rate. In this work, we extend our previous method to cone-beam (CB) CT imaging, which is more relevant to practical imaging applications.</p><p><strong>Methods: </strong> The projection domain multiscale PWLS method is derived for CBCT imaging by converting an isotropic diffusion partial differential equation (PDE) in the three-dimensional (3D) image domain into its counterpart in the CB projection domain. With adoption of the Markov random field (MRF) objective function, the CB projection domain multiscale PWLS method suppresses noise at each scale. 
The performance of the proposed method for statistical noise reduction in CBCT imaging is experimentally evaluated and verified using projection data acquired by an actual micro-CT scanner.</p><p><strong>Results: </strong> Preliminary results show that the proposed CB projection domain multiscale PWLS method outperforms the CB projection domain single-scale PWLS, the 3D image domain discriminative feature representation (DFR), and the 3D image domain multiscale nonlinear diffusion methods in noise reduction. Moreover, the proposed method can preserve image sharpness effectively while avoiding the generation of new artifacts.</p><p><strong>Conclusions: </strong> Since the sampling intervals are explicitly taken into account in the projection domain multiscale decomposition, the proposed method would be beneficial to advanced applications where CBCT imaging is employed and sampling rates vary.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"959-977"},"PeriodicalIF":1.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144638504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multi-stage training and deep supervision based segmentation approach for 3D abdominal multi-organ segmentation.","authors":"Panpan Wu, Peng An, Ziping Zhao, Runpeng Guo, Xiaofeng Ma, Yue Qu, Yurou Xu, Hengyong Yu","doi":"10.1177/08953996251355806","DOIUrl":"10.1177/08953996251355806","url":null,"abstract":"<p><p>Accurate X-ray computed tomography (CT) image segmentation of the abdominal organs is fundamental for diagnosing abdominal diseases, planning cancer treatment, and formulating radiotherapy strategies. However, existing deep learning based models for three-dimensional (3D) abdominal CT multi-organ segmentation face challenges, including complex organ distribution, scarcity of labeled data, and diversity of organ structures, leading to difficulties in model training and convergence and low segmentation accuracy. To address these issues, a novel segmentation approach based on multi-stage training and a deep supervision model is proposed. It primarily integrates multi-stage training, a pseudo-labeling technique, and a deep supervision model with an attention mechanism (DLAU-Net), specifically designed for 3D abdominal multi-organ segmentation. The DLAU-Net enhances segmentation performance and model adaptability through an improved network architecture. The multi-stage training strategy accelerates model convergence and enhances generalizability, effectively addressing the diversity of abdominal organ structures. The introduction of pseudo-labeling training alleviates the bottleneck of labeled data scarcity and further improves the model's generalization performance and training efficiency. Experiments were conducted on a large dataset provided by the FLARE 2023 Challenge. Comprehensive ablation studies and comparative experiments were conducted to validate the effectiveness of the proposed method. 
Our method achieves an average organ accuracy (AVG) of 90.5% and a Dice Similarity Coefficient (DSC) of 89.05% and exhibits exceptional performance in terms of training speed and handling data diversity, particularly in the segmentation tasks of critical abdominal organs such as the liver, spleen, and kidneys, significantly outperforming existing comparative methods.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"998-1011"},"PeriodicalIF":1.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144651078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"COVID-19CT+: A public dataset of CT images for COVID-19 retrospective analysis.","authors":"Yihao Sun, Tianming Du, Bin Wang, Md Mamunur Rahaman, Xinghao Wang, Xinyu Huang, Tao Jiang, Marcin Grzegorzek, Hongzan Sun, Jian Xu, Chen Li","doi":"10.1177/08953996251332793","DOIUrl":"10.1177/08953996251332793","url":null,"abstract":"<p><p>Background and Objective: COVID-19 is considered the biggest global health disaster of the 21st century, with an enormous impact on the world. Methods: This paper publishes a publicly available dataset of CT images of multiple types of pneumonia (COVID-19CT+). Specifically, the dataset contains 409,619 CT images of 1333 patients, with subset-A containing 312 community-acquired pneumonia cases and subset-B containing 1021 COVID-19 cases. To demonstrate that classification methods from different periods perform differently on COVID-19CT+, we selected 13 classical machine learning classifiers and 5 deep learning classifiers for the image classification task. Results: In this study, two sets of experiments are conducted using traditional machine learning and deep learning methods: the first classifies COVID-19 versus COVID-19 white lung disease in subset-B, and the second classifies community-acquired pneumonia in subset-A versus COVID-19 in subset-B. On the first set of experiments, the accuracy of traditional machine learning reaches a maximum of 97.3% and a minimum of only 62.6%, while deep learning algorithms reach a maximum of 97.9% and a minimum of 85.7%. On the second set of experiments, traditional machine learning reaches a high of 94.6% accuracy and a low of 56.8%. 
Deep learning algorithms reach a high of 91.9% and a low of 86.3%. Conclusions: The COVID-19CT+ dataset covers a large number of CT images of patients with COVID-19 and community-acquired pneumonia and is one of the largest datasets available. We expect that this dataset will attract more researchers to explore new automated diagnostic algorithms, contributing to improved diagnostic accuracy and efficiency for COVID-19.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"901-915"},"PeriodicalIF":1.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144129451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An improved U-NET3+ with transformer and adaptive attention map for lung segmentation.","authors":"V Joseph Raj, P Christopher","doi":"10.1177/08953996251351623","DOIUrl":"10.1177/08953996251351623","url":null,"abstract":"<p><p>Accurate segmentation of lung regions from CT scan images is critical for diagnosing and monitoring respiratory diseases. This study introduces a novel hybrid architecture, Adaptive Attention U-NetAA, which combines the strengths of U-Net3+ and Transformer-based attention mechanisms for high-precision lung segmentation. The U-Net3+ module effectively segments the lung region by leveraging its deep convolutional network with nested skip connections, ensuring rich multi-scale feature extraction. A key innovation is the adaptive attention mechanism within the Transformer module, which dynamically adjusts the focus on critical regions in the image based on local and global contextual relationships. This adaptive attention mechanism addresses variations in lung morphology, image artifacts, and low-contrast regions, leading to improved segmentation accuracy. The combined convolutional and attention-based architecture enhances robustness and precision. Experimental results on benchmark CT datasets demonstrate that the proposed model achieves an IoU of 0.984, a Dice coefficient of 0.989, a MIoU of 0.972, and an HD95 of 1.22 mm, surpassing state-of-the-art methods. 
These results establish U-NetAA as a superior tool for clinical lung segmentation, with enhanced accuracy, sensitivity, and generalization capability.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"978-997"},"PeriodicalIF":1.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144627591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-domain information fusion diffusion model (MDIF-DM) for limited-angle computed tomography.","authors":"Genwei Ma, Dimeng Xia, Shusen Zhao","doi":"10.1177/08953996251339368","DOIUrl":"10.1177/08953996251339368","url":null,"abstract":"<p><p>Background: Limited-angle computed tomography imaging suffers from severe artifacts in the reconstructed image due to incomplete projection data. Deep learning methods have recently been developed as a relatively effective way to address the robustness and low-contrast challenges of limited-angle CT reconstruction. Objective: To improve the contrast of current limited-angle CT reconstruction images and enhance the robustness of the reconstruction method. Method: In this paper, we propose a limited-angle CT reconstruction method that combines Fourier-domain reweighting and wavelet-domain enhancement, fusing information from different domains to obtain high-resolution reconstructed images. Results: We verified the feasibility and effectiveness of the proposed method through experiments, and the reconstruction results improve on state-of-the-art methods. Conclusions: The proposed method enhances features of the original image-domain data from different domains, which benefits the reasonable diffusion and restoration of fine texture details.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"935-944"},"PeriodicalIF":1.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144327571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}