{"title":"The deep radon prior-based stationary CT image reconstruction algorithm for two phase flow inspection.","authors":"Jiahao Chang, Shuo Xu, Zirou Jiang, Yucheng Zhang, Yuewen Sun","doi":"10.1177/08953996251322078","DOIUrl":"10.1177/08953996251322078","url":null,"abstract":"<p><p>Investigating the state of two-phase flow in heat transfer pipes is crucial for ensuring reactor safety and enhancing operational efficiency. Current measurement methods fail to address the requirements for identifying flow patterns and void fractions in high-velocity two-phase flow within small-diameter alloy steel pipes. The laboratory proposes a method for measuring high-velocity two-phase flow utilizing stationary computed tomography (CT) and verifies its feasibility. Constrained by the overall physical arrangement of the system, the CT system can only gather under complete sparse projection data. We propose an unsupervised deep learning algorithm called Deep Radon Prior (DRP). This algorithm directly reconstructs images from projection data by optimizing errors in radon domain. It leverages the neural network's capacity to learn regular information inherent in the image, in conjunction with an iterative algorithmic approach. Experimental results demonstrate the algorithm's effectiveness in suppressing image artifacts and noise, yielding significantly improved reconstruction quality compared to the Filtered Back Projection (FBP) and Alternating Direction Method of Multiplier - Total Variation (ADMM-TV) algorithms. This enhancement enables the visualization of small bubbles with a diameter of 0.3 mm. The DRP algorithm has wider applicability in fluids with different patterns in pipe and is more suitable for measurements of actual bubble flows.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":"33 4","pages":"726-741"},"PeriodicalIF":1.4,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144545828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inclusion of spatio-energetic charge sharing effect model for accurate photon counting CT simulation.","authors":"Jiabing Sheng, Dong Zeng, Zhaoying Bian, Mingqiang Li, Yongle Wu, Xin Li, YongShuai Ge, Jianhua Ma","doi":"10.1177/08953996251323725","DOIUrl":"10.1177/08953996251323725","url":null,"abstract":"<p><strong>Background: </strong>Photon counting CT has demonstrated exceptional performance in spatial resolution, density resolution, and image quality, earning recognition as a groundbreaking technology in medical imaging. However, its technical implementation continues to face substantial challenges, including charge sharing effects.</p><p><strong>Objective: </strong>To develop a spatio-energetic charge-sharing modulation model for accurate photon counting CT simulation (SmuSim). Specifically, SmuSim is built upon the previously developed photon counting toolkit (PcTK) and thoroughly incorporates the charge sharing effects that occur in photon counting CT.</p><p><strong>Methods: </strong>The proposed SmuSim firstly enrolls three primary modules, i.e., photon transport, charge transport, and charge induction to characterize the charge sharing effects in the photon counting CT imaging chain. Then, Monte Carlo simulation is also conducted to validate the feasibility of the proposed SmuSim with well-built charge sharing effects model.</p><p><strong>Results: </strong>Under diverse detector configurations, SmuSim's energy spectrum response curves exhibit a remarkable alignment with Monte Carlo simulations, in stark contrast to the Pctk results. In both digital and clinical phantom studies, SmuSim effectively simulates distorted photon counting CT images. In digital physical phantom simulations, the deviations in attenuation coefficient due to charge sharing effects are -49.70%, -19.66%, and -3.33% for the three energy bins, respectively. In digital clinical phantom simulations, the differences in attenuation coefficient are -19.92%, -4.98%, and -0.6%, respectively. In the two simulation studies, the deviations between the results obtained from SmuSim and those from Monte Carlo simulation are less than 3% and 2%, respectively, demonstrating the effectiveness of the proposed SmuSim.</p><p><strong>Conclusion: </strong>We analyze charge sharing effects in photon counting CT, a comprehensive analytical model, and finally simulate CT images with charge sharing effects for evaluation.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"695-712"},"PeriodicalIF":1.4,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143702002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual-threshold sample selection with latent tendency difference for label-noise-robust pneumoconiosis staging.","authors":"Shuming Zhang, Xueting Ren, Yan Qiang, Juanjuan Zhao, Ying Qiao, Huajie Yue","doi":"10.1177/08953996251319652","DOIUrl":"10.1177/08953996251319652","url":null,"abstract":"<p><p>BackgroundThe precise pneumoconiosis staging suffers from progressive pair label noise (PPLN) in chest X-ray datasets, because adjacent stages are confused due to unidentifialble and diffuse opacities in the lung fields. As deep neural networks are employed to aid the disease staging, the performance is degraded under such label noise.ObjectiveThis study improves the effectiveness of pneumoconiosis staging by mitigating the impact of PPLN through network architecture refinement and sample selection mechanism adjustment.MethodsWe propose a novel multi-branch architecture that incorporates the dual-threshold sample selection. Several auxiliary branches are integrated in a two-phase module to learn and predict the <i>progressive feature tendency</i>. A novel difference-based metric is introduced to iteratively obtained the instance-specific thresholds as a complementary criterion of dynamic sample selection. All the samples are finally partitioned into <i>clean</i> and <i>hard</i> sets according to dual-threshold criteria and treated differently by loss functions with penalty terms.ResultsCompared with the state-of-the-art, the proposed method obtains the best metrics (accuracy: 90.92%, precision: 84.25%, sensitivity: 81.11%, F1-score: 82.06%, and AUC: 94.64%) under real-world PPLN, and is less sensitive to the rise of synthetic PPLN rate. An ablation study validates the respective contributions of critical modules and demonstrates how variations of essential hyperparameters affect model performance.ConclusionsThe proposed method achieves substantial effectiveness and robustness against PPLN in pneumoconiosis dataset, and can further assist physicians in diagnosing the disease with a higher accuracy and confidence.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"665-682"},"PeriodicalIF":1.4,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143701998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RNAF: Regularization neural attenuation fields for sparse-view CBCT reconstruction.","authors":"Chunjie Xia, Tianyun Gu, Nan Zheng, Hongjiang Wei, Tsung-Yuan Tsai","doi":"10.1177/08953996241301661","DOIUrl":"10.1177/08953996241301661","url":null,"abstract":"<p><p>Cone beam computed tomography (CBCT) is increasingly used in clinical settings, with the radiation dose incurred during X-ray acquisition emerging as a critical concern. Traditional algorithms for reconstructing high-quality CBCT images typically necessitate hundreds of X-ray projections, prompting a shift towards sparse-view CBCT reconstruction as a means to minimize radiation exposure. A novel approach, leveraging the Neural Attenuation Field (NAF) based on neural radiation field algorithms, has recently gained traction. This method offers rapid and promising CBCT reconstruction outcomes using a mere 50 views. Nonetheless, NAF tends to overlook the inherent structural properties of projected images, which can lead to shortcomings in accurately capturing the structural essence of the object being imaged. To address these limitations, we introduce an enhanced method: Regularization Neural Attenuation Fields (RNAF). Our approach includes two key innovations. First, we implement a hash coding regularization technique designed to retain low-frequency details within the reconstructed images, thereby preserving essential structural information. Second, we incorporate a Local Patch Global (LPG) sampling strategy. This method focuses on extracting local geometric details from the projection image, ensuring that the intensity variations in randomly sampled X-rays closely mimic those in the actual projection image. Comparative analyses across various body parts (Chest, Jaw, Foot, Abdomen, Knee) reveal that RNAF substantially outperforms existing algorithms. Specifically, its reconstruction quality exceeds that of previous NeRF-based, optimization-based, and analysis algorithms by margins of at least 2.09 dB, 3.09 dB, and 13.84 dB respectively. This significant enhancement in performance underscores the potential of RNAF as a groundbreaking solution in the realm of CBCT imaging, offering a path towards achieving high-quality reconstructions with reduced radiation exposure. Our implementation is publically available at https://github.com/springXIACJ/FRNAF.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"713-725"},"PeriodicalIF":1.4,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143702029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LungNet-ViT: Efficient lung disease classification using a multistage vision transformer model from chest radiographs.","authors":"V Padmavathi, Kavitha Ganesan","doi":"10.1177/08953996251320262","DOIUrl":"10.1177/08953996251320262","url":null,"abstract":"<p><p>This research introduces a Multistage-Vision Transformer (Multistage-ViT) model for precisely classifying various lung diseases using chest radiographic (CXR) images. The dataset in the proposed method includes four classes: Normal, COVID-19, Viral Pneumonia and Lung Opacity. This model demonstrates its efficacy on imbalanced and balanced datasets by enhancing classifier accuracy through deep feature extraction. It integrates backbone models with the ViT architecture, creating rigorously hybrid configurations compared to their standalone counterparts. These hybrid models utilize optimized features for classification, significantly improving their performance. Notably, the multistage-ViT model achieved accuracies of 99.93% on an imbalanced dataset and 99.97% on a balanced dataset using the InceptionV3 combined with the ViT model. These findings highlight the superior accuracy and robustness of multistage-ViT models, underscoring their potential to enhance lung disease classification through advanced feature extraction and model integration techniques. The proposed model effectively demonstrates the benefits of employing ViT for deep feature extraction from CXR images.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"742-759"},"PeriodicalIF":1.4,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143732745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cone-beam computed laminography frequency domain information distribution and missing model.","authors":"Hui Han, Yu Han, Yanmin Sun, Liyang Zhang, Xiaoqi Xi, Lei Li, Bin Yan","doi":"10.1177/08953996251325786","DOIUrl":"10.1177/08953996251325786","url":null,"abstract":"<p><p>The objective of this study is to analyse and validate the distribution and missing regions in the frequency domain space of the projection information obtained from Cone-beam Computed Laminography (CBCL) scanned samples. Furthermore, the aim is to establish a frequency domain information distribution and missing model for CBCL. This paper employs the Fourier slice theorem to ascertain the spatial region of the frequency domain wherein the CBCL projection information is situated. To this end, the geometrical structure of the CL system and the spatial propagation characteristics of the cone-beam rays are subjected to analysis. Furthermore, the veracity of the model for the missing information in the CBCL frequency domain is validated through an iterative reconstruction process, whereby different regions of the frequency domain space are reconstructed through an iterative reconstruction algorithm that takes only the projection information as a constraint. The CBCL frequency domain missing information model can be employed as a priori information in the frequency domain space to facilitate further optimisation and improvement of image reconstruction.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"683-694"},"PeriodicalIF":1.4,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143701990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Promptable segmentation of CT lung lesions based on improved U-Net and Segment Anything model (SAM).","authors":"Wensong Yan, Yunhua Xu, Shiju Yan","doi":"10.1177/08953996251333364","DOIUrl":"10.1177/08953996251333364","url":null,"abstract":"<p><p>BackgroundComputed tomography (CT) is widely used in clinical diagnosis of lung diseases. The automatic segmentation of lesions in CT images aids in the development of intelligent lung disease diagnosis.ObjectiveThis study aims to address the issue of imprecise segmentation in CT images due to the blurred detailed features of lesions, which can easily be confused with surrounding tissues.MethodsWe proposed a promptable segmentation method based on an improved U-Net and Segment Anything model (SAM) to improve segmentation accuracy of lung lesions in CT images. The improved U-Net incorporates a multi-scale attention module based on a channel attention mechanism ECA (Efficient Channel Attention) to improve recognition of detailed feature information at edge of lesions; and a promptable clipping module to incorporate physicians' prior knowledge into the model to reduce background interference. Segment Anything model (SAM) has a strong ability to recognize lesions and pulmonary atelectasis or organs. We combine the two to improve overall segmentation performances.ResultsOn the LUAN16 dataset and a lung CT dataset provided by the Shanghai Chest Hospital, the proposed method achieves Dice coefficients of 80.12% and 92.06%, and Positive Predictive Values of 81.25% and 91.91%, which are superior to most existing mainstream segmentation methods.ConclusionThe proposed method can be used to improve segmentation accuracy of lung lesions in CT images, enhance automation level of existing computer-aided diagnostic systems, and provide more effective assistance to radiologists in clinical practice.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996251333364"},"PeriodicalIF":1.7,"publicationDate":"2025-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144054951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"KBA-PDNet: A primal-dual unrolling network with kernel basis attention for low-dose CT reconstruction.","authors":"Rongfeng Li, Dalin Wang","doi":"10.1177/08953996241308759","DOIUrl":"10.1177/08953996241308759","url":null,"abstract":"<p><p>Computed tomography (CT) image reconstruction is faced with challenge of balancing image quality and radiation dose. Recent unrolled optimization methods address low-dose CT image quality issues using convolutional neural networks or self-attention mechanisms as regularization operators. However, these approaches have limitations in adaptability, computational efficiency, or preservation of beneficial inductive biases. They also depend on initial reconstructions, potentially leading to information loss and error propagation. To overcome these limitations, Kernel Basis Attention Primal-Dual Network (KBA-PDNet) is proposed. The method unrolls multiple iterations of the proximal primal-dual optimization process, replacing traditional proximal operators with Kernel Basis Attention (KBA) modules. This design enables direct training from raw measurement data without relying on preliminary reconstructions. The KBA module achieves adaptability by learning and dynamically fusing kernel bases, generating customized convolution kernels for each spatial location. This approach maintains computational efficiency while preserving beneficial inductive biases of convolutions. By training end-to-end from raw projection data, KBA-PDNet fully utilizes all original information, potentially capturing details lost in preliminary reconstructions. Experiments on simulated and clinical datasets demonstrate that KBA-PDNet outperforms existing approaches in both image quality and computational efficiency.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"591-607"},"PeriodicalIF":1.4,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143537915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-limited-angle spectral CT image reconstruction based on average image induced relative total variation model.","authors":"Zhaoqiang Shen, Yumeng Guo","doi":"10.1177/08953996251314771","DOIUrl":"10.1177/08953996251314771","url":null,"abstract":"<p><p>In recent years, spectral computed tomography (CT) has attracted extensive attention. The purpose of this study is to achieve a low-cost and fast energy spectral CT reconstruction algorithm by implementing multi-limited-angle scans. General spectral CT projection data are collected over a full-angular range of 360 degrees. We simulate multi-source spectral CT by using a pair of X-ray source/detector. To speed up scanning, multi-limited-angle scanning was used in each energy channel. On this basis, an average image induced relative total variation (Aii-RTV) with multi-limited-angle spectral CT image reconstruction model is proposed. The iterative algorithm is used to solve Aii-RTV. Before iteration, the weighted average projection data of the multi-limited-angle energy spectral is carried out. In each step of the iterative algorithm flow is as follows: First, the relative total variation (RTV) reconstruction model is used to reconstruct the average image using average projection data. Then, the partial derivative of the average image is used to calculate the inherent variation in RTV model due to the integrity of the average image, and take its reciprocal as the weight coefficient of the windowing total variation of each energy channel reconstruction image. Finally, the average energy image is used to guide the multi-limited-angle projection data to reconstruct the image of each energy channel so as to suppress the limited-angle artifact of each energy channel image. In addition, we also discuss the influence of parameter selection on reconstructed image quality, which is important for regularization model. Through the reconstruction of multi-limited-angle spectral CT projection data, quantitative results and reconstructed images show that our algorithm has better performance than prior image constrained compressed sensing (PICCS) and RTV. The average PSNR of our reconstruction results in different channels was 35.6273, 4.533 and 2.301 higher than RTV (31.0943) and PICCS (33.3263), respectively.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"637-650"},"PeriodicalIF":1.4,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143651687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparative analysis of machine learning and deep learning algorithms for knee arthritis detection using YOLOv8 models.","authors":"Ilkay Cinar","doi":"10.1177/08953996241308770","DOIUrl":"10.1177/08953996241308770","url":null,"abstract":"<p><p>Knee arthritis is a prevalent joint condition that affects many people worldwide. Early detection and appropriate treatment are essential to slow the disease's progression and enhance patients' quality of life. In this study, various machine learning and deep learning algorithms were used to detect knee arthritis. The machine learning models included k-NN, SVM, and GBM, while DenseNet, EfficientNet, and InceptionV3 were used as deep learning models. Additionally, YOLOv8 classification models (YOLOv8n-cls, YOLOv8s-cls, YOLOv8m-cls, YOLOv8l-cls, and YOLOv8x-cls) were employed. The \"Annotated Dataset for Knee Arthritis Detection\" with five classes (Normal, Doubtful, Mild, Moderate, Severe) and 1650 images were divided into 80% training, 10% validation, and 10% testing using the Hold-Out method. YOLOv8 models outperformed both machine learning and deep learning algorithms. k-NN, SVM, and GBM achieved success rates of 63.61%, 64.14%, and 67.36%, respectively. Among deep learning models, DenseNet, EfficientNet, and InceptionV3 achieved 62.35%, 70.59%, and 79.41%. The highest success was seen in the YOLOv8x-cls model at 86.96%, followed by YOLOv8l-cls at 86.79%, YOLOv8m-cls at 83.65%, YOLOv8s-cls at 80.37%, and YOLOv8n-cls at 77.91%.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"565-577"},"PeriodicalIF":1.4,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}