{"title":"PCOR-net: A prototype-enhanced contrastive ordinal regression network for pneumoconiosis staging.","authors":"Guohua Ji, Juanjuan Zhao, Surong Chu, Xueting Ren, Yangyang Wei, Qi Chen","doi":"10.1177/08953996261433872","DOIUrl":"https://doi.org/10.1177/08953996261433872","url":null,"abstract":"<p><p>Background: Pneumoconiosis is one of the most severe occupational diseases, and accurate staging is essential for treatment planning and disease management. However, the visual features on chest X-rays are often subtle and exhibit gradual transitions between stages, posing challenges for traditional classification models. Objective: The study aims to overcome the limitations of current staging methods and to develop a model that simultaneously captures the ordinal progression of pneumoconiosis and enhances feature discrimination for reliable staging. Methods: We propose a Prototype-enhanced Contrastive Ordinal Regression Network (PCOR-Net) for pneumoconiosis staging. PCOR-Net adopts a dual-branch architecture, where a momentum-updated teacher encoder builds dynamic class prototypes, and a student encoder learns more discriminative features under prototype-guided supervision. To capture the ordinal structure of disease progression, we introduce an ordinal-aware prototype contrastive mechanism and a learnable-threshold ordinal regression module that adapts to the non-uniform nature of stage transitions. Three loss functions (prototype contrastive loss, feature distillation loss, and ordinal regression loss) are jointly optimized in a unified framework. Results: We conducted experiments on the pneumoconiosis dataset, where PCOR-Net achieved an accuracy of 91.18% and a Quadratic Weighted Kappa (QWK) of 92.72%, outperforming existing state-of-the-art methods. 
To assess generalizability, PCOR-Net was also evaluated on a COVID-19 severity dataset, demonstrating good transferability. Conclusions: PCOR-Net demonstrates strong effectiveness and robustness in pneumoconiosis staging and generalizes well to the COVID-19 grading dataset, providing reliable support for clinical diagnosis with improved accuracy and ordinal consistency.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996261433872"},"PeriodicalIF":1.4,"publicationDate":"2026-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147634908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
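The abstract does not give the equations of the learnable-threshold ordinal regression module; as a hedged illustration only, a cumulative-link ordinal head with strictly ordered, learnable thresholds can be sketched as follows (the function name, the `base`/`deltas` parameterization, and the four-stage example are assumptions, not the paper's implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ordinal_probs(score, base, deltas):
    """Cumulative-link ordinal head with learnable, strictly ordered thresholds.

    Thresholds are base + cumsum(softplus(deltas)); softplus is positive, so the
    thresholds stay monotonically increasing for any raw parameter values,
    letting stage boundaries adapt to non-uniform transitions.
    """
    softplus = np.log1p(np.exp(deltas))
    thresholds = base + np.cumsum(softplus)   # t_1 < t_2 < ... < t_{K-1}
    cdf = sigmoid(thresholds - score)         # P(y <= k) for k = 1..K-1
    cdf = np.concatenate([cdf, [1.0]])        # P(y <= K) = 1
    return np.diff(cdf, prepend=0.0)          # per-stage probabilities

# Four hypothetical stages; a higher score shifts mass toward later stages.
p = ordinal_probs(score=1.5, base=-1.0, deltas=np.zeros(3))
```

The `cumsum(softplus(...))` trick is one standard way to keep ordinal cut-points ordered without constrained optimization; the paper may use a different parameterization.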
{"title":"Segmenting multi-modal abdominal multi-organ using the CADSTransN-Net model.","authors":"Peng Sun, Guichong Wu, Yutao Tan, Lixuan Mei, Taiping Mo, Zhencheng Chen","doi":"10.1177/08953996261433941","DOIUrl":"https://doi.org/10.1177/08953996261433941","url":null,"abstract":"<p><strong>Background: </strong>Deep learning advances medical imaging segmentation, but the insufficient diversity of datasets limits its performance. The AMOS22 dataset addresses this by providing large-scale, varied clinical data to enhance algorithm robustness. <strong>Purpose: </strong>This study develops and validates CADSTransN-Net (Convolutional Attention and Deep Supervision TransN-Net) to optimize abdominal organ segmentation for the AMOS22 challenge. <strong>Methods: </strong>CADSTransN-Net integrates three core innovations: a novel N-shaped feature flow path (departing from symmetric architectures for efficient encoder-decoder fusion), a convolutional attention mechanism (prioritizing anatomically relevant regions), and layer-wise deep supervision (ensuring meticulous gradient propagation and faster convergence). <strong>Results: </strong>Evaluated on the full AMOS22 dataset, CADSTransN-Net achieved outstanding comprehensive performance: average Dice Similarity Coefficient (DSC) of 0.907, Normalized Surface Dice (NSD) of 0.850, 95th Percentile Hausdorff Distance (HD(95%)) of 3.98 mm, Average Surface Distance (ASD) of 0.75 mm, Absolute Volumetric Difference (AVD) of 39,755.88 mm<sup>3</sup>, and Relative Volumetric Difference (RVD) of 1.53%. These metrics confirm its high accuracy in region overlap, boundary consistency, and volume estimation for multi-modal abdominal multi-organ segmentation. <strong>Conclusions: </strong>CADSTransN-Net effectively meets AMOS22's challenges, delivering robust performance across region, boundary, and volume metrics. 
It provides a reliable solution for multi-modal abdominal multi-organ segmentation, with significant clinical potential for tasks such as surgical navigation.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996261433941"},"PeriodicalIF":1.4,"publicationDate":"2026-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147629075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
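The DSC reported above is the standard region-overlap metric, 2|A∩B| / (|A| + |B|); a minimal self-contained sketch (the toy masks are illustrative, not derived from the AMOS22 data):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient for binary masks: 2*|A & B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4-pixel square
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # overlapping 6-pixel region
# dice_coefficient(a, b) -> 2*4 / (4 + 6) = 0.8
```

The `eps` term guards against division by zero when both masks are empty; reported DSC values are typically averaged per organ and then across cases.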
{"title":"Static CT imaging method based on multi-line array X-ray sources.","authors":"Hai Wang, Yanwen Wu, Shaoyu Wang, Zhijun Guo, Weiwen Wu","doi":"10.1177/08953996261420977","DOIUrl":"https://doi.org/10.1177/08953996261420977","url":null,"abstract":"<p><p>Static computed tomography (CT) is an advanced 3D measurement technology. It offers key instrumentation advantages by reducing motion-induced artifacts, preserving measurement accuracy, and lowering radiation doses, all critical factors for precision systems. This capability is particularly crucial for imaging moving organs such as the heart. This study introduces a novel Multi-Array Stationary CT (MASCT) imaging architecture. Utilizing cold-cathode carbon nanotube X-ray sources, the proposed architecture achieves dual innovations in temporal and spatial resolution, advancing stationary CT instrument design. First, the X-ray source's rapid response allows for swift data acquisition. Coupled with its low power consumption and compact form factor, these characteristics collectively facilitate high temporal resolution, meeting the demands of dynamic imaging. Second, the densely arranged source array enhances angular resolution, while the z-axis MAS configuration extends scanning capacity for elongated objects, thereby expanding spatial resolution across multiple dimensions. To address practical implementation challenges, we incorporate iterative reconstruction algorithms to solve the limited-angle problem, achieving diagnostically acceptable image quality. 
The feasibility of our multi-array X-ray source-based static CT system is rigorously validated through both numerical simulations and physical experiments.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996261420977"},"PeriodicalIF":1.4,"publicationDate":"2026-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147595782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detail preservation sparse-view CT image reconstruction via range-null space decomposition based diffusion priors.","authors":"Qiaofang Xing, Ailong Cai, Zhizhong Zheng, Lei Li, Bin Yan","doi":"10.1177/08953996261432977","DOIUrl":"https://doi.org/10.1177/08953996261432977","url":null,"abstract":"<p><p>Background: Sparse-view CT reduces radiation dose by decreasing the number of projections, yet the resulting undersampling introduces severe artifacts in images reconstructed with traditional analytical algorithms. Recent diffusion posterior sampling (DPS)-based methods enhance image quality but frequently generate spurious details and incur prohibitive computational cost, limiting clinical adoption. Objective: To enable high-fidelity, low-dose CT imaging from sparse projections while suppressing hallucinated details and reducing computational burden. Methods: We propose a novel diffusion-based method that synergizes null-space restoration with Filtered Back-Projection (FBP) pseudoinverse approximation. Specifically, by employing range-null space decomposition, we use diffusion models to restore null-space image components while ensuring data consistency through the FBP algorithm approximating the pseudoinverse of the projection matrix in the range image space. Moreover, we provide a theoretical analysis justifying this approximation. This approach effectively combines the strengths of diffusion models and traditional CT reconstruction techniques, optimizing the inverse diffusion trajectory to enable high-fidelity image recovery from sparse data. Results: Experimental results show that the proposed method achieves significant improvements in image quality and computational efficiency. 
Compared with the DPS method, it yields an average PSNR gain of 5.32 dB, an average SSIM increase of 0.083, and a 41.9% reduction in computation time. Conclusion: In summary, this framework provides a practical and effective solution for high-quality, low-dose CT imaging, balancing reconstruction accuracy and computational efficiency in practical applications.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996261432977"},"PeriodicalIF":1.4,"publicationDate":"2026-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147582990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
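The range-null space decomposition behind the method can be illustrated on a toy linear system: the measurement fixes the range component A⁺y, while the prior is free to fill in only the null-space component (I − A⁺A)x. In this sketch the exact Moore-Penrose pseudoinverse stands in for the paper's FBP approximation, and a random vector stands in for the diffusion model's estimate; both substitutions are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 10))   # toy underdetermined "projection" operator
x_true = rng.normal(size=10)
y = A @ x_true                 # sparse measurements (y lies in the range of A)

A_pinv = np.linalg.pinv(A)     # exact pseudoinverse; the paper approximates this via FBP
x_gen = rng.normal(size=10)    # stand-in for the diffusion model's image estimate

# Range component fixed by the data; null-space component supplied by the prior.
x_hat = A_pinv @ y + (np.eye(10) - A_pinv @ A) @ x_gen

# Data consistency holds for ANY x_gen, since A @ (I - A_pinv @ A) = 0
# and A @ A_pinv @ y == y when y is in the range of A.
```

This is why the decomposition suppresses hallucinated measurements: no matter what the generative model produces, the reconstructed image reprojects exactly onto the observed data in this idealized setting.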
{"title":"A Two-Module Parallel Dual-Domain Network for interior tomography reconstruction.","authors":"Haihang Zhao, Pengxiang Ji, Yongzhou Wu, Jintao Zhao, Jing Zou","doi":"10.1177/08953996261433954","DOIUrl":"https://doi.org/10.1177/08953996261433954","url":null,"abstract":"<p><p>Background: Interior tomography is a crucial technique in computed tomography (CT) that aims to minimize radiation exposure by limiting X-ray imaging to the region of interest (ROI) while maintaining diagnostic accuracy. However, traditional reconstruction algorithms often suffer from severe cupping artifacts caused by data truncation, which significantly degrades image quality. Objective: This study aims to develop a parallel network that effectively integrates information between the projection and image domains to improve interior tomography reconstruction. Methods: In this paper, we propose an end-to-end deep learning framework, the Two-Module Parallel Dual-Domain Network (TPDDN), which consists of two key modules. The Initial Restoration Module generates high-quality prior sinograms and images, providing a robust foundation for subsequent processing and effectively mitigating the impact of data truncation. The Interactive Fusion Module, the core of the network, employs two parallel and interactive branches that operate simultaneously on the projection and image domains. These branches enable bidirectional feature interaction and information fusion, significantly enhancing the accuracy and quality of the reconstructed images. Results: Extensive experiments were conducted under both normal-dose and high-dose noise conditions to evaluate the performance of TPDDN. The results demonstrate that TPDDN achieves superior qualitative and quantitative performance compared to existing representative methods. Conclusions: The proposed TPDDN offers a robust and effective approach for interior tomography reconstruction by synergistically integrating information from both the projection and image domains. 
It effectively suppresses cupping artifacts and enhances reconstructed image quality under both normal-dose and high-noise conditions, demonstrating promising potential for safer and more accurate diagnostic imaging.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996261433954"},"PeriodicalIF":1.4,"publicationDate":"2026-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147516089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Weld defect detection based on improved YOLOv8n.","authors":"Yongqi Yan, Yi Liu, Lingshuang Meng, Junjing Li, Shu Li, Niu Guo, Pengcheng Zhang, Zhiguo Gui","doi":"10.1177/08953996261433937","DOIUrl":"https://doi.org/10.1177/08953996261433937","url":null,"abstract":"<p><p>Background: Industrial weld defect detection is challenged by the minimal grayscale contrast between defects and the background, as well as by blurred defect edges, which together hinder the performance of detection algorithms. Moreover, practical industrial environments require high detection accuracy, fast inference speed, and flexible deployment. Objective: To address these challenges, this study proposes an improved YOLOv8n defect detection method that enables more accurate, faster, and lightweight automated weld defect detection. Methods: The key improvements are as follows. First, in the backbone, the original C2f module is replaced by the C2f_OREPA feature extraction module, constructed with the Online Convolution Parameterization Approach (OREPA), which reduces computational complexity and enhances feature representation. Second, a downsampling module, DCDConv, is introduced to replace the conventional convolution after the first standard convolution layer, allowing better preservation of fine defect features and improving the detection of subtle defects. Additionally, in the neck, a cross-scale feature fusion module (CCFM) is incorporated to improve detection performance across defects of different scales. Results: Experiments on our self-constructed dataset comprising eight weld defect categories show that the improved model achieves a mean average precision (mAP) of 87.6%, a 4.5% increase over the original YOLOv8n. Meanwhile, the model reduces the number of parameters by 26.9%, decreases computational cost by 35.7%, and achieves an inference speed of 103 frames per second (FPS). 
On the public NEU-DET dataset, the improved model obtains an mAP of 82.8%, outperforming the original YOLOv8n by 6.7%. Overall, the proposed model surpasses mainstream object detection frameworks, including YOLOv8n, YOLOv12n, Faster R-CNN, and RetinaNet. Conclusion: In summary, the proposed method provides an accurate, efficient, and deployment-friendly solution for weld defect detection in industrial applications, demonstrating substantial practical value.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996261433937"},"PeriodicalIF":1.4,"publicationDate":"2026-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147505803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Retraction: Investigations on coronary artery plaque detection and subclassification using machine learning classifier.","authors":"","doi":"10.1177/08953996251386435","DOIUrl":"10.1177/08953996251386435","url":null,"abstract":"","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"221"},"PeriodicalIF":1.4,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145349603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Corrigendum to \"Retraction notice\".","authors":"","doi":"10.1177/08953996251405970","DOIUrl":"10.1177/08953996251405970","url":null,"abstract":"","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"222"},"PeriodicalIF":1.4,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146020415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual language model-assisted CT denoising via text-guided diffusion and fidelity maintenance.","authors":"Ye Shen, Ningning Liang, Ailong Cai, Xinrui Zhang, Yizhong Wang, Junru Ren, Zhizhong Zheng, Lei Li, Bin Yan","doi":"10.1177/08953996251372739","DOIUrl":"10.1177/08953996251372739","url":null,"abstract":"<p><p>Reducing radiation dose in computed tomography (CT) and photon-counting CT (PCCT) is crucial for patient safety, but lower doses introduce noise that degrades image quality. Existing denoising methods often rely on supervised learning of paired data or are based on specific noise assumptions, which poses challenges in clinical practice. A novel Visual-Language Model-assisted CT Denoising (VLD) framework is proposed to address CT image noise while preserving diagnostic fidelity through semantic guidance. Our method innovatively leverages the human-level knowledge embedded in multimodal visual-language models and applies it to the field of CT image denoising. This approach enables the diffusion model to perform restoration guided by semantic understanding. Meanwhile, a tri-domain consistency framework has been proposed to further enhance image quality by progressively refining details while preserving structural integrity. Extensive experiments on both simulated CT and real PCCT data demonstrate that the VLD method generates high-quality reconstruction images and exhibits robust generalization to new scenarios. 
In simulation experiments, the VLD method achieves average improvements of 0.95 dB and 1.21 dB in peak signal-to-noise ratio under the 5000-photon condition, outperforming the WGAN and FBPConvNet methods, which require paired data.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"151-169"},"PeriodicalIF":1.4,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147437312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
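The PSNR figure of merit quoted in the record above is straightforward to compute: 10·log10(MAX² / MSE). A minimal sketch follows; the arrays and data range are illustrative values, not the paper's data:

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# Uniform 0.1 error on a unit-range image: MSE = 0.01, so PSNR = 20 dB.
ref = np.ones(100)
noisy = np.full(100, 0.9)
```

A "gain of 0.95 dB" is the difference between two such values computed against the same reference; for CT the `data_range` is usually taken over the display window or the HU range rather than [0, 1].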
{"title":"Research on the method for measuring the focal spot size of micro-focus X-ray sources using the JIMA resolution test card.","authors":"Li Fengxiao, Wang Yixin, Xu Haodong, Zhong Guowei, Liu Chengfeng, Yang Run, Zhou Rifeng","doi":"10.1177/08953996251403456","DOIUrl":"10.1177/08953996251403456","url":null,"abstract":"<p><p>Background: Measuring an X-ray source's focal spot size is vital for Micro-CT resolution. Standard methods are often too complex or inaccurate. The popular JIMA resolution test card is simple to use but lacks a clear, quantitative formula to determine the actual focal spot size. Objective: This study aims to create a reliable quantitative link between JIMA resolution and focal spot size using simulations and experiments. Methods: We used Monte Carlo simulations and practical experiments to establish the relationship between JIMA resolution and focal spot size. Results: We found that the focal spot size is twice the line pair width on the JIMA card when the image contrast (MTF) is at 10%. This method is highly accurate, with a maximum measurement error of less than 8.7% compared to a high-precision technique. Conclusions: Our findings provide a simple, fast, and validated method for measuring focal spot size using the JIMA test card. This makes it a practical and reliable alternative to more complex procedures.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"209-220"},"PeriodicalIF":1.4,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145967507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
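The quantitative rule the study reports (focal spot size equals twice the line-pair width at which the MTF falls to 10%) reduces to a one-line computation; the function name and the 0.5 um input are illustrative, not from the paper:

```python
def focal_spot_size_um(line_pair_width_um_at_10pct_mtf):
    """Focal spot estimate from a JIMA card reading, per the study's finding
    that the spot size is twice the line-pair width at 10% MTF."""
    return 2.0 * line_pair_width_um_at_10pct_mtf

# A pattern with 0.5 um line-pair width resolved at 10% MTF implies
# a focal spot of about 1.0 um.
spot = focal_spot_size_um(0.5)
```

The study quotes a maximum error under 8.7% against a high-precision reference, so the estimate should be read as approximate rather than exact.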