{"title":"A dense and U-shaped transformer with dual-domain multi-loss function for sparse-view CT reconstruction.","authors":"Peng Liu, Chenyun Fang, Zhiwei Qiao","doi":"10.3233/XST-230184","DOIUrl":"10.3233/XST-230184","url":null,"abstract":"<p><strong>Objective: </strong>CT image reconstruction from sparse-view projections is an important imaging configuration for low-dose CT, as it can reduce radiation dose. However, the CT images reconstructed from sparse-view projections by traditional analytic algorithms suffer from severe sparse artifacts. Therefore, it is of great value to develop advanced methods to suppress these artifacts. In this work, we aim to use a deep learning (DL)-based method to suppress sparse artifacts.</p><p><strong>Methods: </strong>Inspired by the good performance of DenseNet and Transformer architecture in computer vision tasks, we propose a Dense U-shaped Transformer (D-U-Transformer) to suppress sparse artifacts. This architecture exploits the advantages of densely connected convolutions in capturing local context and Transformer in modelling long-range dependencies, and applies channel attention to fusion features. Moreover, we design a dual-domain multi-loss function with learned weights for the optimization of the model to further improve image quality.</p><p><strong>Results: </strong>Experimental results of our proposed D-U-Transformer yield performance improvements on the well-known Mayo Clinic LDCT dataset over several representative DL-based models in terms of artifact suppression and image feature preservation. Extensive internal ablation experiments demonstrate the effectiveness of the components in the proposed model for sparse-view computed tomography (SVCT) reconstruction.</p><p><strong>Significance: </strong>The proposed method can effectively suppress sparse artifacts and achieve high-precision SVCT reconstruction, thus promoting clinical CT scanning towards low-dose radiation and high-quality imaging. The findings of this work can be applied to denoising and artifact removal tasks in CT and other medical images.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"207-228"},"PeriodicalIF":3.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139673531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A fusion of deep neural networks and game theory for retinal disease diagnosis with OCT images.
S Vishnu Priyan, R Vinod Kumar, C Moorthy, V S Nishok
Journal of X-Ray Science and Technology, 2024, pp. 1011-1039. DOI: 10.3233/XST-240027

Abstract: Retinal disorders pose a serious threat to world healthcare because they frequently result in visual loss or impairment. For retinal disorders to be detected early, diagnosed precisely, and treated individually, deep learning is a necessary subset of artificial intelligence. This paper provides a complete approach to improving the accuracy and reliability of retinal disease identification from retinal optical coherence tomography (OCT) images. The hybrid GIGT model, which combines Generative Adversarial Networks (GANs), Inception, and game theory, is a novel method for diagnosing retinal diseases from OCT images. The technique, implemented in Python, includes image preprocessing, feature extraction, GAN-based classification, and a game-theoretic examination. Resizing, grayscale conversion, noise reduction with Gaussian filters, contrast enhancement with Contrast Limited Adaptive Histogram Equalization (CLAHE), and edge detection with the Canny technique make up the image preparation step; these procedures prepare the OCT images for efficient analysis. The Inception model is used for feature extraction, which enables discriminative characteristics to be extracted from the preprocessed images. GANs are used for classification, which improves accuracy and resilience by adding a strategic and dynamic aspect to the diagnostic process. Additionally, a game-theoretic analysis is used to evaluate the security and dependability of the model in the face of adversarial attacks. Strategic analysis and deep learning work together to provide a potent diagnostic tool. The proposed model's remarkable 98.2% accuracy rate shows that this method has the potential to improve the detection of retinal diseases, improve patient outcomes, and address the worldwide issue of visual impairment.

{"title":"FDB-Net: Fusion double branch network combining CNN and transformer for medical image segmentation.","authors":"Zhongchuan Jiang, Yun Wu, Lei Huang, Maohua Gu","doi":"10.3233/XST-230413","DOIUrl":"10.3233/XST-230413","url":null,"abstract":"<p><strong>Background: </strong>The rapid development of deep learning techniques has greatly improved the performance of medical image segmentation, and medical image segmentation networks based on convolutional neural networks and Transformer have been widely used in this field. However, due to the limitation of the restricted receptive field of convolutional operation and the lack of local fine information extraction ability of the self-attention mechanism in Transformer, the current neural networks with pure convolutional or Transformer structure as the backbone still perform poorly in medical image segmentation.</p><p><strong>Methods: </strong>In this paper, we propose FDB-Net (Fusion Double Branch Network, FDB-Net), a double branch medical image segmentation network combining CNN and Transformer, by using a CNN containing gnConv blocks and a Transformer containing Varied-Size Window Attention (VWA) blocks as the feature extraction backbone network, the dual-path encoder ensures that the network has a global receptive field as well as access to the target local detail features. We also propose a new feature fusion module (Deep Feature Fusion, DFF), which helps the image to simultaneously fuse features from two different structural encoders during the encoding process, ensuring the effective fusion of global and local information of the image.</p><p><strong>Conclusion: </strong>Our model achieves advanced results in all three typical tasks of medical image segmentation, which fully validates the effectiveness of FDB-Net.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"931-951"},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141288827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Special Section: Medical Applications of X-ray Imaging Techniques.","authors":"","doi":"","DOIUrl":"","url":null,"abstract":"","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":"32 2","pages":"459"},"PeriodicalIF":3.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140327312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of cutout factors with small and narrow fields using various dosimetry detectors in electron beam keloid radiotherapy.","authors":"Yu-Fang Lin, Chen-Hsi Hsieh, Hui-Ju Tien, Yi-Huan Lee, Yi-Chun Chen, Lu-Han Lai, Shih-Ming Hsu, Pei-Wei Shueng","doi":"10.3233/XST-240059","DOIUrl":"10.3233/XST-240059","url":null,"abstract":"<p><strong>Background: </strong>The inherent problems in the existence of electron equilibrium and steep dose fall-off pose difficulties for small- and narrow-field dosimetry.</p><p><strong>Objective: </strong>To investigate the cutout factors for keloid electron radiotherapy using various dosimetry detectors for small and narrow fields.</p><p><strong>Method: </strong>The measurements were performed in a solid water phantom with nine different cutout shapes. Five dosimetry detectors were used in the study: pinpoint 3D ionization chamber, Farmer chamber, semiflex chamber, Classic Markus parallel plate chamber, and EBT3 film.</p><p><strong>Results: </strong>The results demonstrated good agreement between the semiflex and pinpoint chambers. Furthermore, there was no difference between the Farmer and pinpoint chambers for large cutouts. For the EBT3 film, half of the cases had differences greater than 1%, and the maximum discrepancy compared with the reference chamber was greater than 2% for the narrow field.</p><p><strong>Conclusion: </strong>The parallel plate, semiflex chamber and EBT3 film are suitable dosimeters that are comparable with pinpoint 3D chambers in small and narrow electron fields. Notably, a semiflex chamber could be an alternative option to a pinpoint 3D chamber for cutout widths≥3 cm. It is very important to perform patient-specific cutout factor calibration with an appropriate dosimeter for keloid radiotherapy.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1177-1184"},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141437769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A feasibility study to predict 3D dose delivery accuracy for IMRT using DenseNet with log files.
Ying Huang, Ruxin Cai, Yifei Pi, Kui Ma, Qing Kong, Weihai Zhuo, Yan Kong
Journal of X-Ray Science and Technology, 2024, pp. 1199-1208. DOI: 10.3233/XST-230412

Objective: This study explores the feasibility of using DenseNet to establish a three-dimensional (3D) gamma prediction model for IMRT based on the actual parameters recorded in log files during delivery.

Methods: A total of 55 IMRT plans (367 fields) were randomly selected. Gamma analysis was performed with criteria of 3%/3 mm (dose difference/distance to agreement), 3%/2 mm, 2%/3 mm, and 2%/2 mm with a 10% dose threshold. Log files recording the gantry angle, monitor units (MU), multi-leaf collimator (MLC) positions, and jaw positions during delivery were collected and converted to MU-weighted fluence maps as the input of DenseNet, with the gamma passing rates (GPRs) under the four gamma criteria as the output and mean square error (MSE) as the loss function.

Results: The accuracy of the 3D GPR prediction model decreased as stricter gamma criteria were applied. In the test set, the mean absolute error (MAE) of the prediction model under the gamma criteria of 3%/3 mm, 2%/3 mm, 3%/2 mm, and 2%/2 mm was 1.41, 1.44, 3.29, and 3.54, respectively; the root mean square error (RMSE) was 1.91, 1.85, 4.27, and 4.40, respectively; and the Sr was 0.487, 0.554, 0.573, and 0.506, respectively. There was a correlation between predicted and measured GPRs (P < 0.01). There was no significant difference in accuracy between the validation set and the test set. Accuracy was higher in the high-GPR group, whose MAE was smaller than that of the low-GPR group under all four gamma criteria.

Conclusions: A 3D GPR prediction model for patient-specific QA was established with DenseNet based on log files. As an auxiliary tool for 3D dose verification in IMRT, this model is expected to improve the accuracy and efficiency of dose validation.

Feature shared multi-decoder network using complementary learning for Photon counting CT ring artifact suppression.
Wei Cui, Haipeng Lv, Jiping Wang, Yanyan Zheng, Zhongyi Wu, Hui Zhao, Jian Zheng, Ming Li
Journal of X-Ray Science and Technology, 2024, pp. 529-547. DOI: 10.3233/XST-230396

Background: Photon-counting CT uses photon-counting detectors to count incident photons individually and measure their energy. Compared with traditional energy-integrating detectors, these detectors provide better image contrast and material differentiation. However, because of limited photon counts and variations in detector response, photon-counting CT tends to show more noticeable ring artifacts than conventional spiral CT.

Objective: To address this issue, we propose a novel feature-shared multi-decoder network (FSMDN) that uses complementary learning to suppress ring artifacts in photon-counting CT images.

Methods: Specifically, we employ a feature-sharing encoder to extract context and ring-artifact features, facilitating effective feature sharing. The shared features are then processed in parallel by separate decoders dedicated to the context and ring-artifact channels. Through complementary learning, this approach achieves superior artifact suppression while preserving tissue details.

Results: We conducted extensive experiments on photon-counting CT images with ring artifacts of three intensity levels. Both qualitative and quantitative results demonstrate that our network corrects ring artifacts of different levels well while exhibiting better stability and robustness than the comparison methods.

Conclusions: We have introduced a novel deep learning network designed to mitigate ring artifacts in photon-counting CT images. The results illustrate the viability and efficacy of the proposed network as a new deep learning-based method for suppressing ring artifacts.

A fast response time gas ionization chamber detector with a grid structure.
Jiahao Chang, Chaoyang Zhu, Yuanpeng Song, Zhentao Wang
Journal of X-Ray Science and Technology, 2024, pp. 339-354. DOI: 10.3233/XST-230219

Abstract: The time response of the detector is crucial in radiation imaging systems. Unfortunately, existing parallel-plate ionization chamber detectors respond slowly, which leads to blurry radiation images. To improve imaging quality, the electrode structure of the detector must be modified to reduce the response time. This paper proposes a gas detector with a grid structure that has a fast response time. In this study, the electrostatic field of the detector was calculated with COMSOL, and Garfield++ was used to simulate the detector's output signal. To validate the simulation results, an experimental ionization chamber was tested on an experimental platform. The results showed that the average electric field intensity in the induction region of the grid detector increased by at least 33%, and the detector response time was reduced to 27%-38% of that of the parallel-plate detector, while the sensitivity was reduced by only 10%. Incorporating a grid structure within a parallel-plate detector can therefore significantly improve the time response of the gas detector, providing insight for future detector improvements.

{"title":"Research on breast cancer pathological image classification method based on wavelet transform and YOLOv8.","authors":"Yunfeng Yang, Jiaqi Wang","doi":"10.3233/XST-230296","DOIUrl":"10.3233/XST-230296","url":null,"abstract":"<p><p> Breast cancer is one of the cancers with high morbidity and mortality in the world, which is a serious threat to the health of women. With the development of deep learning, the recognition about computer-aided diagnosis technology is getting higher and higher. And the traditional data feature extraction technology has been gradually replaced by the feature extraction technology based on convolutional neural network which helps to realize the automatic recognition and classification of pathological images. In this paper, a novel method based on deep learning and wavelet transform is proposed to classify the pathological images of breast cancer. Firstly, the image flip technique is used to expand the data set, then the two-level wavelet decomposition and reconfiguration technology is used to sharpen and enhance the pathological images. Secondly, the processed data set is divided into the training set and the test set according to 8:2 and 7:3, and the YOLOv8 network model is selected to perform the eight classification tasks of breast cancer pathological images. Finally, the classification accuracy of the proposed method is compared with the classification accuracy obtained by YOLOv8 for the original BreaKHis dataset, and it is found that the algorithm can improve the classification accuracy of images with different magnifications, which proves the effectiveness of combining two-level wavelet decomposition and reconfiguration with YOLOv8 network model.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"677-687"},"PeriodicalIF":3.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139378677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dosimetry and treatment efficiency of SBRT using TaiChiB radiotherapy system for two-lung lesions with one overlapping organs at risk.","authors":"Yanhua Duan, Aihui Feng, Hao Wang, Hua Chen, Hengle Gu, Yan Shao, Ying Huang, Zhenjiong Shen, Qing Kong, Zhiyong Xu","doi":"10.3233/XST-230176","DOIUrl":"10.3233/XST-230176","url":null,"abstract":"<p><strong>Purpose: </strong>This study aims to assess the dosimetry and treatment efficiency of TaiChiB-based Stereotactic Body Radiotherapy (SBRT) plans applying to treat two-lung lesions with one overlapping organs at risk.</p><p><strong>Methods: </strong>For four retrospective patients diagnosed with two-lung lesions each patient, four treatment plans were designed including Plan Edge, TaiChiB linac-based, RGS-based, and a linac-RGS hybrid (Plan TCLinac, Plan TCRGS, and Plan TCHybrid). Dosimetric metrics and beam-on time were employed to evaluate and compare the TaiChiB-based plans against Plan Edge.</p><p><strong>Results: </strong>For Conformity Index (CI), Plan TCRGS outperformed all other plans with an average CI of 1.06, as opposed to Plan Edge's 1.33. Similarly, for R50 %, Plan TCRGS was superior with an average R50 % of 3.79, better than Plan Edge's 4.28. In terms of D2 cm, Plan TCRGS also led with an average of 48.48%, compared to Plan Edge's 56.25%. For organ at risk (OAR) sparing, Plan TCRGS often displayed the lowest dosimetric values, notably for the spinal cord (Dmax 5.92 Gy) and lungs (D1500cc 1.00 Gy, D1000cc 2.61 Gy, V10 Gy 15.14%). However, its high Dmax values for the heart and great vessels sometimes exceeded safety thresholds. Plan TCHybrid presented a balanced approach, showing doses comparable to or better than Plan Edge without crossing safety limits. In terms of beam-on time, Plan TCLinac emerged as the most efficient treatment option in three out of four cases, followed closely by Plan Edge in one case. Plan TCRGS, despite its dosimetric advantages, was the least efficient, recording notably longer beam-on times, with a peak at 33.28 minutes in Case 2.</p><p><strong>Conclusion: </strong>For patients with two-lung lesions treated by SBRT whose one lesion overlaps with OARs, the Plan TCHybrid delivered by TaiChiB digital radiotherapy system can be recommended as a clinical option.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"379-394"},"PeriodicalIF":3.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139466285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}