Journal of X-Ray Science and Technology: Latest Articles

An effective COVID-19 classification in X-ray images using a new deep learning framework.
IF 1.7 | Zone 3 | Medicine
Journal of X-Ray Science and Technology Pub Date : 2025-03-01 Epub Date: 2025-01-19 DOI: 10.1177/08953996241290893
P Thilagavathi, R Geetha, S Jothi Shri, K Somasundaram
{"title":"An effective COVID-19 classification in X-ray images using a new deep learning framework.","authors":"P Thilagavathi, R Geetha, S Jothi Shri, K Somasundaram","doi":"10.1177/08953996241290893","DOIUrl":"10.1177/08953996241290893","url":null,"abstract":"<p><p>BackgroundThe global concern regarding the diagnosis of lung-related diseases has intensified due to the rapid transmission of coronavirus disease 2019 (COVID-19). Artificial Intelligence (AI) based methods are emerging technologies that help to identify COVID-19 in chest X-ray images quickly.MethodIn this study, the publically accessible database COVID-19 Chest X-ray is used to diagnose lung-related disorders using a hybrid deep-learning approach. This dataset is pre-processed using an Improved Anisotropic Diffusion Filtering (IADF) method. After that, the features extraction methods named Grey-level Co-occurrence Matrix (GLCM), uniform Local Binary Pattern (uLBP), Histogram of Gradients (HoG), and Horizontal-vertical neighbourhood local binary pattern (hvnLBP) are utilized to extract the useful features from the pre-processed dataset. The dimensionality of a feature set is subsequently reduced through the utilization of an Adaptive Reptile Search Optimization (ARSO) algorithm, which optimally selects the features for flawless classification. Finally, the hybrid deep learning algorithm, Multi-head Attention-based Bi-directional Gated Recurrent unit with Deep Sparse Auto-encoder Network (MhA-Bi-GRU with DSAN), is developed to perform the multiclass classification problem. Moreover, a Dynamic Levy-Flight Chimp Optimization (DLF-CO) algorithm is applied to minimize the loss function in the hybrid algorithm.ResultsThe whole simulation is performed using the Python language in which the 0.001 learning rate accomplishes the proposed method's higher classification accuracy of 0.95%, and 0.98% is obtained for a 0.0001 learning rate. Overall, the performance of the proposed methodology outperforms all existing methods employing different performance parameters.ConclusionThe proposed hybrid deep-learning approach with various feature extraction, and optimal feature selection effectively diagnoses disease using Chest X-ray images demonstrated through classification accuracy.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"297-316"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ultrasound and advanced imaging techniques in prostate cancer diagnosis: A comparative study of mpMRI, TRUS, and PET/CT.
IF 1.7 | Zone 3 | Medicine
Journal of X-Ray Science and Technology Pub Date : 2025-03-01 Epub Date: 2025-02-12 DOI: 10.1177/08953996241304988
Ying Dong, Peng Wang, Hua Geng, Yankun Liu, Enguo Wang
{"title":"Ultrasound and advanced imaging techniques in prostate cancer diagnosis: A comparative study of mpMRI, TRUS, and PET/CT.","authors":"Ying Dong, Peng Wang, Hua Geng, Yankun Liu, Enguo Wang","doi":"10.1177/08953996241304988","DOIUrl":"10.1177/08953996241304988","url":null,"abstract":"<p><p>ObjectiveThis study aims to assess and compare the diagnostic performance of three advanced imaging modalities-multiparametric magnetic resonance imaging (mpMRI), transrectal ultrasound (TRUS), and positron emission tomography/computed tomography (PET/CT)-in detecting prostate cancer in patients with elevated PSA levels and abnormal DRE findings.MethodsA retrospective analysis was conducted on 150 male patients aged 50-75 years with elevated PSA and abnormal DRE. The diagnostic accuracy of each modality was assessed through sensitivity, specificity, and the area under the curve (AUC) to compare performance in detecting clinically significant prostate cancer (Gleason score ≥ 7).ResultsMpMRI demonstrated the highest diagnostic performance, with a sensitivity of 90%, specificity of 85%, and AUC of 0.92, outperforming both TRUS (sensitivity 76%, specificity 78%, AUC 0.77) and PET/CT (sensitivity 82%, specificity 80%, AUC 0.81). MpMRI detected clinically significant tumors in 80% of cases. Although TRUS and PET/CT had similar detection rates for significant tumors, their overall accuracy was lower. Minor adverse events occurred in 5% of patients undergoing TRUS, while no significant complications were associated with mpMRI or PET/CT.ConclusionThese findings suggest that mpMRI is the most reliable imaging modality for early detection of clinically significant prostate cancer. It reduces the need for unnecessary biopsies and optimizes patient management.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"436-447"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A cross-type multi-dimensional network based on feature enhancement and triple interactive attention for LDCT denoising.
IF 1.7 | Zone 3 | Medicine
Journal of X-Ray Science and Technology Pub Date : 2025-03-01 Epub Date: 2025-01-29 DOI: 10.1177/08953996241306696
Lina Jia, Beibei Jia, Zongyang Li, Yizhuo Zhang, Zhiguo Gui
{"title":"A cross-type multi-dimensional network based on feature enhancement and triple interactive attention for LDCT denoising.","authors":"Lina Jia, Beibei Jia, Zongyang Li, Yizhuo Zhang, Zhiguo Gui","doi":"10.1177/08953996241306696","DOIUrl":"10.1177/08953996241306696","url":null,"abstract":"<p><p>BackgroundNumerous deep leaning methods for low-dose computed technology (CT) image denoising have been proposed, achieving impressive results. However, issues such as loss of structure and edge information and low denoising efficiency still exist.ObjectiveTo improve image denoising quality, an enhanced multi-dimensional hybrid attention LDCT image denoising network based on edge detection is proposed in this paper.MethodsIn our network, we employ a trainable Sobel convolution to design an edge enhancement module and fuse an enhanced triplet attention network (ETAN) after each <math><mn>3</mn><mo>×</mo><mn>3</mn></math> convolutional layer to extract richer features more comprehensively and suppress useless information. During the training process, we adopt a strategy that combines total variation loss (TVLoss) with mean squared error (MSE) loss to reduce high-frequency artifacts in image reconstruction and balance image denoising and detail preservation.ResultsCompared with other advanced algorithms (CT-former, REDCNN and EDCNN), our proposed model achieves the best PSNR and SSIM values in CT image of the abdomen, which are 34.8211and 0.9131, respectively.ConclusionThrough comparative experiments with other related algorithms, it can be seen that the algorithm proposed in this article has achieved significant improvements in both subjective vision and objective indicators.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"393-404"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A multi-model machine learning framework for breast cancer risk stratification using clinical and imaging data.
IF 1.7 | Zone 3 | Medicine
Journal of X-Ray Science and Technology Pub Date : 2025-03-01 Epub Date: 2025-01-27 DOI: 10.1177/08953996241308175
Lu Miao, Zidong Li, Jinnan Gao
{"title":"A multi-model machine learning framework for breast cancer risk stratification using clinical and imaging data.","authors":"Lu Miao, Zidong Li, Jinnan Gao","doi":"10.1177/08953996241308175","DOIUrl":"10.1177/08953996241308175","url":null,"abstract":"<p><p>PurposeThis study presents a comprehensive machine learning framework for assessing breast cancer malignancy by integrating clinical features with imaging features derived from deep learning.MethodsThe dataset included 1668 patients with documented breast lesions, incorporating clinical data (e.g., age, BI-RADS category, lesion size, margins, and calcifications) alongside mammographic images processed using four CNN architectures: EfficientNet, ResNet, DenseNet, and InceptionNet. Three predictive configurations were developed: an imaging-only model, a hybrid model combining imaging and clinical data, and a stacking-based ensemble model that aggregates both data types to enhance predictive accuracy. Twelve feature selection techniques, including ReliefF and Fisher Score, were applied to identify key predictive features. Model performance was evaluated using accuracy and AUC, with 5-fold cross-valida tion and hyperparameter tuning to ensure robustness.ResultsThe imaging-only models demonstrated strong predictive performance, with EfficientNet achieving an AUC of 0.76. The hybrid model combining imaging and clinical data reached the highest accuracy of 83% and an AUC of 0.87, underscoring the benefits of data integration. The stacking-based ensemble model further optimized accuracy, reaching a peak AUC of 0.94, demonstrating its potential as a reliable tool for malignancy risk assessment.ConclusionThis study highlights the importance of integrating clinical and deep imaging features for breast cancer risk stratification, with the stacking-based model.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"360-375"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Proximal femur segmentation and quantification in dual-energy subtraction tomosynthesis: A novel approach to fracture risk assessment.
IF 1.7 | Zone 3 | Medicine
Journal of X-Ray Science and Technology Pub Date : 2025-03-01 Epub Date: 2025-01-29 DOI: 10.1177/08953996241312594
Akari Matsushima, Tai-Been Chen, Koharu Kimura, Mizuki Sato, Shih-Yen Hsu, Takahide Okamoto
{"title":"Proximal femur segmentation and quantification in dual-energy subtraction tomosynthesis: A novel approach to fracture risk assessment.","authors":"Akari Matsushima, Tai-Been Chen, Koharu Kimura, Mizuki Sato, Shih-Yen Hsu, Takahide Okamoto","doi":"10.1177/08953996241312594","DOIUrl":"10.1177/08953996241312594","url":null,"abstract":"<p><p>BackgroundOsteoporosis is a major public health concern, especially among older adults, due to its association with an increased risk of fractures, particularly in the proximal femur. These fractures severely impact mobility and quality of life, leading to significant economic and health burdens.ObjectiveThis study aims to enhance bone density assessment in the proximal femur by addressing the limitations of conventional dual-energy X-ray absorptiometry through the integration of tomosynthesis with dual-energy applications and advanced segmentation models.Methods and MaterialsThe imaging capability of a radiography/fluoroscopy system with dual-energy subtraction was evaluated. Two phantoms were included in this study: a tomosynthesis phantom (PH-56) was used to measure the quality of the tomosynthesis images, and a torso phantom (PH-4) was used to obtain proximal femur images. Quantification of bone images was achieved by optimizing the energy subtraction (ene-sub) and scale factors to isolate bone pixel values while nullifying soft tissue pixel values. Both the faster region-based convolutional neural network (Faster R-CNN) and U-Net were used to segment the proximal femoral region. The performance of these models was then evaluated using the intersection-over-union (IoU) metric with a torso phantom to ensure controlled conditions.ResultsThe optimal ene-sub-factor ranged between 1.19 and 1.20, and a scale factor of around 0.1 was found to be suitable for detailed bone image observation. Regarding segmentation performance, a VGG19-based Faster R-CNN model achieved the highest mean IoU, outperforming the U-Net model (0.865 vs. 0.515, respectively).ConclusionsThese findings suggest that the integration of tomosynthesis with dual-energy applications significantly enhances the accuracy of bone density measurements in the proximal femur, and that the Faster R-CNN model provides superior segmentation performance, thereby offering a promising tool for bone density and osteoporosis management. Future research should focus on refining these models and validating their clinical applicability to improve patient outcomes.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"405-419"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DR-ConvNeXt: DR classification method for reconstructing ConvNeXt model structure.
IF 1.7 | Zone 3 | Medicine
Journal of X-Ray Science and Technology Pub Date : 2025-03-01 Epub Date: 2025-02-12 DOI: 10.1177/08953996241311190
Pengfei Song, Yun Wu
{"title":"DR-ConvNeXt: DR classification method for reconstructing ConvNeXt model structure.","authors":"Pengfei Song, Yun Wu","doi":"10.1177/08953996241311190","DOIUrl":"10.1177/08953996241311190","url":null,"abstract":"<p><p>BackgroundDiabetic retinopathy (DR) is a major complication of diabetes and a leading cause of blindness among the working-age population. However, the complex distribution and variability of lesion characteristics within the dataset present significant challenges for achieving high-precision classification of DR images.ObjectiveWe propose an automatic classification method for DR images, named DR-ConvNeXt, which aims to achieve accurate diagnosis of lesion types.MethodsThe method involves designing a dual-branch addition convolution structure and appropriately increasing the number of stacked ConvNeXt Block convolution layers. Additionally, a unique primary-auxiliary loss function is introduced, contributing to a significant enhancement in DR classification accuracy within the DR-ConvNeXt model.ResultsThe model achieved an accuracy of 91.8%,sensitivity of 81.6%, and specificity of 97.9% on the APTOS dataset. On the Messidor-2 dataset, the model achieved an accuracy of 83.6%, sensitivity of 74.0%, and specificity of 94.6%.ConclusionsThe DR-ConvNeXt model's classification results on the two publicly available datasets illustrate the significant advantages in all evaluation indexes for DR classification.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"448-460"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A novel detail-enhanced wavelet domain feature compensation network for sparse-view X-ray computed laminography.
IF 1.7 | Zone 3 | Medicine
Journal of X-Ray Science and Technology Pub Date : 2025-03-01 Epub Date: 2025-02-18 DOI: 10.1177/08953996251319183
Yawu Long, Qianglong Zhong, Jin Lu, Chengke Xiong
{"title":"A novel detail-enhanced wavelet domain feature compensation network for sparse-view X-ray computed laminography.","authors":"Yawu Long, Qianglong Zhong, Jin Lu, Chengke Xiong","doi":"10.1177/08953996251319183","DOIUrl":"10.1177/08953996251319183","url":null,"abstract":"<p><p>BackgroundX-ray Computed Laminography (CL) is a popular industrial tool for non-destructive visualization of flat objects. However, high-quality CL imaging requires a large number of projections, resulting in a long imaging time. Reducing the number of projections allows acceleration of the imaging process, but decreases the quality of reconstructed images.ObjectiveOur objective is to build a deep learning network for sparse-view CL reconstruction.MethodsConsidering complementarities of feature extraction in different domains, we design an encoder-decoder network that enables to compensate the missing information during spatial domain feature extraction in wavelet domain. Also, a detail-enhanced module is developed to highlight details. Additionally, Swin Transformer and convolution operators are combined to better capture features.ResultsA total of 3200 pairs of 16-view and 1024-view CL images (2880 pairs for training, 160 pairs for validation, and 160 pairs for testing) of solder joints have been employed to investigate the performance of the proposed network. It is observed that the proposed network obtains the highest image quality with PSNR and SSIM of 37.875 ± 0.908 dB, 0.992 ± 0.002, respectively. Also, it achieves competitive results on the AAPM dataset.ConclusionsThis study demonstrates the effectiveness and generalization of the proposed network for sparse-view CL reconstruction.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"488-498"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Research on ring artifact reduction method for CT images of nuclear graphite components.
IF 1.7 | Zone 3 | Medicine
Journal of X-Ray Science and Technology Pub Date : 2025-03-01 Epub Date: 2025-01-22 DOI: 10.1177/08953996241308760
Tianchen Zeng, Jintao Fu, Peng Cong, Ximing Liu, Guangduo Xu, Yuewen Sun
{"title":"Research on ring artifact reduction method for CT images of nuclear graphite components.","authors":"Tianchen Zeng, Jintao Fu, Peng Cong, Ximing Liu, Guangduo Xu, Yuewen Sun","doi":"10.1177/08953996241308760","DOIUrl":"10.1177/08953996241308760","url":null,"abstract":"<p><p>BackgroundThe supporting structure of high-temperature gas-cooled reactors (HTGR) comprises over 3000 carbon/graphite components, necessitating computed tomography (CT) non-destructive testing before operational deployment as per reactor technical specifications. However, CT images are frequently marred by severe ring artifacts due to the response non-uniformity and non-linearity of detector units, which diminishes the ability to detect defects effectively.MethodsTo address this issue, we propose a physics-based ring artifacts reduction method for CT that employs pixel response correction. This method physically accounts for the cause of ring artifacts and leverages the prior knowledge of the detected object to enhance the accuracy of the detection process.ResultsOur proposed method achieved a notable reduction in ring artifacts, as evidenced by a 37.7% decrease in ring total variation (RTV) values compared to the originals, significantly enhancing image quality. It also surpassed traditional and machine learning methods in artifact reduction while maintaining image details. The lower RTV scores confirm our method's superior effectiveness in minimizing ring artifacts.ConclusionWe believe that our research contributes to the enhancement of defect inspection performance in detection systems, which is crucial for ensuring the safety of reactors. The proposed method's effectiveness in mitigating ring artifacts while maintaining image quality highlights its potential impact on the reliability of non-destructive testing in the context of HTGR components.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"317-324"},"PeriodicalIF":1.7,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DML-MFCM: A multimodal fine-grained classification model based on deep metric learning for Alzheimer's disease diagnosis.
IF 1.7 | Zone 3 | Medicine
Journal of X-Ray Science and Technology Pub Date : 2025-01-01 Epub Date: 2025-01-13 DOI: 10.1177/08953996241300023
Heng Wang, Tiejun Yang, Jiacheng Fan, Huiyao Zhang, Wenjie Zhang, Mingzhu Ji, Jianyu Miao
{"title":"DML-MFCM: A multimodal fine-grained classification model based on deep metric learning for Alzheimer's disease diagnosis.","authors":"Heng Wang, Tiejun Yang, Jiacheng Fan, Huiyao Zhang, Wenjie Zhang, Mingzhu Ji, Jianyu Miao","doi":"10.1177/08953996241300023","DOIUrl":"10.1177/08953996241300023","url":null,"abstract":"<p><strong>Background: </strong>Alzheimer's disease (AD) is a neurodegenerative disorder. There are no drugs and methods for the treatment of AD, but early intervention can delay the deterioration of the disease. Therefore, the early diagnosis of AD and mild cognitive impairment (MCI) is significant. Structural magnetic resonance imaging (sMRI) is widely used to present structural changes in the subject's brain tissue. The relatively mild structural changes in the brain with MCI have led to ongoing challenges in the task of conversion prediction in MCI. Moreover, many multimodal AD diagnostic models proposed in recent years ignore the potential relationship between multimodal information.</p><p><strong>Objective: </strong>To solve these problems, we propose a multimodal fine-grained classification model based on deep metric learning for AD diagnosis (DML-MFCM), which can fully exploit the fine-grained feature information of sMRI and learn the potential relationships between multimodal feature information.</p><p><strong>Methods: </strong>First, we propose a fine-grained feature extraction module that can effectively capture the fine-grained feature information of the lesion area. Then, we introduce a multimodal cross-attention module to learn the potential relationships between multimodal data. In addition, we design a hybrid loss function based on deep metric learning. It can guide the model to learn the feature representation method between samples, which improves the model's performance in disease diagnosis.</p><p><strong>Results: </strong>We have extensively evaluated the proposed models on the ADNI and AIBL datasets. The ACC of AD vs. NC, MCI vs. NC, and sMCI vs. pMCI tasks in the ADNI dataset are 98.75%, 95.88%, and 88.00%, respectively. The ACC on the AD vs. NC and MCI vs. NC tasks in the AIBL dataset are 94.33% and 91.67%.</p><p><strong>Conclusions: </strong>The results demonstrate that our method has excellent performance in AD diagnosis.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"211-228"},"PeriodicalIF":1.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Quantitative elemental sensitive imaging based on K-edge subtraction tomography.
IF 1.7 | Zone 3 | Medicine
Journal of X-Ray Science and Technology Pub Date : 2025-01-01 Epub Date: 2024-11-27 DOI: 10.1177/08953996241290323
Yichi Zhang, Fen Tao, Ruoyang Gao, Ling Zhang, Jun Wang, Guohao Du, Tiqiao Xiao, Biao Deng
{"title":"Quantitative elemental sensitive imaging based on K-edge subtraction tomography.","authors":"Yichi Zhang, Fen Tao, Ruoyang Gao, Ling Zhang, Jun Wang, Guohao Du, Tiqiao Xiao, Biao Deng","doi":"10.1177/08953996241290323","DOIUrl":"10.1177/08953996241290323","url":null,"abstract":"<p><strong>Background: </strong>K-edge subtraction (KES) tomography has been extensively utilized in the field of elemental sensitive imaging due to its high spatial resolution, rapid acquisition, and three-dimensional (3D) imaging capabilities. However, previous studies have primarily focused on the qualitative analysis of element contents, rather than quantitative assessment.</p><p><strong>Objective: </strong>The current study proposes a novel method for quantitative elemental analysis based on K-edge subtraction tomography.</p><p><strong>Methods: </strong>The linear correlation between the slice grayscale of standard samples and the difference in their linear absorption coefficients is confirmed. This finding suggests that the grayscale data from slices may be employed to perform quantitative estimations of elemental compositions.</p><p><strong>Results: </strong>In order to verify the accuracy and validity of this method, the target element contents in standard and actual samples are quantitatively analyzed, respectively. The results demonstrate that the method is capable of achieving nanometer-resolved quantitative elemental sensitive imaging with a relative error of less than 3% in the target elemental content.</p><p><strong>Conclusions: </strong>The method described in this paper is expected to expand the scope of applications for K-edge subtraction tomography and provide a novel approach to achieve more precise and convenient quantitative elemental analysis.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"37-46"},"PeriodicalIF":1.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0