Journal of X-Ray Science and Technology: Latest Articles

CT-based intratumoral and peritumoral deep transfer learning features prediction of lymph node metastasis in non-small cell lung cancer.
IF 3.0 | Medicine, CAS Tier 3
Journal of X-Ray Science and Technology | Pub Date: 2024-01-01 | DOI: 10.3233/XST-230326
Tianyu Lu, Jianbing Ma, Jiajun Zou, Chenxu Jiang, Yangyang Li, Jun Han

Background: The main metastatic route for lung cancer is lymph node metastasis, and studies have shown that non-small cell lung cancer (NSCLC) carries a high risk of lymph node infiltration.
Objective: To compare handcrafted radiomics (HR) features and deep transfer learning (DTL) features extracted from the intratumoral and peritumoral regions of CT images in predicting NSCLC lymph node metastasis across different machine learning classifiers.
Methods: Data from 199 patients with pathologically confirmed NSCLC were retrospectively collected and divided into training (n = 159) and validation (n = 40) cohorts. The best HR and DTL features of the intratumoral and peritumoral regions were extracted and selected, and Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Light Gradient Boosting Machine (LightGBM), Multilayer Perceptron (MLP), and Logistic Regression (LR) models were constructed and evaluated.
Results: Among the five models, the LR classifier performed best for both HR and DTL features. Its training-cohort AUCs were 0.841 (95% CI: 0.776-0.907) and 0.955 (95% CI: 0.926-0.983), and its validation-cohort AUCs were 0.812 (95% CI: 0.677-0.948) and 0.893 (95% CI: 0.795-0.991), respectively. The DTL signature was superior to the handcrafted radiomics signature.
Conclusions: Compared with the radiomics signature, the DTL signature constructed from intratumoral and peritumoral CT regions better predicts NSCLC lymph node metastasis.
Pages: 597-609. Citations: 0.
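The abstract above reports AUCs with 95% confidence intervals for each cohort. As an illustration only (not the authors' code), the AUC can be computed as a rank statistic, and its confidence interval estimated with a percentile bootstrap; a minimal numpy sketch:

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the Mann-Whitney U statistic: the probability that a
    random positive case is scored above a random negative case."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive ranked above negative
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_ci(y_true, y_score, n_boot=2000, seed=0):
    """Percentile bootstrap 95% CI for the AUC (resample patients)."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(set(y_true[idx])) < 2:  # need both classes in the resample
            continue
        stats.append(auc_score(y_true[idx], y_score[idx]))
    return np.percentile(stats, [2.5, 97.5])
```

The papers in this listing do not state which CI method they used; the bootstrap shown here is one common choice.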
Revolutionizing tumor detection and classification in multimodality imaging based on deep learning approaches: Methods, applications and limitations.
IF 1.7 | Medicine, CAS Tier 3
Journal of X-Ray Science and Technology | Pub Date: 2024-01-01 | DOI: 10.3233/XST-230429
Dildar Hussain, Mohammed A Al-Masni, Muhammad Aslam, Abolghasem Sadeghi-Niaraki, Jamil Hussain, Yeong Hyeon Gu, Rizwan Ali Naqvi

Background: The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal medical imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking.
Objective: This review comprehensively examines DL methods for tumor detection and classification across MMI modalities, aiming to provide insights into advancements, limitations, and key challenges for further progress.
Methods: A systematic literature analysis identifies DL studies for tumor detection and classification, outlining methodologies including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants. Integration of multimodality imaging enhances accuracy and robustness.
Results: Recent advancements in DL-based MMI evaluation methods are surveyed, focusing on tumor detection and classification tasks. Various DL approaches, including CNNs, YOLO, Siamese networks, fusion-based models, attention-based models, and generative adversarial networks, are discussed with emphasis on PET-MRI, PET-CT, and SPECT-CT.
Future directions: The review outlines emerging trends and future directions in DL-based tumor analysis, aiming to guide researchers and clinicians toward more effective diagnosis and prognosis, and stresses continued innovation and collaboration in this rapidly evolving domain.
Conclusion: The literature analysis underscores the efficacy of DL approaches for tumor detection and classification, highlighting their potential to address challenges in MMI analysis and their implications for clinical practice.
Pages: 857-911. Citations: 0.
Multiscale unsupervised network for deformable image registration.
IF 1.7 | Medicine, CAS Tier 3
Journal of X-Ray Science and Technology | Pub Date: 2024-01-01 | DOI: 10.3233/XST-240159
Yun Wang, Wanru Chang, Chongfei Huang, Dexing Kong

Background: Deformable image registration (DIR) plays an important part in many clinical tasks, and deep learning has made significant progress in DIR over the past few years.
Objective: To propose a fast multiscale unsupervised deformable image registration method (FMIRNet) for monomodal image registration.
Methods: We designed a multiscale fusion module that estimates large displacement fields by combining and refining the deformation fields of three scales, using a spatial attention mechanism to weight the displacement field pixel by pixel. In addition to mean squared error (MSE), a structural similarity (SSIM) measure was added during training to enhance structural consistency between the deformed and fixed images.
Results: The registration method was evaluated on EchoNet, CHAOS, and SLIVER, and showed clear performance improvements in SSIM, NCC, and NMI scores. Furthermore, FMIRNet was integrated into segmentation networks (FCN, UNet) in a joint learning framework to boost segmentation on a dataset with few manual annotations; the joint segmentation methods improved Dice, HD, and ASSD scores.
Conclusions: The proposed FMIRNet is effective for large deformation estimation, and its registration capability is generalizable and robust in joint registration and segmentation frameworks, generating reliable labels for training segmentation tasks.
Pages: 1385-1398. Citations: 0.
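The FMIRNet abstract describes a training objective that adds an SSIM term to the MSE between deformed and fixed images. A minimal numpy sketch of such a combined loss, using a single-window (global) SSIM simplification rather than the standard sliding-window SSIM, with the weighting `alpha` as an illustrative assumption rather than a value from the paper:

```python
import numpy as np

# Standard SSIM stabilizing constants for images scaled to [0, 1].
C1, C2 = (0.01 * 1.0) ** 2, (0.03 * 1.0) ** 2

def global_ssim(x, y):
    """SSIM computed over the whole image as one window (simplified)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def registration_loss(warped, fixed, alpha=0.5):
    """MSE plus an SSIM dissimilarity term: identical images give ~0."""
    mse = np.mean((warped - fixed) ** 2)
    return mse + alpha * (1.0 - global_ssim(warped, fixed))
```

In practice the SSIM term is usually computed with a sliding Gaussian window (e.g. `skimage.metrics.structural_similarity`); the global version above only conveys the structure of the objective.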
Multimodal feature fusion in deep learning for comprehensive dental condition classification.
IF 3.0 | Medicine, CAS Tier 3
Journal of X-Ray Science and Technology | Pub Date: 2024-01-01 | DOI: 10.3233/XST-230271
Shang-Ting Hsieh, Ya-Ai Cheng

Background: Dental health issues are on the rise, necessitating prompt and precise diagnosis; automated dental condition classification can support this need.
Objective: To evaluate the effectiveness of deep learning methods and multimodal feature fusion techniques for automated dental condition classification.
Methods and materials: A dataset of 11,653 clinically sourced images representing six prevalent dental conditions (caries, calculus, gingivitis, tooth discoloration, ulcers, and hypodontia) was used. Features were extracted with five Convolutional Neural Network (CNN) models and fused into a matrix, and classification models were built with Support Vector Machine (SVM) and Naive Bayes classifiers. Evaluation metrics included accuracy, recall, precision, and the Kappa index.
Results: The SVM classifier with feature fusion performed best, with a Kappa index of 0.909 and accuracy of 0.925, significantly surpassing individual CNN models such as EfficientNetB0 (Kappa 0.814, accuracy 0.847).
Conclusions: Combining feature fusion with advanced machine learning algorithms can significantly bolster the precision and robustness of dental condition classification systems, offering dental professionals a valuable tool for enhanced diagnostic accuracy and improved patient outcomes.
Pages: 303-321. Citations: 0.
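The dental study fuses per-CNN feature vectors into a single matrix before classical classification. The fusion step itself is typically just column-wise concatenation; a sketch (the per-model feature dimensions below are hypothetical placeholders, not taken from the paper):

```python
import numpy as np

def fuse_features(feature_blocks):
    """Concatenate per-model feature matrices column-wise into one
    fused matrix of shape (n_samples, sum of per-model dims)."""
    return np.concatenate(feature_blocks, axis=1)

rng = np.random.default_rng(0)
n = 8  # toy number of images
# Hypothetical output dims for five CNN backbones.
blocks = [rng.normal(size=(n, d)) for d in (1280, 2048, 512, 1024, 1536)]
fused = fuse_features(blocks)  # shape (8, 6400), ready for an SVM
```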
Clinical boundary conditions for propagation-based X-ray phase contrast imaging: from bio-sample models targeting to clinical applications.
IF 1.7 | Medicine, CAS Tier 3
Journal of X-Ray Science and Technology | Pub Date: 2024-01-01 | DOI: 10.3233/XST-230425
M S S Gobo, D R Balbin, M G Hönnicke, M E Poletti

Background: Typical propagation-based X-ray phase contrast imaging (PB-PCI) experiments with polyenergetic sources are performed under idealized conditions: a low-energy spectrum (mainly characteristic X-rays), thin homogeneous materials treated as weakly absorbing objects, large object-to-detector distances, long exposure times, and non-clinical detectors.
Objective: To explore PB-PCI under boundary conditions typical of a clinical scenario: a low-power polychromatic X-ray source (spectrum without characteristic X-rays), thick heterogeneous materials, and a small-area imaging detector with a high radiation detection threshold.
Methods: A PB-PCI setup built around a microfocus X-ray source and a dental imaging detector was characterized in terms of the effect of different spectra and geometric parameters on the acquired images. Test phantoms containing fibers and homogeneous materials with similar attenuation characteristics, as well as animal bone and mixed soft tissues (bio-sample models), were analyzed. Contrast-to-noise ratio (CNR), system spatial resolution, and kerma values were obtained for all images.
Results: Phase contrast images showed CNR up to 15% higher than conventional contact images, most visibly at large magnifications (>3) and object-to-detector distances (>13 cm). The influence of the spectrum was not appreciable, owing to the low efficiency of the detector (thin scintillator screen) at high energies.
Conclusions: Despite the clinical boundary conditions used in this work regarding the X-ray spectrum, thick samples, and detection system, it was possible to acquire phase contrast images of biological samples.
Pages: 1163-1175. Citations: 0.
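Several entries in this listing quantify image quality with the contrast-to-noise ratio (CNR). One common definition takes the signal difference between the object ROI and the background, normalized by the background noise (other variants pool noise from both ROIs; the abstracts do not state which variant was used):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)|
    divided by the background standard deviation."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return abs(roi.mean() - background.mean()) / background.std(ddof=1)
```

For example, an ROI averaging 110 HU against a background of mean 100 HU and noise SD 2 HU gives CNR = 5.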
Ensembled CNN with artificial bee colony optimization method for esophageal cancer stage classification using SVM classifier.
IF 1.7 | Medicine, CAS Tier 3
Journal of X-Ray Science and Technology | Pub Date: 2024-01-01 | DOI: 10.3233/XST-230111
A Chempak Kumar, D Muhammad Noorul Mubarak

Background: Esophageal cancer (EC) is an aggressive cancer with a high fatality rate and rapidly rising incidence worldwide, yet early diagnosis of EC remains a challenging task for clinicians.
Objective: To develop and test a new computer-aided diagnosis (CAD) network that combines several machine learning models and optimization methods to detect EC and classify cancer stages.
Methods: The study develops a deep learning network that classifies the various stages of EC and the premalignant stage, Barrett's Esophagus, from endoscopic images. The proposed model uses a multi-convolutional neural network (CNN) approach combining Xception, MobileNetV2, GoogLeNet, and Darknet53 for feature extraction. The extracted features are fused and passed to a wrapper-based Artificial Bee Colony (ABC) optimization technique to select the most accurate and relevant attributes, and a multi-class support vector machine (SVM) classifies the selected feature set into the various stages. A dataset of 523 Barrett's Esophagus images, 217 ESCC images, and 288 EAC images is used to train the network and test its classification performance.
Results: The proposed network combining Xception, MobileNetV2, GoogLeNet, and Darknet53 outperforms all existing methods with an overall classification accuracy of 97.76% under 3-fold cross-validation.
Conclusion: This study demonstrates that a deep learning network combining a multi-CNN model with ABC and a multi-SVM is more efficient than individual pre-trained networks for EC analysis and stage classification.
Pages: 31-51. Citations: 0.
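The 97.76% accuracy above comes from 3-fold cross-validation: the dataset is split into three folds, each serving once as the test set. A minimal sketch of generating such folds (illustrative only; the study may have stratified the split by class, which this sketch does not do):

```python
import numpy as np

def kfold_indices(n_samples, k=3, seed=0):
    """Shuffle sample indices and yield (train, test) index pairs,
    with each of the k folds used once as the test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

The overall cross-validated accuracy is then the mean of the per-fold test accuracies.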
A deep learning and radiomics based Alberta stroke program early CT score method on CTA to evaluate acute ischemic stroke.
IF 1.7 | Medicine, CAS Tier 3
Journal of X-Ray Science and Technology | Pub Date: 2024-01-01 | DOI: 10.3233/XST-230119
Ting Fang, Naijia Liu, Shengdong Nie, Shouqiang Jia, Xiaodan Ye

Background: The Alberta stroke program early CT score (ASPECTS) is a semi-quantitative method for evaluating early ischemic changes in patients with acute ischemic stroke, guiding physicians in treatment decisions and prognostic judgments.
Objective: We propose a method combining deep learning and radiomics to alleviate the large inter-observer variance physicians face in ASPECTS and to help them improve its accuracy and comprehensiveness.
Methods: Brain regions were segmented with a method based on an improved encoding-decoding network; the deep convolutional neural network yields the 10 regions defined for ASPECTS. Pyradiomics was then used to extract features associated with cerebral infarction, and those significantly associated with stroke were selected to train machine learning classifiers that determine the presence of cerebral infarction in each scored brain region.
Results: The Dice coefficient for brain region segmentation reached 0.79. Three radiomics features were selected to identify cerebral infarction in brain regions, and a 5-fold cross-validation experiment showed these features to be reliable; the classifier trained on them reached a prediction performance of AUC = 0.95. Moreover, the intraclass correlation coefficient between ASPECTS obtained by the automated method and by physicians was 0.86 (95% confidence interval, 0.56-0.96).
Conclusions: This study demonstrates the advantages of replacing traditional template registration with a deep learning network for brain region segmentation, which determines the shape and location of each brain region more precisely. In addition, the new radiomics-based brain region classifier has the potential to assist physicians in clinical stroke detection and to improve the consistency of ASPECTS.
Pages: 17-30. Citations: 0.
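The segmentation result above is summarized by the Dice coefficient, whose standard definition for binary masks is 2|A∩B| / (|A| + |B|):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity between two binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0
```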
Diagnostic reference levels in spinal CT: Jordanian assessments and global benchmarks.
IF 3.0 | Medicine, CAS Tier 3
Journal of X-Ray Science and Technology | Pub Date: 2024-01-01 | DOI: 10.3233/XST-230276
Mohammad Rawashdeh, Abdel-Baset Bani Yaseen, Mark McEntee, Andrew England, Praveen Kumar, Charbel Saade

Background: To reduce radiation dose and the associated risks, legislation in several countries calls for Diagnostic Reference Levels (DRLs). Spinal radiography is a common, high-dose examination, so this work aimed to establish DRLs for computed tomography (CT) examinations of the spine in healthcare institutions across Jordan.
Methods: Data were retrieved from the picture archiving and communication system (PACS), including the volume CT Dose Index (CTDIvol) and Dose Length Product (DLP). The median values of the dosimetric indices were calculated for each site, and DRL values were defined as the 75th percentile of the distribution of the median CTDIvol and DLP values.
Results: Data were collected from 659 CT examinations (316 cervical spine and 343 lumbosacral spine). Of the participants, 68% were male, and mean patient weight was 69.7 kg (minimum 60, maximum 80, SD 8.9). The 75th percentiles for the DLP of cervical and lumbosacral spine CT scans in Jordan were 565.2 and 967.7 mGy.cm, respectively.
Conclusions: This research demonstrates wide variability in CTDIvol and DLP values for spinal CT examinations; the variations were associated with the acquisition protocol and highlight the need to optimize radiation dose in spinal CT examinations.
Pages: 725-734. Citations: 0.
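The DRL definition above (75th percentile of the distribution of per-site median dose values) is a one-line computation once the per-site medians are collected; a sketch with made-up illustrative numbers:

```python
import numpy as np

def drl(per_site_medians):
    """DRL: 75th percentile of per-site median dose values
    (applied to CTDIvol or DLP distributions)."""
    return float(np.percentile(per_site_medians, 75))

# Hypothetical per-site median DLP values (mGy.cm), for illustration only.
site_median_dlp = [480.0, 520.0, 610.0, 560.0, 700.0, 590.0]
national_dlp_drl = drl(site_median_dlp)
```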
Comparative study of abdominal CT enhancement in overweight and obese patients based on different scanning modes combined with different contrast medium concentrations.
IF 1.7 | Medicine, CAS Tier 3
Journal of X-Ray Science and Technology | Pub Date: 2024-01-01 | DOI: 10.3233/XST-230327
Kai Gao, Ze-Peng Ma, Tian-Le Zhang, Yi-Wen Liu, Yong-Xia Zhao

Purpose: To compare image quality, iodine intake, and radiation dose in overweight and obese patients undergoing contrast-enhanced abdominal computed tomography (CT) with different scanning modes and contrast media.
Methods: Ninety overweight and obese patients (25 kg/m² ≤ body mass index (BMI) < 30 kg/m², and BMI ≥ 30 kg/m²) who underwent contrast-enhanced abdominal CT were randomized into three groups (A, B, and C) of 30 each, scanned with gemstone spectral imaging (GSI) + 320 mgI/ml, 100 kVp + 370 mgI/ml, and 120 kVp + 370 mgI/ml, respectively. Monochromatic energy images for group A were reconstructed at 50-70 keV (5 keV intervals). The iodine intake and radiation dose of each group were recorded and calculated. The CT values, contrast-to-noise ratios (CNRs), and subjective scores of each group A subgroup versus groups B and C were compared using one-way analysis of variance or the Kruskal-Wallis H test, and the optimal keV for group A was selected.
Results: The dual-phase CT values and CNRs of each region in group A were higher than or similar to those in groups B and C at 50-60 keV, and similar to or lower at 65 and 70 keV. The subjective scores of the group A dual-phase images were lower than those of groups B and C at 50 and 55 keV, with no significant difference at 60-70 keV. Compared with groups B and C, iodine intake in group A decreased by 12.5% and 13.3%, respectively, and the effective doses in groups A and B were 24.7% and 25.8% lower than in group C.
Conclusion: GSI + 320 mgI/ml for contrast-enhanced abdominal CT in overweight patients maintains image quality while reducing iodine intake and radiation dose; the optimal energy level was 60 keV.
Pages: 569-581. Citations: 0.
Connectome-based schizophrenia prediction using structural connectivity - Deep Graph Neural Network (sc-DGNN).
IF 1.7 | Medicine, CAS Tier 3
Journal of X-Ray Science and Technology | Pub Date: 2024-01-01 | DOI: 10.3233/XST-230426
P Udayakumar, R Subhashini

Background: Understanding the complex organization of the human brain's structural and functional connectivity (the connectome) is essential for gaining insights into cognitive processes and disorders.
Objective: To improve prediction accuracy for brain disorders, this study investigates dysconnected subnetworks and graph structures associated with schizophrenia.
Method: A structural connectivity-deep graph neural network (sc-DGNN) model is proposed and compared with machine learning (ML) and deep learning (DL) models, using diffusion magnetic resonance imaging (dMRI) data from eighty-eight subjects, three classical ML models, and five DL models.
Result: The sc-DGNN model effectively predicts the dysconnectivity associated with schizophrenia and exhibits superior performance to traditional ML and DL (GNN) methods in accuracy, sensitivity, specificity, precision, F1-score, and area under the receiver operating characteristic curve (AUC).
Conclusion: In the classification task on schizophrenia using structural connectivity matrices, linear discriminant analysis (LDA) achieved a 72% accuracy rate among the ML models, while sc-DGNN achieved a 93% accuracy rate among the DL models in distinguishing patients with schizophrenia from healthy controls.
Pages: 1041-1059. Citations: 0.