medRxiv - Radiology and Imaging: Latest Articles

Influence of Prior Probability Information on Large Language Model Performance in Radiological Diagnosis
medRxiv - Radiology and Imaging Pub Date : 2024-08-28 DOI: 10.1101/2024.08.27.24312693
Takahiro Fukushima, Ryo Kurokawa, Akifumi Hagiwara, Yuki Sonoda, Yusuke Asari, Mariko Kurokawa, Jun Kanzawa, Wataru Gonoi, Osamu Abe
{"title":"Influence of Prior Probability Information on Large Language Model Performance in Radiological Diagnosis","authors":"Takahiro Fukushima, Ryo Kurokawa, Akifumi Hagiwara, Yuki Sonoda, Yusuke Asari, Mariko Kurokawa, Jun Kanzawa, Wataru Gonoi, Osamu Abe","doi":"10.1101/2024.08.27.24312693","DOIUrl":"https://doi.org/10.1101/2024.08.27.24312693","url":null,"abstract":"<strong>Background</strong> Large language models (LLMs) show promise in radiological diagnosis, but their performance may be affected by the context of the cases presented.","PeriodicalId":501358,"journal":{"name":"medRxiv - Radiology and Imaging","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142181572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cardiothoracic Ratio (CTR) Among Patients Presenting for Chest X-ray in Radiology Department at Mulago National Referral Hospital: A Patients' Health Indicator for Clinical Application.
medRxiv - Radiology and Imaging Pub Date : 2024-08-28 DOI: 10.1101/2024.08.28.24312720
Alen Musisi, Rebecca Nakatudde, Oliver Namuwonge, Deborah Babirye, Ismail Kintu, Francis Olweny, Richard Malumba, Victoria Nakalanzi, Aloysius Gonzaga Mubuuke
{"title":"Cardiothoracic Ratio (CTR) Among Patients Presenting for Chest X-ray in Radiology Department at Mulago National Referral Hospital: A Patients’ Health Indicator for Clinical Application.","authors":"Alen Musisi, Rebecca Nakatudde, Oliver Namuwonge, Deborah Babirye, Ismail Kintu, Francis Olweny, Richard Malumba, Victoria Nakalanzi, Aloysius Gonzaga Mubuuke","doi":"10.1101/2024.08.28.24312720","DOIUrl":"https://doi.org/10.1101/2024.08.28.24312720","url":null,"abstract":"Introduction/background: The heart is vital, and even minor dysfunctions can significantly impact the body. Cardiologists need always to determine heart size, which varies with physiological changes. Advanced measurement techniques are costly and often inaccessible to a common man. Measuring the cardiothoracic ratio (CTR) via conventional X-ray is a common and more affordable option, but there's a need for even cheaper alternatives\u0000Objective: To determine relationship between CTR and presenting clinical indications and to relate CTR to the body parameters to find an appropriate relationship that can be utilized in low resource facilities in determining heart size.\u0000Methodology: This cross-sectional study involved 386 patients undergoing chest radiographs at Mulago National Specialized Hospital's radiology department. Data were summarized using frequencies and percentages. Associations between the cardiothoracic ratio (CTR) and independent variables were analyzed using Pearson’s chi-square, Fisher’s exact test, Spearman’s correlation coefficient, simple linear regression, and multivariate regression. Statistical significance was set at a p-value of &lt; 0.05.\u0000Results: The median cardiothoracic ratio (CTR) was 0.46, with an interquartile range of 0.42 to 0.50. Female patients had a higher CTR than males. Significant positive correlations were found between CTR; and BMI (p &lt; 0.001, correlation 0.21), and BSA (p = 0.016, correlation 0.12), and BSI (p &lt; 0.001, correlation 0.19). The diagnostic accuracy of a linear regression equation containing BSA as an estimator of CTR showed relatively fair performance compared to the linear regression equations with BSI and BMI. It showed sensitivity, specificity, and positive and negative predictive values of 29.2%, 86.0%, 63.6%, and 59.0% for males, and 8.3%, 98.1%, 75.0%, and 60.7% for females, respectively.\u0000Conclusion: BSA shows a moderately good relationship with CTR, while the influence of body habitus on CTR is minimal. Thus, using body parameters to predict CTR should be approached cautiously. We recommend conducting a similar study on a more diverse general population","PeriodicalId":501358,"journal":{"name":"medRxiv - Radiology and Imaging","volume":"185 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142227565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cross-Institutional European Evaluation and Validation of Automated Multilabel Segmentation for Acute Intracerebral Hemorrhage and Complications
medRxiv - Radiology and Imaging Pub Date : 2024-08-28 DOI: 10.1101/2024.08.27.24312653
Jawed Nawabi, Georg Lukas Baumgaertner, Sophia Schulze-Weddige, Andrea Dell'Orco, Andrea Morotti, Federico Mazzacane, Helge C Kniep, Frieder Schlunk, Maik FH Boehmer, Burakhan Akkurt, Tobias Orth, Jana-Sofie Weissflog, Maik Schumann, Peter Sporns, Michael Scheel, Uta Hanning, Jens Fiehler, Tobias Penzkofer
{"title":"Cross-Institutional European Evaluation and Validation of Automated Multilabel Segmentation for Acute Intracerebral Hemorrhage and Complications","authors":"Jawed Nawabi, Georg Lukas Baumgaertner, Sophia Schulze-Weddige, Andrea Dell'Orco, Andrea Morotti, Federico Mazzacane, Helge C Kniep, Frieder Schlunk, Maik FH Boehmer, Burakhan Akkurt, Tobias Orth, Jana-Sofie Weissflog, Maik Schumann, Peter Sporns, Michael Scheel, Uta Hanning, Jens Fiehler, Tobias Penzkofer","doi":"10.1101/2024.08.27.24312653","DOIUrl":"https://doi.org/10.1101/2024.08.27.24312653","url":null,"abstract":"Purpose: To evaluate a nnU-Net-based deep learning for automated segmentation of intracerebral hemorrhage (ICH), intraventricular hemorrhage (IVH), and perihematomal edema (PHE) on noncontrast CT scans. Materials and Methods: Retrospective data from acute ICH patients admitted at four European stroke centers (2017-2019), along healthy controls (2022-2023), were analyzed. nnU-Net was trained (n=775) using a 5-fold cross-valiadtion approach, tested (n=189), and seperatly validated on internal (n=121), external (n=169), and diverse ICH etiologies (n=175) datasets. Interrater-validated ground truth served as the reference standard. Lesion detection, segmentation, and volumetric accuracy were measured, alongside time efficiency versus manual segmentation. Results: Test set results revealed high nnU-Net accuracy (median Dice Similartiy Coefficient (DSC): ICH 0.91, IVH 0.76, PHE 0.71) and volumetric correlation (ICH, IVH: r=0.99; PHE: r=0.92). Sensitivities were high (ICH, PHE: 99%; IVH: 97%), with IVH detection specificities and sensitivities &gt;90% for volumes up to 0.2 ml. Anatomical-specific metrics showed higher performance for lobar and deep hemorrhages (median DSC 0.90 and 0.92, respectively) and lower for brainstem (median DSC 0.70). Concurrent hemorrhages did not affect accuracy, p&gt; 0.05. Across validation sets, segmentation precision was consistent, especially for ICH (median DSC 0.85-0.90), with PHE slightly lower (median DSC 0.61-0.66) and IVH best in the second and third set (median DSC 0.80). Average processing time was 18.2 seconds versus 18.01 minutes manually. Conclusion: The nnU-Net provides reliable, time-efficient ICH, IVH, and PHE segmentation, validated across various clinical settings, with excellent anatomical-specific performance for lobar and deep hemorrhages. It shows promise for enhancing clinical workflow and research initiatives.","PeriodicalId":501358,"journal":{"name":"medRxiv - Radiology and Imaging","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142181570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning-powered CT-less Multi-tracer Organ Segmentation from PET Images: A solution for unreliable CT segmentation in PET/CT Imaging
medRxiv - Radiology and Imaging Pub Date : 2024-08-28 DOI: 10.1101/2024.08.27.24312482
Yazdan Salimi, Zahra Mansouri, Isaac Shiri, Ismini Mainta, Habib Zaidi
{"title":"Deep Learning-powered CT-less Multi-tracer Organ Segmentation from PET Images: A solution for unreliable CT segmentation in PET/CT Imaging","authors":"Yazdan Salimi, Zahra Mansouri, Isaac Shiri, Ismini Mainta, Habib Zaidi","doi":"10.1101/2024.08.27.24312482","DOIUrl":"https://doi.org/10.1101/2024.08.27.24312482","url":null,"abstract":"Introduction: The common approach for organ segmentation in hybrid imaging relies on co-registered CT (CTAC) images. This method, however, presents several limitations in real clinical workflows where mismatch between PET and CT images are very common. Moreover, low-dose CTAC images have poor quality, thus challenging the segmentation task. Recent advances in CT-less PET imaging further highlight the necessity for an effective PET organ segmentation pipeline that does not rely on CT images. Therefore, the goal of this study was to develop a CT-less multi-tracer PET segmentation framework.\u0000Methods: We collected 2062 PET/CT images from multiple scanners. The patients were injected with either 18F-FDG (1487) or 68Ga-PSMA (575). PET/CT images with any kind of mismatch between PET and CT images were detected through visual assessment and excluded from our study. Multiple organs were delineated on CT components using previously trained in-house developed nnU-Net models. The segmentation masks were resampled to co-registered PET images and used to train four different deep-learning models using different images as input, including non-corrected PET (PET-NC) and attenuation and scatter-corrected PET (PET-ASC) for 18F-FDG (tasks #1 and #2, respectively using 22 organs) and PET-NC and PET-ASC for 68Ga tracers (tasks #3 and #4, respectively, using 15 organs). The models performance was evaluated in terms of Dice coefficient, Jaccard index, and segment volume difference.\u0000Results: The average Dice coefficient over all organs was 0.81±0.15, 0.82±0.14, 0.77±0.17, and 0.79±0.16 for tasks #1, #2, #3, and #4, respectively. PET-ASC models outperformed PET-NC models (P-value &lt; 0.05). The highest Dice values were achieved for the brain (0.93 to 0.96 in all four tasks), whereas the lowest values were achieved for small organs, such as the adrenal glands. The trained models showed robust performance on dynamic noisy images as well.\u0000Conclusion: Deep learning models allow high performance multi-organ segmentation for two popular PET tracers without the use of CT information. These models may tackle the limitations of using CT segmentation in PET/CT image quantification, kinetic modeling, radiomics analysis, dosimetry, or any other tasks that require organ segmentation masks.","PeriodicalId":501358,"journal":{"name":"medRxiv - Radiology and Imaging","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142181573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
High-Field 7T MRI in a drug-resistant paediatric epilepsy cohort: image comparison and radiological outcomes
medRxiv - Radiology and Imaging Pub Date : 2024-08-23 DOI: 10.1101/2024.08.19.24312117
Katy Vecchiato, Chiara Casella, Ayse Sila Dokumaci, Olivia Carney, Jon O. Cleary, Pierluigi Di Ciò, Michela Cleri, Kathleen Colford, Rory J. Piper, Tomoki Arichi Arichi, Michael Eyre, Fraser Aitken, Raphael Tomi-Tricot, Tom Wilkinson, Colm J. McGinnity, Sharon L. Giles, Shaihan Malik, Alexander Hammers, Philippa Bridgen, David W Carmichael, Jonathan O'Muircheartaigh
{"title":"High-Field 7T MRI in a drug-resistant paediatric epilepsy cohort: image comparison and radiological outcomes","authors":"Katy Vecchiato, Chiara Casella, Ayse Sila Dokumaci, Olivia Carney, Jon O. Cleary, Pierluigi Di Ciò, Michela Cleri, Kathleen Colford, Rory J. Piper, Tomoki Arichi Arichi, Michael Eyre, Fraser Aitken, Raphael Tomi-Tricot, Tom Wilkinson, Colm J. McGinnity, Sharon L. Giles, Shaihan Malik, Alexander Hammers, Philippa Bridgen, David W Carmichael, Jonathan O'Muircheartaigh","doi":"10.1101/2024.08.19.24312117","DOIUrl":"https://doi.org/10.1101/2024.08.19.24312117","url":null,"abstract":"Background and Objectives: Epileptogenic lesions in focal epilepsy can be subtle or undetected on conventional brain MRI. Ultra-high field (7T) MRI offers higher spatial resolution, contrast and signal-to-noise ratio compared to conventional imaging systems and has shown promise in the pre-surgical evaluation of adult focal epilepsy. However, the utility of ultra-high field MRI in paediatric focal epilepsy, where malformations of cortical development are more common, is unclear. This study compared 7T to conventional 3T MRI in children with epilepsy by comparing: (i) scan tolerability; (ii) radiological image quality; (iii) lesion yield. Materials and Methods: Children with drug-resistant focal epilepsy and healthy controls were recruited prospectively and imaged at both 3T and 7T. Safety and tolerability during scanning was assessed via a questionnaire. Image quality was evaluated by an expert paediatric neuroradiologist and estimated quantitatively by comparing cortical thickness between field strengths. To assess lesion detection yield of 7T MRI, a multi-disciplinary team jointly reviewed patients' images. Results: 41 patients (8-17 years, mean=12.6 years, 22 male) and 22 healthy controls (8-17 years, mean=11.7 years, 15 male) were recruited. All children completed the scan, with no significant adverse events. Higher discomfort due to dizziness was reported at 7T (p=0.02), with side-effects more frequently noted in younger children (p=0.02). However, both field strengths were generally well-tolerated and side-effects were transient. 7T images had increased inhomogeneity and artefacts compared to those obtained at 3T. Cortical thickness measurements were significantly thinner at 7T (p&lt;0.001). 8/26 (31%) patients had new lesions identified at 7T which were not identified at 3T, influencing the surgical management in 4/26 (15%). Discussion: 7T MRI in children with epilepsy is feasible, well-tolerated and is associated with a 31% improvement in lesion detection rates.","PeriodicalId":501358,"journal":{"name":"medRxiv - Radiology and Imaging","volume":"87 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142181588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generative AI Enables Medical Image Segmentation in Ultra Low-Data Regimes
medRxiv - Radiology and Imaging Pub Date : 2024-08-23 DOI: 10.1101/2024.08.23.24312461
Li Zhang, Basu Jindal, Ahmed Alaa, Robert Weinreb, David Wilson, Eran Segal, James Zou, Pengtao Xie
{"title":"Generative AI Enables Medical Image Segmentation in Ultra Low-Data Regimes","authors":"Li Zhang, Basu Jindal, Ahmed Alaa, Robert Weinreb, David Wilson, Eran Segal, James Zou, Pengtao Xie","doi":"10.1101/2024.08.23.24312461","DOIUrl":"https://doi.org/10.1101/2024.08.23.24312461","url":null,"abstract":"Semantic segmentation of medical images is pivotal in applications like disease diagnosis and treatment planning. While deep learning has excelled in automating this task, a major hurdle is the need for numerous annotated segmentation masks, which are resource-intensive to produce due to the required expertise and time. This scenario often leads to ultra low-data regimes, where annotated images are extremely limited, posing significant challenges for the generalization of conventional deep learning methods on test images. To address this, we introduce a generative deep learning framework, which uniquely generates high-quality paired segmentation masks and medical images, serving as auxiliary data for training robust models in data-scarce environments. Unlike traditional generative models that treat data generation and segmentation model training as separate processes, our method employs multi-level optimization for end-to-end data generation. This approach allows segmentation performance to directly influence the data generation process, ensuring that the generated data is specifically tailored to enhance the performance of the segmentation model. Our method demonstrated strong generalization performance across 9 diverse medical image segmentation tasks and on 16 datasets, in ultra-low data regimes, spanning various diseases, organs, and imaging modalities. When applied to various segmentation models, it achieved performance improvements of 10-20% (absolute), in both same-domain and out-of-domain scenarios. Notably, it requires 8 to 20 times less training data than existing methods to achieve comparable results. This advancement significantly improves the feasibility and cost-effectiveness of applying deep learning in medical imaging, particularly in scenarios with limited data availability.","PeriodicalId":501358,"journal":{"name":"medRxiv - Radiology and Imaging","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142181587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Effect of continuous medical education and clinical imaging guidelines on reducing inappropriate computerized tomography utilization among children and young patients in resource-limited settings: A before-and-after study
medRxiv - Radiology and Imaging Pub Date : 2024-08-22 DOI: 10.1101/2024.08.16.24312127
Harriet Nalubega Kisembo, Richard Malumba, Ezra Kato Nsereko, Deborah Babirye, Victoria Nakalanzi, Francis Xavier Kasujja, Elsie Kiguli Malwadde, Elizeus Rutebemberwa, Simon Kasasa, Dina Husseiny Salama, Michael Grace Kawooya
{"title":"Effect of continuous medical education and clinical imaging guidelines on reducing inappropriate computerized tomography utilization  among children  and young patients in a resource -limited settings:   A before-and-after study","authors":"Harriet Nalubega Kisembo, Richard Malumba, Ezra Kato Nsereko, Deborah Babirye, Victoria Nakalanzi, Francis Xavier Kasujja, Elsie Kiguli Malwadde, Elizeus Rutebemberwa, Simon Kasasa, Dina Husseiny Salama, Michael Grace Kawooya","doi":"10.1101/2024.08.16.24312127","DOIUrl":"https://doi.org/10.1101/2024.08.16.24312127","url":null,"abstract":"Background Multi-Detector Computed Tomography (MDCT) has revolutionized healthcare delivery, significantly improving diagnostic accuracy and patient outcomes in various clinical settings. However, the overuse of CT examinations (CTEs), especially in resource-limited settings (RLS), poses a substantial public health challenge. Inappropriately performed CTEs, particularly among children and young adults, expose these vulnerable populations to unnecessary radiation risks, with 20%-50% of CTEs deemed inappropriate, and 10%-20% involving children. Despite the existence of evidence-based interventions like clinical imaging guidelines (CIGs) to curb this overuse, their availability and effectiveness in RLS are not well established.\u0000Objective\u0000This study aimed to determine the impact of continuous medical education (CME) and the introduction of clinical imaging guidelines (CIGs) on the appropriateness of CT utilization among children and young adults in selected hospitals in Uganda.\u0000Materials and Methods\u0000A before-and-after study design was employed to assess the effect of an intervention comprising CME and CIGs on appropriate CTE utilization. The intervention targeted healthcare providers (HCPs) across six public and private tertiary hospitals with available CT services over a 12-month period. Baseline data indicated a high prevalence of inappropriate CTEs among the target population. The proportion of CTEs performed for various body regions (head, paranasal sinuses, chest, abdomen, spine, trauma) and their appropriateness were retrospectively analyzed before and after the intervention, using the European Society of Radiology's iGuide and pre-intervention study results as benchmarks.\u0000Results Post-intervention, the total number of CTEs performed increased by 33% (909 vs. 1210), with a 30% increase in public hospitals (300 vs. 608, p &lt; 0.001) and a 41% increase in private-for-profit hospitals (91 vs. 238, p = 0.037). Specific increases were observed in head CTs (19%, 746 vs. 890, p &lt; 0.0001) and contrasted studies (252%, 113 vs. 410, p &lt; 0.0001). Conversely, CTEs for trauma decreased by 8% (499 vs. 458, p &lt; 0.0001). Despite these changes, the overall proportion of inappropriate CTEs increased by 15% (38% vs. 44%, p &lt; 0.001), with a 28% increase in inappropriate contrasted examinations (25% vs. 53%, p &lt; 0.001) and a 13% increase in non-trauma cases (66% vs. 79%, p &lt; 0.001). Notably, inappropriate CTEs for non-contrasted and trauma-related cases reduced by 28% (75% vs. 47%, p &lt; 0.001) and 31% (34% vs. 14%, p = 0.0001), respectively.\u0000Conclusion\u0000The findings underscore the potential of CME and the adaptation of CIGs from high-resource settings to enhance the appropriateness of CT utilization in RLS. 
While the intervention notably reduced inappropriate trauma-related and non-contrasted CTEs, it also highlighted the complexity of achieving consistent improvements across all examination types. Further research is recommended to explore the ","PeriodicalId":501358,"journal":{"name":"medRxiv - Radiology and Imaging","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142181589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cross-modal Functional Plasticity after Cochlear-implantation
medRxiv - Radiology and Imaging Pub Date : 2024-08-22 DOI: 10.1101/2024.08.22.24312200
Jamal Esmaelpoor, Tommy Peng, Beth Jelfs, Darren Mao, Maureen J. Shader, Colette M. McKay
{"title":"Cross-modal Functional Plasticity after Cochlear-implantation","authors":"Jamal Esmaelpoor, Tommy Peng, Beth Jelfs, Darren Mao, Maureen J. Shader, Colette M. McKay","doi":"10.1101/2024.08.22.24312200","DOIUrl":"https://doi.org/10.1101/2024.08.22.24312200","url":null,"abstract":"Objective: Despite evidence that cross-modal effects after hearing loss and cochlear implantation are primarily conveyed through synaptic gain and efficacy rather than reorganized fiber tracts, few studies have assessed cross-modal functional connectivity (CMFC) to evaluate plasticity. This study, inspired by the psychophysiological interactions (PPI) method, addresses its limitations and provides a robust approach to evaluating task-induced CMFC.\u0000Design: Twenty-two post-lingually deafened, newly implanted adult cochlear implant (CI) recipients with severe hearing loss in the contralateral ear and 17 normal-hearing (NH) subjects participated. The experiment included audio-only and visual-only speech tasks, with resting-state FC as a baseline. Functional near-infrared spectroscopy (fNIRS) measured brain imaging data one month and one year post-implantation. CI users' speech understanding performance was evaluated one year after implantation.\u0000Results: A negative correlation was found between average contralateral task-induced CMFC and speech outcomes, particularly in links from the angular gyrus (AG), both one month and one year post-activation. Plastic changes showed higher task-induced CMFC in AG compared to the superior temporal gyrus (STG), aligning with neural efficiency principles. Task-induced CMFC remained elevated in CI users compared to NH cohorts even after one year.\u0000Conclusion: Task-induced CMFC can serve as a significant marker of cross-modal plasticity and speech performance in CI recipients, indicating increased reliance on cross-modal processing in one year after implantation.","PeriodicalId":501358,"journal":{"name":"medRxiv - Radiology and Imaging","volume":"54 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142181590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Is Your Style Transfer Doing Anything Useful? An Investigation Into Hippocampus Segmentation and the Role of Preprocessing
medRxiv - Radiology and Imaging Pub Date : 2024-08-22 DOI: 10.1101/2024.08.22.24312425
Hoda Kalabizadeh, Ludovica Griffanti, Pak Hei Yeung, Natalie Voets, Grace Gillis, Clare E Mackay, Ana IL Namburete, Nicola K Dinsdale, Konstantinos Kamnitsas
{"title":"Is Your Style Transfer Doing Anything Useful? An Investigation Into Hippocampus Segmentation and the Role of Preprocessing","authors":"Hoda Kalabizadeh, Ludovica Griffanti, Pak Hei Yeung, Natalie Voets, Grace Gillis, Clare E Mackay, Ana IL Namburete, Nicola K Dinsdale, Konstantinos Kamnitsas","doi":"10.1101/2024.08.22.24312425","DOIUrl":"https://doi.org/10.1101/2024.08.22.24312425","url":null,"abstract":"Brain atrophy assessment in MRI, particularly of the hippocampus, is commonly used to support diagnosis and monitoring of dementia. Consequently, there is a demand for accurate automated hippocampus quantification. Most existing segmentation methods have been developed and validated on research datasets and, therefore, may not be appropriate for clinical MR images and populations, leading to potential gaps between dementia research and clinical practice. In this study, we investigated the performance of segmentation models trained on research data that were style-transferred to resemble clinical scans. Our results highlighted the importance of intensity normalisation methods in MRI segmentation, and their relation to domain shift and style-transfer. We found that whilst normalising intensity based on min and max values, commonly used in generative MR harmonisation methods, may create a need for style transfer, Z-score normalisation effectively maintains style consistency, and optimises performance. Moreover, we show for our datasets spatial augmentations are more beneficial than style harmonisation. Thus, emphasising robust normalisation techniques and spatial augmentation significantly improves MRI hippocampus segmentation.","PeriodicalId":501358,"journal":{"name":"medRxiv - Radiology and Imaging","volume":"113 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Machine Learning-Based Pixel-Level Quantification of Intramuscular Connective Tissue using Ultrasound Texture Analysis
medRxiv - Radiology and Imaging Pub Date : 2024-08-21 DOI: 10.1101/2024.08.21.24312346
Patricio A. Pincheira, Jong H. Kim, Paul W. Hodges
{"title":"Machine Learning-Based Pixel-Level Quantification of Intramuscular Connective Tissue using Ultrasound Texture Analysis","authors":"Patricio A. Pincheira, Jong H. Kim, Paul W. Hodges","doi":"10.1101/2024.08.21.24312346","DOIUrl":"https://doi.org/10.1101/2024.08.21.24312346","url":null,"abstract":"Objective This study aimed to develop a machine learning method for characterizing muscle composition on ultrasound imaging, focusing on pixel-level quantification of connective tissue using texture analysis. Methods Ultrasound images of the multifidus muscle from 20 healthy young adults were included in the analysis. Texture features including Local Binary Patterns, Histograms of Oriented Gradients, Grey Level Co-occurrence Matrix, and Discrete Wavelet Transforms, were extracted from the images across multiple scales. Within a positive-unlabeled machine learning framework, two competing models, Bagging Support Vector Machine and Random Forests with Recursive Greedy Risk Minimization were trained for each texture and scale. The outputs of the texture-based pixel-level classification were compared to traditional echo intensity-based methods. Metrics such as the F-measure were employed to evaluate the models' performance. Expert consensus was utilised to evaluate the accuracy of the classified images and identify the best-performing combination of model, texture, and scale. Results Expert evaluation identified the Bagging Support Vector Machine model trained with Local Binary Pattern histograms extracted at a scale of 9x9 pixel region of interest as the best combination for accurately classifying connective tissue-like pixels (F-measure= 0.88). The proposed method demonstrated high repeatability (intraclass correlation coefficient= 0.92) and robustness to echo intensity variations, outperforming traditional echo intensity-based methods. Conclusion This approach offers a valid method for pixel-level quantification of intramuscular connective tissue from ultrasound images. It overcomes the limitations of traditional analyses relying on echo intensity and demonstrates robustness against variations in echo intensity, representing an operator-independent advancement in ultrasound-based muscle composition analysis.","PeriodicalId":501358,"journal":{"name":"medRxiv - Radiology and Imaging","volume":"117 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142181592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0