British Journal of Ophthalmology — Latest Articles

Foundation models in ophthalmology.
IF 3.7 | Medicine (Q2)
British Journal of Ophthalmology | Pub Date: 2024-09-20 | DOI: 10.1136/bjo-2024-325459
Mark A Chia, Fares Antaki, Yukun Zhou, Angus W Turner, Aaron Y Lee, Pearse A Keane
{"title":"Foundation models in ophthalmology.","authors":"Mark A Chia, Fares Antaki, Yukun Zhou, Angus W Turner, Aaron Y Lee, Pearse A Keane","doi":"10.1136/bjo-2024-325459","DOIUrl":"10.1136/bjo-2024-325459","url":null,"abstract":"<p><p>Foundation models represent a paradigm shift in artificial intelligence (AI), evolving from narrow models designed for specific tasks to versatile, generalisable models adaptable to a myriad of diverse applications. Ophthalmology as a specialty has the potential to act as an exemplar for other medical specialties, offering a blueprint for integrating foundation models broadly into clinical practice. This review hopes to serve as a roadmap for eyecare professionals seeking to better understand foundation models, while equipping readers with the tools to explore the use of foundation models in their own research and practice. We begin by outlining the key concepts and technological advances which have enabled the development of these models, providing an overview of novel training approaches and modern AI architectures. Next, we summarise existing literature on the topic of foundation models in ophthalmology, encompassing progress in vision foundation models, large language models and large multimodal models. Finally, we outline major challenges relating to privacy, bias and clinical validation, and propose key steps forward to maximise the benefit of this powerful technology.</p>","PeriodicalId":9313,"journal":{"name":"British Journal of Ophthalmology","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11503093/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141247416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Digital ray: enhancing cataractous fundus images using style transfer generative adversarial networks to improve retinopathy detection.
IF 3.7 | Medicine (Q2)
British Journal of Ophthalmology | Pub Date: 2024-09-20 | DOI: 10.1136/bjo-2024-325403
Lixue Liu, Jiaming Hong, Yuxuan Wu, Shaopeng Liu, Kai Wang, Mingyuan Li, Lanqin Zhao, Zhenzhen Liu, Longhui Li, Tingxin Cui, Ching-Kit Tsui, Fabao Xu, Weiling Hu, Dongyuan Yun, Xi Chen, Yuanjun Shang, Shaowei Bi, Xiaoyue Wei, Yunxi Lai, Duoru Lin, Zhe Fu, Yaru Deng, Kaimin Cai, Yi Xie, Zizheng Cao, Dongni Wang, Xulin Zhang, Meimei Dongye, Haotian Lin, Xiaohang Wu
{"title":"Digital ray: enhancing cataractous fundus images using style transfer generative adversarial networks to improve retinopathy detection.","authors":"Lixue Liu, Jiaming Hong, Yuxuan Wu, Shaopeng Liu, Kai Wang, Mingyuan Li, Lanqin Zhao, Zhenzhen Liu, Longhui Li, Tingxin Cui, Ching-Kit Tsui, Fabao Xu, Weiling Hu, Dongyuan Yun, Xi Chen, Yuanjun Shang, Shaowei Bi, Xiaoyue Wei, Yunxi Lai, Duoru Lin, Zhe Fu, Yaru Deng, Kaimin Cai, Yi Xie, Zizheng Cao, Dongni Wang, Xulin Zhang, Meimei Dongye, Haotian Lin, Xiaohang Wu","doi":"10.1136/bjo-2024-325403","DOIUrl":"10.1136/bjo-2024-325403","url":null,"abstract":"<p><strong>Background/aims: </strong>The aim of this study was to develop and evaluate digital ray, based on preoperative and postoperative image pairs using style transfer generative adversarial networks (GANs), to enhance cataractous fundus images for improved retinopathy detection.</p><p><strong>Methods: </strong>For eligible cataract patients, preoperative and postoperative colour fundus photographs (CFP) and ultra-wide field (UWF) images were captured. Then, both the original CycleGAN and a modified CycleGAN (C<sup>2</sup>ycleGAN) framework were adopted for image generation and quantitatively compared using Frechet Inception Distance (FID) and Kernel Inception Distance (KID). Additionally, CFP and UWF images from another cataract cohort were used to test model performances. Different panels of ophthalmologists evaluated the quality, authenticity and diagnostic efficacy of the generated images.</p><p><strong>Results: </strong>A total of 959 CFP and 1009 UWF image pairs were included in model development. FID and KID indicated that images generated by C<sup>2</sup>ycleGAN presented significantly improved quality. Based on ophthalmologists' average ratings, the percentages of inadequate-quality images decreased from 32% to 18.8% for CFP, and from 18.7% to 14.7% for UWF. Only 24.8% and 13.8% of generated CFP and UWF images could be recognised as synthetic. The accuracy of retinopathy detection significantly increased from 78% to 91% for CFP and from 91% to 93% for UWF. For retinopathy subtype diagnosis, the accuracies also increased from 87%-94% to 91%-100% for CFP and from 87%-95% to 93%-97% for UWF.</p><p><strong>Conclusion: </strong>Digital ray could generate realistic postoperative CFP and UWF images with enhanced quality and accuracy for overall detection and subtype diagnosis of retinopathies, especially for CFP. TRIAL REGISTRATION NUMBER: This study was registered with ClinicalTrials.gov (NCT05491798).</p>","PeriodicalId":9313,"journal":{"name":"British Journal of Ophthalmology","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11503040/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141261103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
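The study above ranks its CycleGAN and C²ycleGAN generators with Frechet Inception Distance (FID) and Kernel Inception Distance (KID). The sketch below is an illustrative assumption, not the authors' code: it shows how these two metrics can be computed with the torchmetrics library, using placeholder tensors in place of real and generated fundus photographs; the `score_generator` helper is hypothetical.

```python
# Minimal sketch (assumed, not the study's implementation): FID and KID between
# real postoperative fundus images and GAN-generated ones, via torchmetrics.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance

def score_generator(real_images: torch.Tensor, fake_images: torch.Tensor):
    """real_images / fake_images: uint8 tensors of shape (N, 3, H, W)."""
    fid = FrechetInceptionDistance(feature=2048)
    kid = KernelInceptionDistance(subset_size=50)  # subset_size must not exceed N
    fid.update(real_images, real=True)
    fid.update(fake_images, real=False)
    kid.update(real_images, real=True)
    kid.update(fake_images, real=False)
    kid_mean, kid_std = kid.compute()
    return fid.compute().item(), kid_mean.item(), kid_std.item()

if __name__ == "__main__":
    # Random placeholder data; in practice these would be real postoperative images
    # and the corresponding CycleGAN / C²ycleGAN outputs.
    real = torch.randint(0, 256, (100, 3, 299, 299), dtype=torch.uint8)
    fake = torch.randint(0, 256, (100, 3, 299, 299), dtype=torch.uint8)
    print(score_generator(real, fake))
```

Lower FID and KID values indicate that generated images are statistically closer to the real ones, which is how "significantly improved quality" is quantified in the abstract.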
ICGA-GPT: report generation and question answering for indocyanine green angiography images.
IF 3.7 | Medicine (Q2)
British Journal of Ophthalmology | Pub Date: 2024-09-20 | DOI: 10.1136/bjo-2023-324446
Xiaolan Chen, Weiyi Zhang, Ziwei Zhao, Pusheng Xu, Yingfeng Zheng, Danli Shi, Mingguang He
{"title":"ICGA-GPT: report generation and question answering for indocyanine green angiography images.","authors":"Xiaolan Chen, Weiyi Zhang, Ziwei Zhao, Pusheng Xu, Yingfeng Zheng, Danli Shi, Mingguang He","doi":"10.1136/bjo-2023-324446","DOIUrl":"10.1136/bjo-2023-324446","url":null,"abstract":"<p><strong>Background: </strong>Indocyanine green angiography (ICGA) is vital for diagnosing chorioretinal diseases, but its interpretation and patient communication require extensive expertise and time-consuming efforts. We aim to develop a bilingual ICGA report generation and question-answering (QA) system.</p><p><strong>Methods: </strong>Our dataset comprised 213 129 ICGA images from 2919 participants. The system comprised two stages: image-text alignment for report generation by a multimodal transformer architecture, and large language model (LLM)-based QA with ICGA text reports and human-input questions. Performance was assessed using both qualitative metrics (including Bilingual Evaluation Understudy (BLEU), Consensus-based Image Description Evaluation (CIDEr), Recall-Oriented Understudy for Gisting Evaluation-Longest Common Subsequence (ROUGE-L), Semantic Propositional Image Caption Evaluation (SPICE), accuracy, sensitivity, specificity, precision and F1 score) and subjective evaluation by three experienced ophthalmologists using 5-point scales (5 refers to high quality).</p><p><strong>Results: </strong>We produced 8757 ICGA reports covering 39 disease-related conditions after bilingual translation (66.7% English, 33.3% Chinese). The ICGA-GPT model's report generation performance was evaluated with BLEU scores (1-4) of 0.48, 0.44, 0.40 and 0.37; CIDEr of 0.82; ROUGE of 0.41 and SPICE of 0.18. For disease-based metrics, the average specificity, accuracy, precision, sensitivity and F1 score were 0.98, 0.94, 0.70, 0.68 and 0.64, respectively. Assessing the quality of 50 images (100 reports), three ophthalmologists achieved substantial agreement (kappa=0.723 for completeness, kappa=0.738 for accuracy), yielding scores from 3.20 to 3.55. In an interactive QA scenario involving 100 generated answers, the ophthalmologists provided scores of 4.24, 4.22 and 4.10, displaying good consistency (kappa=0.779).</p><p><strong>Conclusion: </strong>This pioneering study introduces the ICGA-GPT model for report generation and interactive QA for the first time, underscoring the potential of LLMs in assisting with automated ICGA image interpretation.</p>","PeriodicalId":9313,"journal":{"name":"British Journal of Ophthalmology","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140173776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
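The report-generation results above are expressed in n-gram and subsequence overlap metrics (BLEU-1 to BLEU-4, ROUGE-L). As a hedged sketch rather than the authors' evaluation pipeline, the snippet below computes BLEU with NLTK and a ROUGE-L F1 from a longest common subsequence; the example reports and helper names are invented for illustration.

```python
# Minimal sketch (assumed): BLEU-1..4 via NLTK and a simple ROUGE-L F1 for
# generated reports scored against reference reports.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_1_to_4(references, hypotheses):
    """references / hypotheses: lists of report strings, one reference per hypothesis."""
    refs = [[r.split()] for r in references]
    hyps = [h.split() for h in hypotheses]
    smooth = SmoothingFunction().method1
    weights = [(1, 0, 0, 0), (0.5, 0.5, 0, 0),
               (1 / 3, 1 / 3, 1 / 3, 0), (0.25, 0.25, 0.25, 0.25)]
    return [corpus_bleu(refs, hyps, weights=w, smoothing_function=smooth) for w in weights]

def rouge_l_f1(reference: str, hypothesis: str) -> float:
    """ROUGE-L F1 based on the longest common subsequence of tokens."""
    a, b = reference.split(), hypothesis.split()
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]  # LCS dynamic programme
    for i, tok_a in enumerate(a):
        for j, tok_b in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if tok_a == tok_b else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[len(a)][len(b)]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(b), lcs / len(a)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    ref = ["hypofluorescent plaque in the macula consistent with polypoidal lesions"]
    hyp = ["hypofluorescent plaque in the macula suggesting polypoidal lesions"]
    print(bleu_1_to_4(ref, hyp), rouge_l_f1(ref[0], hyp[0]))
```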
Generative artificial intelligence in ophthalmology: current innovations, future applications and challenges.
IF 3.7 | Medicine (Q2)
British Journal of Ophthalmology | Pub Date: 2024-09-20 | DOI: 10.1136/bjo-2024-325458
Sadi Can Sonmez, Mertcan Sevgi, Fares Antaki, Josef Huemer, Pearse A Keane
{"title":"Generative artificial intelligence in ophthalmology: current innovations, future applications and challenges.","authors":"Sadi Can Sonmez, Mertcan Sevgi, Fares Antaki, Josef Huemer, Pearse A Keane","doi":"10.1136/bjo-2024-325458","DOIUrl":"10.1136/bjo-2024-325458","url":null,"abstract":"<p><p>The rapid advancements in generative artificial intelligence are set to significantly influence the medical sector, particularly ophthalmology. Generative adversarial networks and diffusion models enable the creation of synthetic images, aiding the development of deep learning models tailored for specific imaging tasks. Additionally, the advent of multimodal foundational models, capable of generating images, text and videos, presents a broad spectrum of applications within ophthalmology. These range from enhancing diagnostic accuracy to improving patient education and training healthcare professionals. Despite the promising potential, this area of technology is still in its infancy, and there are several challenges to be addressed, including data bias, safety concerns and the practical implementation of these technologies in clinical settings.</p>","PeriodicalId":9313,"journal":{"name":"British Journal of Ophthalmology","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11503064/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141455417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards regulatory generative AI in ophthalmology healthcare: a security and privacy perspective.
IF 3.7 | Medicine (Q2)
British Journal of Ophthalmology | Pub Date: 2024-09-20 | DOI: 10.1136/bjo-2024-325167
Yueye Wang, Chi Liu, Keyao Zhou, Tianqing Zhu, Xiaotong Han
{"title":"Towards regulatory generative AI in ophthalmology healthcare: a security and privacy perspective.","authors":"Yueye Wang, Chi Liu, Keyao Zhou, Tianqing Zhu, Xiaotong Han","doi":"10.1136/bjo-2024-325167","DOIUrl":"10.1136/bjo-2024-325167","url":null,"abstract":"<p><p>As the healthcare community increasingly harnesses the power of generative artificial intelligence (AI), critical issues of security, privacy and regulation take centre stage. In this paper, we explore the security and privacy risks of generative AI from model-level and data-level perspectives. Moreover, we elucidate the potential consequences and case studies within the domain of ophthalmology. Model-level risks include knowledge leakage from the model and model safety under AI-specific attacks, while data-level risks involve unauthorised data collection and data accuracy concerns. Within the healthcare context, these risks can bear severe consequences, encompassing potential breaches of sensitive information, violating privacy rights and threats to patient safety. This paper not only highlights these challenges but also elucidates governance-driven solutions that adhere to AI and healthcare regulations. We advocate for preparedness against potential threats, call for transparency enhancements and underscore the necessity of clinical validation before real-world implementation. The objective of security and privacy improvement in generative AI warrants emphasising the role of ophthalmologists and other healthcare providers, and the timely introduction of comprehensive regulations.</p>","PeriodicalId":9313,"journal":{"name":"British Journal of Ophthalmology","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141247390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Using artificial intelligence to improve human performance: efficient retinal disease detection training with synthetic images.
IF 3.7 | Medicine (Q2)
British Journal of Ophthalmology | Pub Date: 2024-09-20 | DOI: 10.1136/bjo-2023-324923
Hitoshi Tabuchi, Justin Engelmann, Fumiatsu Maeda, Ryo Nishikawa, Toshihiko Nagasawa, Tomofusa Yamauchi, Mao Tanabe, Masahiro Akada, Keita Kihara, Yasuyuki Nakae, Yoshiaki Kiuchi, Miguel O Bernabeu
{"title":"Using artificial intelligence to improve human performance: efficient retinal disease detection training with synthetic images.","authors":"Hitoshi Tabuchi, Justin Engelmann, Fumiatsu Maeda, Ryo Nishikawa, Toshihiko Nagasawa, Tomofusa Yamauchi, Mao Tanabe, Masahiro Akada, Keita Kihara, Yasuyuki Nakae, Yoshiaki Kiuchi, Miguel O Bernabeu","doi":"10.1136/bjo-2023-324923","DOIUrl":"10.1136/bjo-2023-324923","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) in medical imaging diagnostics has huge potential, but human judgement is still indispensable. We propose an AI-aided teaching method that leverages generative AI to train students on many images while preserving patient privacy.</p><p><strong>Methods: </strong>A web-based course was designed using 600 synthetic ultra-widefield (UWF) retinal images to teach students to detect disease in these images. The images were generated by stable diffusion, a large generative foundation model, which we fine-tuned with 6285 real UWF images from six categories: five retinal diseases (age-related macular degeneration, glaucoma, diabetic retinopathy, retinal detachment and retinal vein occlusion) and normal. 161 trainee orthoptists took the course. They were evaluated with two tests: one consisting of UWF images and another of standard field (SF) images, which the students had not encountered in the course. Both tests contained 120 real patient images, 20 per category. The students took both tests once before and after training, with a cool-off period in between.</p><p><strong>Results: </strong>On average, students completed the course in 53 min, significantly improving their diagnostic accuracy. For UWF images, student accuracy increased from 43.6% to 74.1% (p<0.0001 by paired t-test), nearly matching the previously published state-of-the-art AI model's accuracy of 73.3%. For SF images, student accuracy rose from 42.7% to 68.7% (p<0.0001), surpassing the state-of-the-art AI model's 40%.</p><p><strong>Conclusion: </strong>Synthetic images can be used effectively in medical education. We also found that humans are more robust to novel situations than AI models, thus showcasing human judgement's essential role in medical diagnosis.</p>","PeriodicalId":9313,"journal":{"name":"British Journal of Ophthalmology","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11503156/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140130718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
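The before/after accuracy gains above were tested with a paired t-test on each trainee's scores. A minimal sketch of that comparison follows, using SciPy; the per-student values are simulated around the reported group means (43.6% and 74.1%), since individual-level data are not available here.

```python
# Minimal sketch (assumed): paired t-test of per-student diagnostic accuracy
# before vs after the synthetic-image course. All individual values are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 161                              # cohort size from the abstract
before = rng.normal(43.6, 8.0, n_students)    # hypothetical pre-course accuracies (%)
after = rng.normal(74.1, 8.0, n_students)     # hypothetical post-course accuracies (%)

t_stat, p_value = stats.ttest_rel(after, before)
print(f"mean before {before.mean():.1f}%, after {after.mean():.1f}%, "
      f"t = {t_stat:.2f}, p = {p_value:.2g}")
```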
Deep learning model for extensive smartphone-based diagnosis and triage of cataracts and multiple corneal diseases.
IF 3.7 | Medicine (Q2)
British Journal of Ophthalmology | Pub Date: 2024-09-20 | DOI: 10.1136/bjo-2023-324488
Yuta Ueno, Masahiro Oda, Takefumi Yamaguchi, Hideki Fukuoka, Ryohei Nejima, Yoshiyuki Kitaguchi, Masahiro Miyake, Masato Akiyama, Kazunori Miyata, Kenji Kashiwagi, Naoyuki Maeda, Jun Shimazaki, Hisashi Noma, Kensaku Mori, Tetsuro Oshika
{"title":"Deep learning model for extensive smartphone-based diagnosis and triage of cataracts and multiple corneal diseases.","authors":"Yuta Ueno, Masahiro Oda, Takefumi Yamaguchi, Hideki Fukuoka, Ryohei Nejima, Yoshiyuki Kitaguchi, Masahiro Miyake, Masato Akiyama, Kazunori Miyata, Kenji Kashiwagi, Naoyuki Maeda, Jun Shimazaki, Hisashi Noma, Kensaku Mori, Tetsuro Oshika","doi":"10.1136/bjo-2023-324488","DOIUrl":"10.1136/bjo-2023-324488","url":null,"abstract":"<p><strong>Aim: </strong>To develop an artificial intelligence (AI) algorithm that diagnoses cataracts/corneal diseases from multiple conditions using smartphone images.</p><p><strong>Methods: </strong>This study included 6442 images that were captured using a slit-lamp microscope (6106 images) and smartphone (336 images). An AI algorithm was developed based on slit-lamp images to differentiate 36 major diseases (cataracts and corneal diseases) into 9 categories. To validate the AI model, smartphone images were used for the testing dataset. We evaluated AI performance that included sensitivity, specificity and receiver operating characteristic (ROC) curve for the diagnosis and triage of the diseases.</p><p><strong>Results: </strong>The AI algorithm achieved an area under the ROC curve of 0.998 (95% CI, 0.992 to 0.999) for normal eyes, 0.986 (95% CI, 0.978 to 0.997) for infectious keratitis, 0.960 (95% CI, 0.925 to 0.994) for immunological keratitis, 0.987 (95% CI, 0.978 to 0.996) for cornea scars, 0.997 (95% CI, 0.992 to 1.000) for ocular surface tumours, 0.993 (95% CI, 0.984 to 1.000) for corneal deposits, 1.000 (95% CI, 1.000 to 1.000) for acute angle-closure glaucoma, 0.992 (95% CI, 0.985 to 0.999) for cataracts and 0.993 (95% CI, 0.985 to 1.000) for bullous keratopathy. The triage of referral suggestion using the smartphone images exhibited high performance, in which the sensitivity and specificity were 1.00 (95% CI, 0.478 to 1.00) and 1.00 (95% CI, 0.976 to 1.000) for 'urgent', 0.867 (95% CI, 0.683 to 0.962) and 1.00 (95% CI, 0.971 to 1.000) for 'semi-urgent', 0.853 (95% CI, 0.689 to 0.950) and 0.983 (95% CI, 0.942 to 0.998) for 'routine' and 1.00 (95% CI, 0.958 to 1.00) and 0.896 (95% CI, 0.797 to 0.957) for 'observation', respectively.</p><p><strong>Conclusions: </strong>The AI system achieved promising performance in the diagnosis of cataracts and corneal diseases.</p>","PeriodicalId":9313,"journal":{"name":"British Journal of Ophthalmology","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11503034/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139501829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
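The headline numbers above are per-category areas under the ROC curve with 95% CIs. The abstract does not state how those CIs were derived, so the sketch below illustrates only one common approach, a per-category ROC AUC with a bootstrap confidence interval; the data and the `auc_with_ci` helper are placeholders, not the authors' method.

```python
# Minimal sketch (assumed): per-category ROC AUC with a bootstrap 95% CI.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    """y_true: binary labels for one category; y_score: model probabilities."""
    rng = np.random.default_rng(seed)
    point_estimate = roc_auc_score(y_true, y_score)
    boots = []
    n = len(y_true)
    while len(boots) < n_boot:
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:   # resample must contain both classes
            continue
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point_estimate, lo, hi

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    labels = rng.integers(0, 2, 300)                                  # toy ground truth
    scores = np.clip(labels * 0.7 + rng.normal(0, 0.25, 300), 0, 1)   # toy probabilities
    print(auc_with_ci(labels, scores))
```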
Validation of a deep learning model for automatic detection and quantification of five OCT critical retinal features associated with neovascular age-related macular degeneration.
IF 3.7 | Medicine (Q2)
British Journal of Ophthalmology | Pub Date: 2024-09-20 | DOI: 10.1136/bjo-2023-324647
Federico Ricardi, Jonathan Oakley, Daniel Russakoff, Giacomo Boscia, Paolo Caselgrandi, Francesco Gelormini, Andrea Ghilardi, Giulia Pintore, Tommaso Tibaldi, Paola Marolo, Francesco Bandello, Michele Reibaldi, Enrico Borrelli
{"title":"Validation of a deep learning model for automatic detection and quantification of five OCT critical retinal features associated with neovascular age-related macular degeneration.","authors":"Federico Ricardi, Jonathan Oakley, Daniel Russakoff, Giacomo Boscia, Paolo Caselgrandi, Francesco Gelormini, Andrea Ghilardi, Giulia Pintore, Tommaso Tibaldi, Paola Marolo, Francesco Bandello, Michele Reibaldi, Enrico Borrelli","doi":"10.1136/bjo-2023-324647","DOIUrl":"10.1136/bjo-2023-324647","url":null,"abstract":"<p><strong>Purpose: </strong>To develop and validate a deep learning model for the segmentation of five retinal biomarkers associated with neovascular age-related macular degeneration (nAMD).</p><p><strong>Methods: </strong>300 optical coherence tomography volumes from subject eyes with nAMD were collected. Images were manually segmented for the presence of five crucial nAMD features: intraretinal fluid, subretinal fluid, subretinal hyperreflective material, drusen/drusenoid pigment epithelium detachment (PED) and neovascular PED. A deep learning architecture based on a U-Net was trained to perform automatic segmentation of these retinal biomarkers and evaluated on the sequestered data. The main outcome measures were receiver operating characteristic curves for detection, summarised using the area under the curves (AUCs) both on a per slice and per volume basis, correlation score, enface topography overlap (reported as two-dimensional (2D) correlation score) and Dice coefficients.</p><p><strong>Results: </strong>The model obtained a mean (±SD) AUC of 0.93 (±0.04) per slice and 0.88 (±0.07) per volume for fluid detection. The correlation score (R<sup>2</sup>) between automatic and manual segmentation obtained by the model resulted in a mean (±SD) of 0.89 (±0.05). The mean (±SD) 2D correlation score was 0.69 (±0.04). The mean (±SD) Dice score resulted in 0.61 (±0.10).</p><p><strong>Conclusions: </strong>We present a fully automated segmentation model for five features related to nAMD that performs at the level of experienced graders. The application of this model will open opportunities for the study of morphological changes and treatment efficacy in real-world settings. Furthermore, it can facilitate structured reporting in the clinic and reduce subjectivity in clinicians' assessments.</p>","PeriodicalId":9313,"journal":{"name":"British Journal of Ophthalmology","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140130719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
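The validation above is summarised with Dice coefficients and a correlation score (R²) between automatic and manual measurements. The following self-contained sketch shows both quantities; the masks and per-eye values are placeholders, not study data.

```python
# Minimal sketch (assumed): Dice overlap between two segmentation masks and the
# squared Pearson correlation between automatic and manual per-eye measurements.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """pred, truth: boolean masks of the same shape (e.g. one segmented B-scan)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def r_squared(automatic: np.ndarray, manual: np.ndarray) -> float:
    """Squared Pearson correlation between paired automatic and manual values."""
    r = np.corrcoef(automatic, manual)[0, 1]
    return r ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    auto_mask = rng.random((64, 64)) > 0.5
    manual_mask = rng.random((64, 64)) > 0.5
    print(dice(auto_mask, manual_mask), r_squared(rng.random(30), rng.random(30)))
```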
Impact of microorganism virulence on endophthalmitis outcomes
IF 4.1 | Medicine (Q2)
British Journal of Ophthalmology | Pub Date: 2024-09-18 | DOI: 10.1136/bjo-2024-325605
Aaron Yap, Dilpreet Kaur, Sharmini Muttaiyah, Sarah Welch, Sue Lightman, Oren Tomkins-Netzer, Rachael L Niederer
{"title":"Impact of microorganism virulence on endophthalmitis outcomes","authors":"Aaron Yap, Dilpreet Kaur, Sharmini Muttaiyah, Sarah Welch, Sue Lightman, Oren Tomkins-Netzer, Rachael L Niederer","doi":"10.1136/bjo-2024-325605","DOIUrl":"https://doi.org/10.1136/bjo-2024-325605","url":null,"abstract":"Aims To determine the impact of microorganism virulence on visual outcomes in endophthalmitis. Methods Retrospective, multicentre cohort study of patients presenting with endophthalmitis between 2006 and 2021. A literature review was conducted to divide cultured microorganisms into low and high virulence subcategories. Results 610 eyes with endophthalmitis were recruited from New Zealand, the UK and Israel. The median age was 69.4 years. The median visual acuity was hand movements at presentation and 20/120 at the final follow-up. Severe visual loss (≤20/200) occurred in 237 eyes (38.9%) at the final follow-up. The culture-positive rate was 48.5% (296 eyes). Highly virulent microorganisms were associated with a 4.48 OR of severe visual loss at the final follow-up (p<0.001) and a 1.90 OR of developing retinal detachment or requiring enucleation or evisceration during the follow-up period (p=0.028). Oral flora were observed in 76 eyes (25.7%), and highly virulent microorganisms were observed in 68 eyes (22.9%). Highly virulent microorganisms were more likely to be found after glaucoma surgery (15 eyes, 34.9%) and vitrectomy (five eyes, 35.7%) compared with intravitreal injections (two eyes, 2.9%) and cataract surgery (22 eyes, 24.2%). On multivariate analysis, the following were associated with poorer visual outcomes: poor presenting vision (p<0.001), glaucoma surgery (p=0.050), trauma (p<0.001), oral microorganism (p=0.001) and highly virulent microorganism (p<0.001). Conclusion This is the first classification of microorganisms into high and low virulence subcategories that demonstrate highly virulent microorganisms were associated with poor visual outcomes and increased likelihood of retinal detachment and enucleation. Data are available upon reasonable request.","PeriodicalId":9313,"journal":{"name":"British Journal of Ophthalmology","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142245391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
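The abstract reports adjusted odds ratios (e.g. OR 4.48 for severe visual loss with highly virulent organisms) from a multivariate analysis. Assuming a logistic-regression formulation, which is the usual source of adjusted ORs but is not spelt out in the abstract, the sketch below shows how such ORs and their 95% CIs can be obtained with statsmodels on synthetic data; the covariates and coefficients are invented.

```python
# Minimal sketch (assumed): adjusted odds ratios from a logistic regression,
# fitted to synthetic data loosely shaped like the endophthalmitis cohort.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 610                                           # number of eyes in the cohort
df = pd.DataFrame({
    "high_virulence": rng.integers(0, 2, n),
    "oral_flora": rng.integers(0, 2, n),
})
# Synthetic outcome in which high virulence raises the odds of severe visual loss.
linear_predictor = -1.0 + 1.5 * df["high_virulence"] + 0.7 * df["oral_flora"]
df["severe_loss"] = (rng.random(n) < 1 / (1 + np.exp(-linear_predictor))).astype(int)

model = smf.logit("severe_loss ~ high_virulence + oral_flora", data=df).fit(disp=0)
odds_ratios = np.exp(model.params)                # coefficients on the OR scale
ci = np.exp(model.conf_int())                     # 95% CI on the OR scale
print(odds_ratios, ci, sep="\n")
```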
Progressive inner retinal neurodegeneration in non-proliferative macular telangiectasia type 2
IF 4.1 | Medicine (Q2)
British Journal of Ophthalmology | Pub Date: 2024-09-17 | DOI: 10.1136/bjo-2023-325115
Alec L Amram, S Scott Whitmore, Cheryl Wang, Christine Clavell, Lance J Lyons, Alexander M Rusakevich, Ian Han, James Folk, H Culver Boldt, Edwin M Stone, Stephen R Russell, Kyungmoo Lee, Michael Abramoff, Charles Wykoff, Elliott H Sohn
{"title":"Progressive inner retinal neurodegeneration in non-proliferative macular telangiectasia type 2","authors":"Alec L Amram, S Scott Whitmore, Cheryl Wang, Christine Clavell, Lance J Lyons, Alexander M Rusakevich, Ian Han, James Folk, H Culver Boldt, Edwin M Stone, Stephen R Russell, Kyungmoo Lee, Michael Abramoff, Charles Wykoff, Elliott H Sohn","doi":"10.1136/bjo-2023-325115","DOIUrl":"https://doi.org/10.1136/bjo-2023-325115","url":null,"abstract":"Purpose Patients with non-proliferative macular telangiectasia type 2 (MacTel) have ganglion cell layer (GCL) and nerve fibre layer (NFL) loss, but it is unclear whether the thinning is progressive. We quantified the change in retinal layer thickness over time in MacTel with and without diabetes. Methods In this retrospective, multicentre, comparative case series, subjects with MacTel with at least two optical coherence tomographic (OCT) scans separated by >9 months OCTs were segmented using the Iowa Reference Algorithms. Mean NFL and GCL thickness was computed across the total area of the early treatment diabetic retinopathy study grid and for the inner temporal region to determine the rate of thinning over time. Mixed effects models were fit to each layer and region to determine retinal thinning for each sublayer over time. Results 115 patients with MacTel were included; 57 patients (50%) had diabetes and 21 (18%) had a history of carbonic anhydrase inhibitor (CAI) treatment. MacTel patients with and without diabetes had similar rates of thinning. In patients without diabetes and untreated with CAIs, the temporal parafoveal NFL thinned at a rate of −0.25±0.09 µm/year (95% CI [−0.42 to –0.09]; p=0.003). The GCL in subfield 4 thinned faster in the eyes treated with CAI (−1.23±0.21 µm/year; 95% CI [−1.64 to –0.82]) than in untreated eyes (−0.19±0.16; 95% CI [−0.50, 0.11]; p<0.001), an effect also seen for the inner nuclear layer. Progressive outer retinal thinning was observed. Conclusions Patients with MacTel sustain progressive inner retinal neurodegeneration similar to those with diabetes without diabetic retinopathy. Further research is needed to understand the consequences of retinal thinning in MacTel. All data relevant to the study are included in the article or uploaded as supplementary information.","PeriodicalId":9313,"journal":{"name":"British Journal of Ophthalmology","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142236837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
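The thinning rates above (µm/year with 95% CIs) come from mixed-effects models fitted to repeated OCT measurements per patient. Below is a minimal sketch of one such model, a random-intercept linear mixed model in statsmodels fitted to simulated longitudinal data; the simulated slope of −0.25 µm/year mirrors the reported temporal parafoveal NFL rate, but every value is synthetic.

```python
# Minimal sketch (assumed): random-intercept mixed model estimating the rate of
# retinal layer thinning (um/year) from simulated longitudinal OCT data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for patient in range(115):                      # 115 patients, as in the study
    baseline = rng.normal(25.0, 2.0)            # hypothetical baseline NFL thickness (um)
    for years in np.sort(rng.uniform(0, 6, rng.integers(2, 6))):
        rows.append({"patient": patient,
                     "years": years,
                     "thickness": baseline - 0.25 * years + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

model = smf.mixedlm("thickness ~ years", df, groups=df["patient"]).fit()
print(model.params["years"])                    # estimated thinning rate in um/year
```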