{"title":"Machine learning based prediction of image quality in prostate MRI using rapid localizer images.","authors":"Abdullah Al-Hayali, Amin Komeili, Azar Azad, Paul Sathiadoss, Nicola Schieda, Eranga Ukwatta","doi":"10.1117/1.JMI.11.2.026001","DOIUrl":"10.1117/1.JMI.11.2.026001","url":null,"abstract":"<p><strong>Purpose: </strong>Diagnostic performance of prostate MRI depends on high-quality imaging. Prostate MRI quality is inversely proportional to the amount of rectal gas and distention. Early detection of poor-quality MRI may enable intervention to remove gas or exam rescheduling, saving time. We developed a machine learning based method that predicts the quality of yet-to-be-acquired MRI images solely from the MRI rapid localizer sequence, which can be acquired in a few seconds.</p><p><strong>Approach: </strong>The dataset consists of 213 (147 for training and 64 for testing) prostate sagittal T2-weighted (T2W) MRI localizer images and rectal content, manually labeled by an expert radiologist. Each MRI localizer contains seven two-dimensional (2D) slices of the patient, accompanied by manual segmentations of the rectum for each slice. Cascaded and end-to-end deep learning models were used to predict the quality of yet-to-be-acquired T2W, DWI, and apparent diffusion coefficient (ADC) MRI images. Predictions were compared to quality scores determined by the experts using the area under the receiver operating characteristic curve and the intra-class correlation coefficient.</p><p><strong>Results: </strong>In the test set of 64 patients, optimal versus suboptimal exams occurred in 95.3% (61/64) versus 4.7% (3/64) for T2W, 90.6% (58/64) versus 9.4% (6/64) for DWI, and 89.1% (57/64) versus 10.9% (7/64) for ADC. The best performing segmentation model was a 2D U-Net with a ResNet-34 encoder and ImageNet weights. 
The best performing classifier was the radiomics based classifier.</p><p><strong>Conclusions: </strong>A radiomics based classifier applied to localizer images achieves accurate diagnosis of subsequent image quality for T2W, DWI, and ADC prostate MRI sequences.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"026001"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10905647/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140022894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
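The record above scores quality predictions with the area under the receiver operating characteristic curve. As a reading aid, a minimal pure-Python sketch of that metric via the Mann-Whitney statistic (the labels and scores below are invented, not the study's data):

```python
def auc_roc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: 1 = suboptimal exam, 0 = optimal exam (hypothetical encoding).
    scores: classifier output, higher = more likely suboptimal.
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need both classes to compute AUC")
    # each concordant pair counts 1, each tie counts 0.5
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy example: perfect separation gives AUC = 1.0
print(auc_roc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))
```

With heavy class imbalance like the 61/3 split reported above, AUC estimates from so few positives carry wide confidence intervals, which is worth keeping in mind when reading the results.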
{"title":"Simulation of acquisition shifts in T2 weighted fluid-attenuated inversion recovery magnetic resonance images to stress test artificial intelligence segmentation networks.","authors":"Christiane Posselt, Mehmet Yigit Avci, Mehmet Yigitsoy, Patrick Schuenke, Christoph Kolbitsch, Tobias Schaeffter, Stefanie Remmele","doi":"10.1117/1.JMI.11.2.024013","DOIUrl":"https://doi.org/10.1117/1.JMI.11.2.024013","url":null,"abstract":"<p><strong>Purpose: </strong>To provide a simulation framework for routine neuroimaging test data, which allows for \"stress testing\" of deep segmentation networks against acquisition shifts that commonly occur in clinical practice for T2 weighted (T2w) fluid-attenuated inversion recovery magnetic resonance imaging protocols.</p><p><strong>Approach: </strong>The approach simulates \"acquisition shift derivatives\" of MR images based on MR signal equations. Experiments comprise validation of the simulated images against real MR scans and example stress tests on state-of-the-art multiple sclerosis lesion segmentation networks to explore a generic model function that describes the F1 score as a function of the contrast-affecting sequence parameters echo time (TE) and inversion time (TI).</p><p><strong>Results: </strong>The differences between real and simulated images range up to 19% in gray and white matter for extreme parameter settings. For the segmentation networks under test, the F1 score dependency on TE and TI can be well described by quadratic model functions (<math><mrow><msup><mi>R</mi><mn>2</mn></msup><mo>></mo><mn>0.9</mn></mrow></math>). The coefficients of the model functions indicate that changes in TE have more influence on model performance than changes in TI.</p><p><strong>Conclusions: </strong>We show that these deviations are in the range of values that may be caused by erroneous or individual differences in relaxation times, as described in the literature. 
The coefficients of the F1 model function allow for a quantitative comparison of the influences of TE and TI. Limitations arise mainly from tissues with a low baseline signal (like cerebrospinal fluid) and when the protocol contains contrast-affecting measures that cannot be modeled due to missing information in the DICOM header.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"024013"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11042016/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140859718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
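The simulation above is driven by MR signal equations for the T2w FLAIR sequence. A hedged sketch of the standard inversion-recovery spin-echo approximation often used for such simulations (the signal model form and the tissue values below are textbook assumptions, not the paper's calibrated parameters):

```python
import math

def flair_signal(te_ms, ti_ms, tr_ms, t1_ms, t2_ms):
    """Approximate T2w FLAIR magnitude signal for one tissue.

    Standard inversion-recovery spin-echo approximation; proton density
    and scanner gain are folded into an implicit scale factor of 1.
    """
    longitudinal = 1 - 2 * math.exp(-ti_ms / t1_ms) + math.exp(-tr_ms / t1_ms)
    return abs(longitudinal) * math.exp(-te_ms / t2_ms)

# illustrative white-matter values (T1 ~ 850 ms, T2 ~ 80 ms are ballpark
# 3T textbook figures, not the paper's measurements)
base = flair_signal(te_ms=90, ti_ms=2500, tr_ms=8000, t1_ms=850, t2_ms=80)
shifted = flair_signal(te_ms=110, ti_ms=2500, tr_ms=8000, t1_ms=850, t2_ms=80)
print(base, shifted)  # longer TE -> lower signal
```

Sweeping `te_ms` and `ti_ms` over a protocol's plausible range with a model like this is what produces the "acquisition shift derivatives" that the stress tests feed to the segmentation networks.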
{"title":"Empirical assessment of the assumptions of ComBat with diffusion tensor imaging.","authors":"Michael E Kim, Chenyu Gao, Leon Y Cai, Qi Yang, Nancy R Newlin, Karthik Ramadass, Angela Jefferson, Derek Archer, Niranjana Shashikumar, Kimberly R Pechman, Katherine A Gifford, Timothy J Hohman, Lori L Beason-Held, Susan M Resnick, Stefan Winzeck, Kurt G Schilling, Panpan Zhang, Daniel Moyer, Bennett A Landman","doi":"10.1117/1.JMI.11.2.024011","DOIUrl":"https://doi.org/10.1117/1.JMI.11.2.024011","url":null,"abstract":"<p><strong>Purpose: </strong>Diffusion tensor imaging (DTI) is a magnetic resonance imaging technique that provides unique information about white matter microstructure in the brain but is susceptible to confounding effects introduced by scanner or acquisition differences. ComBat is a leading approach for addressing these site biases. However, despite its frequent use for harmonization, ComBat's robustness toward site dissimilarities and overall cohort size have not yet been evaluated in terms of DTI.</p><p><strong>Approach: </strong>As a baseline, we match <math><mrow><mi>N</mi><mo>=</mo><mn>358</mn></mrow></math> participants from two sites to create a \"silver standard\" that simulates a cohort for multi-site harmonization. Across sites, we harmonize mean fractional anisotropy and mean diffusivity, calculated using participant DTI data, for the regions of interest defined by the JHU EVE-Type III atlas. 
We bootstrap 10 iterations at 19 levels of total sample size, 10 levels of sample size imbalance between sites, and 6 levels of mean age difference between sites to quantify (i) <math><mrow><msub><mi>β</mi><mi>AGE</mi></msub></mrow></math>, the linear regression coefficient of the relationship between FA and age; (ii) <math><mrow><msubsup><mrow><mover><mrow><mi>γ</mi></mrow><mrow><mo>^</mo></mrow></mover></mrow><mrow><mi>s</mi><mi>f</mi></mrow><mrow><mo>*</mo></mrow></msubsup></mrow></math>, the ComBat-estimated site-shift; and (iii) <math><mrow><msubsup><mrow><mover><mrow><mi>δ</mi></mrow><mrow><mo>^</mo></mrow></mover></mrow><mrow><mi>s</mi><mi>f</mi></mrow><mrow><mo>*</mo></mrow></msubsup></mrow></math>, the ComBat-estimated site-scaling. We characterize the reliability of ComBat by evaluating the root mean squared error in these three metrics and examine if there is a correlation between the reliability of ComBat and a violation of assumptions.</p><p><strong>Results: </strong>ComBat remains well behaved for <math><mrow><msub><mrow><mi>β</mi></mrow><mrow><mi>AGE</mi></mrow></msub></mrow></math> when <math><mrow><mi>N</mi><mo>></mo><mn>162</mn></mrow></math> and when the mean age difference is less than 4 years. The assumptions of the ComBat model regarding the normality of residual distributions are not violated as the model becomes unstable.</p><p><strong>Conclusion: </strong>Prior to harmonization of DTI data with ComBat, the input cohort should be examined for size and covariate distributions of each site. Direct assessment of residual distributions is less informative on stability than bootstrap analysis. 
We caution against the use of ComBat in situations that do not conform to the above thresholds.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"024011"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11034156/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140862714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
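The site-shift and site-scaling estimates studied above are, at their core, a per-site location-scale alignment. A stripped-down illustration of that core step (real ComBat additionally uses empirical-Bayes pooling of the estimates and adjusts for biological covariates such as age; the feature values below are invented):

```python
from statistics import mean, stdev

def locscale_harmonize(site_a, site_b):
    """Align two sites' feature values to the pooled mean and scale.

    A minimal stand-in for ComBat's site-shift (gamma) and site-scaling
    (delta) correction, without empirical-Bayes shrinkage or covariates.
    """
    pooled = site_a + site_b
    m, s = mean(pooled), stdev(pooled)

    def adjust(values):
        vm, vs = mean(values), stdev(values)
        # remove the site's own location/scale, restore the pooled one
        return [(x - vm) / vs * s + m for x in values]

    return adjust(site_a), adjust(site_b)

# toy mean-FA values from two hypothetical scanners
a, b = locscale_harmonize([0.40, 0.42, 0.44], [0.50, 0.55, 0.60])
print(mean(a), mean(b))  # site means now coincide at the pooled mean
```

The instability the abstract warns about shows up exactly here: with small or imbalanced sites, `vm` and `vs` are noisy, so the correction itself becomes unreliable.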
{"title":"Cascaded cross-attention transformers and convolutional neural networks for multi-organ segmentation in male pelvic computed tomography.","authors":"Rahul Pemmaraju, Gayoung Kim, Lina Mekki, Daniel Y Song, Junghoon Lee","doi":"10.1117/1.JMI.11.2.024009","DOIUrl":"https://doi.org/10.1117/1.JMI.11.2.024009","url":null,"abstract":"<p><strong>Purpose: </strong>Segmentation of the prostate and surrounding organs at risk from computed tomography is required for radiation therapy treatment planning. We propose an automatic two-step deep learning-based segmentation pipeline that consists of an initial multi-organ segmentation network for organ localization followed by organ-specific fine segmentation.</p><p><strong>Approach: </strong>Initial segmentation of all target organs is performed using a hybrid convolutional-transformer model, axial cross-attention UNet. The output from this model allows for region of interest computation and is used to crop tightly around individual organs for organ-specific fine segmentation. Information from this network is also propagated to the fine segmentation stage through an image enhancement module, highlighting regions of interest in the original image that might be difficult to segment. Organ-specific fine segmentation is performed on these cropped and enhanced images to produce the final output segmentation.</p><p><strong>Results: </strong>We apply the proposed approach to segment the prostate, bladder, rectum, seminal vesicles, and femoral heads from male pelvic computed tomography (CT). 
When tested on a held-out test set of 30 images, our two-step pipeline outperformed other deep learning-based multi-organ segmentation algorithms, achieving an average Dice similarity coefficient (DSC) of <math><mrow><mn>0.836</mn><mo>±</mo><mn>0.071</mn></mrow></math> (prostate), <math><mrow><mn>0.947</mn><mo>±</mo><mn>0.038</mn></mrow></math> (bladder), <math><mrow><mn>0.828</mn><mo>±</mo><mn>0.057</mn></mrow></math> (rectum), <math><mrow><mn>0.724</mn><mo>±</mo><mn>0.101</mn></mrow></math> (seminal vesicles), and <math><mrow><mn>0.933</mn><mo>±</mo><mn>0.020</mn></mrow></math> (femoral heads).</p><p><strong>Conclusions: </strong>Our results demonstrate that a two-step segmentation pipeline with initial multi-organ segmentation and additional fine segmentation can delineate male pelvic CT organs well. The utility of this additional layer of fine segmentation is most noticeable in challenging cases, as our two-step pipeline produces noticeably more accurate and less erroneous results compared to other state-of-the-art methods on such images.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"024009"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11001270/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140863709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
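The pipeline above is scored with the Dice similarity coefficient. For reference, a minimal implementation over flattened binary masks (the toy masks are invented, not the study's data):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two flat binary masks:
    2|A intersect B| / (|A| + |B|)."""
    if len(mask_a) != len(mask_b):
        raise ValueError("masks must have the same shape")
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # convention: two empty masks agree perfectly
    return 1.0 if total == 0 else 2 * intersection / total

# two toy 1D masks overlapping on two voxels: 2*2/(3+3) = 0.666...
print(dice([1, 1, 1, 0], [0, 1, 1, 1]))
```

Because Dice normalizes by total foreground size, the lower seminal-vesicle score above partly reflects how small that structure is: a few mislabeled voxels cost proportionally more.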
{"title":"MIDRC-MetricTree: a decision tree-based tool for recommending performance metrics in artificial intelligence-assisted medical image analysis.","authors":"Karen Drukker, Berkman Sahiner, Tingting Hu, Grace Hyun Kim, Heather M Whitney, Natalie Baughan, Kyle J Myers, Maryellen L Giger, Michael McNitt-Gray","doi":"10.1117/1.JMI.11.2.024504","DOIUrl":"https://doi.org/10.1117/1.JMI.11.2.024504","url":null,"abstract":"<p><strong>Purpose: </strong>The Medical Imaging and Data Resource Center (MIDRC) was created to facilitate medical imaging machine learning (ML) research for tasks including early detection, diagnosis, prognosis, and assessment of treatment response related to the coronavirus disease 2019 pandemic and beyond. The purpose of this work was to create a publicly available metrology resource to assist researchers in evaluating the performance of their medical image analysis ML algorithms.</p><p><strong>Approach: </strong>An interactive decision tree, called MIDRC-MetricTree, has been developed, organized by the type of task that the ML algorithm was trained to perform. The criteria for this decision tree were that (1) users can select information such as the type of task, the nature of the reference standard, and the type of the algorithm output and (2) based on the user input, recommendations are provided regarding appropriate performance evaluation approaches and metrics, including literature references and, when possible, links to publicly available software/code as well as short tutorial videos.</p><p><strong>Results: </strong>Five types of tasks were identified for the decision tree: (a) classification, (b) detection/localization, (c) segmentation, (d) time-to-event (TTE) analysis, and (e) estimation. 
As an example, the classification branch of the decision tree includes two-class (binary) and multiclass classification tasks and provides suggestions for methods, metrics, software/code recommendations, and literature references for situations where the algorithm produces either binary or non-binary (e.g., continuous) output and for reference standards with negligible or non-negligible variability and unreliability.</p><p><strong>Conclusions: </strong>The publicly available decision tree is a resource to assist researchers in conducting task-specific performance evaluations, including classification, detection/localization, segmentation, TTE, and estimation tasks.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"024504"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10990563/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140868026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
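The decision-tree idea above can be pictured as a nested lookup from task type and algorithm output to candidate metrics. The sketch below is a hypothetical stand-in with commonly used metric choices, not MIDRC-MetricTree's actual recommendations:

```python
# Hypothetical task-to-metric lookup in the spirit of MIDRC-MetricTree;
# the real tool asks further questions (reference-standard variability,
# output type, ...) before recommending metrics and references.
METRIC_TREE = {
    "classification": {"binary": ["ROC AUC", "sensitivity/specificity"],
                       "continuous": ["ROC AUC", "calibration curve"]},
    "detection/localization": {"any": ["FROC", "localization ROC"]},
    "segmentation": {"any": ["Dice", "Hausdorff distance"]},
    "time-to-event": {"any": ["concordance index", "Kaplan-Meier comparison"]},
    "estimation": {"any": ["bias", "RMSE", "Bland-Altman limits"]},
}

def recommend(task, output_type="any"):
    """Return candidate metrics for a task, falling back to the
    task-wide 'any' branch when the output type has no own branch."""
    branch = METRIC_TREE[task]
    return branch.get(output_type, branch.get("any", []))

print(recommend("classification", "binary"))
```

The value of the real tool is precisely in the branches this sketch omits: handling reference standards with non-negligible variability, linking out to software, and so on.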
{"title":"AMS-U-Net: automatic mass segmentation in digital breast tomosynthesis via U-Net.","authors":"Ahmad Qasem, Genggeng Qin, Zhiguo Zhou","doi":"10.1117/1.JMI.11.2.024005","DOIUrl":"10.1117/1.JMI.11.2.024005","url":null,"abstract":"<p><strong>Purpose: </strong>The objective of this study was to develop a fully automatic mass segmentation method called AMS-U-Net for digital breast tomosynthesis (DBT), a popular breast cancer screening imaging modality. The aim was to address the challenges posed by the increasing number of slices in DBT, which leads to higher mass contouring workload and decreased treatment efficiency.</p><p><strong>Approach: </strong>The study used 50 slices from different DBT volumes for evaluation. The AMS-U-Net approach consisted of four stages: image pre-processing, AMS-U-Net training, image segmentation, and post-processing. The model performance was evaluated by calculating the true positive ratio (TPR), false positive ratio (FPR), F-score, intersection over union (IoU), and 95% Hausdorff distance (pixels) as they are appropriate for datasets with class imbalance.</p><p><strong>Results: </strong>The model achieved 0.911, 0.003, 0.911, 0.900, 5.82 for TPR, FPR, F-score, IoU, and 95% Hausdorff distance, respectively.</p><p><strong>Conclusions: </strong>The AMS-U-Net model demonstrated impressive visual and quantitative results, achieving high accuracy in mass segmentation without the need for human interaction. 
This capability has the potential to significantly increase clinical efficiency and workflow in DBT for breast cancer screening.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"024005"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10960181/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140207950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
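The AMS-U-Net evaluation above reports TPR, FPR, F-score, and IoU, all derivable from one set of confusion counts over binary masks. A minimal sketch (toy masks; the 95% Hausdorff distance is omitted because it needs spatial coordinates, not just counts):

```python
def mask_metrics(pred, truth):
    """TPR, FPR, F-score, and IoU for two flat binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 1.0
    return tpr, fpr, f1, iou

# toy masks: one true positive, one false positive, two true negatives
print(mask_metrics([1, 1, 0, 0], [1, 0, 0, 0]))
```

As the abstract notes, these metrics suit class-imbalanced data: TPR, F-score, and IoU ignore the dominant background class, and FPR isolates over-segmentation.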
{"title":"Systematic evaluation of MRI-based characterization of tumor-associated vascular morphology and hemodynamics via a dynamic digital phantom.","authors":"Chengyue Wu, David A Hormuth, Ty Easley, Federico Pineda, Gregory S Karczmar, Thomas E Yankeelov","doi":"10.1117/1.JMI.11.2.024002","DOIUrl":"10.1117/1.JMI.11.2.024002","url":null,"abstract":"<p><strong>Purpose: </strong>Validation of quantitative imaging biomarkers is a challenging task, due to the difficulty in measuring the ground truth of the target biological process. A digital phantom-based framework is established to systematically validate the quantitative characterization of tumor-associated vascular morphology and hemodynamics based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).</p><p><strong>Approach: </strong>A digital phantom is employed to provide a ground-truth vascular system within which 45 synthetic tumors are simulated. Morphological analysis is performed on high-spatial resolution DCE-MRI data (spatial/temporal resolution = 30 to <math><mrow><mn>300</mn><mtext> </mtext><mi>μ</mi><mi>m</mi><mo>/</mo><mn>60</mn><mtext> </mtext><mi>s</mi></mrow></math>) to determine the accuracy of locating the arterial inputs of tumor-associated vessels (TAVs). 
Hemodynamic analysis is then performed on the combination of high-spatial resolution and high-temporal resolution (spatial/temporal resolution = 60 to <math><mrow><mn>300</mn><mtext> </mtext><mi>μ</mi><mi>m</mi><mo>/</mo><mn>1</mn></mrow></math> to 10 s) DCE-MRI data, determining the accuracy of estimating tumor-associated blood pressure, vascular extraction rate, interstitial pressure, and interstitial flow velocity.</p><p><strong>Results: </strong>The observed effects of acquisition settings demonstrate that, when optimizing the DCE-MRI protocol for the morphological analysis, increasing the spatial resolution is helpful but not necessary, as the location and arterial input of TAVs can be recovered with high accuracy even with the lowest investigated spatial resolution. When optimizing the DCE-MRI protocol for hemodynamic analysis, increasing the spatial resolution of the images used for vessel segmentation is essential, and the spatial and temporal resolutions of the images used for the kinetic parameter fitting require simultaneous optimization.</p><p><strong>Conclusion: </strong>An <i>in silico</i> validation framework was generated to systematically quantify the effects of image acquisition settings on the ability to accurately estimate tumor-associated characteristics.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"024002"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10921778/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140094911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
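The kinetic parameter fitting discussed above belongs to the family of tracer-kinetic models for DCE-MRI. As an illustration of that family (not the paper's specific hemodynamic model), the standard Tofts model evaluated with a rectangle-rule convolution and an invented boxcar arterial input function:

```python
import math

def tofts_concentration(cp, dt, ktrans, kep):
    """Tissue concentration under the standard Tofts model:

        Ct(t) = Ktrans * integral_0^t Cp(tau) * exp(-kep * (t - tau)) dtau

    evaluated by a simple rectangle-rule convolution. cp is the arterial
    input function (AIF) sampled every dt minutes.
    """
    ct = []
    for i in range(len(cp)):
        acc = 0.0
        for j in range(i + 1):
            acc += cp[j] * math.exp(-kep * (i - j) * dt) * dt
        ct.append(ktrans * acc)
    return ct

# boxcar AIF as a crude stand-in (units arbitrary); Ktrans/kep invented
aif = [1.0] * 10 + [0.0] * 10
curve = tofts_concentration(aif, dt=0.1, ktrans=0.25, kep=0.5)
print(max(curve))  # uptake while the bolus lasts, washout afterwards
```

The paper's point about temporal resolution maps directly onto `dt` here: coarser sampling of the enhancement curve degrades any fit of parameters such as the extraction rate.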
{"title":"SpecReFlow: an algorithm for specular reflection restoration using flow-guided video completion.","authors":"Haoli Yin, Rachel Eimen, Daniel Moyer, Audrey K Bowden","doi":"10.1117/1.JMI.11.2.024012","DOIUrl":"https://doi.org/10.1117/1.JMI.11.2.024012","url":null,"abstract":"<p><strong>Purpose: </strong>Specular reflections (SRs) are highlight artifacts commonly found in endoscopy videos that can severely disrupt a surgeon's observation and judgment. Despite numerous attempts to restore SR, existing methods are inefficient and time consuming and can lead to false clinical interpretations. Therefore, we propose the first complete deep-learning solution, SpecReFlow, to detect and restore SR regions from endoscopy video with spatial and temporal coherence.</p><p><strong>Approach: </strong>SpecReFlow consists of three stages: (1) an image preprocessing stage to enhance contrast, (2) a detection stage to indicate where the SR region is present, and (3) a restoration stage in which we replace SR pixels with an accurate underlying tissue structure. Our restoration approach uses optical flow to seamlessly propagate color and structure from other frames of the endoscopy video.</p><p><strong>Results: </strong>Comprehensive quantitative and qualitative tests for each stage reveal that our SpecReFlow solution performs better than previous detection and restoration methods. Our detection stage achieves a Dice score of 82.8% and a sensitivity of 94.6%, and our restoration stage successfully incorporates temporal information with spatial information for more accurate restorations than existing techniques.</p><p><strong>Conclusions: </strong>SpecReFlow is a first-of-its-kind solution that combines temporal and spatial information for effective detection and restoration of SR regions, surpassing previous methods relying on single-frame spatial information. Future work will look to optimizing SpecReFlow for real-time applications. 
SpecReFlow is a software-only solution for restoring image content lost due to SR, making it readily deployable in existing clinical settings to improve endoscopy video quality for accurate diagnosis and treatment.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"024012"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11042492/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140872009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
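Flow-guided restoration, at its simplest, pulls replacement pixels for the specular region from a neighboring frame along the flow field. A deliberately tiny sketch with integer flow on 2x2 grids (everything here is a toy stand-in; real methods like SpecReFlow use subpixel flow, occlusion handling, and learned completion):

```python
def flow_fill(frame, prev_frame, sr_mask, flow):
    """Replace masked specular pixels using flow into the previous frame.

    frame/prev_frame: 2D lists of intensities; sr_mask: 1 where specular;
    flow[y][x] = (dy, dx) integer displacement mapping pixel (y, x) back
    to its location in prev_frame.
    """
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y in range(h):
        for x in range(w):
            if sr_mask[y][x]:
                dy, dx = flow[y][x]
                sy, sx = y + dy, x + dx
                if 0 <= sy < h and 0 <= sx < w:
                    out[y][x] = prev_frame[sy][sx]
    return out

frame = [[255, 10], [10, 10]]          # top-left pixel saturated by SR
prev = [[40, 10], [10, 10]]            # same tissue, no highlight
mask = [[1, 0], [0, 0]]
zero_flow = [[(0, 0)] * 2 for _ in range(2)]
print(flow_fill(frame, prev, mask, zero_flow))  # [[40, 10], [10, 10]]
```

The temporal coherence claimed above comes from this propagation step: filled pixels inherit real tissue appearance from adjacent frames instead of being hallucinated per frame.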
{"title":"Mobile infrared slit-light scanner for rapid eye disease screening.","authors":"Neelam Kaushik, Parmanand Sharma, Noriko Himori, Takuro Matsumoto, Takehiro Miya, Toru Nakazawa","doi":"10.1117/1.JMI.11.2.026003","DOIUrl":"https://doi.org/10.1117/1.JMI.11.2.026003","url":null,"abstract":"<p><strong>Purpose: </strong>Timely detection and treatment of visual impairments and age-related eye diseases are essential for maintaining a longer, healthier life. However, the shortage of appropriate medical equipment often impedes early detection. We have developed a portable self-imaging slit-light device utilizing NIR light and a scanning mirror. The objective of our study is to assess the accuracy and compare the performance of our device with conventional nonportable slit-lamp microscopes and anterior segment optical coherence tomography (AS-OCT) for screening and remotely diagnosing eye diseases, such as cataracts and glaucoma, outside of an eye clinic.</p><p><strong>Approach: </strong>The NIR light provides an advantage as measurements are nonmydriatic and less traumatic for patients. A cross-sectional study involving Japanese adults was conducted. Cataract evaluation was performed using photographs captured by the device. Van-Herick grading was assessed by the ratio of peripheral anterior chamber depth to peripheral corneal thickness, in addition to the iridocorneal angle using Image J software.</p><p><strong>Results: </strong>The correlation coefficient between values obtained by AS-OCT and our fabricated portable scanning slit-light device was notably high. The results indicate that our portable device is as reliable as the conventional nonportable slit-lamp microscope and AS-OCT for screening and evaluating eye diseases.</p><p><strong>Conclusions: </strong>Our fabricated device matches the functionality of the traditional slit lamp, offering a cost-effective and portable solution. 
Ideal for remote locations, healthcare facilities, or areas affected by disasters, our scanning slit-light device can provide easy access to initial eye examinations and supports digital eye healthcare initiatives.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"026003"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003872/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140870690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
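Van Herick grading, used above, compares peripheral anterior chamber depth with peripheral corneal thickness. A sketch using the commonly taught cut-offs (grading schemes vary in practice; the thresholds here are the textbook convention, assumed rather than taken from the paper):

```python
def van_herick_grade(acd, ct):
    """Van Herick grade from peripheral anterior chamber depth (acd) and
    peripheral corneal thickness (ct), in the same units.

    Cut-offs follow the commonly taught scheme: grade 4 for a ratio of
    1 or more down to grade 1 below one quarter; treat as illustrative.
    """
    ratio = acd / ct
    if ratio >= 1.0:
        return 4          # wide open angle
    if ratio >= 0.5:
        return 3
    if ratio >= 0.25:
        return 2
    return 1              # narrow angle, angle-closure risk

print(van_herick_grade(acd=0.6, ct=0.5))  # ratio 1.2 -> grade 4
```

Because both quantities are measured from the same image, the grade depends only on their ratio, which is what makes it robust to the pixel-to-millimeter calibration of a portable device.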
{"title":"CMNet: deep learning model for colon polyp segmentation based on dual-branch structure.","authors":"Xuguang Cao, Kefeng Fan, Cun Xu, Huilin Ma, Kaijie Jiao","doi":"10.1117/1.JMI.11.2.024004","DOIUrl":"10.1117/1.JMI.11.2.024004","url":null,"abstract":"<p><strong>Purpose: </strong>Colon cancer is one of the top three diseases in gastrointestinal cancers, and colon polyps are an important trigger of colon cancer. Early diagnosis and removal of colon polyps can avoid the incidence of colon cancer. Currently, colon polyp removal surgery is mainly based on artificial-intelligence (AI) colonoscopy, supplemented by deep-learning technology to help doctors remove colon polyps. With the development of deep learning, the use of advanced AI technology to assist in medical diagnosis has become mainstream and can maximize the doctor's diagnostic time and help doctors to better formulate medical plans.</p><p><strong>Approach: </strong>We propose a deep-learning model for segmenting colon polyps. The model adopts a dual-branch structure, combines a convolutional neural network (CNN) with a transformer, and replaces ordinary convolution with deeply separable convolution based on ResNet; a stripe pooling module is introduced to obtain more effective information. The aggregated attention module (AAM) is proposed for high-dimensional semantic information, which effectively combines two different structures for the high-dimensional information fusion problem. Deep supervision and multi-scale training are added in the model training process to enhance the learning effect and generalization performance of the model.</p><p><strong>Results: </strong>The experimental results show that the proposed dual-branch structure is significantly better than the single-branch structure, and the model using the AAM has a significant performance improvement over the model not using the AAM. 
Our model leads state-of-the-art models by 1.1% and 1.5% in mIoU and mDice, respectively, in a fivefold cross-validation on the Kvasir-SEG dataset.</p><p><strong>Conclusions: </strong>We propose and validate a deep learning model for segmenting colon polyps, using a dual-branch network structure. Our results demonstrate that traditional CNNs and transformers can complement each other. We also verified the feasibility of fusing different structures on high-dimensional semantics and effectively retained the high-dimensional information of each structure.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"024004"},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10960180/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140207951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
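The comparison above relies on fivefold cross-validation over Kvasir-SEG's 1000 image/mask pairs. A minimal sketch of constructing such folds (index bookkeeping only; the model and metrics are out of scope):

```python
import random

def k_fold_indices(n_items, k=5, seed=0):
    """Split item indices into k shuffled, near-equal validation folds,
    returning (train_indices, val_indices) pairs."""
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)          # fixed seed -> reproducible
    folds = [idx[i::k] for i in range(k)]     # round-robin over shuffle
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

# Kvasir-SEG has 1000 pairs, so each fold holds out 200 for validation
splits = k_fold_indices(1000, k=5)
print(len(splits), len(splits[0][1]))  # 5 200
```

Each image appears in exactly one validation fold, so the reported mIoU/mDice averages cover the whole dataset without train/validation leakage within a fold.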