Journal of Medical Imaging: Latest Articles

Erratum: Publisher's Note: Augmented and virtual reality imaging for collaborative planning of structural cardiovascular interventions: a proof-of-concept and validation study.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-11-01 | Epub Date: 2024-12-13 | DOI: 10.1117/1.JMI.11.6.069801
Xander Jacquemyn, Kobe Bamps, Ruben Moermans, Christophe Dubois, Filip Rega, Peter Verbrugghe, Barbara Weyn, Steven Dymarkowski, Werner Budts, Alexander Van De Bruaene

This erratum corrects the article DOI: 10.1117/1.JMI.11.6.062606.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11638976/pdf/
Citations: 0
Pseudo-spectral angle mapping for pixel and cell classification in highly multiplexed immunofluorescence images.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-11-01 | Epub Date: 2024-12-10 | DOI: 10.1117/1.JMI.11.6.067502
Madeleine S Torcasso, Junting Ai, Gabriel Casella, Thao Cao, Anthony Chang, Ariel Halper-Stromberg, Bana Jabri, Marcus R Clark, Maryellen L Giger

Purpose: The rapid development of highly multiplexed microscopy has enabled the study of cells embedded within their native tissue. The rich spatial data provided by these techniques have yielded exciting insights into the spatial features of human disease. However, computational methods for analyzing these high-content images are still emerging; there is a need for more robust and generalizable tools for evaluating the cellular constituents and stroma captured by high-plex imaging. To address this need, we have adapted spectral angle mapping, an algorithm developed for hyperspectral image analysis, to compress the channel dimension of high-plex immunofluorescence (IF) images.

Approach: We present pseudo-spectral angle mapping (pSAM), a robust and flexible method for determining the most likely class of each pixel in a high-plex image. The class maps calculated through pSAM yield pixel classifications that can be combined with instance segmentation algorithms to classify individual cells.

Results: In a dataset of colon biopsies imaged with a 13-plex staining panel, 16 pSAM class maps were computed to generate pixel classifications. Instance segmentations of cells with Cellpose2.0 (F1-score of 0.83 ± 0.13) were combined with these class maps to provide cell class predictions for 13 cell classes. In addition, in a separate unseen dataset of kidney biopsies imaged with a 44-plex staining panel, pSAM plus Cellpose2.0 (F1-score of 0.86 ± 0.11) detected a diverse set of 38 classes of structural and immune cells.

Conclusions: pSAM is a powerful and generalizable tool for evaluating high-plex IF image data and classifying cells in these high-dimensional images.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11629784/pdf/
Citations: 0
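The per-pixel classification at the core of pSAM is based on classic spectral angle mapping: each pixel's channel-intensity vector is compared against one reference vector per class, and the class with the smallest angle wins. The following numpy sketch illustrates that idea only; it is not the authors' implementation, and the array shapes and function name are assumptions:

```python
import numpy as np

def spectral_angle_map(image, references):
    """Assign each pixel the class whose reference vector makes the
    smallest spectral angle with the pixel's channel vector.

    image:      (H, W, C) array of per-pixel channel intensities
    references: (K, C) array, one reference vector per class
    Returns:    (H, W) array of class indices
    """
    # Normalize pixel and reference vectors to unit length
    pix = image / (np.linalg.norm(image, axis=-1, keepdims=True) + 1e-12)
    ref = references / (np.linalg.norm(references, axis=-1, keepdims=True) + 1e-12)
    # Cosine similarity between every pixel and every class reference
    cos = np.clip(pix @ ref.T, -1.0, 1.0)   # (H, W, K)
    angles = np.arccos(cos)                 # spectral angle in radians
    return angles.argmin(axis=-1)           # most likely class per pixel
```

Because the angle depends only on the direction of the channel vector, the classification is insensitive to overall intensity scaling, which is one reason the technique transfers well from hyperspectral to multiplexed IF data.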
Special Section Guest Editorial: Introduction to the JMI Special Section on Augmented and Virtual Reality in Medical Imaging.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-11-01 | Epub Date: 2024-12-27 | DOI: 10.1117/1.JMI.11.6.062601
Ryan Beams, Raj Shekhar

The editorial introduces the JMI Special Section on Augmented and Virtual Reality in Medical Imaging.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11671691/pdf/
Citations: 0
SCC-NET: segmentation of clinical cancer image for head and neck squamous cell carcinoma.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-11-01 | Epub Date: 2024-11-21 | DOI: 10.1117/1.JMI.11.6.065501
Chien-Yu Huang, Cheng-Che Tsai, Lisa Alice Hwang, Bor-Hwang Kang, Yaoh-Shiang Lin, Hsing-Hao Su, Guan-Ting Shen, Jun-Wei Hsieh

Purpose: Squamous cell carcinoma (SCC) accounts for 90% of head and neck cancer. The majority of cases can be diagnosed and even treated with endoscopic examination and surgery. Deep learning models have been adopted for various medical endoscopy exams, but few reports have addressed deep learning algorithms for segmenting head and neck SCC.

Approach: Head and neck SCC pre-treatment endoscopic images acquired during 2016-2020 were collected from the Kaohsiung Veterans General Hospital Department of Otolaryngology-Head and Neck Surgery. We present a new modification of the neural architecture search-U-Net-based model, called SCC-Net, for segmenting the enrolled endoscopic photos. The modification includes a technique called "Learnable Discrete Wavelet Pooling," which combines the outputs of different layers using a channel attention module and assigns weights based on their importance in the information flow. We also incorporated the cross-stage-partial design from CSPNet. Performance was compared with eight other state-of-the-art image segmentation models.

Results: We collected a total of 556 pathologically confirmed SCC photos. SCC-Net achieves a high mean intersection over union (mIOU) of 87.2%, accuracy of 97.17%, and recall of 97.15%. Compared with the eight state-of-the-art image segmentation models, our model performed best in mIOU, Dice similarity coefficient, accuracy, and recall.

Conclusions: The proposed SCC-Net architecture successfully segmented lesions from white-light endoscopic images with promising accuracy, with a single model performing well across all upper aerodigestive tracts.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11579920/pdf/
Citations: 0
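The mIOU and Dice similarity coefficient reported above are standard overlap metrics for binary segmentation masks. A minimal numpy sketch of both (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def iou_and_dice(pred, target):
    """Intersection-over-union and Dice similarity coefficient
    for a pair of binary segmentation masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    # Empty-vs-empty masks are treated as a perfect match
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```

Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), so they rank models identically on a single image pair; papers typically report both because their dataset-level averages can differ.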
Utilization of double contrast-enhancement boost for lower-extremity CT angiography.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-11-01 | Epub Date: 2024-12-04 | DOI: 10.1117/1.JMI.11.6.067001
Chuluunbaatar Otgonbaatar, Jae-Kyun Ryu, Won Beom Jung, Seon Woong Jang, Sungjun Hwang, Taehyung Kim, Hackjoon Shim, Jung Wook Seo

Purpose: We aimed to compare the efficacy of the double contrast-enhancement (CE)-boost technique with that of conventional methods for improving vascular contrast attenuation in lower-extremity computed tomography (CT) angiography.

Approach: This retrospective study enrolled 45 patients (mean age, 70 years; range, 26 to 90 years; 30 males). To generate the CE-boost image, the degree of CE was determined by subtracting the pre-contrast CT images from the post-contrast CT images. The double CE-boost technique applies this CE process twice. Both objective assessments (CT attenuation, noise level, signal-to-noise ratio [SNR], contrast-to-noise ratio [CNR], and image sharpness) and subjective quality evaluations were conducted on three types of images (conventional, CE-boost, and double CE-boost).

Results: Double CE-boost images demonstrated significantly reduced noise in Hounsfield units (HU) compared with conventional and CE-boost images (p < 0.001). CT attenuation values (HU) were substantially higher at all locations of the lower extremity with double CE-boost images (834.49 ± 140.73), as opposed to conventional (399.63 ± 62.01) and CE-boost images (572.66 ± 93.61). The SNR and CNR were notably improved in the double CE-boost image compared with both conventional and CE-boost images. Image sharpness analysis of the popliteal artery (p = 0.828), anterior tibial artery (p = 0.671), and dorsalis pedis artery (p = 0.281) revealed consistency across conventional, CE-boost, and double CE-boost images. Subjective image analysis indicated superior ratings for the double CE-boost compared with the other types.

Conclusions: The double CE-boost technique improves image quality by decreasing image noise, increasing CT attenuation, and improving SNR, CNR, and subjective assessment compared with CE-boost and conventional imaging.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11614588/pdf/
Citations: 0
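The subtraction-based boost and the SNR/CNR figures of merit can be sketched in a few lines. This is a hypothetical reading of the technique described in the abstract, not the authors' pipeline; the function names, the additive weighting, and the use of a single global noise estimate are all assumptions:

```python
import numpy as np

def ce_boost(pre, post, weight=1.0):
    """Sketch of a subtraction-based contrast-enhancement boost:
    the enhancement map (post minus pre) is added back onto the
    post-contrast image. Applying the step again to the boosted
    result would mimic a 'double' boost."""
    enhancement = post - pre
    return post + weight * enhancement

def snr_cnr(roi, background, noise_std):
    """Signal-to-noise and contrast-to-noise ratios from the mean
    HU values of a vessel ROI and a background ROI, given a noise
    standard deviation measured in a homogeneous region."""
    snr = roi.mean() / noise_std
    cnr = (roi.mean() - background.mean()) / noise_std
    return snr, cnr
```

Note the trade-off the paper evaluates: adding the enhancement map raises vascular attenuation, but subtraction also combines the noise of both acquisitions, so a denoising step is typically needed for the boosted image to show a net SNR gain.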
Augmented reality for point-of-care ultrasound-guided vascular access in pediatric patients using Microsoft HoloLens 2: a preliminary evaluation.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-11-01 | Epub Date: 2024-09-13 | DOI: 10.1117/1.JMI.11.6.062604
Gesiren Zhang, Trong N Nguyen, Hadi Fooladi-Talari, Tyler Salvador, Kia Thomas, Daragh Crowley, R Scott Dingeman, Raj Shekhar

Significance: Conventional ultrasound-guided vascular access procedures are challenging due to the need for anatomical understanding, precise needle manipulation, and hand-eye coordination. Recently, augmented reality (AR)-based guidance has emerged as an aid to improve procedural efficiency and potential outcomes. However, its application in pediatric vascular access has not been comprehensively evaluated.

Aim: We developed an AR ultrasound application, HoloUS, using the Microsoft HoloLens 2 to display live ultrasound images directly in the proceduralist's field of view. We present our evaluation of the effect of using the Microsoft HoloLens 2 for point-of-care ultrasound (POCUS)-guided vascular access in 30 pediatric patients.

Approach: A custom software module was developed on a tablet capable of capturing the moving ultrasound image from any ultrasound machine's screen. The captured image was compressed and sent to the HoloLens 2 via a hotspot, without needing Internet access. On the HoloLens 2, a custom software module receives, decompresses, and displays the live ultrasound image. Hand-gesture and voice-command features allow the user to reposition and resize the image and to change its gain and contrast. We evaluated 30 cases (15 successful control and 12 successful interventional) completed in a single-center, prospective, randomized study.

Results: The mean overall rendering latency and the rendering frame rate of the HoloUS application were 139.30 ms (σ = 32.02 ms) and 30 frames per second, respectively. The average procedure completion time was 17.3% shorter using AR guidance. The numbers of puncture attempts and needle redirections were similar between the two groups, and the number of head adjustments was minimal in the interventional group.

Conclusion: We present the results of the first study using the Microsoft HoloLens 2 to investigate AR-based POCUS-guided vascular access in pediatric patients. Our evaluation confirmed clinical feasibility and potential improvement in procedural efficiency.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11393663/pdf/
Citations: 0
Super-resolution multi-contrast unbiased eye atlases with deep probabilistic refinement.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-11-01 | Epub Date: 2024-11-14 | DOI: 10.1117/1.JMI.11.6.064004
Ho Hin Lee, Adam M Saunders, Michael E Kim, Samuel W Remedios, Lucas W Remedios, Yucheng Tang, Qi Yang, Xin Yu, Shunxing Bao, Chloe Cho, Louise A Mawn, Tonia S Rex, Kevin L Schey, Blake E Dewey, Jeffrey M Spraggins, Jerry L Prince, Yuankai Huo, Bennett A Landman

Purpose: Eye morphology varies significantly across the population, especially for the orbit and optic nerve. These variations limit the feasibility and robustness of generalizing population-wise features of eye organs to an unbiased spatial reference.

Approach: To tackle these limitations, we propose a process for creating high-resolution unbiased eye atlases. First, to restore spatial details from scans with a low through-plane resolution compared with a high in-plane resolution, we apply a deep learning-based super-resolution algorithm. Then, we generate an initial unbiased reference with an iterative metric-based registration using a small portion of subject scans. We register the remaining scans to this template and refine the template using an unsupervised deep probabilistic approach that generates a more expansive deformation field to enhance organ boundary alignment. We demonstrate this framework using magnetic resonance images across four different tissue contrasts, generating four atlases in separate spatial alignments.

Results: When refining the template with sufficient subjects, we find a significant improvement, using the Wilcoxon signed-rank test, in the average Dice score across four labeled regions compared with a standard registration framework consisting of rigid, affine, and deformable transformations. These results highlight the effective alignment of eye organs and boundaries using our proposed process.

Conclusions: By combining super-resolution preprocessing and deep probabilistic models, we address the challenge of generating an eye atlas to serve as a standardized reference across a largely variable population.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11561295/pdf/
Citations: 0
Advanced soft tissue visualization in conjunction with bone structures using contrast-enhanced micro-CT.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-11-01 | Epub Date: 2024-11-22 | DOI: 10.1117/1.JMI.11.6.066001
Torben Hildebrand, Qianli Ma, Catherine A Heyward, Håvard J Haugen, Liebert P Nogueira

Purpose: Micro-computed tomography (CT) analysis of soft tissues alongside bone remains challenging due to significant differences in X-ray absorption, preventing spatial inspection of bone remodeling, including the cellular intricacies of mineralized tissues in developmental biology and pathology. The goal was to develop a protocol for contrast-enhanced micro-CT imaging that effectively visualizes soft tissues and cells in conjunction with bone while minimizing bone attenuation by decalcification.

Approach: Murine femur samples were decalcified in ethylenediaminetetraacetic acid and treated with three different contrast agents: (i) iodine in ethanol, (ii) phosphotungstic acid in water, and (iii) Lugol's iodine. Micro-CT scans were performed in the laboratory setup SkyScan 1172 and at the synchrotron radiation for medical physics beamline of the synchrotron radiation facility Elettra. Soft- and hard-tissue contrast-to-noise ratio (CNR) and contrast efficiency after decalcification were measured.

Results: In laboratory micro-CT, Lugol's iodine demonstrated a threefold higher CNR in the bone marrow, representing the soft-tissue portion, compared with the bone. Contrast efficiencies measured in synchrotron micro-CT were consistent with these findings. Higher resolutions and the specificity of Lugol's iodine to cellular structures enabled detailed visualization of bone-forming cells in the epiphyseal plate.

Conclusions: The combination of decalcification and the contrast agent Lugol's iodine enabled enhanced soft-tissue visualization in conjunction with bone.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11584031/pdf/
Citations: 0
Reflecting on a Year of Growth and Opportunity.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-11-01 | Epub Date: 2024-12-20 | DOI: 10.1117/1.JMI.11.6.060101
Bennett Landman

The editorial reflects on the past year, highlighting impactful research and discussing challenges.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11660685/pdf/
Citations: 0
Applications of mixed reality with medical imaging for training and clinical practice.
IF 1.9
Journal of Medical Imaging | Pub Date: 2024-11-01 | Epub Date: 2024-12-26 | DOI: 10.1117/1.JMI.11.6.062608
Alexa R Lauinger, Meagan McNicholas, Matthew Bramlet, Maria Bederson, Bradley P Sutton, Caroline G L Cao, Irfan S Ahmad, Carlos Brown, Shandra Jamison, Sarita Adve, John Vozenilek, Jim Rehg, Mark S Cohen

Purpose: This review summarizes the current use of extended reality (XR), including virtual reality (VR), mixed reality, and augmented reality (AR), in the medical field, ranging from medical imaging to training to preoperative planning. It covers the integration of these technologies into clinical practice and medical training while discussing the challenges and future opportunities in this sphere, with the aim of encouraging more physicians to collaborate on integrating medicine and technology.

Approach: The review was written by experts in the field based on their knowledge and on recent publications exploring extended realities in medicine.

Results: XR, including VR, mixed reality, and AR, is increasingly utilized within surgery, both for preoperative planning and intraoperative procedures. These technologies are also promising means for improved education at every level of physician training. However, barriers to widespread adoption remain, including human factors, technological challenges, and regulatory issues.

Conclusions: Based on current use, the adoption of VR, mixed reality, and AR is likely to continue growing over the next decade. To support the development and integration of XR into medicine, it is important for academic groups to collaborate with industrial groups and regulatory agencies. These joint projects will help address the current limitations and mutually benefit both fields.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11669596/pdf/
Citations: 0