Journal of Medical Imaging: Latest Articles

Computerized assessment of background parenchymal enhancement on breast dynamic contrast-enhanced-MRI including electronic lesion removal.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-02 DOI: 10.1117/1.JMI.11.3.034501
Lindsay Douglas, Jordan Fuhrman, Qiyuan Hu, Alexandra Edwards, Deepa Sheth, Hiroyuki Abe, Maryellen Giger
Purpose: Current clinical assessment qualitatively describes background parenchymal enhancement (BPE) as minimal, mild, moderate, or marked based on the visually perceived volume and intensity of enhancement in normal fibroglandular breast tissue in dynamic contrast-enhanced (DCE)-MRI. Tumor enhancement may be included within the visual assessment of BPE, thus inflating BPE estimation due to angiogenesis within the tumor. Using a dataset of 426 MRIs, we developed an automated method to segment breasts, electronically remove lesions, and calculate scores to estimate BPE levels.
Approach: A U-Net was trained for breast segmentation from DCE-MRI maximum intensity projection (MIP) images. Fuzzy c-means clustering was used to segment lesions; the lesion volume was removed prior to creating projections. U-Net outputs were applied to create projection images of both breasts, the affected breast, and the unaffected breast before and after lesion removal. BPE scores were calculated from various projection images, including MIPs or average intensity projections of first- or second-postcontrast subtraction MRIs, to evaluate the effect of varying image parameters on automatic BPE assessment. Receiver operating characteristic analysis was performed to determine the predictive value of the computed scores in BPE level classification tasks relative to radiologist ratings.
Results: Statistically significant trends were found between radiologist BPE ratings and calculated BPE scores for all breast regions (Kendall correlation, p < 0.001). Scores from all breast regions performed significantly better than guessing (p < 0.025 from the z-test). Results failed to show a statistically significant difference in performance with and without lesion removal. BPE scores of the affected breast in the second postcontrast subtraction MIP after lesion removal performed statistically better than random guessing across various viewing projections and DCE time points.
Conclusions: Results demonstrate the potential for automatic BPE scoring to serve as a quantitative value for objective BPE level classification from breast DCE-MRI without the influence of lesion enhancement.
Citations: 0
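The scoring pipeline in the entry above combines a subtraction volume, a tissue mask with the lesion removed, and a maximum intensity projection. The following is a minimal illustrative sketch of those steps in NumPy; the function name, the relative-enhancement threshold, and the exact score definition are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def bpe_score_after_lesion_removal(pre, post, breast_mask, lesion_mask, threshold=0.2):
    """Toy BPE-style score from a pre/post-contrast DCE-MRI pair (illustrative only)."""
    # Subtraction volume: contrast uptake relative to the pre-contrast signal.
    subtraction = post.astype(float) - pre.astype(float)

    # "Electronic lesion removal": exclude lesion voxels from the analysis mask.
    tissue = breast_mask & ~lesion_mask

    # Maximum intensity projection of the masked subtraction volume (axial here).
    mip = np.where(tissue, subtraction, 0).max(axis=0)

    # Relative enhancement per tissue voxel, guarded against division by zero.
    rel_enh = subtraction[tissue] / np.maximum(pre[tissue].astype(float), 1e-6)

    # One plausible scalar score: fraction of strongly enhancing voxels weighted
    # by their mean enhancement (the paper evaluates several score variants).
    enhancing = rel_enh > threshold
    score = enhancing.mean() * (rel_enh[enhancing].mean() if enhancing.any() else 0.0)
    return score, mip
```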
Multiresolution semantic segmentation of biological structures in digital histopathology.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-09 DOI: 10.1117/1.JMI.11.3.037501
Sina Salsabili, Adrian D C Chan, Eranga Ukwatta
Purpose: Semantic segmentation in high-resolution histopathology whole slide images (WSIs) is an important fundamental task in various pathology applications. Convolutional neural networks (CNNs) are the state-of-the-art approach for image segmentation. A patch-based CNN approach is often employed because of the large size of WSIs; however, segmentation performance is sensitive to the field-of-view and resolution of the input patches, and balancing the trade-offs is challenging when there are drastic size variations in the segmented structures. We propose a multiresolution semantic segmentation approach capable of addressing the threefold trade-off between field-of-view, computational efficiency, and spatial resolution in histopathology WSIs.
Approach: We propose a two-stage multiresolution approach for semantic segmentation of histopathology WSIs of mouse lung tissue and human placenta. In the first stage, we use four different CNNs to extract contextual information from input patches at four different resolutions. In the second stage, we use another CNN to aggregate the information extracted in the first stage and generate the final segmentation masks.
Results: The proposed method reported 95.6%, 92.5%, and 97.1% in our single-class placenta dataset and 97.1%, 87.3%, and 83.3% in our multiclass lung dataset for pixel-wise accuracy, mean Dice similarity coefficient, and mean positive predictive value, respectively.
Conclusions: The proposed multiresolution approach demonstrated high accuracy and consistency in the semantic segmentation of biological structures of different sizes in our single-class placenta and multiclass lung histopathology WSI datasets. Our study can potentially be used in automated analysis of biological structures, facilitating clinical research in histopathology applications.
Citations: 0
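The first stage described above feeds co-centered patches with different fields of view to four resolution-specific CNNs. Below is a minimal sketch of how such patches could be prepared; the patch size, scale factors, and strided downsampling are assumptions made for illustration rather than the authors' preprocessing.

```python
import numpy as np

def multires_patches(wsi, center, base=256, scales=(1, 2, 4, 8)):
    """Crop co-centered patches with growing field-of-view and bring them to a
    common pixel size, one per first-stage CNN (illustrative only)."""
    r, c = center
    patches = []
    for s in scales:
        half = base * s // 2
        # Clamp the crop window to the slide bounds.
        r0, r1 = max(r - half, 0), min(r + half, wsi.shape[0])
        c0, c1 = max(c - half, 0), min(c + half, wsi.shape[1])
        crop = wsi[r0:r1, c0:c1]
        # Naive strided downsampling back toward base x base; a real pipeline
        # would use proper interpolation and padding at the slide borders.
        patches.append(crop[::s, ::s][:base, :base])
    return patches
```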
Graph neural networks for automatic extraction and labeling of the coronary artery tree in CT angiography.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-15 DOI: 10.1117/1.JMI.11.3.034001
Nils Hampe, Sanne G M van Velzen, Jelmer M Wolterink, Carlos Collet, José P S Henriques, Nils Planken, Ivana Išgum
Purpose: Automatic comprehensive reporting of coronary artery disease (CAD) requires anatomical localization of the coronary artery pathologies. To address this, we propose a fully automatic method for extraction and anatomical labeling of the coronary artery tree using deep learning.
Approach: We include coronary CT angiography (CCTA) scans of 104 patients from two hospitals. Reference annotations of coronary artery tree centerlines and labels of coronary artery segments were assigned to 10 segment classes following the American Heart Association guidelines. Our automatic method first extracts the coronary artery tree from CCTA by automatically placing a large number of seed points and simultaneously tracking vessel-like structures from these points. Thereafter, the extracted tree is refined to retain coronary arteries only, which are subsequently labeled with a multiresolution ensemble of graph convolutional neural networks that combine geometrical and image intensity information from adjacent segments.
Results: The method is evaluated on its ability to extract the coronary tree and to label its segments by comparing the automatically derived labels with the reference labels. A separate assessment of tree extraction yielded an F1 score of 0.85. Evaluation of our combined method leads to an average F1 score of 0.74.
Conclusions: The results demonstrate that our method enables fully automatic extraction and anatomical labeling of coronary artery trees from CCTA scans. Therefore, it has the potential to facilitate detailed automatic reporting of CAD.
Citations: 0
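The labeling stage above aggregates geometrical and intensity features over neighboring centerline segments with graph convolutions. The snippet below sketches a single symmetrically normalized graph-convolution layer over a segment adjacency matrix; it is a generic textbook layer given as an illustration, not the ensemble architecture used in the paper.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step over the coronary-segment graph.

    adj    : (N, N) binary adjacency of centerline segments (undirected tree)
    feats  : (N, F_in) per-segment features (geometry and intensity statistics)
    weight : (F_in, F_out) learnable projection
    """
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # D^-1/2 (A + I) D^-1/2
    return np.maximum(a_norm @ feats @ weight, 0)  # ReLU activation

# Stacking a few such layers and ending with a softmax over the 10 AHA segment
# classes yields one member of the kind of GCN ensemble described above.
```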
Adaptive continuation based smooth l0-norm approximation for compressed sensing MR image reconstruction.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-31 DOI: 10.1117/1.JMI.11.3.035003
Sumit Datta, Joseph Suresh Paul
Purpose: There are a number of algorithms for smooth l0-norm (SL0) approximation. In most cases, the sparsity level of the reconstructed signal is controlled by using a decreasing sequence of modulation parameter values. However, predefined decreasing sequences of modulation parameter values cannot produce optimal sparsity or the best reconstruction performance, because the best choice of the parameter values is often data-dependent and changes dynamically in each iteration.
Approach: We propose an adaptive compressed sensing magnetic resonance image reconstruction using the SL0 approximation method. The SL0 approach typically involves a one-step gradient descent of the SL0 approximating function parameterized with a modulation parameter, followed by a projection step onto the feasible solution set. Since the best choice of the parameter values is often data-dependent and changes dynamically in each iteration, it is preferable to adaptively control the rate of decrease of the parameter values. To achieve this, we solve two subproblems in an alternating manner: a sparse regularization-based subproblem, which is solved with a precomputed value of the parameter, and the estimation of the parameter itself using a root-finding technique.
Results: The advantage of this approach in terms of speed and accuracy is illustrated using a compressed sensing magnetic resonance image reconstruction problem and compared with constant-scale-factor continuation based SL0-norm and adaptive continuation based l1-norm minimization approaches. The proposed adaptive estimation is found to be at least twofold faster than the automated parameter estimation based iterative shrinkage-thresholding algorithm in terms of CPU time, with an average improvement in reconstruction performance of 15% in terms of normalized mean squared error.
Conclusions: An adaptive continuation-based SL0 algorithm is presented, with a potential application to compressed sensing (CS)-based MR image reconstruction. It is a data-dependent adaptive continuation method and eliminates the problem of searching for appropriate constant scale factor values to be used in the CS reconstruction of different types of MRI data.
Citations: 0
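For readers unfamiliar with the baseline the paper improves on, the sketch below shows the standard SL0 iteration for a linear measurement model: a gradient step on the smoothed l0 surrogate followed by projection onto the data-consistency set, with a fixed geometric decrease of the modulation parameter. The decay factor and step size are placeholder values; the paper's contribution is precisely to replace this predefined schedule with a data-adaptive, root-finding-based update.

```python
import numpy as np

def sl0_reconstruct(A, y, sigma_min=1e-3, sigma_decay=0.7, inner_iters=3, mu=2.0):
    """Minimal smooth-l0 (SL0) sketch for y = A x with x sparse.

    The per-entry surrogate is 1 - exp(-x^2 / (2 sigma^2)); sigma plays the
    role of the modulation parameter and is shrunk by a constant factor here.
    """
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                                       # minimum-energy initial solution
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            grad = x * np.exp(-x**2 / (2.0 * sigma**2))  # pushes small entries to zero
            x = x - mu * grad
            x = x - A_pinv @ (A @ x - y)                 # project onto {x : A x = y}
        sigma *= sigma_decay                             # fixed (non-adaptive) continuation
    return x
```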
Perceptual thresholds for differences in CT noise texture.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-09 DOI: 10.1117/1.JMI.11.3.035501
Luuk J Oostveen, Kirsten Boedeker, Daniel Shin, Craig K Abbey, Ioannis Sechopoulos
Purpose: The average (f_av) or peak (f_peak) noise power spectrum (NPS) frequency is often used as a one-parameter descriptor of the CT noise texture. Our study develops a more complete two-parameter model of the CT NPS and investigates the sensitivity of human observers to changes in it.
Approach: A model of the CT NPS was created based on its f_peak and a half-Gaussian fit (σ) to the downslope. Two-alternative forced-choice staircase studies were used to determine perceptual thresholds for noise texture, defined as parameter differences with a predetermined level of discrimination performance (80% correct). Five imaging-scientist observers performed the forced-choice studies for eight directions in the f_peak/σ space, for two reference NPSs (corresponding to body and lung kernels). The experiment was repeated with 32 radiologists, each evaluating a single direction in the f_peak/σ space. NPS differences were quantified by the noise texture contrast (C_texture), the integral of the absolute NPS difference.
Results: The two-parameter NPS model was found to be a good representation of various clinical CT reconstructions. Perception thresholds for f_peak alone are 0.2 lp/cm for body and 0.4 lp/cm for lung NPSs. For σ, these values are 0.15 and 2 lp/cm, respectively. Thresholds change if the other parameter also changes. Different NPSs with the same f_peak or f_av can be discriminated. Nonradiologist observers did not need more C_texture than radiologists.
Conclusions: f_peak or f_av alone is insufficient to describe noise texture completely. The discrimination of noise texture changes depends on its frequency content. Radiologists do not discriminate noise texture changes better than nonradiologists.
Citations: 0
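The noise model above has only two parameters, the peak frequency f_peak and the half-Gaussian downslope width σ, and texture differences are summarized by C_texture, the integral of the absolute NPS difference. The sketch below illustrates that construction numerically; the linear ramp used for the rising part below f_peak and the unit-peak normalization are assumptions for illustration, since the abstract only specifies the downslope.

```python
import numpy as np

def nps_model(f, f_peak, sigma):
    """Two-parameter NPS shape: assumed linear rise up to f_peak, then a
    half-Gaussian downslope of width sigma (normalized to unit peak)."""
    return np.where(
        f <= f_peak,
        f / max(f_peak, 1e-9),
        np.exp(-((f - f_peak) ** 2) / (2.0 * sigma ** 2)),
    )

def texture_contrast(f, nps_a, nps_b):
    """C_texture: integral of the absolute NPS difference (uniform f grid)."""
    return np.sum(np.abs(nps_a - nps_b)) * (f[1] - f[0])

f = np.linspace(0.0, 10.0, 1000)            # spatial frequency, lp/cm
ref = nps_model(f, f_peak=1.5, sigma=1.0)   # body-kernel-like reference
alt = nps_model(f, f_peak=1.7, sigma=1.0)   # peak shifted by 0.2 lp/cm
print(texture_contrast(f, ref, alt))
```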
Near-pair patch generative adversarial network for data augmentation of focal pathology object detection models.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-06-04 DOI: 10.1117/1.JMI.11.3.034505
Ethan Tu, Jonathan Burkow, Andy Tsai, Joseph Junewick, Francisco A Perez, Jeffrey Otjen, Adam M Alessio
Purpose: The limited volume of medical training data remains one of the leading challenges for machine learning for diagnostic applications. Object detectors that identify and localize pathologies require training with a large volume of labeled images, which are often expensive and time-consuming to curate. To reduce this challenge, we present a method to support distant supervision of object detectors through generation of synthetic pathology-present labeled images.
Approach: Our method employs the previously proposed cyclic generative adversarial network (cycleGAN) with two key innovations: (1) use of "near-pair" pathology-present regions and pathology-absent regions from similar locations in the same subject for training, and (2) the addition of a realism metric (Fréchet inception distance) to the generator loss term. We trained and tested this method with 2800 fracture-present and 2800 fracture-absent image patches from 704 unique pediatric chest radiographs. The trained model was then used to generate synthetic pathology-present images with exact knowledge of the location (labels) of the pathology. These synthetic images provided an augmented training set for an object detector.
Results: In an observer study, four pediatric radiologists used a five-point Likert scale indicating the likelihood of a real fracture (1 = definitely not a fracture and 5 = definitely a fracture) to grade a set of real fracture-absent, real fracture-present, and synthetic fracture-present images. The real fracture-absent images scored 1.7 ± 1.0, real fracture-present images 4.1 ± 1.2, and synthetic fracture-present images 2.5 ± 1.2. An object detector model (YOLOv5) trained on a mix of 500 real and 500 synthetic radiographs performed with a recall of 0.57 ± 0.05 and an F2 score of 0.59 ± 0.05. In comparison, when trained on only 500 real radiographs, the recall and F2 score were 0.49 ± 0.06 and 0.53 ± 0.06, respectively.
Conclusions: Our proposed method generates visually realistic pathology and provided improved object detector performance for the task of rib fracture detection.
Citations: 0
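The second innovation above adds the Fréchet inception distance (FID) as a realism term in the generator loss. The sketch below computes FID from two sets of feature vectors; extracting the Inception features and wiring the term into the cycleGAN loss are outside its scope, and the function name is an arbitrary choice.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """Fréchet inception distance between two feature sets
    (rows = samples, columns = Inception feature dimensions)."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)      # matrix square root of the product
    if np.iscomplexobj(covmean):
        covmean = covmean.real                 # drop tiny imaginary numerical noise
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```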
Accelerated parallel magnetic resonance imaging with compressed sensing using structured sparsity.
IF 1.9
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-06-26 DOI: 10.1117/1.JMI.11.3.033504
Nicholas Dwork, Jeremy W Gordon, Erin K Englund
Purpose: We present a method that combines compressed sensing with parallel imaging and takes advantage of the structure of the sparsifying transformation.
Approach: Previous work has combined compressed sensing with parallel imaging using model-based reconstruction but without taking advantage of the structured sparsity. Blurry images for each coil are reconstructed from the fully sampled center region. The optimization problem of compressed sensing is modified to take these blurry images into account, and it is solved to estimate the missing details.
Results: Using data of brain, ankle, and shoulder anatomies, the combination of compressed sensing with structured sparsity and parallel imaging reconstructs an image with a lower relative error than sparse SENSE or L1 ESPIRiT, which do not use structured sparsity.
Conclusions: Taking advantage of structured sparsity improves the image quality for a given amount of data, as long as a fully sampled region of appropriate size centered on the zero frequency is acquired.
Citations: 0
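The first step described above reconstructs a blurry image per coil from the fully sampled k-space center; the compressed sensing problem is then solved only for the missing detail. The snippet below sketches that low-frequency reconstruction for a single coil; the center fraction and the assumption that the DC component sits at the array center are illustrative choices, not parameters from the paper.

```python
import numpy as np

def lowres_coil_image(kspace, center_frac=0.08):
    """Blurry (low-resolution) coil image from the fully sampled k-space center.

    kspace : 2D complex array for one coil, DC assumed at the array center.
    """
    ny, nx = kspace.shape
    cy = max(int(ny * center_frac / 2), 1)
    cx = max(int(nx * center_frac / 2), 1)
    mask = np.zeros_like(kspace, dtype=bool)
    mask[ny // 2 - cy: ny // 2 + cy, nx // 2 - cx: nx // 2 + cx] = True
    k_center = np.where(mask, kspace, 0)
    # Inverse FFT of the center-only data; the CS problem is then solved for
    # the residual detail on top of this blurry estimate.
    return np.fft.ifft2(np.fft.ifftshift(k_center))
```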
Automating aortic cross-sectional measurement of 3D aorta models.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-29 DOI: 10.1117/1.JMI.11.3.034503
Matthew Bramlet, Salman Mohamadi, Jayishnu Srinivas, Tehan Dassanayaka, Tafara Okammor, Mark Shadden, Bradley P Sutton
Purpose: Aortic dissection carries a mortality as high as 50%, but surgical palliation is also fraught with morbidity risks of stroke or paralysis. As such, a significant focus of medical decision making is on longitudinal aortic diameters. We hypothesize that three-dimensional (3D) modeling affords a more efficient methodology toward automated longitudinal aortic measurement. The first step is to automate the measurement of manually segmented 3D models of the aorta. We developed and validated an algorithm to analyze a 3D segmented aorta and output the maximum dimension of minimum cross-sectional areas in a stepwise progression from the diaphragm to the aortic root. Accordingly, the goal is to assess the diagnostic validity of the 3D modeling measurement as a substitute for existing 2D measurements.
Approach: From January 2021 to June 2022, 66 3D non-contrast steady-state free precession magnetic resonance images of aortic pathology with clinical aortic measurements were identified, and 3D aorta models were manually segmented. A novel mathematical algorithm was applied to each model to generate maximal aortic diameters from the diaphragm to the root, which were then correlated to the clinical measurements.
Results: With a 76% success rate, we analyzed the resulting 50 3D aortic models utilizing the automated measurement tool. There was an excellent correlation between the automated measurement and the clinical measurement. The intra-class correlation coefficient and p-value for each of the nine measured locations of the aorta were as follows: sinus of Valsalva, 0.99, p < 0.001; sino-tubular junction, 0.89, p < 0.001; ascending aorta, 0.97, p < 0.001; brachiocephalic artery, 0.96, p < 0.001; transverse segment 1, 0.89, p < 0.001; transverse segment 2, 0.93, p < 0.001; isthmus region, 0.92, p < 0.001; descending aorta, 0.96, p < 0.001; and aorta at diaphragm, 0.3, p < 0.001.
Conclusions: Automating diagnostic measurements that appease clinical confidence is a critical first step in a fully automated process. This tool demonstrates excellent correlation between measurements derived from manually segmented 3D models and the clinical measurements, laying the foundation for transitioning analytic methodologies from 2D to 3D.
Citations: 0
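The measurement concept above steps along the aorta and, at each station, reports the maximum dimension of the minimum cross-section. As a greatly simplified stand-in, the sketch below walks a segmented binary volume slice by slice along the z-axis and reports an equivalent-circle diameter per slice; the real algorithm instead searches for the true (generally oblique) minimum cross-section at each centerline location, so this is only a conceptual illustration.

```python
import numpy as np

def slicewise_aortic_diameters(aorta_mask, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Equivalent-circle diameter of the segmented aorta in each axial slice.

    aorta_mask    : 3D boolean array (z, y, x) from the manual segmentation
    voxel_size_mm : (dz, dy, dx) voxel spacing in millimeters
    """
    _, dy, dx = voxel_size_mm
    diameters = []
    for z in range(aorta_mask.shape[0]):
        area_mm2 = aorta_mask[z].sum() * dy * dx            # in-plane cross-section area
        diameters.append(2.0 * np.sqrt(area_mm2 / np.pi))   # diameter of equal-area circle
    return np.array(diameters)
```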
Segment anything with inception module for automated segmentation of endometrium in ultrasound images.
IF 2.4
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-05-30 DOI: 10.1117/1.JMI.11.3.034504
Yang Qiu, Zhun Xie, Yingchun Jiang, Jianguo Ma
Purpose: Accurate segmentation of the endometrium in ultrasound images is essential for gynecological diagnostics and treatment planning. Manual segmentation methods are time-consuming and subjective, prompting the exploration of automated solutions. We introduce "segment anything with inception module" (SAIM), a specialized adaptation of the segment anything model tailored specifically for the segmentation of endometrium structures in ultrasound images.
Approach: SAIM incorporates enhancements to the image encoder structure and integrates point prompts to guide the segmentation process. We utilized ultrasound images from patients undergoing hysteroscopic surgery in the gynecological department to train and evaluate the model.
Results: Our study demonstrates SAIM's superior segmentation performance through quantitative and qualitative evaluations, surpassing existing automated methods. SAIM achieves a Dice similarity coefficient of 76.31% and an intersection-over-union score of 63.71%, outperforming traditional task-specific deep learning models and other SAM-based foundation models.
Conclusions: The proposed SAIM achieves high segmentation accuracy, providing high diagnostic precision and efficiency. Furthermore, it is potentially an efficient tool for junior medical professionals in education and diagnosis.
Citations: 0
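The two figures of merit quoted above, the Dice similarity coefficient and intersection over union, are standard overlap measures between a predicted and a reference mask. The short sketch below computes both for binary masks and is given purely to make the reported numbers concrete.

```python
import numpy as np

def dice_and_iou(pred_mask, true_mask, eps=1e-8):
    """Dice similarity coefficient and intersection over union for binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    dice = 2.0 * intersection / (pred.sum() + true.sum() + eps)
    iou = intersection / (union + eps)
    return dice, iou
```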
Fast digitally reconstructed radiograph generation using particle-based statistical shape and intensity model.
IF 1.9
Journal of Medical Imaging Pub Date: 2024-05-01 Epub Date: 2024-06-21 DOI: 10.1117/1.JMI.11.3.033503
Jeongseok Oh, Seungbum Koo
Purpose: Statistical shape and intensity models (SSIMs) and digitally reconstructed radiographs (DRRs) were introduced for non-rigid 2D-3D registration and skeletal geometry/density reconstruction studies. The computation of DRRs takes most of the time during registration or reconstruction. The goal of this study is to propose a particle-based method for composing an SSIM and a DRR image generation scheme, and to analyze the quality of the images compared with previous DRR generation methods.
Approach: Particle-based SSIMs consist of densely scattered particles on the surface and inside of an object, with each particle having an intensity value. Generating the DRR resembles ray tracing: the particles binned with each ray are counted, and the radiation attenuation is calculated. The distance between adjacent particles was treated as the radiologic path during attenuation integration and was multiplied by the mean linear attenuation coefficient of the two particles. The proposed method was compared with the DRR of CT projection. The mean squared error and peak signal-to-noise ratio (PSNR) were calculated between the DRR images from the proposed method and those of existing methods of projecting tetrahedral-based SSIMs or computed tomography (CT) images to verify the accuracy of the proposed scheme.
Results: The suggested method was about 600 times faster than the tetrahedral-based SSIM without using hardware acceleration techniques. The PSNR was 37.59 dB, and the root mean squared error of the normalized pixel intensities was 0.0136.
Conclusions: The proposed SSIM and DRR generation procedure showed high temporal performance while maintaining image quality, and the particle-based SSIM is a feasible form for representing a 3D volume and generating DRR images.
Citations: 0
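The attenuation integration described above bins particles to rays, sorts them along each ray, and accumulates the mean attenuation of adjacent particle pairs times their separation. The sketch below does this for an orthographic (parallel-beam) geometry with rays along the z-axis; the parallel geometry, detector size, and unit incident intensity are simplifying assumptions, since the published method traces rays from an X-ray source.

```python
import numpy as np

def particle_drr(points, mu, det_shape=(256, 256), pixel_mm=1.0):
    """Toy parallel-beam DRR from a particle cloud (illustrative only).

    points : (N, 3) particle positions in mm, rays cast along +z
    mu     : (N,) linear attenuation coefficient per particle
    """
    h, w = det_shape
    # Bin each particle to a detector pixel by its (x, y) position.
    ix = np.clip((points[:, 0] / pixel_mm).astype(int), 0, w - 1)
    iy = np.clip((points[:, 1] / pixel_mm).astype(int), 0, h - 1)
    flat = iy * w + ix
    line_integral = np.zeros(det_shape)
    for pix in np.unique(flat):
        sel = np.where(flat == pix)[0]
        if sel.size < 2:
            continue
        order = sel[np.argsort(points[sel, 2])]           # sort particles by depth
        dz = np.diff(points[order, 2])                    # radiologic path per pair
        mu_mean = 0.5 * (mu[order][1:] + mu[order][:-1])  # mean attenuation per pair
        line_integral[pix // w, pix % w] = np.sum(mu_mean * dz)
    return np.exp(-line_integral)                         # Beer-Lambert, unit beam
```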