Jakob Schäfer, Charlotte Herzog, Tina Gabriel, Julian Kober, Toennis Trittler, Edgar Dorausch, Omid Chaghaneh, Thomas Karlas, Cornelius Kühnöl, Antje Naas, Gerhard Fettweis, Franz Brinkmann, Nicole Kampfrath, Jochen Hampe, Carolin Schneider, Moritz Herzog
{"title":"Estimation of controlled attenuation parameter-based liver steatosis via raw ultrasound data from handheld devices.","authors":"Jakob Schäfer, Charlotte Herzog, Tina Gabriel, Julian Kober, Toennis Trittler, Edgar Dorausch, Omid Chaghaneh, Thomas Karlas, Cornelius Kühnöl, Antje Naas, Gerhard Fettweis, Franz Brinkmann, Nicole Kampfrath, Jochen Hampe, Carolin Schneider, Moritz Herzog","doi":"10.1117/1.JMI.13.2.027001","DOIUrl":"https://doi.org/10.1117/1.JMI.13.2.027001","url":null,"abstract":"<p><strong>Purpose: </strong>Assessment of liver steatosis is primarily performed through visual evaluation during ultrasound examinations. A more objective approach relies on quantifying ultrasound attenuation, typically using devices such as the FibroScan® or elastography integrated into high-end ultrasound systems, which offer limited accessibility. By contrast, handheld ultrasound devices (HHUDs) are more affordable and widely available. Using raw ultrasound data to gain deeper insights into liver tissue characteristics could turn HHUDs into valuable diagnostic tools. We hypothesized that the frequency-specific attenuation of raw ultrasound data acquired with handheld devices correlates with the controlled attenuation parameter (CAP) obtained through vibration-controlled transient elastography via FibroScan.</p><p><strong>Approach: </strong>In an exploratory, single-center study, raw data from 395 participants scheduled for CAP measurement were collected using HHUDs. Of these, 304 participants were included in the final analysis; 91 were excluded due to incomplete data. Using the raw data from the HHUDs, a method based on short-time fast Fourier transform was applied to calculate the frequency-specific attenuation.
The results were then correlated with the CAP values.</p><p><strong>Results: </strong>Overall, the attenuation of the radiofrequency data showed a strong linear correlation with CAP values ( <math><mrow><mi>r</mi> <mo>=</mo> <mn>0.672</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> ), although the strength of correlation varied significantly across frequencies (r_min = 0.443 at 0.75 MHz, r_max = 0.721 at 3.75 MHz); the highest correlation matched results from studies with high-end ultrasound devices.</p><p><strong>Conclusion: </strong>HHUDs capable of acquiring raw data may serve as objective and accessible screening tools for liver steatosis, potentially improving treatment monitoring.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 2","pages":"027001"},"PeriodicalIF":1.7,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12997074/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147487893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
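The short-time Fourier approach described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the window length, the use of the window index as a depth proxy, and the absence of any calibration to CAP units are all assumptions.

```python
import numpy as np

def frequency_specific_attenuation(rf_line, nperseg=64):
    """Estimate a per-frequency attenuation slope from one RF A-line.

    Splits the A-line into short depth windows, takes the FFT magnitude
    of each window, and fits the decay of log-magnitude with depth at
    every frequency bin. Returned values are in dB per window step
    (positive = attenuation); this sketch omits any physical calibration.
    """
    rf_line = np.asarray(rf_line, dtype=float)
    n_seg = len(rf_line) // nperseg
    segments = rf_line[: n_seg * nperseg].reshape(n_seg, nperseg)
    window = np.hanning(nperseg)                       # reduce spectral leakage
    spectra = np.abs(np.fft.rfft(segments * window, axis=1))  # (n_seg, n_freq)
    mag_db = 20 * np.log10(spectra + 1e-12)
    depth = np.arange(n_seg, dtype=float)              # window index as depth proxy
    slopes = np.polyfit(depth, mag_db, 1)[0]           # one slope per frequency bin
    return -slopes
```

Correlating each frequency bin's slope against CAP would then reproduce the per-frequency analysis the study reports.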
Krisha Anant, Juanita Hernández López, Junjie Cui, Sneha Das Gupta, Debbie L Bennett, Aimilia Gastounioti
{"title":"Head-to-head comparisons of breast density assessment models using deep learning on digital and synthetic mammograms.","authors":"Krisha Anant, Juanita Hernández López, Junjie Cui, Sneha Das Gupta, Debbie L Bennett, Aimilia Gastounioti","doi":"10.1117/1.JMI.13.2.024503","DOIUrl":"https://doi.org/10.1117/1.JMI.13.2.024503","url":null,"abstract":"<p><strong>Purpose: </strong>We aim to evaluate the performance of different deep learning (DL) architectures in breast density classification using digital mammograms (DMs) and synthetic mammograms (SMs) from digital breast tomosynthesis (DBT).</p><p><strong>Approach: </strong>We retrospectively analyzed routine mammographic screening exams (Selenia Dimensions, Hologic Inc.) acquired between 2015 and 2018 at our institution. Each mammogram dataset (DM and SM) included 10,000 exams representing all four breast imaging reporting and data system density categories (a to d). We used ResNet-50, EfficientNet-B0, and DenseNet-121 architectures, separately fine-tuned for breast density classification with DM and SM. Classification accuracy was assessed on 10% unseen test sets in four-category (a to d) and binary (nondense versus dense) scenarios. Evaluations also considered mammogram view (craniocaudal [CC] versus mediolateral-oblique [MLO] view) and race (White versus Black women).</p><p><strong>Results: </strong>DL architectures showed detectable, yet small, differences in classification accuracy regardless of mammogram format. ResNet-50 achieved a four-category accuracy of 0.727 (95% CI: [0.713, 0.740]) for DM, higher than 0.713 (95% CI: [0.699, 0.728]) for SM ( <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.151</mn></mrow> </math> ). EfficientNet-B0 and DenseNet-121 showed similar trends. DM-SM differences for binary classification were of similar magnitude but statistically significant ( <math><mrow><mi>p</mi> <mo><</mo> <mn>0.05</mn></mrow> </math> ), with test accuracies ranging from 0.871 to 0.920. 
The MLO view generally outperformed the CC view, and the results were consistent across racial groups.</p><p><strong>Conclusions: </strong>We highlight that various DL architectures perform effectively in breast density classification, emphasizing the significance of mammogram format and view, though results may vary with different vendors. These insights are crucial for enhancing DL-based breast density assessment, especially during the shift from DM to DBT.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 2","pages":"024503"},"PeriodicalIF":1.7,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13102294/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147785069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
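The binary scenario in the abstract above collapses the four BI-RADS density categories into nondense versus dense. A minimal sketch, assuming the conventional mapping (a, b nondense; c, d dense), which the abstract implies but does not state explicitly:

```python
def to_binary_density(category):
    """Collapse a four-category BI-RADS density label (a-d) to the
    binary nondense/dense split; a and b are conventionally nondense,
    c and d dense."""
    if category in ("a", "b"):
        return "nondense"
    if category in ("c", "d"):
        return "dense"
    raise ValueError(f"unknown BI-RADS density category: {category!r}")
```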
{"title":"Author-Centric AI Pre-Review: Interpreting Science Before It Is Judged.","authors":"Bennett A Landman","doi":"10.1117/1.JMI.13.2.020101","DOIUrl":"https://doi.org/10.1117/1.JMI.13.2.020101","url":null,"abstract":"<p><p>The editorial explores an author-centric approach to AI in scientific publishing, arguing for the use of AI as a pre-submission self-review tool to help authors anticipate interpretation, clarify arguments, and strengthen rigor, while preserving author responsibility as well as the human core of peer review and research integrity.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 2","pages":"020101"},"PeriodicalIF":1.7,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13126655/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147822279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Naveen Paluru, Mehak Arora, Phaneendra K Yalavarthy
{"title":"Rolling convolution filters for lightweight neural networks in medical image analysis.","authors":"Naveen Paluru, Mehak Arora, Phaneendra K Yalavarthy","doi":"10.1117/1.JMI.13.2.024501","DOIUrl":"https://doi.org/10.1117/1.JMI.13.2.024501","url":null,"abstract":"<p><strong>Purpose: </strong>To introduce a filter design element called rolling convolution filters for developing lightweight convolutional neural networks (CNNs) in medical image analysis, aiming to reduce model complexity and memory footprint without compromising performance.</p><p><strong>Approach: </strong>Rolling convolution filters were generated by performing a channel-wise rolling operation on a single base filter, creating unique filters while restricting the learnable parameters. The method was applied to various two- and three-dimensional medical image analysis tasks, including reconstruction, segmentation, and classification across MRI, CT, and OCT modalities. The performance was compared with that of standard CNNs and other lightweight architectures.</p><p><strong>Results: </strong>The proposed rolling convolution filters substantially reduced the number of parameters and model size compared with standard CNNs, with a negligible increase in performance error. For quantitative susceptibility mapping, the rolling filter approach achieved results comparable to those of state-of-the-art methods with 6× fewer parameters. In COVID-19 anomaly segmentation, rolling filters performed on par with existing lightweight models while having <math><mrow><mo>∼</mo> <mn>68</mn> <mo>×</mo></mrow> </math> fewer parameters. 
For OCT classification, rolling filters maintained accuracy while significantly reducing the model size (49×).</p><p><strong>Conclusions: </strong>Rolling convolution filters offer an effective approach for designing lightweight CNNs for medical image analysis tasks, providing substantial reductions in model complexity and memory requirements while maintaining a performance comparable to that of larger models. This method can be easily incorporated into existing architectures and shows promise for deploying efficient deep learning models in resource-constrained medical imaging settings.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 2","pages":"024501"},"PeriodicalIF":1.7,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12956260/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147357113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
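The core idea of rolling convolution filters described above, generating a whole filter bank from a single learnable base filter by channel-wise rolling, can be sketched in NumPy as follows. This is an illustration of the operation only; the paper's exact construction and its integration into training may differ.

```python
import numpy as np

def rolling_filters(base_filter, n_filters):
    """Expand one base filter into a bank of filters by rolling its
    channel axis, so only the base filter's weights need be learned.

    base_filter: array of shape (C, k, k).
    Returns an array of shape (n_filters, C, k, k), where filter i is
    the base filter rolled by i positions along the channel axis.
    """
    base_filter = np.asarray(base_filter)
    C = base_filter.shape[0]
    return np.stack([np.roll(base_filter, shift=i % C, axis=0)
                     for i in range(n_filters)])
```

With this construction, a layer stores C*k*k learnable weights instead of n_filters*C*k*k, which is the kind of parameter reduction (6× to ~68× at the model level) the abstract reports.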
Samuel W Remedios, Shuwen Wei, Shuo Han, Jinwei Zhang, Aaron Carass, Kurt G Schilling, Dzung L Pham, Jerry L Prince, Blake E Dewey
{"title":"ECLARE: efficient cross-planar learning for anisotropic resolution enhancement.","authors":"Samuel W Remedios, Shuwen Wei, Shuo Han, Jinwei Zhang, Aaron Carass, Kurt G Schilling, Dzung L Pham, Jerry L Prince, Blake E Dewey","doi":"10.1117/1.JMI.13.2.024001","DOIUrl":"10.1117/1.JMI.13.2.024001","url":null,"abstract":"<p><strong>Purpose: </strong>In clinical imaging, magnetic resonance (MR) image volumes are often acquired as stacks of 2D slices, with decreased scan times, improved signal-to-noise ratio, and image contrasts unique to 2D MR pulse sequences. Although this is sufficient for clinical evaluation, automated algorithms designed for 3D analysis perform poorly on multislice 2D MR volumes, especially those with thick slices and gaps between slices. Superresolution (SR) methods aim to address this problem, but previous methods do not address all of the following: slice profile shape estimation, slice gap, domain shift, and noninteger or arbitrary upsampling factors.</p><p><strong>Approach: </strong>We propose ECLARE (Efficient Cross-planar Learning for Anisotropic Resolution Enhancement), a self-SR method that addresses each of these factors. ECLARE uses a slice profile estimated from the multislice 2D MR volume, trains a network to learn the mapping from low-resolution to high-resolution in-plane patches from the same volume, performs SR with antialiasing, and respects the image FOV during resampling. We compared ECLARE with cubic B-spline interpolation, SMORE, and other contemporary SR methods. We used realistic and representative simulations on human head MR volumes so that quantitative performance against ground truth could be computed. Specifically, <math> <mrow><msub><mi>T</mi> <mn>1</mn></msub> </mrow> </math> -w datasets from healthy participants and <math> <mrow><msub><mi>T</mi> <mn>2</mn></msub> </mrow> </math> -w FLAIR datasets from people with multiple sclerosis (MS) were used for evaluation.
We used the peak signal-to-noise ratio and structural similarity index measure as signal recovery metrics. We additionally used two independent brain parcellation algorithms, SLANT and SynthSeg, to compute the consistency Dice similarity coefficient and the <math> <mrow><msup><mi>R</mi> <mn>2</mn></msup> </mrow> </math> coefficient of determination, respectively, as comparison metrics.</p><p><strong>Results: </strong>For images with up to 5 mm of slice thickness and up to 1.5 mm of gap, ECLARE achieves greater mean PSNR and SSIM compared with other methods. In representative regions of interest, such as the ventricles, caudate, cerebral white matter, and cerebellar white matter, ECLARE performs comparably or better than other approaches. These trends are similar for both investigated datasets.</p><p><strong>Conclusions: </strong>The use of slice profile estimation, FOV-aware resampling, and self-SR allowed ECLARE to robustly superresolve anisotropic images without the need for external training data. Future work will investigate the utility of ECLARE on other organs, species, modalities, and resolutions. Our code is open-source and available at https://www.github.com/sremedios/eclare.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 2","pages":"024001"},"PeriodicalIF":1.7,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12959970/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147366995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
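PSNR, one of the signal-recovery metrics named above, reduces to a short computation. A generic sketch, not the authors' code; the choice of data range is an assumption:

```python
import numpy as np

def psnr(reference, estimate, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference volume and
    a super-resolved estimate. If data_range is not given, the
    reference's intensity range is used."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - estimate) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)
```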
Chloe Cho, Yihao Liu, Bohan Jiang, Andrew J McNeil, Benoit M Dawant, Bennett A Landman, Eric R Tkaczyk
{"title":"How much of a face is a face: exploring reidentification potential with generative AI.","authors":"Chloe Cho, Yihao Liu, Bohan Jiang, Andrew J McNeil, Benoit M Dawant, Bennett A Landman, Eric R Tkaczyk","doi":"10.1117/1.JMI.13.S1.S11202","DOIUrl":"https://doi.org/10.1117/1.JMI.13.S1.S11202","url":null,"abstract":"<p><strong>Purpose: </strong>Clinical photographs play an integral role across medical fields. Since the mid-20th century, deidentification has consisted of black bars covering specific facial features, typically the eyes alone. Although increasingly questioned, this practice persists in clinical and academic settings.</p><p><strong>Approach: </strong>A barrier to standardized deidentification guideline development is the unknown risk that artificial intelligence (AI) can reconstruct faces from partially obscured photos. We evaluate the ability of generative AI to reconstruct 10,000 facial images in the Synthetic Faces High Quality dataset across 14 regional masking strategies.</p><p><strong>Results: </strong>Covering the eyes or any other single facial feature resulted in highly identifiable reconstructions, demonstrated by low face mesh distortion (0.14 to 0.18 relative to whole-face masking; absolute total face mesh distortion 8.34 to 10.19) and high structural similarity index to the original face (1.24 to 1.25 relative to whole-face masking; absolute SSIM 0.91 to 0.92). An open-source face verification model using Dlib was able to match 97.98% to 99.93% of these reconstructed images to the original image prior to single feature masking.
Removing all major facial features (eyebrows, eyes, nose, and mouth) resulted in a threefold reduction in face verification rates compared with eyes alone, from 98.87% (95% CI [98.63%, 99.07%]) to 33.93% (95% CI [32.95%, 34.94%]).</p><p><strong>Conclusions: </strong>We provide quantitative metrics of the reidentification risk that modern generative AI technology poses for partially obscured facial images.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 Suppl 1","pages":"S11202"},"PeriodicalIF":1.7,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12906867/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146208138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dayvison Gomes de Oliveira, Franklin Anthony Ramos Coêlho, Thaís Gaudencio do Rêgo, Yuri de Almeida Malheiros Barbosa, Telmo de Menezes Silva Filho, Bruno Barufaldi
{"title":"Importance of conditioning in latent diffusion models for image generation and super-resolution.","authors":"Dayvison Gomes de Oliveira, Franklin Anthony Ramos Coêlho, Thaís Gaudencio do Rêgo, Yuri de Almeida Malheiros Barbosa, Telmo de Menezes Silva Filho, Bruno Barufaldi","doi":"10.1117/1.JMI.13.S1.S11203","DOIUrl":"https://doi.org/10.1117/1.JMI.13.S1.S11203","url":null,"abstract":"<p><strong>Purpose: </strong>We investigate the use of latent diffusion models (LDMs) for synthesizing and enhancing photon-counting chest computed tomography (CT) images. We evaluate the models' capabilities in two main tasks: image generation for dataset augmentation and super-resolution (SR) for improving image quality, aiming to support diagnostic accuracy and accessibility to high-resolution data.</p><p><strong>Approach: </strong>The proposed framework combines a variational autoencoder-based latent encoder (AutoencoderKL) and a denoising diffusion model, trained under multiple conditioning tests. Eight experiments were conducted across generative and SR tasks, exploring the effects of different conditioning strategies, including segmentation masks and class labels (e.g., lung versus soft tissue), as well as varying loss functions.</p><p><strong>Results: </strong>Unconditioned LDMs produced hallucinated anatomy, lacking clinical interpretability. Conditioning with segmentation masks and anatomical labels considerably improved structural fidelity. The best results for image generation achieved a multiscale structural similarity index measure (MS-SSIM) = 0.7135 and peak signal-to-noise ratio (PSNR) = 24.53, whereas SR tasks reached MS-SSIM = 0.85 and PSNR = 27.31, comparable to recent diffusion-based benchmarks.</p><p><strong>Conclusions: </strong>LDMs show strong potential for both augmentation and SR of photon-counting chest CT images. When guided by segmentation masks and class labels, these models preserve anatomical structure and reduce hallucination risks. 
The results support their use in clinically relevant scenarios, providing controllable and high-fidelity image synthesis.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 Suppl 1","pages":"S11203"},"PeriodicalIF":1.7,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12904813/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146203298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xin Wang, Gengxin Shi, Peiqin Teng, Aswath Sivakumar, Tianyi Ye, Adam D Sylvester, J Webster Stayman, Wojciech B Zbijewski
{"title":"Conditional generative diffusion model for 3D trabecular bone synthesis with tunable microstructure.","authors":"Xin Wang, Gengxin Shi, Peiqin Teng, Aswath Sivakumar, Tianyi Ye, Adam D Sylvester, J Webster Stayman, Wojciech B Zbijewski","doi":"10.1117/1.JMI.13.S1.S11204","DOIUrl":"https://doi.org/10.1117/1.JMI.13.S1.S11204","url":null,"abstract":"<p><strong>Purpose: </strong>We aim to develop a conditional generative diffusion model capable of producing three-dimensional (3D) trabecular bone samples that can be tuned to achieve specific structural characteristics prescribed in terms of three geometric metrics of trabecular microarchitecture: bone volume fraction (BV/TV), trabecular thickness (Tb.Th), and spacing (Tb.Sp).</p><p><strong>Approach: </strong>The generative model is based on 3D latent diffusion. The latent representation of trabecular patches is obtained by a dedicated variational autoencoder (VAE). To control the microstructure characteristics of the synthetic samples, the model is conditioned on BV/TV, Tb.Th, and Tb.Sp. In addition, a shifting slab inference method is employed to generate extended volumes with locally tunable microstructure in a computationally efficient manner. The training data involved 3551 <math><mrow><mn>128</mn> <mo>×</mo> <mn>128</mn> <mo>×</mo> <mn>128</mn></mrow> </math> volumes of interest (VOIs) extracted from micro-CT volumes ( <math><mrow><mn>50</mn> <mtext> </mtext> <mi>μ</mi> <mi>m</mi></mrow> </math> voxel size) of 20 femoral bone specimens, paired with trabecular metrics computed within each VOI; the split for training and validation data was 9:1. For testing, 2000 synthetic bone samples were generated using single slab inference over a wide range of condition (target) microstructure metrics. 
Results were evaluated in terms of (i) consistency across multiple realizations of reverse diffusion for a fixed condition, measured by the coefficient of variation (CV) of trabecular measurements; (ii) agreement between BV/TV, Tb.Th, and Tb.Sp values provided as a condition and those measured in the corresponding synthetic samples, assessed using Pearson correlation coefficient (PCC); and (iii) overlap between the distributions of trabecular parameters of real and synthetic bone patches; this coverage analysis included both the conditioning parameters of BV/TV, Tb.Th, and Tb.Sp and the unconditioned metrics of degree of anisotropy, ellipsoid factor, and connectivity. Further, extended volumes ( <math><mrow><mn>128</mn> <mo>×</mo> <mn>128</mn> <mo>×</mo> <mn>256</mn> <mrow><mtext> </mtext></mrow> <mrow><mtext>voxels</mtext></mrow> </mrow> </math> ) were generated using shifting-slab inference with spatially invariant and spatially varying conditioning and evaluated in terms of local agreement between the prescribed and achieved trabecular parameters.</p><p><strong>Results: </strong>Visually, the synthesized cancellous bone patches appear highly similar to the training micro-CT data. The conditioned parameters of the generated volumes agree well with their target values (PCC of 0.99, 0.97, and 0.95 for BV/TV, Tb.Th, and Tb.Sp, respectively).
There is a trend toward generating trabeculae that are slightly thicker than prescribed, but this bias is typically on the order of one voxel ( <math><mrow><mn>50</mn> <mtext> </mtext> <mi>μ</mi> <mi>m</mi></mr","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 Suppl 1","pages":"S11204"},"PeriodicalIF":1.7,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12907505/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146214560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
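The consistency (CV) and agreement (PCC) evaluations described in the Approach above reduce to standard statistics; a minimal sketch, not the authors' implementation:

```python
import numpy as np

def consistency_cv(measurements):
    """Coefficient of variation of a trabecular metric across repeated
    reverse-diffusion realizations of the same condition."""
    m = np.asarray(measurements, dtype=float)
    return m.std(ddof=1) / m.mean()

def condition_agreement(target, measured):
    """Pearson correlation between conditioned (target) metric values
    and those measured in the corresponding synthetic samples."""
    return np.corrcoef(np.asarray(target, dtype=float),
                       np.asarray(measured, dtype=float))[0, 1]
```

Applied per metric (BV/TV, Tb.Th, Tb.Sp), these would yield the PCC values of 0.99, 0.97, and 0.95 the abstract reports.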
Nicholas P Gruszauskas, Joseph Steiner, Krista Dillingham
{"title":"Challenges to the management of oncologic theranostics clinical trials: recommendations for the conduct of theranostics trials at investigational sites.","authors":"Nicholas P Gruszauskas, Joseph Steiner, Krista Dillingham","doi":"10.1117/1.JMI.13.1.013502","DOIUrl":"10.1117/1.JMI.13.1.013502","url":null,"abstract":"<p><strong>Purpose: </strong>Advancements in radionuclide imaging and therapy techniques have created a groundswell of enthusiasm in the recently designated field of theranostics. This has increased the need for facilities that are able to participate in clinical trials for investigational theranostic agents. Theranostics clinical trials present several unique challenges that will tax the resources and staff of most medical centers. Our purpose is to describe the unique logistical and administrative challenges associated with theranostics clinical trials, propose strategies for addressing them, and make recommendations regarding trial conduct to the community at large.</p><p><strong>Approach: </strong>The authors' experiences reviewing, implementing, and managing theranostics trials at their institution were used to identify common activities and challenges.</p><p><strong>Results: </strong>Several key categories of requirements and challenges were identified. Multidisciplinary teams consisting of nuclear medicine, oncology, nursing, clinical research, and administrative staff are necessary to adequately perform all trial-related activities. Strategies are proposed to address these challenges and activities at the institutional and industry levels.</p><p><strong>Conclusion: </strong>The unique challenges inherent to theranostics clinical trials require a focused investment of time, effort, and resources from all stakeholders. Institutions that wish to participate in these trials must develop the infrastructure necessary to fully support the breadth of activities they require. 
Implementation of the strategies and recommendations presented here will ensure the successful conduct of these trials and will improve efficiency across the community.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 1","pages":"013502"},"PeriodicalIF":1.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12928531/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147285697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Clarity and Impact in JMI.","authors":"Bennett A Landman","doi":"10.1117/1.JMI.13.1.010101","DOIUrl":"https://doi.org/10.1117/1.JMI.13.1.010101","url":null,"abstract":"<p><p>JMI Editor-in-Chief Bennett Landman offers guidance to help authors achieve higher impact, clearer assessment of contributions, and more useful and direct reviews from the peer review community.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 1","pages":"010101"},"PeriodicalIF":1.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12946671/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147327812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}