GRN+: A simplified generative reinforcement network for tissue layer analysis in 3D ultrasound images for chronic low-back pain

Zixue Zeng, Xiaoyan Zhao, Matthew Cartier, Xin Meng, Jiantao Pu

Journal of Medical Imaging, 12(4), 044001 (2025). DOI: 10.1117/1.JMI.12.4.044001
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12310559/pdf/
Abstract
Purpose: 3D ultrasound delivers high-resolution, real-time images of soft tissues, which are essential for pain research. However, manually distinguishing various tissues for quantitative analysis is labor-intensive. We aimed to automate multilayer segmentation in 3D ultrasound volumes using minimal annotated data by developing generative reinforcement network plus (GRN+), a semi-supervised multi-model framework.
Approach: GRN+ integrates a ResNet-based generator and a U-Net segmentation model. Through a method called segmentation-guided enhancement (SGE), the generator produces new images under the guidance of the segmentation model, with its weights adjusted according to the segmentation loss gradient. To prevent gradient explosion and ensure stable training, a two-stage backpropagation strategy was implemented: the first stage propagates the segmentation loss through both the generator and the segmentation model, whereas the second stage optimizes the segmentation model alone, refining mask prediction using the generated images.
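The paper's implementation is not reproduced here, but the two-stage strategy described above can be sketched as a single training step. This is a minimal PyTorch illustration under assumed interfaces: `generator`, `segmenter`, the optimizers, and the gradient-clipping safeguard are hypothetical stand-ins, not the authors' actual architecture or hyperparameters.

```python
import torch
import torch.nn.functional as F

def two_stage_step(generator, segmenter, gen_opt, seg_opt, image, mask):
    """One training step of the two-stage backpropagation described above.

    image: (N, C, H, W) float tensor; mask: (N, H, W) long class indices.
    Stage 1: the segmentation loss flows through BOTH the generator and the
    segmentation model, so the generator learns to synthesize images that
    reduce segmentation error (segmentation-guided enhancement, SGE).
    Stage 2: only the segmentation model is updated, on the detached
    generated image, which helps keep training stable.
    """
    # ---- Stage 1: joint update through generator and segmenter ----
    gen_opt.zero_grad()
    seg_opt.zero_grad()
    enhanced = generator(image)               # ResNet-based generator (assumed)
    logits = segmenter(enhanced)              # U-Net segmentation model (assumed)
    loss1 = F.cross_entropy(logits, mask)
    loss1.backward()                          # gradients reach the generator
    # Gradient clipping is an assumed safeguard against gradient explosion.
    torch.nn.utils.clip_grad_norm_(generator.parameters(), 1.0)
    gen_opt.step()
    seg_opt.step()

    # ---- Stage 2: refine the segmenter alone on the generated image ----
    seg_opt.zero_grad()
    logits = segmenter(enhanced.detach())     # cut the graph at the generator
    loss2 = F.cross_entropy(logits, mask)
    loss2.backward()
    seg_opt.step()
    return loss1.item(), loss2.item()
```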
Results: Tested on 69 fully annotated 3D ultrasound scans (six manually labeled tissue layers each) from 29 subjects, GRN+ outperformed all other semi-supervised methods in terms of the Dice coefficient when trained with only 5% of the labeled data, despite not using unlabeled data for unsupervised training. In addition, when applied to fully annotated datasets, GRN+ with SGE achieved a 2.16% higher Dice coefficient while incurring lower computational costs compared to other models.
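For reference, the Dice coefficient used as the evaluation metric is the standard overlap measure Dice = 2|A ∩ B| / (|A| + |B|); a generic per-layer implementation (a textbook definition, not code from the paper) might look like this:

```python
import torch

def dice_coefficient(pred, target, eps=1e-6):
    """Dice overlap between predicted and ground-truth binary masks
    for one tissue layer; pred and target are {0,1} tensors of equal shape."""
    pred = pred.float()
    target = target.float()
    intersection = (pred * target).sum()
    # eps avoids division by zero when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```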
Conclusions: GRN+ provides accurate tissue segmentation while reducing both computational expense and the dependency on extensive annotations, making it an effective tool for 3D ultrasound analysis in patients with chronic low-back pain.
Journal Introduction:
JMI covers fundamental and translational research, as well as applications, focused on medical imaging, a field that continues to yield physical and biomedical advancements in the early detection, diagnostics, and therapy of disease, as well as in the understanding of normal anatomy and physiology. The scope of JMI includes:
- Imaging physics
- Tomographic reconstruction algorithms (such as those in CT and MRI)
- Image processing and deep learning
- Computer-aided diagnosis and quantitative image analysis
- Visualization and modeling
- Picture archiving and communication systems (PACS)
- Image perception and observer performance
- Technology assessment
- Ultrasonic imaging
- Image-guided procedures
- Digital pathology
- Biomedical applications of imaging
JMI allows for the peer-reviewed communication and archiving of scientific developments, translational and clinical applications, reviews, and recommendations for the field.