Journal of Imaging: Latest Articles

State-of-the-Art Deep Learning Methods for Microscopic Image Segmentation: Applications to Cells, Nuclei, and Tissues.
IF 2.7
Journal of Imaging Pub Date : 2024-12-06 DOI: 10.3390/jimaging10120311
Fatma Krikid, Hugo Rositi, Antoine Vacavant
Abstract: Microscopic image segmentation (MIS) is a fundamental task in medical imaging and biological research, essential for precise analysis of cellular structures and tissues. Despite its importance, the segmentation process encounters significant challenges, including variability in imaging conditions, complex biological structures, and artefacts (e.g., noise), which can compromise the accuracy of traditional methods. The emergence of deep learning (DL) has catalyzed substantial advancements in addressing these issues. This systematic literature review (SLR) provides a comprehensive overview of state-of-the-art DL methods developed over the past six years for the segmentation of microscopic images. We critically analyze key contributions, emphasizing how these methods specifically tackle challenges in cell, nucleus, and tissue segmentation. Additionally, we evaluate the datasets and performance metrics employed in these studies. By synthesizing current advancements and identifying gaps in existing approaches, this review not only highlights the transformative potential of DL in enhancing diagnostic accuracy and research efficiency but also suggests directions for future research. The findings of this study have significant implications for improving methodologies in medical and biological applications, ultimately fostering better patient outcomes and advancing scientific understanding.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11679639/pdf/
Citations: 0
UV Hyperspectral Imaging with Xenon and Deuterium Light Sources: Integrating PCA and Neural Networks for Analysis of Different Raw Cotton Types.
IF 2.7
Journal of Imaging Pub Date : 2024-12-05 DOI: 10.3390/jimaging10120310
Mohammad Al Ktash, Mona Knoblich, Max Eberle, Frank Wackenhut, Marc Brecht
Abstract: Ultraviolet (UV) hyperspectral imaging shows significant promise for the classification and quality assessment of raw cotton, a key material in the textile industry. This study evaluates the efficacy of UV hyperspectral imaging (225-408 nm) using two different light sources, xenon arc (XBO) and deuterium lamps, in comparison to NIR hyperspectral imaging. The aim is to determine which light source provides better differentiation between cotton types in UV hyperspectral imaging, as each interacts differently with the materials, potentially affecting imaging quality and classification accuracy. Principal component analysis (PCA) and quadratic discriminant analysis (QDA) were employed to differentiate between various cotton types and hemp. PCA for the XBO illumination revealed that the first three principal components (PCs) accounted for 94.8% of the total variance: PC1 (78.4%) and PC2 (11.6%) clustered the samples into four main groups, separating hemp (HP), recycled cotton (RcC), and organic cotton (OC) from the other cotton samples, while PC3 (6%) further separated RcC. When using the deuterium light source, the first three PCs explained 89.4% of the variance, effectively distinguishing sample types such as HP, RcC, and OC from the remaining samples, with PC3 clearly separating RcC. Combining the PCA scores with QDA, the classification accuracy reached 76.1% for the XBO light source and 85.1% for the deuterium light source. Furthermore, a fully connected neural network, a deep learning technique, was applied for classification. The classification accuracy for the XBO and deuterium light sources reached 83.6% and 90.1%, respectively. The results highlight the ability of this method to differentiate conventional and organic cotton, as well as hemp, and to identify distinct types of recycled cotton, suggesting varying recycling processes and possible common origins with raw cotton. These findings underscore the potential of UV hyperspectral imaging, coupled with chemometric models, as a powerful tool for enhancing cotton classification accuracy in the textile industry.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11677917/pdf/
Citations: 0
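The PCA step described in this abstract (explained-variance ratios per principal component, then classification on the PC scores) can be sketched in a few lines. This is a toy illustration on synthetic data, not the paper's cotton spectra, and the array shapes are invented for the example:

```python
import numpy as np

# Toy sketch of the PCA step described above: synthetic "spectra",
# not the paper's cotton data. The scores on the first few PCs would
# then feed a classifier such as QDA.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(40, 10))      # 40 samples x 10 spectral bands
spectra -= spectra.mean(axis=0)          # mean-center before PCA

# PCA via SVD of the centered data matrix
U, S, Vt = np.linalg.svd(spectra, full_matrices=False)
explained = S**2 / np.sum(S**2)          # explained-variance ratio per PC
scores = spectra @ Vt[:3].T              # sample scores on the first 3 PCs

print(scores.shape)                      # (40, 3)
```

Reporting `explained[:3].sum()` is the analogue of the "first three PCs accounted for 94.8% of the total variance" statement above.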
FastQAFPN-YOLOv8s-Based Method for Rapid and Lightweight Detection of Walnut Unseparated Material.
IF 2.7
Journal of Imaging Pub Date : 2024-12-02 DOI: 10.3390/jimaging10120309
Junqiu Li, Jiayi Wang, Dexiao Kong, Qinghui Zhang, Zhenping Qiang
Abstract: Walnuts possess significant nutritional and economic value. Fast and accurate sorting of shells and kernels will enhance the efficiency of automated production. We therefore propose a FastQAFPN-YOLOv8s object detection network to achieve rapid and precise detection of unsorted materials. The method uses lightweight PConv (partial convolution) operators to build the FasterNextBlock structure, which serves as the backbone feature extractor for the FasterNet feature extraction network. The ECIoU loss function, combining EIoU (Efficient-IoU) and CIoU (Complete-IoU), speeds up the adjustment of the prediction box and the network regression. In the neck of the network, the QAFPN feature fusion network is proposed to replace the PAN-FPN (Path Aggregation Network-Feature Pyramid Network) in YOLOv8s, using a Rep-PAN structure based on the QARepNext reparameterization framework to strike a balance between network performance and inference speed. To validate the method, we built a three-axis mobile sorting device and created a dataset of 3000 images of walnuts after shell removal. The results show that the improved network has 6,071,008 parameters, a training time of 2.49 h, a model size of 12.3 MB, an mAP (mean average precision) of 94.5%, and a frame rate of 52.1 FPS. Compared with the original model, the number of parameters decreased by 45.5%, training time was reduced by 32.7%, model size shrank by 45.3%, and the frame rate improved by 40.8%. However, some accuracy is sacrificed by the lightweight design, resulting in a 1.2% decrease in mAP. The network reduces the model size by 59.7 MB and 23.9 MB compared to YOLOv7 and YOLOv6, respectively, and improves the frame rate by 15.67 FPS and 22.55 FPS, respectively. The average confidence and mAP show minimal changes compared to YOLOv7 and improve by 4.2% and 2.4%, respectively, compared to YOLOv6. The FastQAFPN-YOLOv8s detection method effectively reduces model size while maintaining recognition accuracy.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11679546/pdf/
Citations: 0
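The mAP and IoU-based losses (EIoU/CIoU) named in this abstract are all built on plain intersection-over-union between predicted and ground-truth boxes. A minimal sketch, with made-up toy boxes rather than anything from the paper's walnut dataset:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area (0 if disjoint)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union

# Two 2x2 toy boxes overlapping in a unit square: inter = 1, union = 7
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.142857
```

Variants such as CIoU and EIoU add penalty terms (center distance, aspect ratio / side lengths) on top of this quantity to speed up box regression.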
Elucidating Early Radiation-Induced Cardiotoxicity Markers in Preclinical Genetic Models Through Advanced Machine Learning and Cardiac MRI.
IF 2.7
Journal of Imaging Pub Date : 2024-12-01 DOI: 10.3390/jimaging10120308
Dayeong An, El-Sayed Ibrahim
Abstract: Radiation therapy (RT) is widely used to treat thoracic cancers but carries a risk of radiation-induced heart disease (RIHD). This study aimed to detect early markers of RIHD using machine learning (ML) techniques and cardiac MRI in a rat model. SS.BN3 consomic rats, which have a more subtle RIHD phenotype compared to Dahl salt-sensitive (SS) rats, were treated with localized cardiac RT or sham at 10 weeks of age. Cardiac MRI was performed 8 and 10 weeks post-treatment to assess global and regional cardiac function. ML algorithms were applied to differentiate sham-treated and irradiated rats based on early changes in myocardial function. Despite normal global left ventricular ejection fraction in both groups, strain analysis showed significant reductions in the anteroseptal and anterolateral segments of irradiated rats. Gradient boosting achieved an F1 score of 0.94 and an ROC value of 0.95, while random forest showed an accuracy of 88%. These findings suggest that ML, combined with cardiac MRI, can effectively detect early preclinical changes in RIHD, particularly alterations in regional myocardial contractility, highlighting the potential of these techniques for early detection and monitoring of radiation-induced cardiac dysfunction.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11677573/pdf/
Citations: 0
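For reference, the F1 score reported above is the harmonic mean of precision and recall computed from confusion counts. A minimal sketch with hypothetical counts (not the study's data) chosen so precision and recall both equal 0.94:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision (tp/(tp+fp)) and recall (tp/(tp+fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 47 true positives, 3 false positives, 3 false negatives
print(round(f1_score(47, 3, 3), 2))  # 0.94
```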
Temporal Gap-Aware Attention Model for Temporal Action Proposal Generation.
IF 2.7
Journal of Imaging Pub Date : 2024-11-29 DOI: 10.3390/jimaging10120307
Sorn Sooksatra, Sitapa Watcharapinchai
Abstract: Temporal action proposal generation is a method for extracting temporal action instances, or proposals, from untrimmed videos. Existing methods often struggle to segment contiguous action proposals, which are groups of action boundaries with small temporal gaps. To address this limitation, we propose incorporating an attention mechanism to weigh the importance of each proposal within a contiguous group. This mechanism leverages the gap displacement between proposals to calculate attention scores, enabling more accurate localization of action boundaries. We evaluate our method against a state-of-the-art boundary-based baseline on the ActivityNet v1.3 and THUMOS 2014 datasets. The experimental results demonstrate that our approach significantly improves the performance of short-duration and contiguous action proposals, achieving an average recall of 78.22%.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11678434/pdf/
Citations: 0
A Comparative Review of the SWEET Simulator: Theoretical Verification Against Other Simulators.
IF 2.7
Journal of Imaging Pub Date : 2024-11-27 DOI: 10.3390/jimaging10120306
Amine Ben-Daoued, Frédéric Bernardin, Pierre Duthon
Abstract: Accurate luminance-based image generation is critical in physically based simulations, as even minor inaccuracies in radiative transfer calculations can introduce noise or artifacts, adversely affecting image quality. The radiative transfer simulator SWEET uses a backward Monte Carlo approach, and its performance is analyzed alongside other simulators to assess how Monte Carlo-induced biases vary with parameters like optical thickness and medium anisotropy. This work details the advancements made to SWEET since the previous publication, with a specific focus on a more comprehensive comparison with other simulators such as Mitsuba. The core objective is to evaluate the precision of SWEET by comparing radiometric quantities such as luminance, which serves as a method for validating the simulator. This analysis is particularly important in contexts such as automotive camera imaging, where accurate scene representation is crucial to reducing noise and ensuring the reliability of image-based systems in autonomous driving. By focusing on detailed radiometric comparisons, this study underscores SWEET's ability to minimize noise, thus providing high-quality imaging for advanced applications.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11680047/pdf/
Citations: 0
IngredSAM: Open-World Food Ingredient Segmentation via a Single Image Prompt.
IF 2.7
Journal of Imaging Pub Date : 2024-11-26 DOI: 10.3390/jimaging10120305
Leyi Chen, Bowen Wang, Jiaxin Zhang
Abstract: Food semantic segmentation is of great significance in the fields of computer vision and artificial intelligence, especially in food image analysis. Due to the complexity and variety of food, it is difficult to handle this task effectively using supervised methods. We therefore introduce IngredSAM, a novel approach for open-world food ingredient semantic segmentation that extends the capabilities of the Segment Anything Model (SAM). Utilizing visual foundation models (VFMs) and prompt engineering, IngredSAM leverages discriminative and matchable semantic features between a single clean image prompt of specific ingredients and open-world images to guide the generation of accurate segmentation masks in real-world scenarios. This method addresses the challenges traditional supervised models face in dealing with the diverse appearances and class imbalances of food ingredients. Our framework demonstrates significant advancements in the segmentation of food ingredients without any training process, achieving 2.85% and 6.01% better performance than previous state-of-the-art methods on the FoodSeg103 and UECFoodPix datasets, respectively. IngredSAM exemplifies a successful application of one-shot, open-world segmentation, paving the way for downstream applications such as nutritional analysis and consumer dietary trend monitoring.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11677470/pdf/
Citations: 0
Transformer Dil-DenseUnet: An Advanced Architecture for Stroke Segmentation.
IF 2.7
Journal of Imaging Pub Date : 2024-11-25 DOI: 10.3390/jimaging10120304
Nesrine Jazzar, Besma Mabrouk, Ali Douik
Abstract: We propose a novel architecture, Transformer Dil-DenseUNet, designed to address the challenges of accurately segmenting stroke lesions in MRI images. Precise segmentation is essential for diagnosing and treating stroke patients, as it provides critical spatial insights into the affected brain regions and the extent of damage. Traditional manual segmentation is labor-intensive and error-prone, highlighting the need for automated solutions. Our Transformer Dil-DenseUNet combines DenseNet, dilated convolutions, and Transformer blocks, each contributing unique strengths to enhance segmentation accuracy. The DenseNet component captures fine-grained details and global features by leveraging dense connections, improving both precision and feature reuse. The dilated convolutional blocks, placed before each DenseNet module, expand the receptive field, capturing broader contextual information essential for accurate segmentation. Additionally, the Transformer blocks within our architecture address CNN limitations in capturing long-range dependencies by modeling complex spatial relationships through multi-head self-attention mechanisms. We assess our model's performance on the Ischemic Stroke Lesion Segmentation Challenge 2015 (SISS 2015) and ISLES 2022 datasets. In the testing phase, the model achieves a Dice coefficient of 0.80 ± 0.30 on SISS 2015 and 0.81 ± 0.33 on ISLES 2022, surpassing the current state-of-the-art results on these datasets.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11676419/pdf/
Citations: 0
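The Dice coefficient used to score segmentations like those above measures overlap between a predicted and a ground-truth mask. A minimal sketch on toy pixel sets (not MRI data):

```python
def dice(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks given as sets of pixels."""
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

# Toy 2-D masks sharing 2 of their 3 pixels each: Dice = 2*2 / (3+3)
pred = {(0, 0), (0, 1), (1, 1)}
truth = {(0, 1), (1, 1), (1, 0)}
print(dice(pred, truth))  # 0.666...
```

A score of 1.0 means perfect overlap and 0.0 none, which is why per-case Dice values such as 0.80 ± 0.30 can have large standard deviations when some lesions are missed entirely.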
Enhanced YOLOv8 Ship Detection Empower Unmanned Surface Vehicles for Advanced Maritime Surveillance.
IF 2.7
Journal of Imaging Pub Date : 2024-11-24 DOI: 10.3390/jimaging10120303
Abdelilah Haijoub, Anas Hatim, Antonio Guerrero-Gonzalez, Mounir Arioua, Khalid Chougdali
Abstract: The evolution of maritime surveillance is significantly marked by the incorporation of artificial intelligence and machine learning into unmanned surface vehicles (USVs). This paper presents an AI approach for detecting and tracking unmanned surface vehicles, specifically leveraging an enhanced version of YOLOv8 fine-tuned for maritime surveillance needs. Deployed on the NVIDIA Jetson TX2 platform, the system features an innovative architecture and perception module optimized for real-time operation and energy efficiency. It demonstrates superior detection accuracy, with a mean average precision (mAP) of 0.99, and achieves an operational speed of 17.99 FPS while keeping energy consumption at just 5.61 joules. This balance between accuracy, processing speed, and energy efficiency underscores the system's potential to significantly advance maritime safety, security, and environmental monitoring.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11676501/pdf/
Citations: 0
Magnetic Resonance Imaging Template to Standardize Reporting of Evacuation Disorders.
IF 2.7
Journal of Imaging Pub Date : 2024-11-23 DOI: 10.3390/jimaging10120302
Vittorio Piloni, Tiziana Manisco, Marco Fogante
Abstract: Magnetic resonance (MR) defecography, including both static and dynamic phases, is frequently requested by gastroenterologists and colorectal surgeons for planning the treatment of obstructive defecation syndrome and pelvic organ prolapse. However, reports often lack key information needed to guide treatment strategies, making management challenging and, at times, controversial. It has been hypothesized that using structured radiology reports could reduce missing information. In this paper, we present a structured MR defecography template report that includes nine key descriptors of rectal evacuation. The effectiveness and acceptability of this template are currently being assessed in Italy through a national interdisciplinary study.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11677394/pdf/
Citations: 0