Journal of Imaging: Latest Articles

DFCNet: Dual-Stage Frequency-Domain Calibration Network for Low-Light Image Enhancement.
IF 2.7
Journal of Imaging Pub Date : 2025-07-28 DOI: 10.3390/jimaging11080253
Hui Zhou, Jun Li, Yaming Mao, Lu Liu, Yiyang Lu
{"title":"DFCNet: Dual-Stage Frequency-Domain Calibration Network for Low-Light Image Enhancement.","authors":"Hui Zhou, Jun Li, Yaming Mao, Lu Liu, Yiyang Lu","doi":"10.3390/jimaging11080253","DOIUrl":"10.3390/jimaging11080253","url":null,"abstract":"<p><p>Imaging technologies are widely used in surveillance, medical diagnostics, and other critical applications. However, under low-light conditions, captured images often suffer from insufficient brightness, blurred details, and excessive noise, degrading quality and hindering downstream tasks. Conventional low-light image enhancement (LLIE) methods not only require annotated data but also often involve heavy models with high computational costs, making them unsuitable for real-time processing. To tackle these challenges, a lightweight and unsupervised LLIE method utilizing a dual-stage frequency-domain calibration network (DFCNet) is proposed. In the first stage, the input image undergoes the preliminary feature modulation (PFM) module to guide the illumination estimation (IE) module in generating a more accurate illumination map. The final enhanced image is obtained by dividing the input by the estimated illumination map. The second stage is used only during training. It applies a frequency-domain residual calibration (FRC) module to the first-stage output, generating a calibration term that is added to the original input to darken dark regions and brighten bright areas. This updated input is then fed back to the PFM and IE modules for parameter optimization. Extensive experiments on benchmark datasets demonstrate that DFCNet achieves superior performance across multiple image quality metrics while delivering visually clearer and more natural results.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387550/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
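The enhancement step described above, dividing the input by an estimated illumination map, is the classic Retinex formulation. A minimal sketch of that single step (the epsilon clamp and tensor layout are assumptions, and this is not the authors' network):

```python
import torch

def retinex_enhance(image: torch.Tensor, illumination: torch.Tensor,
                    eps: float = 1e-4) -> torch.Tensor:
    """Divide a low-light image by its estimated illumination map.

    image:        (N, 3, H, W) tensor in [0, 1]
    illumination: (N, 1, H, W) or (N, 3, H, W) estimate in (0, 1]
    """
    # Clamp the illumination away from zero so dark pixels do not explode.
    illumination = illumination.clamp(min=eps)
    enhanced = image / illumination
    # Keep the result displayable.
    return enhanced.clamp(0.0, 1.0)

# Example: a mid-gray image under illumination 0.4 brightens toward 0.75.
img = torch.full((1, 3, 8, 8), 0.3)
illum = torch.full((1, 1, 8, 8), 0.4)
print(retinex_enhance(img, illum).mean())  # ~0.75
```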
Deep Learning Techniques for Prostate Cancer Analysis and Detection: Survey of the State of the Art.
IF 2.7
Journal of Imaging Pub Date : 2025-07-28 DOI: 10.3390/jimaging11080254
Olushola Olawuyi, Serestina Viriri
{"title":"Deep Learning Techniques for Prostate Cancer Analysis and Detection: Survey of the State of the Art.","authors":"Olushola Olawuyi, Serestina Viriri","doi":"10.3390/jimaging11080254","DOIUrl":"10.3390/jimaging11080254","url":null,"abstract":"<p><p>The human interpretation of medical images, especially for the detection of cancer in the prostate, has traditionally been a time-consuming and challenging process. Manual examination for the detection of prostate cancer is not only time-consuming but also prone to errors, carrying the risk of an excess biopsy due to the inherent limitations of human visual interpretation. With the technical advancements and rapid growth of computer resources, machine learning (ML) and deep learning (DL) models have been experimentally used for medical image analysis, particularly in lesion detection. However, several state-of-the-art models have shown promising results. There are still challenges when analysing prostate lesion images due to the distinctive and complex nature of medical images. This study offers an elaborate review of the techniques that are used to diagnose prostate cancer using medical images. The goal is to provide a comprehensive and valuable resource that helps researchers develop accurate and autonomous models for effectively detecting prostate cancer. This paper is structured as follows: First, we outline the issues with prostate lesion detection. We then review the methods for analysing prostate lesion images and classification approaches. We then examine convolutional neural network (CNN) architectures and explore their applications in deep learning (DL) for image-based prostate cancer diagnosis. Finally, we provide an overview of prostate cancer datasets and evaluation metrics in deep learning. In conclusion, this review analyses key findings, highlights the challenges in prostate lesion detection, and evaluates the effectiveness and limitations of current deep learning techniques.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387416/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Synthetic Scientific Image Generation with VAE, GAN, and Diffusion Model Architectures.
IF 2.7
Journal of Imaging Pub Date : 2025-07-26 DOI: 10.3390/jimaging11080252
Zineb Sordo, Eric Chagnon, Zixi Hu, Jeffrey J Donatelli, Peter Andeer, Peter S Nico, Trent Northen, Daniela Ushizima
{"title":"Synthetic Scientific Image Generation with VAE, GAN, and Diffusion Model Architectures.","authors":"Zineb Sordo, Eric Chagnon, Zixi Hu, Jeffrey J Donatelli, Peter Andeer, Peter S Nico, Trent Northen, Daniela Ushizima","doi":"10.3390/jimaging11080252","DOIUrl":"10.3390/jimaging11080252","url":null,"abstract":"<p><p>Generative AI (genAI) has emerged as a powerful tool for synthesizing diverse and complex image data, offering new possibilities for scientific imaging applications. This review presents a comprehensive comparative analysis of leading generative architectures, ranging from Variational Autoencoders (VAEs) to Generative Adversarial Networks (GANs) on through to Diffusion Models, in the context of scientific image synthesis. We examine each model's foundational principles, recent architectural advancements, and practical trade-offs. Our evaluation, conducted on domain-specific datasets including microCT scans of rocks and composite fibers, as well as high-resolution images of plant roots, integrates both quantitative metrics (SSIM, LPIPS, FID, CLIPScore) and expert-driven qualitative assessments. Results show that GANs, particularly StyleGAN, produce images with high perceptual quality and structural coherence. Diffusion-based models for inpainting and image variation, such as DALL-E 2, delivered high realism and semantic alignment but generally struggled in balancing visual fidelity with scientific accuracy. Importantly, our findings reveal limitations of standard quantitative metrics in capturing scientific relevance, underscoring the need for domain-expert validation. We conclude by discussing key challenges such as model interpretability, computational cost, and verification protocols, and discuss future directions where generative AI can drive innovation in data augmentation, simulation, and hypothesis generation in scientific research.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387873/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
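The quantitative metrics listed in this abstract are available off the shelf; a toy evaluation loop might look like the following sketch using torchmetrics (the library choice, batch sizes, and uint8 conversion are assumptions, not the paper's setup):

```python
import torch
from torchmetrics.image import StructuralSimilarityIndexMeasure
from torchmetrics.image.fid import FrechetInceptionDistance

# Stand-ins for real and generated image batches, NCHW in [0, 1].
# Real evaluations use far more images; LPIPS and CLIPScore are also
# available in torchmetrics but omitted here for brevity.
real = torch.rand(16, 3, 128, 128)
fake = torch.rand(16, 3, 128, 128)

# SSIM: pairwise structural similarity between matched image pairs.
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
print("SSIM:", ssim(fake, real).item())

# FID: distribution-level distance, no pairing; expects uint8 images.
# feature=64 picks a small Inception layer so this toy batch suffices.
fid = FrechetInceptionDistance(feature=64)
fid.update((real * 255).to(torch.uint8), real=True)
fid.update((fake * 255).to(torch.uint8), real=False)
print("FID:", fid.compute().item())
```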
Integrating Google Maps and Smooth Street View Videos for Route Planning.
IF 2.7
Journal of Imaging Pub Date : 2025-07-25 DOI: 10.3390/jimaging11080251
Federica Massimi, Antonio Tedeschi, Kalapraveen Bagadi, Francesco Benedetto
{"title":"Integrating Google Maps and Smooth Street View Videos for Route Planning.","authors":"Federica Massimi, Antonio Tedeschi, Kalapraveen Bagadi, Francesco Benedetto","doi":"10.3390/jimaging11080251","DOIUrl":"10.3390/jimaging11080251","url":null,"abstract":"<p><p>This research addresses the long-standing dependence on printed maps for navigation and highlights the limitations of existing digital services like Google Street View and Google Street View Player in providing comprehensive solutions for route analysis and understanding. The absence of a systematic approach to route analysis, issues related to insufficient street view images, and the lack of proper image mapping for desired roads remain unaddressed by current applications, which are predominantly client-based. In response, we propose an innovative automatic system designed to generate videos depicting road routes between two geographic locations. The system calculates and presents the route conventionally, emphasizing the path on a two-dimensional representation, and in a multimedia format. A prototype is developed based on a cloud-based client-server architecture, featuring three core modules: frames acquisition, frames analysis and elaboration, and the persistence of metadata information and computed videos. The tests, encompassing both real-world and synthetic scenarios, have produced promising results, showcasing the efficiency of our system. By providing users with a real and immersive understanding of requested routes, our approach fills a crucial gap in existing navigation solutions. This research contributes to the advancement of route planning technologies, offering a comprehensive and user-friendly system that leverages cloud computing and multimedia visualization for an enhanced navigation experience.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
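As a reading aid for the three-module architecture described above, here is a purely hypothetical Python skeleton; every name and the caching behavior are illustrative assumptions, not the authors' code or any real street-view API:

```python
from dataclasses import dataclass

@dataclass
class RouteRequest:
    origin: str
    destination: str

def acquire_frames(req: RouteRequest) -> list[bytes]:
    # Module 1 (frames acquisition): the real system queries street-level
    # imagery along the computed route; here we fabricate placeholder frames.
    return [f"frame-{i}:{req.origin}->{req.destination}".encode()
            for i in range(3)]

def elaborate_frames(frames: list[bytes]) -> bytes:
    # Module 2 (frames analysis and elaboration): the real system smooths
    # and encodes frames into a video; here we simply concatenate them.
    return b"|".join(frames)

class MetadataStore:
    # Module 3 (persistence of metadata and computed videos).
    def __init__(self) -> None:
        self._videos: dict[tuple[str, str], bytes] = {}
    def save(self, req: RouteRequest, video: bytes) -> None:
        self._videos[(req.origin, req.destination)] = video
    def get(self, req: RouteRequest) -> bytes | None:
        return self._videos.get((req.origin, req.destination))

def build_route_video(req: RouteRequest, store: MetadataStore) -> bytes:
    # Serve a cached video if one exists; otherwise compute and persist it.
    cached = store.get(req)
    if cached is not None:
        return cached
    video = elaborate_frames(acquire_frames(req))
    store.save(req, video)
    return video

store = MetadataStore()
print(build_route_video(RouteRequest("Rome", "Ostia"), store))
```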
Reclassification Scheme for Image Analysis in GRASS GIS Using Gradient Boosting Algorithm: A Case of Djibouti, East Africa.
IF 2.7
Journal of Imaging Pub Date : 2025-07-23 DOI: 10.3390/jimaging11080249
Polina Lemenkova
{"title":"Reclassification Scheme for Image Analysis in GRASS GIS Using Gradient Boosting Algorithm: A Case of Djibouti, East Africa.","authors":"Polina Lemenkova","doi":"10.3390/jimaging11080249","DOIUrl":"10.3390/jimaging11080249","url":null,"abstract":"<p><p>Image analysis is a valuable approach in a wide array of environmental applications. Mapping land cover categories depicted from satellite images enables the monitoring of landscape dynamics. Such a technique plays a key role for land management and predictive ecosystem modelling. Satellite-based mapping of environmental dynamics enables us to define factors that trigger these processes and are crucial for our understanding of Earth system processes. In this study, a reclassification scheme of image analysis was developed for mapping the adjusted categorisation of land cover types using multispectral remote sensing datasets and Geographic Resources Analysis Support System (GRASS) Geographic Information System (GIS) software. The data included four Landsat 8-9 satellite images on 2015, 2019, 2021 and 2023. The sequence of time series was used to determine land cover dynamics. The classification scheme consisting of 17 initial land cover classes was employed by logical workflow to extract 10 key land cover types of the coastal areas of Bab-el-Mandeb Strait, southern Red Sea. Special attention is placed to identify changes in the land categories regarding the thermal saline lake, Lake Assal, with fluctuating salinity and water levels. The methodology included the use of machine learning (ML) image analysis GRASS GIS modules 'r.reclass' for the reclassification of a raster map based on category values. Other modules included 'r.random', 'r.learn.train' and 'r.learn.predict' for gradient boosting ML classifier and 'i.cluster' and 'i.maxlik' for clustering and maximum-likelihood discriminant analysis. To reveal changes in the land cover categories around the Lake of Assal, this study uses ML and reclassification methods for image analysis. Auxiliary modules included 'i.group', 'r.import' and other GRASS GIS scripting techniques applied to Landsat image processing and for the identification of land cover variables. The results of image processing demonstrated annual fluctuations in the landscapes around the saline lake and changes in semi-arid and desert land cover types over Djibouti. The increase in the extent of semi-desert areas and the decrease in natural vegetation proved the processes of desertification of the arid environment in Djibouti caused by climate effects. The developed land cover maps provided information for assessing spatial-temporal changes in Djibouti. The proposed ML-based methodology using GRASS GIS can be employed for integrating techniques of image analysis for land management in other arid regions of Africa.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387187/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
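The module chain named in this abstract translates into a short scripted workflow. A hedged sketch via the GRASS Python scripting API follows; map names and the rules file are assumptions, and the 'r.learn.train'/'r.learn.predict' options follow the r.learn.ml2 addon manual and may differ between versions:

```python
import grass.script as gs

# Group the imported Landsat bands for classification (map names assumed).
gs.run_command("i.group", group="landsat", subgroup="landsat",
               input="lc08_b2,lc08_b3,lc08_b4,lc08_b5,lc08_b6,lc08_b7")

# Gradient boosting classification with the r.learn.ml2 addon modules
# named in the abstract ('r.learn.train' / 'r.learn.predict').
gs.run_command("r.learn.train", group="landsat",
               training_map="training_pixels",
               model_name="GradientBoostingClassifier",
               save_model="gb_model.gz")
gs.run_command("r.learn.predict", group="landsat",
               load_model="gb_model.gz", output="landcover_17")

# Collapse the 17 initial classes to the 10 key classes with r.reclass.
# reclass_rules.txt holds lines like "1 2 = 1 water" (rules assumed).
gs.run_command("r.reclass", input="landcover_17",
               output="landcover_10", rules="reclass_rules.txt")
```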
The Role of Radiomic Analysis and Different Machine Learning Models in Prostate Cancer Diagnosis.
IF 2.7
Journal of Imaging Pub Date : 2025-07-23 DOI: 10.3390/jimaging11080250
Eleni Bekou, Ioannis Seimenis, Athanasios Tsochatzis, Karafyllia Tziagkana, Nikolaos Kelekis, Savas Deftereos, Nikolaos Courcoutsakis, Michael I Koukourakis, Efstratios Karavasilis
{"title":"The Role of Radiomic Analysis and Different Machine Learning Models in Prostate Cancer Diagnosis.","authors":"Eleni Bekou, Ioannis Seimenis, Athanasios Tsochatzis, Karafyllia Tziagkana, Nikolaos Kelekis, Savas Deftereos, Nikolaos Courcoutsakis, Michael I Koukourakis, Efstratios Karavasilis","doi":"10.3390/jimaging11080250","DOIUrl":"10.3390/jimaging11080250","url":null,"abstract":"<p><p>Prostate cancer (PCa) is the most common malignancy in men. Precise grading is crucial for the effective treatment approaches of PCa. Machine learning (ML) applied to biparametric Magnetic Resonance Imaging (bpMRI) radiomics holds promise for improving PCa diagnosis and prognosis. This study investigated the efficiency of seven ML models to diagnose the different PCa grades, changing the input variables. Our studied sample comprised 214 men who underwent bpMRI in different imaging centers. Seven ML algorithms were compared using radiomic features extracted from T2-weighted (T2W) and diffusion-weighted (DWI) MRI, with and without the inclusion of Prostate-Specific Antigen (PSA) values. The performance of the models was evaluated using the receiver operating characteristic curve analysis. The models' performance was strongly dependent on the input parameters. Radiomic features derived from T2WI and DWI, whether used independently or in combination, demonstrated limited clinical utility, with AUC values ranging from 0.703 to 0.807. However, incorporating the PSA index significantly improved the models' efficiency, regardless of lesion location or degree of malignancy, resulting in AUC values ranging from 0.784 to 1.00. There is evidence that ML methods, in combination with radiomic analysis, can contribute to solving differential diagnostic problems of prostate cancers. Also, optimization of the analysis method is critical, according to the results of our study.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387180/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
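The comparison described, identical classifiers scored by ROC AUC with and without PSA appended to the radiomic features, follows a standard scikit-learn pattern. A minimal sketch on synthetic stand-in data (the classifier, feature counts, and split are assumptions, not the paper's protocol):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 214                                      # sample size from the abstract
radiomic = rng.normal(size=(n, 20))          # stand-in T2W/DWI radiomic features
psa = rng.lognormal(mean=1.5, size=(n, 1))   # stand-in PSA values
y = rng.integers(0, 2, size=n)               # stand-in grade labels

for name, X in [("radiomics only", radiomic),
                ("radiomics + PSA", np.hstack([radiomic, psa]))]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
    clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
    auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```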
Yarn Color Measurement Method Based on Digital Photography.
IF 2.7
Journal of Imaging Pub Date : 2025-07-22 DOI: 10.3390/jimaging11080248
Jinxing Liang, Guanghao Wu, Ke Yang, Jiangxiaotian Ma, Jihao Wang, Hang Luo, Xinrong Hu, Yong Liu
{"title":"Yarn Color Measurement Method Based on Digital Photography.","authors":"Jinxing Liang, Guanghao Wu, Ke Yang, Jiangxiaotian Ma, Jihao Wang, Hang Luo, Xinrong Hu, Yong Liu","doi":"10.3390/jimaging11080248","DOIUrl":"10.3390/jimaging11080248","url":null,"abstract":"<p><p>To overcome the complexity of yarn color measurement using spectrophotometry with yarn winding techniques and to enhance consistency with human visual perception, a yarn color measurement method based on digital photography is proposed. This study employs a photographic colorimetry system to capture digital images of single yarns. The yarn and background are segmented using the K-means clustering algorithm, and the centerline of the yarn is extracted using a skeletonization algorithm. Spectral reconstruction and colorimetric principles are then applied to calculate the color values of pixels along the centerline. Considering the nonlinear characteristics of human brightness perception, the final yarn color is obtained through a nonlinear texture-adaptive weighted computation. The method is validated through psychophysical experiments using six yarns of different colors and compared with spectrophotometry and five other photographic measurement methods. Results indicate that among the seven yarn color measurement methods, including spectrophotometry, the proposed method-based on centerline extraction and nonlinear texture-adaptive weighting-yields results that more closely align with actual visual perception. Furthermore, among the six photographic measurement methods, the proposed method produces most similar to those obtained using spectrophotometry. This study demonstrates the inconsistency between spectrophotometric measurements and human visual perception of yarn color and provides methodological support for developing visually consistent color measurement methods for textured textiles.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387683/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
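The first two stages of the pipeline, K-means segmentation and centerline extraction by skeletonization, can be sketched with scikit-learn and scikit-image. The synthetic image, the cluster-selection rule, and the plain mean at the end are assumptions; the paper's spectral reconstruction and texture-adaptive weighting are not reproduced here:

```python
import numpy as np
from skimage.morphology import skeletonize
from sklearn.cluster import KMeans

# Synthetic stand-in for a photographed yarn: a bright diagonal strand
# on a dark background (the real pipeline uses calibrated photographs).
img = np.zeros((64, 64, 3))
for i in range(10, 54):
    img[i, i - 2:i + 3] = [0.8, 0.6, 0.4]          # yarn pixels
img += np.random.default_rng(0).normal(0, 0.02, img.shape)

# Step 1: K-means with k=2 separates yarn from background by color.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    img.reshape(-1, 3)).reshape(64, 64)
# Assume the yarn is the brighter of the two clusters.
yarn_mask = labels == np.argmax([img[labels == k].mean() for k in (0, 1)])

# Step 2: skeletonize the mask to recover the yarn centerline.
centerline = skeletonize(yarn_mask)

# Step 3: average the color of centerline pixels. The paper applies a
# nonlinear texture-adaptive weighting here; a plain mean is a simplification.
print("mean centerline RGB:", img[centerline].mean(axis=0))
```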
Innovative Multi-View Strategies for AI-Assisted Breast Cancer Detection in Mammography.
IF 2.7
Journal of Imaging Pub Date : 2025-07-22 DOI: 10.3390/jimaging11080247
Beibit Abdikenov, Tomiris Zhaksylyk, Aruzhan Imasheva, Yerzhan Orazayev, Temirlan Karibekov
{"title":"Innovative Multi-View Strategies for AI-Assisted Breast Cancer Detection in Mammography.","authors":"Beibit Abdikenov, Tomiris Zhaksylyk, Aruzhan Imasheva, Yerzhan Orazayev, Temirlan Karibekov","doi":"10.3390/jimaging11080247","DOIUrl":"10.3390/jimaging11080247","url":null,"abstract":"<p><p>Mammography is the main method for early detection of breast cancer, which is still a major global health concern. However, inter-reader variability and the inherent difficulty of interpreting subtle radiographic features frequently limit the accuracy of diagnosis. A thorough assessment of deep convolutional neural networks (CNNs) for automated mammogram classification is presented in this work, along with the introduction of two innovative multi-view integration techniques: Dual-Branch Ensemble (DBE) and Merged Dual-View (MDV). By setting aside two datasets for out-of-sample testing, we evaluate the generalizability of the model using six different mammography datasets that represent various populations and imaging systems. We compare a number of cutting-edge architectures on both individual and combined datasets, including ResNet, DenseNet, EfficientNet, MobileNet, Vision Transformers, and VGG19. Both MDV and DBE strategies improve classification performance, according to experimental results. VGG19 and DenseNet both obtained high ROC AUC scores of 0.9051 and 0.7960 under the MDV approach. DenseNet demonstrated strong performance in the DBE setting, achieving a ROC AUC of 0.8033, while ResNet50 recorded a ROC AUC of 0.8042. These enhancements demonstrate how beneficial multi-view fusion is for boosting model robustness. The impact of domain shift is further highlighted by generalization tests, which emphasize the need for diverse datasets in training. These results offer practical advice for improving CNN architectures and integration tactics, which will aid in the creation of trustworthy, broadly applicable AI-assisted breast cancer screening tools.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
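The two integration strategies differ mainly in where the views meet the network: MDV merges them at the input, while DBE trains one branch per view and combines the outputs. A schematic PyTorch sketch (the tiny backbone, channel stacking, and logit averaging are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

def small_cnn(in_ch: int) -> nn.Sequential:
    # Tiny stand-in backbone; the paper compares ResNet, DenseNet, etc.
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

class MergedDualView(nn.Module):
    """MDV-style fusion: stack the two views as channels of one input."""
    def __init__(self):
        super().__init__()
        self.net = small_cnn(in_ch=2)
    def forward(self, cc, mlo):                 # each (N, 1, H, W)
        return self.net(torch.cat([cc, mlo], dim=1))

class DualBranchEnsemble(nn.Module):
    """DBE-style fusion: one branch per view, logits averaged at the end."""
    def __init__(self):
        super().__init__()
        self.cc_branch = small_cnn(in_ch=1)
        self.mlo_branch = small_cnn(in_ch=1)
    def forward(self, cc, mlo):
        return 0.5 * (self.cc_branch(cc) + self.mlo_branch(mlo))

cc, mlo = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
print(MergedDualView()(cc, mlo).shape, DualBranchEnsemble()(cc, mlo).shape)
```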
DP-AMF: Depth-Prior-Guided Adaptive Multi-Modal and Global-Local Fusion for Single-View 3D Reconstruction.
IF 2.7
Journal of Imaging Pub Date : 2025-07-21 DOI: 10.3390/jimaging11070246
Luoxi Zhang, Chun Xie, Itaru Kitahara
{"title":"DP-AMF: Depth-Prior-Guided Adaptive Multi-Modal and Global-Local Fusion for Single-View 3D Reconstruction.","authors":"Luoxi Zhang, Chun Xie, Itaru Kitahara","doi":"10.3390/jimaging11070246","DOIUrl":"10.3390/jimaging11070246","url":null,"abstract":"<p><p>Single-view 3D reconstruction remains fundamentally ill-posed, as a single RGB image lacks scale and depth cues, often yielding ambiguous results under occlusion or in texture-poor regions. We propose DP-AMF, a novel Depth-Prior-Guided Adaptive Multi-Modal and Global-Local Fusion framework that integrates high-fidelity depth priors-generated offline by the MARIGOLD diffusion-based estimator and cached to avoid extra training cost-with hierarchical local features from ResNet-32/ResNet-18 and semantic global features from DINO-ViT. A learnable fusion module dynamically adjusts per-channel weights to balance these modalities according to local texture and occlusion, and an implicit signed-distance field decoder reconstructs the final mesh. Extensive experiments on 3D-FRONT and Pix3D demonstrate that DP-AMF reduces Chamfer Distance by 7.64%, increases F-Score by 2.81%, and boosts Normal Consistency by 5.88% compared to strong baselines, while qualitative results show sharper edges and more complete geometry in challenging scenes. DP-AMF achieves these gains without substantially increasing model size or inference time, offering a robust and effective solution for complex single-view reconstruction tasks.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12295410/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
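The learnable fusion module described above can be illustrated with a per-channel gating block. This is a generic sketch in PyTorch, not DP-AMF's actual module; the gate design and the two-way convex combination are assumptions:

```python
import torch
import torch.nn as nn

class ChannelFusion(nn.Module):
    """Hypothetical per-channel adaptive fusion of two aligned feature maps,
    in the spirit of the learnable fusion module described above."""
    def __init__(self, channels: int):
        super().__init__()
        # Predict per-channel mixing weights from the concatenated features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(2 * channels, channels), nn.Sigmoid())
    def forward(self, local_feat, global_feat):   # both (N, C, H, W)
        w = self.gate(torch.cat([local_feat, global_feat], dim=1))
        w = w.view(*w.shape, 1, 1)                # (N, C, 1, 1)
        # Convex combination: w leans on local detail, 1-w on global context.
        return w * local_feat + (1.0 - w) * global_feat

fuse = ChannelFusion(channels=64)
a, b = torch.rand(2, 64, 32, 32), torch.rand(2, 64, 32, 32)
print(fuse(a, b).shape)  # torch.Size([2, 64, 32, 32])
```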
Three-Dimensional Ultraviolet Fluorescence Imaging in Cultural Heritage: A Review of Applications in Multi-Material Artworks.
IF 2.7
Journal of Imaging Pub Date : 2025-07-21 DOI: 10.3390/jimaging11070245
Luca Lanteri, Claudia Pelosi, Paola Pogliani
{"title":"Three-Dimensional Ultraviolet Fluorescence Imaging in Cultural Heritage: A Review of Applications in Multi-Material Artworks.","authors":"Luca Lanteri, Claudia Pelosi, Paola Pogliani","doi":"10.3390/jimaging11070245","DOIUrl":"10.3390/jimaging11070245","url":null,"abstract":"<p><p>Ultraviolet-induced fluorescence (UVF) imaging represents a simple but powerful technique in cultural heritage studies. It is a nondestructive and non-invasive imaging technique which can supply useful and relevant information to define the state of conservation of an artifact. UVF imaging also helps to establish the value of an artwork by indicating inpainting, repaired areas, grouting, etc. In general, ultraviolet fluorescence imaging output takes the form of 2D photographs in the case of both paintings and sculptures. For this reason, a few years ago the idea of applying the photogrammetric method to create 3D digital twins under ultraviolet fluorescence was developed to address the requirements of restorers who need daily documentation tools for their work that are simple to use and can display the entire 3D object in a single file. This review explores recent applications of this innovative method of ultraviolet fluorescence imaging with reference to the wider literature on the UVF technique to make evident the practical importance of its application in cultural heritage.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12295401/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0