{"title":"DFCNet: Dual-Stage Frequency-Domain Calibration Network for Low-Light Image Enhancement.","authors":"Hui Zhou, Jun Li, Yaming Mao, Lu Liu, Yiyang Lu","doi":"10.3390/jimaging11080253","DOIUrl":"10.3390/jimaging11080253","url":null,"abstract":"<p><p>Imaging technologies are widely used in surveillance, medical diagnostics, and other critical applications. However, under low-light conditions, captured images often suffer from insufficient brightness, blurred details, and excessive noise, degrading quality and hindering downstream tasks. Conventional low-light image enhancement (LLIE) methods not only require annotated data but also often involve heavy models with high computational costs, making them unsuitable for real-time processing. To tackle these challenges, a lightweight and unsupervised LLIE method utilizing a dual-stage frequency-domain calibration network (DFCNet) is proposed. In the first stage, the input image undergoes the preliminary feature modulation (PFM) module to guide the illumination estimation (IE) module in generating a more accurate illumination map. The final enhanced image is obtained by dividing the input by the estimated illumination map. The second stage is used only during training. It applies a frequency-domain residual calibration (FRC) module to the first-stage output, generating a calibration term that is added to the original input to darken dark regions and brighten bright areas. This updated input is then fed back to the PFM and IE modules for parameter optimization. Extensive experiments on benchmark datasets demonstrate that DFCNet achieves superior performance across multiple image quality metrics while delivering visually clearer and more natural results.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387550/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning Techniques for Prostate Cancer Analysis and Detection: Survey of the State of the Art.","authors":"Olushola Olawuyi, Serestina Viriri","doi":"10.3390/jimaging11080254","DOIUrl":"10.3390/jimaging11080254","url":null,"abstract":"<p><p>The human interpretation of medical images, especially for the detection of cancer in the prostate, has traditionally been a time-consuming and challenging process. Manual examination for the detection of prostate cancer is not only time-consuming but also prone to errors, carrying the risk of an excess biopsy due to the inherent limitations of human visual interpretation. With the technical advancements and rapid growth of computer resources, machine learning (ML) and deep learning (DL) models have been experimentally used for medical image analysis, particularly in lesion detection. However, several state-of-the-art models have shown promising results. There are still challenges when analysing prostate lesion images due to the distinctive and complex nature of medical images. This study offers an elaborate review of the techniques that are used to diagnose prostate cancer using medical images. The goal is to provide a comprehensive and valuable resource that helps researchers develop accurate and autonomous models for effectively detecting prostate cancer. This paper is structured as follows: First, we outline the issues with prostate lesion detection. We then review the methods for analysing prostate lesion images and classification approaches. We then examine convolutional neural network (CNN) architectures and explore their applications in deep learning (DL) for image-based prostate cancer diagnosis. Finally, we provide an overview of prostate cancer datasets and evaluation metrics in deep learning. In conclusion, this review analyses key findings, highlights the challenges in prostate lesion detection, and evaluates the effectiveness and limitations of current deep learning techniques.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387416/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synthetic Scientific Image Generation with VAE, GAN, and Diffusion Model Architectures.","authors":"Zineb Sordo, Eric Chagnon, Zixi Hu, Jeffrey J Donatelli, Peter Andeer, Peter S Nico, Trent Northen, Daniela Ushizima","doi":"10.3390/jimaging11080252","DOIUrl":"10.3390/jimaging11080252","url":null,"abstract":"<p><p>Generative AI (genAI) has emerged as a powerful tool for synthesizing diverse and complex image data, offering new possibilities for scientific imaging applications. This review presents a comprehensive comparative analysis of leading generative architectures, ranging from Variational Autoencoders (VAEs) to Generative Adversarial Networks (GANs) on through to Diffusion Models, in the context of scientific image synthesis. We examine each model's foundational principles, recent architectural advancements, and practical trade-offs. Our evaluation, conducted on domain-specific datasets including microCT scans of rocks and composite fibers, as well as high-resolution images of plant roots, integrates both quantitative metrics (SSIM, LPIPS, FID, CLIPScore) and expert-driven qualitative assessments. Results show that GANs, particularly StyleGAN, produce images with high perceptual quality and structural coherence. Diffusion-based models for inpainting and image variation, such as DALL-E 2, delivered high realism and semantic alignment but generally struggled in balancing visual fidelity with scientific accuracy. Importantly, our findings reveal limitations of standard quantitative metrics in capturing scientific relevance, underscoring the need for domain-expert validation. We conclude by discussing key challenges such as model interpretability, computational cost, and verification protocols, and discuss future directions where generative AI can drive innovation in data augmentation, simulation, and hypothesis generation in scientific research.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387873/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating Google Maps and Smooth Street View Videos for Route Planning.","authors":"Federica Massimi, Antonio Tedeschi, Kalapraveen Bagadi, Francesco Benedetto","doi":"10.3390/jimaging11080251","DOIUrl":"10.3390/jimaging11080251","url":null,"abstract":"<p><p>This research addresses the long-standing dependence on printed maps for navigation and highlights the limitations of existing digital services like Google Street View and Google Street View Player in providing comprehensive solutions for route analysis and understanding. The absence of a systematic approach to route analysis, issues related to insufficient street view images, and the lack of proper image mapping for desired roads remain unaddressed by current applications, which are predominantly client-based. In response, we propose an innovative automatic system designed to generate videos depicting road routes between two geographic locations. The system calculates and presents the route conventionally, emphasizing the path on a two-dimensional representation, and in a multimedia format. A prototype is developed based on a cloud-based client-server architecture, featuring three core modules: frames acquisition, frames analysis and elaboration, and the persistence of metadata information and computed videos. The tests, encompassing both real-world and synthetic scenarios, have produced promising results, showcasing the efficiency of our system. By providing users with a real and immersive understanding of requested routes, our approach fills a crucial gap in existing navigation solutions. This research contributes to the advancement of route planning technologies, offering a comprehensive and user-friendly system that leverages cloud computing and multimedia visualization for an enhanced navigation experience.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reclassification Scheme for Image Analysis in GRASS GIS Using Gradient Boosting Algorithm: A Case of Djibouti, East Africa.","authors":"Polina Lemenkova","doi":"10.3390/jimaging11080249","DOIUrl":"10.3390/jimaging11080249","url":null,"abstract":"<p><p>Image analysis is a valuable approach in a wide array of environmental applications. Mapping land cover categories depicted from satellite images enables the monitoring of landscape dynamics. Such a technique plays a key role for land management and predictive ecosystem modelling. Satellite-based mapping of environmental dynamics enables us to define factors that trigger these processes and are crucial for our understanding of Earth system processes. In this study, a reclassification scheme of image analysis was developed for mapping the adjusted categorisation of land cover types using multispectral remote sensing datasets and Geographic Resources Analysis Support System (GRASS) Geographic Information System (GIS) software. The data included four Landsat 8-9 satellite images on 2015, 2019, 2021 and 2023. The sequence of time series was used to determine land cover dynamics. The classification scheme consisting of 17 initial land cover classes was employed by logical workflow to extract 10 key land cover types of the coastal areas of Bab-el-Mandeb Strait, southern Red Sea. Special attention is placed to identify changes in the land categories regarding the thermal saline lake, Lake Assal, with fluctuating salinity and water levels. The methodology included the use of machine learning (ML) image analysis GRASS GIS modules 'r.reclass' for the reclassification of a raster map based on category values. Other modules included 'r.random', 'r.learn.train' and 'r.learn.predict' for gradient boosting ML classifier and 'i.cluster' and 'i.maxlik' for clustering and maximum-likelihood discriminant analysis. To reveal changes in the land cover categories around the Lake of Assal, this study uses ML and reclassification methods for image analysis. Auxiliary modules included 'i.group', 'r.import' and other GRASS GIS scripting techniques applied to Landsat image processing and for the identification of land cover variables. The results of image processing demonstrated annual fluctuations in the landscapes around the saline lake and changes in semi-arid and desert land cover types over Djibouti. The increase in the extent of semi-desert areas and the decrease in natural vegetation proved the processes of desertification of the arid environment in Djibouti caused by climate effects. The developed land cover maps provided information for assessing spatial-temporal changes in Djibouti. The proposed ML-based methodology using GRASS GIS can be employed for integrating techniques of image analysis for land management in other arid regions of Africa.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387187/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Role of Radiomic Analysis and Different Machine Learning Models in Prostate Cancer Diagnosis.","authors":"Eleni Bekou, Ioannis Seimenis, Athanasios Tsochatzis, Karafyllia Tziagkana, Nikolaos Kelekis, Savas Deftereos, Nikolaos Courcoutsakis, Michael I Koukourakis, Efstratios Karavasilis","doi":"10.3390/jimaging11080250","DOIUrl":"10.3390/jimaging11080250","url":null,"abstract":"<p><p>Prostate cancer (PCa) is the most common malignancy in men. Precise grading is crucial for the effective treatment approaches of PCa. Machine learning (ML) applied to biparametric Magnetic Resonance Imaging (bpMRI) radiomics holds promise for improving PCa diagnosis and prognosis. This study investigated the efficiency of seven ML models to diagnose the different PCa grades, changing the input variables. Our studied sample comprised 214 men who underwent bpMRI in different imaging centers. Seven ML algorithms were compared using radiomic features extracted from T2-weighted (T2W) and diffusion-weighted (DWI) MRI, with and without the inclusion of Prostate-Specific Antigen (PSA) values. The performance of the models was evaluated using the receiver operating characteristic curve analysis. The models' performance was strongly dependent on the input parameters. Radiomic features derived from T2WI and DWI, whether used independently or in combination, demonstrated limited clinical utility, with AUC values ranging from 0.703 to 0.807. However, incorporating the PSA index significantly improved the models' efficiency, regardless of lesion location or degree of malignancy, resulting in AUC values ranging from 0.784 to 1.00. There is evidence that ML methods, in combination with radiomic analysis, can contribute to solving differential diagnostic problems of prostate cancers. Also, optimization of the analysis method is critical, according to the results of our study.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387180/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Yarn Color Measurement Method Based on Digital Photography.","authors":"Jinxing Liang, Guanghao Wu, Ke Yang, Jiangxiaotian Ma, Jihao Wang, Hang Luo, Xinrong Hu, Yong Liu","doi":"10.3390/jimaging11080248","DOIUrl":"10.3390/jimaging11080248","url":null,"abstract":"<p><p>To overcome the complexity of yarn color measurement using spectrophotometry with yarn winding techniques and to enhance consistency with human visual perception, a yarn color measurement method based on digital photography is proposed. This study employs a photographic colorimetry system to capture digital images of single yarns. The yarn and background are segmented using the K-means clustering algorithm, and the centerline of the yarn is extracted using a skeletonization algorithm. Spectral reconstruction and colorimetric principles are then applied to calculate the color values of pixels along the centerline. Considering the nonlinear characteristics of human brightness perception, the final yarn color is obtained through a nonlinear texture-adaptive weighted computation. The method is validated through psychophysical experiments using six yarns of different colors and compared with spectrophotometry and five other photographic measurement methods. Results indicate that among the seven yarn color measurement methods, including spectrophotometry, the proposed method-based on centerline extraction and nonlinear texture-adaptive weighting-yields results that more closely align with actual visual perception. Furthermore, among the six photographic measurement methods, the proposed method produces most similar to those obtained using spectrophotometry. This study demonstrates the inconsistency between spectrophotometric measurements and human visual perception of yarn color and provides methodological support for developing visually consistent color measurement methods for textured textiles.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387683/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Innovative Multi-View Strategies for AI-Assisted Breast Cancer Detection in Mammography.","authors":"Beibit Abdikenov, Tomiris Zhaksylyk, Aruzhan Imasheva, Yerzhan Orazayev, Temirlan Karibekov","doi":"10.3390/jimaging11080247","DOIUrl":"10.3390/jimaging11080247","url":null,"abstract":"<p><p>Mammography is the main method for early detection of breast cancer, which is still a major global health concern. However, inter-reader variability and the inherent difficulty of interpreting subtle radiographic features frequently limit the accuracy of diagnosis. A thorough assessment of deep convolutional neural networks (CNNs) for automated mammogram classification is presented in this work, along with the introduction of two innovative multi-view integration techniques: Dual-Branch Ensemble (DBE) and Merged Dual-View (MDV). By setting aside two datasets for out-of-sample testing, we evaluate the generalizability of the model using six different mammography datasets that represent various populations and imaging systems. We compare a number of cutting-edge architectures on both individual and combined datasets, including ResNet, DenseNet, EfficientNet, MobileNet, Vision Transformers, and VGG19. Both MDV and DBE strategies improve classification performance, according to experimental results. VGG19 and DenseNet both obtained high ROC AUC scores of 0.9051 and 0.7960 under the MDV approach. DenseNet demonstrated strong performance in the DBE setting, achieving a ROC AUC of 0.8033, while ResNet50 recorded a ROC AUC of 0.8042. These enhancements demonstrate how beneficial multi-view fusion is for boosting model robustness. The impact of domain shift is further highlighted by generalization tests, which emphasize the need for diverse datasets in training. These results offer practical advice for improving CNN architectures and integration tactics, which will aid in the creation of trustworthy, broadly applicable AI-assisted breast cancer screening tools.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DP-AMF: Depth-Prior-Guided Adaptive Multi-Modal and Global-Local Fusion for Single-View 3D Reconstruction.","authors":"Luoxi Zhang, Chun Xie, Itaru Kitahara","doi":"10.3390/jimaging11070246","DOIUrl":"10.3390/jimaging11070246","url":null,"abstract":"<p><p>Single-view 3D reconstruction remains fundamentally ill-posed, as a single RGB image lacks scale and depth cues, often yielding ambiguous results under occlusion or in texture-poor regions. We propose DP-AMF, a novel Depth-Prior-Guided Adaptive Multi-Modal and Global-Local Fusion framework that integrates high-fidelity depth priors-generated offline by the MARIGOLD diffusion-based estimator and cached to avoid extra training cost-with hierarchical local features from ResNet-32/ResNet-18 and semantic global features from DINO-ViT. A learnable fusion module dynamically adjusts per-channel weights to balance these modalities according to local texture and occlusion, and an implicit signed-distance field decoder reconstructs the final mesh. Extensive experiments on 3D-FRONT and Pix3D demonstrate that DP-AMF reduces Chamfer Distance by 7.64%, increases F-Score by 2.81%, and boosts Normal Consistency by 5.88% compared to strong baselines, while qualitative results show sharper edges and more complete geometry in challenging scenes. DP-AMF achieves these gains without substantially increasing model size or inference time, offering a robust and effective solution for complex single-view reconstruction tasks.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12295410/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Three-Dimensional Ultraviolet Fluorescence Imaging in Cultural Heritage: A Review of Applications in Multi-Material Artworks.","authors":"Luca Lanteri, Claudia Pelosi, Paola Pogliani","doi":"10.3390/jimaging11070245","DOIUrl":"10.3390/jimaging11070245","url":null,"abstract":"<p><p>Ultraviolet-induced fluorescence (UVF) imaging represents a simple but powerful technique in cultural heritage studies. It is a nondestructive and non-invasive imaging technique which can supply useful and relevant information to define the state of conservation of an artifact. UVF imaging also helps to establish the value of an artwork by indicating inpainting, repaired areas, grouting, etc. In general, ultraviolet fluorescence imaging output takes the form of 2D photographs in the case of both paintings and sculptures. For this reason, a few years ago the idea of applying the photogrammetric method to create 3D digital twins under ultraviolet fluorescence was developed to address the requirements of restorers who need daily documentation tools for their work that are simple to use and can display the entire 3D object in a single file. This review explores recent applications of this innovative method of ultraviolet fluorescence imaging with reference to the wider literature on the UVF technique to make evident the practical importance of its application in cultural heritage.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12295401/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}