Journal of Imaging: Latest Publications

A Comparative Survey of Vision Transformers for Feature Extraction in Texture Analysis.
IF 2.7
Journal of Imaging Pub Date: 2025-09-05 DOI: 10.3390/jimaging11090304
Leonardo Scabini, Andre Sacilotti, Kallil M Zielinski, Lucas C Ribas, Bernard De Baets, Odemir M Bruno
Texture, a significant visual attribute in images, plays an important role in many pattern recognition tasks. While Convolutional Neural Networks (CNNs) have been among the most effective methods for texture analysis, alternative architectures such as Vision Transformers (ViTs) have recently demonstrated superior performance on a range of visual recognition problems. However, the suitability of ViTs for texture recognition remains underexplored. In this work, we investigate the capabilities and limitations of ViTs for texture recognition by analyzing 25 different ViT variants as feature extractors and comparing them to CNN-based and hand-engineered approaches. Our evaluation encompasses both accuracy and efficiency, aiming to assess the trade-offs involved in applying ViTs to texture analysis. Our results indicate that ViTs generally outperform CNN-based and hand-engineered models, particularly when using strong pre-training and in-the-wild texture datasets. Notably, BeiTv2-B/16 achieves the highest average accuracy (85.7%), followed by ViT-B/16-DINO (84.1%) and Swin-B (80.8%), outperforming the ResNet50 baseline (75.5%) and the hand-engineered baseline (73.4%). As a lightweight alternative, EfficientFormer-L3 attains a competitive average accuracy of 78.9%. In terms of efficiency, although ViT-B and BeiT(v2) have a higher number of GFLOPs and parameters, they achieve significantly faster feature extraction on GPUs compared to ResNet50. These findings highlight the potential of ViTs as a powerful tool for texture analysis while also pointing to areas for future exploration, such as efficiency improvements and domain-specific adaptations.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12470584/pdf/
Citations: 0
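The evaluation protocol this survey describes, using a pre-trained ViT as a frozen feature extractor with a simple classifier on top, follows a common pattern. Below is a minimal sketch of that pattern assuming the `timm` and `scikit-learn` packages; the model name is one of timm's standard ViT checkpoints, and the random tensors stand in for a real texture dataset. This is illustrative only, not the paper's exact benchmark setup.

```python
# Minimal sketch: a pre-trained ViT as a frozen feature extractor for
# texture classification, evaluated with a linear probe.
import timm
import torch
from sklearn.linear_model import LogisticRegression

device = "cuda" if torch.cuda.is_available() else "cpu"

# num_classes=0 strips the classification head, so the backbone returns
# pooled features instead of logits. DINO or BeiTv2 checkpoints from timm
# could be swapped in here to mirror the survey's stronger variants.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
model.eval().to(device)

@torch.no_grad()
def extract_features(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 224, 224), already normalized for the backbone."""
    return model(images.to(device)).cpu()

# Hypothetical tensors standing in for a texture dataset split.
train_x, train_y = torch.randn(64, 3, 224, 224), torch.randint(0, 5, (64,))
test_x, test_y = torch.randn(16, 3, 224, 224), torch.randint(0, 5, (16,))

clf = LogisticRegression(max_iter=1000)
clf.fit(extract_features(train_x).numpy(), train_y.numpy())
print("linear-probe accuracy:", clf.score(extract_features(test_x).numpy(), test_y.numpy()))
```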
Correction: Dou et al. Performance Calibration of the Wavefront Sensor's EMCCD Detector for the Cool Planets Imaging Coronagraph Aboard CSST. J. Imaging 2025, 11, 203.
IF 2.7
Journal of Imaging Pub Date: 2025-09-05 DOI: 10.3390/jimaging11090303
Jiangpei Dou, Bingli Niu, Gang Zhao, Xi Zhang, Gang Wang, Baoning Yuan, Di Wang, Xingguang Qian
The authors would like to make the following corrections to the published paper [...].
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12471257/pdf/
Citations: 0
AI-Generated Patient-Friendly MRI Fistula Summaries: A Pilot Randomised Study.
IF 2.7
Journal of Imaging Pub Date: 2025-09-04 DOI: 10.3390/jimaging11090302
Easan Anand, Itai Ghersin, Gita Lingam, Theo Pelly, Daniel Singer, Chris Tomlinson, Robin E J Munro, Rachel Capstick, Anna Antoniou, Ailsa L Hart, Phil Tozer, Kapil Sahnan, Phillip Lung
Perianal fistulising Crohn's disease (pfCD) affects 1 in 5 Crohn's patients and requires frequent MRI monitoring. Standard radiology reports are written for clinicians using technical language often inaccessible to patients, which can cause anxiety and hinder engagement. This study evaluates the feasibility and safety of AI-generated patient-friendly MRI fistula summaries to improve patient understanding and shared decision-making. MRI fistula reports spanning healed to complex disease were identified and used to generate AI patient-friendly summaries via ChatGPT-4. Six de-identified MRI reports and corresponding AI summaries were assessed by clinicians for hallucinations and readability (Flesch-Kincaid score). Sixteen patients with perianal fistulas were randomized to review either AI summaries or original reports and rated them on readability, comprehensibility, utility, quality, follow-up questions, and trustworthiness using Likert scales. Patients rated AI summaries significantly higher in readability (median 5 vs. 2, p = 0.011), comprehensibility (5 vs. 2, p = 0.007), utility (5 vs. 3, p = 0.014), and overall quality (4.5 vs. 4, p = 0.013), with fewer follow-up questions (3 vs. 4, p = 0.018). Clinicians found AI summaries more readable (mean Flesch-Kincaid 54.6 vs. 32.2, p = 0.005) and free of hallucinations. No clinically significant inaccuracies were identified. AI-generated patient-friendly MRI summaries have potential to enhance patient communication and clinical workflow in pfCD. Larger studies are needed to validate clinical utility, hallucination rates, and acceptability.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12471112/pdf/
Citations: 0
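The readability metric reported above is the Flesch reading-ease score (higher means easier to read). For reference, a minimal sketch of the standard formula follows; the syllable counter is a crude vowel-group heuristic, and a library such as `textstat` would be more robust in practice.

```python
# Flesch reading ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as groups of consecutive vowels (crude heuristic).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)

print(flesch_reading_ease("The fistula tract is healing well. No abscess is seen."))
```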
Compton Camera X-Ray Fluorescence Imaging Design and Image Reconstruction Algorithm Optimization.
IF 2.7
Journal of Imaging Pub Date: 2025-09-03 DOI: 10.3390/jimaging11090300
Shunmei Lu, Kexin Peng, Peng Feng, Cheng Lin, Qingqing Geng, Junrui Zhang
Traditional X-ray fluorescence computed tomography (XFCT) suffers from issues such as low photon collection efficiency, slow data acquisition, severe noise interference, and poor imaging quality due to the limitations of mechanical collimation. This study proposes to design an X-ray fluorescence imaging system based on bilateral Compton cameras and to develop an optimized reconstruction algorithm to achieve high-quality 2D/3D imaging of low-concentration samples (0.2% gold nanoparticles). A system equipped with bilateral Compton cameras was designed, replacing mechanical collimation with "electronic collimation". The traditional LM-MLEM algorithm was optimized through improvements in data preprocessing, system matrix construction, iterative processes, and post-processing, integrating methods such as Total Variation (TV) regularization (anisotropic TV included), filtering, wavelet-domain constraints, and isosurface rendering. Successful 2D and 3D reconstruction of 0.2% gold nanoparticles was achieved. Compared with traditional algorithms, improvements were observed in convergence, stability, speed, quality, and accuracy. The system exhibited high detection efficiency, angular resolution, and energy resolution. The Compton camera-based XFCT overcomes the limitations of traditional methods; the optimized algorithm enables low-noise imaging at ultra-low concentrations and has potential applications in early cancer diagnosis and material analysis.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12470824/pdf/
Citations: 0
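To make the MLEM-plus-TV idea concrete, here is a minimal sketch of a classical MLEM loop with a total-variation smoothing step interleaved between multiplicative updates. The system matrix, measurements, TV weight, and the 1-D "image" are toy stand-ins, not the paper's list-mode Compton-camera model.

```python
# MLEM update: x <- x * (A^T (y / (A x))) / (A^T 1), followed by a small
# gradient step on the (anisotropic) TV penalty to suppress noise.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_meas = 64, 256
A = rng.random((n_meas, n_pix))                  # toy nonnegative system matrix
x_true = rng.random(n_pix) * 50
y = rng.poisson(A @ x_true).astype(float)        # Poisson-noisy measurements

sens = A.sum(axis=0)                             # sensitivity image A^T 1
x = np.ones(n_pix)
for _ in range(50):
    ratio = y / np.clip(A @ x, 1e-12, None)      # data-fidelity ratio
    x *= (A.T @ ratio) / np.clip(sens, 1e-12, None)
    # Subgradient of TV = sum_i |x_i - x_{i-1}|, applied as a small step.
    grad_tv = np.sign(np.diff(x, prepend=x[:1])) - np.sign(np.diff(x, append=x[-1:]))
    x = np.clip(x - 0.01 * grad_tv, 0, None)     # keep nonnegativity

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```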
AnomNet: A Dual-Stage Centroid Optimization Framework for Unsupervised Anomaly Detection.
IF 2.7
Journal of Imaging Pub Date: 2025-09-03 DOI: 10.3390/jimaging11090301
Yuan Gao, Yu Wang, Xiaoguang Tu, Jiaqing Shen
Anomaly detection plays a vital role in ensuring product quality and operational safety across various industrial applications, from manufacturing to infrastructure monitoring. However, current methods often struggle with challenges such as limited generalization to complex multimodal anomalies, poor adaptation to domain-specific patterns, and reduced feature discriminability due to domain gaps between pre-trained models and industrial data. To address these issues, we propose AnomNet, a novel deep anomaly detection framework that integrates a lightweight feature adapter module to bridge domain discrepancies and enhance multi-scale feature discriminability from pre-trained backbones. AnomNet is trained using a dual-stage centroid learning strategy: the first stage employs separation and entropy regularization losses to stabilize and optimize the centroid representation of normal samples; the second stage introduces a centroid-based contrastive learning mechanism to refine decision boundaries by adaptively managing inter- and intra-class feature relationships. The experimental results on the MVTec AD dataset demonstrate the superior performance of AnomNet, achieving a 99.5% image-level AUROC and 98.3% pixel-level AUROC, underscoring its effectiveness and robustness for anomaly detection and localization in industrial environments.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12470577/pdf/
Citations: 0
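The core idea of centroid-based anomaly detection is that normal-sample features are pulled toward a learned centroid and test samples are scored by their distance to it. A minimal sketch of that general pattern follows (a Deep SVDD-style compactness loss); AnomNet's actual separation, entropy, and contrastive losses are more elaborate, and the feature tensors here are synthetic stand-ins.

```python
import torch

def compactness_loss(features: torch.Tensor, centroid: torch.Tensor) -> torch.Tensor:
    # Mean squared distance of normal-sample features to the centroid.
    return ((features - centroid) ** 2).sum(dim=1).mean()

def anomaly_score(features: torch.Tensor, centroid: torch.Tensor) -> torch.Tensor:
    # Larger distance to the normal centroid => more anomalous.
    return ((features - centroid) ** 2).sum(dim=1).sqrt()

feat_dim = 128
centroid = torch.zeros(feat_dim)
normal_feats = torch.randn(32, feat_dim) * 0.1           # tight normal cluster
test_feats = torch.cat([torch.randn(4, feat_dim) * 0.1,  # normal-like samples
                        torch.randn(4, feat_dim) + 3.0]) # anomalous samples

print("compactness loss:", compactness_loss(normal_feats, centroid).item())
print("scores:", anomaly_score(test_feats, centroid))
```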
Hybrid-Recursive-Refinement Network for Camouflaged Object Detection.
IF 2.7
Journal of Imaging Pub Date: 2025-09-02 DOI: 10.3390/jimaging11090299
Hailong Chen, Xinyi Wang, Haipeng Jin
Camouflaged object detection (COD) seeks to precisely detect and delineate objects that are concealed within complex and ambiguous backgrounds. However, due to subtle texture variations and semantic ambiguity, it remains a highly challenging task. Existing methods that rely solely on either convolutional neural network (CNN) or Transformer architectures often suffer from incomplete feature representations and the loss of boundary details. To address the aforementioned challenges, we propose an innovative hybrid architecture that synergistically leverages the strengths of CNNs and Transformers. In particular, we devise a Hybrid Feature Fusion Module (HFFM) that harmonizes hierarchical features extracted from CNN and Transformer pathways, ultimately boosting the representational quality of the combined features. Furthermore, we design a Combined Recursive Decoder (CRD) that adaptively aggregates hierarchical features through recursive pooling/upsampling operators and stage-wise mask-guided refinement, enabling precise structural detail capture across multiple scales. In addition, we propose a Foreground-Background Selection (FBS) module, which alternates attention between foreground objects and background boundary regions, progressively refining object contours while suppressing background interference. Evaluations on four widely used public COD datasets, CHAMELEON, CAMO, COD10K, and NC4K, demonstrate that our method achieves state-of-the-art performance.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12470956/pdf/
Citations: 0
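The basic operation behind dual-pathway fusion modules like the HFFM is projecting CNN and Transformer feature maps to a common channel width, aligning their spatial sizes, and fusing them. Below is a minimal generic sketch of that pattern; the module name, channel sizes, and fusion choice (concatenate then 1x1 conv) are illustrative assumptions, not the paper's HFFM design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFusion(nn.Module):
    """Fuse one CNN feature map with one Transformer feature map."""
    def __init__(self, cnn_ch: int, vit_ch: int, out_ch: int):
        super().__init__()
        self.proj_cnn = nn.Conv2d(cnn_ch, out_ch, kernel_size=1)
        self.proj_vit = nn.Conv2d(vit_ch, out_ch, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * out_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_cnn: torch.Tensor, f_vit: torch.Tensor) -> torch.Tensor:
        # Resize the Transformer map to the CNN map's spatial resolution.
        f_vit = F.interpolate(f_vit, size=f_cnn.shape[-2:],
                              mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([self.proj_cnn(f_cnn), self.proj_vit(f_vit)], dim=1))

fusion = SimpleFusion(cnn_ch=256, vit_ch=768, out_ch=128)
out = fusion(torch.randn(1, 256, 44, 44), torch.randn(1, 768, 22, 22))
print(out.shape)  # torch.Size([1, 128, 44, 44])
```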
From Brain Lobes to Neurons: Navigating the Brain Using Advanced 3D Modeling and Visualization Tools.
IF 2.7
Journal of Imaging Pub Date: 2025-09-01 DOI: 10.3390/jimaging11090298
Mohamed Rowaizak, Ahmad Farhat, Reem Khalil
Neuroscience education must convey 3D structure with clarity and accuracy. Traditional 2D renderings are limited as they lose depth information and hinder spatial understanding. High-resolution resources now exist, yet many are difficult to use in the classroom. Therefore, we developed an educational brain video that moves from gross to microanatomy using MRI-based models and the published literature. The pipeline used Fiji for preprocessing, MeshLab for mesh cleanup, Rhino 6 for targeted fixes, Houdini FX for materials, lighting, and renders, and Cinema4D for final refinement of the video. We had our brain models validated by two neuroscientists for educational fidelity. We tested the video in a class with 96 undergraduates randomized to video and lecture or lecture only. Students completed the same pretest and posttest questions. Student feedback revealed that comprehension and motivation to learn increased significantly in the group that watched the video, suggesting its potential as a useful supplement to traditional lectures. A short, well-produced 3D video can supplement lectures and improve learning in this setting. We share software versions and key parameters to support reuse.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12470745/pdf/
Citations: 0
Longitudinal Ultrasound Monitoring of Peripheral Muscle Loss in Neurocritical Patients.
IF 2.7
Journal of Imaging Pub Date: 2025-09-01 DOI: 10.3390/jimaging11090297
Talita Santos de Arruda, Rayssa Bruna Holanda Lima, Karla Luciana Magnani Seki, Vanderlei Porto Pinto, Rodrigo Koch, Ana Carolina Dos Santos Demarchi, Gustavo Christofoletti
Ultrasound has become an important tool that offers clinical and practical benefits in the intensive care unit (ICU). Its real-time imaging provides immediate information to support prognostic evaluation and clinical decision-making. This study used ultrasound assessment to investigate the impact of hospitalization on muscle properties in neurocritical patients and analyze the relationship between peripheral muscle changes and motor sequelae. A total of 43 neurocritical patients admitted to the ICU were included. The inclusion criteria were patients with acute brain injuries with or without motor sequelae. Muscle ultrasonography assessments were performed during ICU admission and hospital discharge. Measurements included muscle thickness, cross-sectional area, and echogenicity of the biceps brachii, quadriceps femoris, and rectus femoris. Statistical analyses were used to compare muscle properties between time points (hospital admission vs. discharge) and between groups (patients with vs. without motor sequelae). Significance was set at 5%. Hospitalization had a significant effect on muscle thickness, cross-sectional area, and echogenicity in patients with and without motor sequelae (p < 0.05, effect sizes between 0.104 and 0.475). Patients with motor sequelae exhibited greater alterations in muscle echogenicity than those without (p < 0.05, effect sizes between 0.182 and 0.211). Changes in muscle thickness and cross-sectional area were similar between the groups (p > 0.05). Neurocritical patients experience significant muscle deterioration during hospitalization. Future studies should explore why echogenicity is more markedly affected than muscle thickness and cross-sectional area in patients with motor sequelae compared to those without.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12470513/pdf/
Citations: 0
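The admission-versus-discharge analysis above is a repeated-measures comparison. As a minimal sketch of that pattern, the snippet below runs a paired t-test on synthetic thickness measurements and reports Cohen's d for paired samples; the arrays are stand-ins, and the paper's own effect-size metric and models may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
admission = rng.normal(2.0, 0.3, 40)               # e.g., muscle thickness (cm)
discharge = admission - rng.normal(0.25, 0.1, 40)  # simulated loss during stay

t_stat, p_value = stats.ttest_rel(admission, discharge)
diff = admission - discharge
cohens_d = diff.mean() / diff.std(ddof=1)          # paired-samples effect size
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```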
Automatic Algorithm-Aided Segmentation of Retinal Nerve Fibers Using Fundus Photographs.
IF 2.7
Journal of Imaging Pub Date: 2025-08-28 DOI: 10.3390/jimaging11090294
Diego Luján Villarreal
This work presents an image processing algorithm for the segmentation of the personalized mapping of retinal nerve fiber layer (RNFL) bundle trajectories in the human retina. To segment RNFL bundles, preprocessing steps were used for noise reduction and illumination correction. Blood vessels were removed. The image was fed to a maximum-minimum modulation algorithm to isolate retinal nerve fiber (RNF) segments. A modified Garway-Heath map categorizes RNF orientation, assuming designated sets of orientation angles for aligning RNF direction. Bezier curves fit RNFs from the center of the optic disk (OD) to their corresponding end. Fundus images from five different databases (n = 300) were tested, with 277 healthy normal subjects and 33 classified as diabetic without any sign of diabetic retinopathy. The algorithm successfully traced fiber trajectories per fundus across all regions identified by the Garway-Heath map. The resulting trace images were compared to the Jansonius map, reaching an average efficiency of 97.44% and working well with those of low resolution. The average mean difference in orientation angles of the included images was 11.01 ± 1.25 and the average RMSE was 13.82 ± 1.55. A 24-2 visual field (VF) grid pattern was overlaid onto the fundus to relate the VF test points to the intersection of RNFL bundles and their entry angles into the OD. The mean standard deviation (95% limit) obtained 13.5° (median 14.01°), ranging from less than 1° to 28.4° for 50 out of 52 VF locations. The influence of optic parameters was explored using multiple linear regression. Average angle trajectories in the papillomacular region were significantly influenced (p < 0.00001) by the latitudinal optic disk position and disk-fovea angle. Given the basic biometric ground truth data (only fovea and OD centers) that is publicly accessible, the algorithm can be customized to individual eyes and distinguish fibers with accuracy by considering unique anatomical features.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12470814/pdf/
Citations: 0
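The Bezier-fitting step described above reduces to linear least squares once the curve is written in the Bernstein basis. A minimal sketch follows, fitting a cubic Bezier to a synthetic fiber trace with uniform parameterization; the trace points are stand-ins, and a real pipeline would use points extracted from the fundus image with, e.g., chord-length parameterization.

```python
import numpy as np

def bernstein_matrix(t: np.ndarray) -> np.ndarray:
    # Cubic Bernstein basis evaluated at parameters t in [0, 1]; shape (len(t), 4).
    return np.stack([(1 - t) ** 3,
                     3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t),
                     t ** 3], axis=1)

# Synthetic trace from the optic-disk center outward, one (x, y) pair per row.
t = np.linspace(0.0, 1.0, 50)
trace = np.stack([t * 100.0, 20.0 * np.sin(np.pi * t)], axis=1)

B = bernstein_matrix(t)
ctrl, *_ = np.linalg.lstsq(B, trace, rcond=None)  # 4 control points, shape (4, 2)
fitted = B @ ctrl
print("control points:\n", ctrl)
print("RMSE:", np.sqrt(((fitted - trace) ** 2).mean()))
```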
Roughness Estimation and Image Rendering for Glossy Object Surfaces.
IF 2.7
Journal of Imaging Pub Date: 2025-08-28 DOI: 10.3390/jimaging11090296
Shoji Tominaga, Motonori Doi, Hideaki Sakai
We study the relationship between the physical surface roughness of the glossy surfaces of dielectric objects and the roughness parameter in image rendering. The former refers to a measure of the microscopic surface structure of a real object's surface. The latter is a model parameter used to produce the realistic appearance of objects. The target dielectric objects for analyzing surface roughness are handcrafted lacquer plates with controlled surface glossiness, as well as several plastics and lacquer products from everyday life. We first define the physical surface roughness as the standard deviation of the surface normal, and provide the computational procedure. We use a laser scanning system to obtain precise surface height information at tiny flat areas of a surface. Next, a method is developed for estimating the surface roughness parameter based on images taken of the surface with a camera. With a simple setup for observing a glossy flat surface, we estimate the roughness parameter by fitting the Beckmann function to the image intensity distribution in the observed HDR image using the least squares method. A linear relationship is then found between the measurement-based surface roughness and the image-based surface roughness. We present applications to glossy objects with curved surfaces.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12470256/pdf/
Citations: 0
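The least-squares fit described above can be sketched compactly: the Beckmann distribution over the microfacet half-angle has one roughness parameter m, which can be estimated from an angular intensity profile with a standard curve fit. In the sketch below the profile is synthetic and a free scale factor is added; a real pipeline would extract the profile from the HDR image, as the paper does.

```python
# Beckmann distribution: D(theta) = exp(-tan^2(theta)/m^2) / (pi m^2 cos^4(theta)).
import numpy as np
from scipy.optimize import curve_fit

def beckmann(theta: np.ndarray, m: float, scale: float) -> np.ndarray:
    c = np.cos(theta)
    return scale * np.exp(-(np.tan(theta) / m) ** 2) / (np.pi * m ** 2 * c ** 4)

# Synthetic angular intensity profile with mild noise (theta in radians).
theta = np.linspace(0.01, 0.6, 100)
true_m = 0.15
profile = beckmann(theta, true_m, 1.0) + np.random.default_rng(0).normal(0, 0.05, theta.size)

(m_hat, scale_hat), _ = curve_fit(beckmann, theta, profile, p0=[0.2, 1.0])
print(f"estimated roughness m = {m_hat:.3f} (true {true_m})")
```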