{"title":"Comparison of Pigment Classification Algorithms on Non-Flat Surfaces using Hyperspectral Imaging","authors":"Dipendra J. Mandal, Marius Pedersen, Sony George, Clotilde Boust","doi":"10.2352/j.imagingsci.technol.2023.67.5.050405","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050405","url":null,"abstract":"Cultural heritage objects, such as paintings, provide valuable insights into the history and culture of human societies. Preserving these objects is of utmost importance, and developing new technologies for their analysis and conservation is crucial. Hyperspectral imaging is a technology with a wide range of applications in cultural heritage, including documentation, material identification, visualization and pigment classification. Pigment classification is crucial for conservators and curators in preserving works of art and acquiring valuable insights into the historical and cultural contexts associated with their origin. Various supervised algorithms, including machine learning, are used to classify pigments based on their spectral signatures. Since many artists employ impasto techniques in their artworks that produce a relief on the surface, i.e., transforming it from a flat object to a 2.5D or 3D, this further makes the classification task difficult. To our knowledge, no previous research has been conducted on pigment classification using hyperspectral imaging concerning an elevated surface. Therefore, this study compares different spectral classification techniques that employ deterministic and stochastic methods, their hybrid combinations, and machine learning models for an elevated mockup to determine whether such topographical variation affects classification accuracy. In cultural heritage, the lack of adequate data is also a significant challenge for using machine learning, particularly in domains where data collection is expensive, time-consuming, or impractical. Data augmentation can help mitigate this challenge by generating new samples similar to the original. We also analyzed the impact of data augmentation techniques on the effectiveness of machine learning models for cultural heritage applications.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134995368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Color Performance Review (CPR): A Color Performance Analyzer for Endoscopy Devices","authors":"Wei-Chung Cheng","doi":"10.2352/j.imagingsci.technol.2023.67.5.050406","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050406","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47084038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Appearance Reproduction Framework for Printed 3D Surfaces","authors":"Tanzima Habib, Phil Green, Peter Nussbaum","doi":"10.2352/j.imagingsci.technol.2023.67.5.050413","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050413","url":null,"abstract":"Bidirectional reflection distribution function (BRDF) is used to measure colour with gloss and surface geometry. In this paper, we aim to provide a practical way of reproducing the appearance of a 3D printed surface in 2.5D printing of any slope angle and colour in a colour-managed workflow as a means for softproofing. To account for the change in colour due to a change in surface slope, we developed a BRDF interpolation algorithm that adjusts the colour of the tristimulus values of the flat target to predict the corresponding colour on a surface with a slope. These adjusted colours are then used by the interpolated BRDF workflow to finally predict the colour parameters for each pixel with a particular slope. The effectiveness of this algorithm in reducing colour differences in 2.5D printing has been successfully demonstrated. We then finally show how all the components, slope colour adjustment method, interpolated BRDF parameters algorithm, and BRDF model encoded profiles using iccMAX are connected to make a practical appearance reproduction framework for 2.5D printing.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135735258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Grey Balance in Cross Media Reproductions","authors":"Gregory High, Peter Nussbaum, Phil Green","doi":"10.2352/j.imagingsci.technol.2023.67.5.050411","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050411","url":null,"abstract":"Grey balance plays an important role in determining the device values needed to reproduce colours which appear achromatic throughout the tonal range. However, complete observer adaptation to the media white rarely occurs, and these designated device values can still appear non-neutral. This poses a problem for cross-media reproductions, where a mismatch in neutral colours is often the most noticeable difference between them. This paper presents two related experiments which investigate a means of gaining better visual agreement between reproductions which have different background colours or media whites. The first quantifies the degree of adjustment (the degree of media relative transform) needed to make an appearance match between grey patches on a white background and on background colours of various hues and colourfulness. It was found that the degree of adjustment was near-linearly related to the luminance of the patch itself, with lighter patches requiring greater adjustment towards the background colour. Neither the hue nor the chroma of the patch’s background had any significant effect on the underlying function. In the second experiment, this concept is applied to pictorial images on paper-coloured backgrounds. Three pixelwise rendering strategies were compared. In side-by-side viewing, the adaptive control of neutrals outperformed the media relative transform in all cases. Even for modest differences in paper colour (ΔEab of 3), images with significant neutral content benefited from the adaptive approach.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135588789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Matrix R-based Visual Response to Optimal Colors and Application to Image Color Gamut Expansion","authors":"Hiroaki Kotera","doi":"10.2352/j.imagingsci.technol.2023.67.5.050414","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050414","url":null,"abstract":"The optimal colors with maximum chroma at constant lightness present an ideal target for the colorants pursuing the ultimate wide color gamut. MacAdam proved that optimal colors are composed of square pulse-shaped spectra with at least two tansition wavelengths λ1 and λ2 whose reflectances change from 0 to 1 or 1 to 0. The optimal color gamut is created from two-types, a convex-type with reflectance 1.0 in w = λ1 ∼ λ2 and 0.0 otherwise, or a concave-type with reflectance 0.0 in w = λ1 ∼ λ2 and 1.0 otherwise. It takes a high computation cost to search the optimal color candidates in high precision and to create the 3D color gamut. In addition, the human visual spectral responses to the optimal color spectra remain unknown. This paper (1) proposes an alternative simple method for creating the optimal color gamut with GBD (Gamujt Boundary Descriptor) technique, and (2) clarifies how human vision spectrally respond to the optimal colors based on Matrix-R theory, for the first time which was unknown until now, and (3) presents centroid-invariant novel color gamut expansion method considering the optimal color as an ideal target and finally apply it to actual low-saturation images to verify its effect.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135639313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Exploration of Specific Associations from Words to Colours","authors":"Yun Chen, Jie Yang, Fan Zhang, Kaida Xiao, Stephen Westland","doi":"10.2352/j.imagingsci.technol.2023.67.5.050401","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050401","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48035398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Locus Filters: Theory and Application","authors":"Rada Deeb, Graham D. Finlayson, Elaheh Daneshvar","doi":"10.2352/j.imagingsci.technol.2023.67.5.050407","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050407","url":null,"abstract":"Recently, a theoretical framework was presented for designing colored filters called Locus Filters. Locus filters are designed so that any Wien-Planckian light, post filtering, is mapped to another Wien-Planckian light. Moreover, it was also shown that only filters designed according to the locus filter framework have this locus-to-locus mapping property. In this paper, we investigate how locus filters work in the real world. We make two main contributions. First, for daylights, we introduce a new daylight locus with respect to which a locus filter always maps a daylight to another daylight (and their correlated color temperature maps in analogy to the Wien-Planckian temperatures). Importantly, we show that our new locus is close to the standard daylight locus (but has a simpler and more elegant formalism). Secondly, we evaluate the extent to which some commercially available light balancing and color correction filters behave like locus filters.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135389547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automotive Paint Defect Classification: Factory-Specific Data Generation using CG Software for Deep-Learning Models","authors":"Kazuki Iwata, Haotong Guo, Ryuichi Yoshida, Yoshihito Souma, Chawan Koopipat, Masato Takahashi, Norimichi Tsumura","doi":"10.2352/j.imagingsci.technol.2023.67.5.050412","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050412","url":null,"abstract":"In recent years, the advances in technology for detecting paint defects on exterior surfaces of automobiles have led to the emergence of research on automatic classification of defect types using deep learning. To develop a deep-learning model capable of identifying defect types, a large dataset consisting of sequential images of paint defects captured during inspection is required. However, generating such a dataset for each factory using actual measurements is expensive. Therefore, we propose a method for generating datasets to train deep-learning models in each factory by simulating images using computer graphics.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135640921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-dimension and Multi-level Information Fusion for Facial Expression Recognition","authors":"Mei Bie, Huan-Yu Xu, Quanle Liu, Yan Gao, Xiangjiu Che","doi":"10.2352/j.imagingsci.technol.2023.67.4.040410","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.4.040410","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42603807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Intelligent Material Handling System for Hybrid Robot based on Visual Navigation","authors":"Xiaorui Zhao, Xue-Fang Chen","doi":"10.2352/j.imagingsci.technol.2023.67.4.040409","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.4.040409","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45088939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}