{"title":"Color Image Stitching Elimination Method based on Co-occurrence Matrix","authors":"Y. Su","doi":"10.2352/j.imagingsci.technol.2023.67.6.060502","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.6.060502","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46085981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development and Implementation of an Augmented Reality Thunderstorm Simulation for General Aviation Weather Theory Training","authors":"Kexin Wang, Jack Miller, Philippe Meister, Michael C. Dorneich, Lori Brown, Geoff Whitehurst, E. Winer","doi":"10.2352/j.imagingsci.technol.2023.67.6.060402","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.6.060402","url":null,"abstract":". In 2021, there were 1,157 general aviation (GA) accidents, 210 of them fatal, making GA the deadliest civil aviation category. Research shows that accidents are partially caused by ineffective weather theory training. Current weather training in classrooms relies on 2D materials that students often find difficult to map into a real 3D environment. To address these issues, Augmented Reality (AR) was utilized to provide 3D immersive content while running on commodity devices. However, mobile devices have limitations in rendering, camera tracking, and screen size. These limitations make the implementation of mobile device based AR especially challenging for complex visualization of weather phenomena. This paper presents research on how to address the technical challenges of developing and implementing a complex thunderstorm visualization in a marker-based mobile AR application. The development of the system and a technological evaluation of the application’s rendering and tracking performance across different devices is presented.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139305111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital Modeling on Large Kernel Metamaterial Neural Network.","authors":"Quan Liu, Hanyu Zheng, Brandon T Swartz, Ho Hin Lee, Zuhayr Asad, Ivan Kravchenko, Jason G Valentine, Yuankai Huo","doi":"10.2352/j.imagingsci.technol.2023.67.6.060404","DOIUrl":"10.2352/j.imagingsci.technol.2023.67.6.060404","url":null,"abstract":"<p><p>Deep neural networks (DNNs) utilized recently are physically deployed with computational units (e.g., CPUs and GPUs). Such a design might lead to a heavy computational burden, significant latency, and intensive power consumption, which are critical limitations in applications such as the Internet of Things (IoT), edge computing, and the usage of drones. Recent advances in optical computational units (e.g., metamaterial) have shed light on energy-free and light-speed neural networks. However, the digital design of the metamaterial neural network (MNN) is fundamentally limited by its physical limitations, such as precision, noise, and bandwidth during fabrication. Moreover, the unique advantages of MNN's (e.g., light-speed computation) are not fully explored via standard 3×3 convolution kernels. In this paper, we propose a novel large kernel metamaterial neural network (LMNN) that maximizes the digital capacity of the state-of-the-art (SOTA) MNN with model re-parametrization and network compression, while also considering the optical limitation explicitly. The new digital learning scheme can maximize the learning capacity of MNN while modeling the physical restrictions of meta-optic. With the proposed LMNN, the computation cost of the convolutional front-end can be offloaded into fabricated optical hardware. The experimental results on two publicly available datasets demonstrate that the optimized hybrid design improved classification accuracy while reducing computational latency. The development of the proposed LMNN is a promising step towards the ultimate goal of energy-free and light-speed AI.</p>","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10970463/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140305875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acquisition of Color Reproduction Technique based on Deep Learning Using a Database of Color-converted Images in the Printing Industry","authors":"Ikumi Hirose, Ryosuke Yabe, Toshiyuki Inoue, Koushi Hashimoto, Yoshikatsu Arizono, Kazunori Harada, Vinh-Tiep Nguyen, Thanh Duc Ngo, Duy-Dinh Le, Norimichi Tsumura","doi":"10.2352/j.imagingsci.technol.2023.67.5.050402","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050402","url":null,"abstract":"Color-space conversion technology is important to output accurate colors on different devices. In particular, CMYK (Cyan, Magenta, Yellow and Key plate) used by printers has a limited range of representable colors compared with RGB (Red, Green and Blue) used for normal images. This leads to the problem of loss of color information when printing. When an RGB image captured by a camera is printed as is, colors outside the CMYK gamut are degraded, and colors that differ significantly from the actual image may be output. Therefore, printers and other companies manually correct color tones before printing. This process is based on empirical know-how and human sensitivity and has not yet been automated by machines. Therefore, this study aims to automate color correction in color-space conversion from RGB to CMYK. Specifically, we use machine learning, utilising a large color-conversion database owned by printing companies, which has been cultivated through past correction work, to learn the color-correction techniques of skilled workers. This reduces the burden on the part of the work that has been done manually, and leads to increased efficiency. In addition, the machine can compensate for some of the empirical know-how, which is expected to simplify the transfer of skills. Quantitative and qualitative evaluation results show the effectiveness of the proposed method for automatic color correction.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135298177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Characterization of Wood Materials Using Perception-Related Image Statistics","authors":"Jiří Filip, Veronika Vilímovská","doi":"10.2352/j.imagingsci.technol.2023.67.5.050408","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050408","url":null,"abstract":"An efficient computational characterization of real-world materials is one of the challenges in image understanding. An automatic assessment of materials, with similar performance as human observer, usually relies on complicated image filtering derived from models of human perception. However, these models become too complicated when a real material is observed in the form of dynamic stimuli. This study tackles the challenge from the other side. First, we collected human ratings of the most common visual attributes for videos of wood samples and analyzed their relationship to selected image statistics. In our experiments on a set of sixty wood samples, we have found that such image statistics can perform surprisingly well in the discrimination of individual samples with reasonable correlation to human ratings. We have also shown that these statistics can be also effective in the discrimination of images of the same material taken under different illumination and viewing conditions.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135433955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Hyperspectral Data Processing using File Fragmentation","authors":"C. Caruncho, P. J. Pardo, H. Cwierz","doi":"10.2352/j.imagingsci.technol.2023.67.5.050403","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050403","url":null,"abstract":"In this article, we present a method for processing hyperspectral data in an easy and quick manner. We explain how we split the hyperspectral cube in different sections to be processed using fewer resources. We describe the processing, which includes extraction of the raw data along with white and black calibration data, calibration of the data and application of desired light source, color space, and gamma transformation. We then present a built-in software, including an easy interactive Graphical User Interface (GUI) that will allow fellow researchers to process hyperspectral images in a simple fashion.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135298181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Attention Guided SKFHDRNet For HDR Video Reconstruction","authors":"Ehsan Ullah, Marius Pedersen, Kjartan Sebastian Waaseth, Bernt-Erik Baltzersen","doi":"10.2352/j.imagingsci.technol.2023.67.5.050409","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050409","url":null,"abstract":"We propose a three stage learning-based approach for High Dynamic Range (HDR) video reconstruction with alternating exposures. The first stage performs alignment of neighboring frames to the reference frame by estimating the flows between them, the second stage is composed of multi-attention modules and a pyramid cascading deformable alignment module to refine aligned features, and the final stage merges and estimates the final HDR scene using a series of dilated selective kernel fusion residual dense blocks (DSKFRDBs) to fill the over-exposed regions with details. The proposed model variants give HDR-VDP-2 values on a dynamic dataset of 79.12, 78.49, and 78.89 respectively, compared to Chen et al. [“HDR video reconstruction: A coarse-to-fine network and a real-world benchmark dataset,” Proc. IEEE/CVF Int’l. Conf. on Computer Vision (IEEE, Piscataway, NJ, 2021), pp. 2502–2511] 79.09, Yan et al. [“Attention-guided network for ghost-free high dynamic range imaging,” Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition (IEEE, Piscataway, NJ, 2019), pp. 1751–1760] 78.69, Kalantari et al. [“Patch-based high dynamic range video,” ACM Trans. Graph. 32 (2013) 202–1] 70.36, and Kalantari et al. [“Deep hdr video from sequences with alternating exposures,” Computer Graphics Forum (Wiley Online Library, 2019), Vol. 38, pp. 193–205] 77.91. We achieve better detail reproduction and alignment in over-exposed regions compared to state-of-the-art methods and with a smaller number of parameters.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135640659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Color Correction of Mars Images: A Study of Illumination Discrimination Along Solight Locus","authors":"Emilie Robert, Che Shen, Magali Estribeau, Edoardo Cucchetti, Mark Fairchild","doi":"10.2352/j.imagingsci.technol.2023.67.5.050410","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050410","url":null,"abstract":"Geologists consider it crucial to work on faithful images of Mars. However, no color correction is yet done systematically on those images, especially due to the poor knowledge of the local martian weather. The weather is highly fluctuating and with the low gravity of the planet, it tends to set the conditions for varying amounts of dust in the atmosphere and ground illumination variations as well. Low discrimination of light variations by the Human Visual System is explained by Chromatic Adaptation (CA). Color images processing therefore often accounts for a step related to CA. This study investigates whether this step has to be applied to Mars images as well and is done through an illumination discrimination task performed on 15 observers for stimuli along daylight locus and solight locus (lights of Mars planet) generated through a 7-LEDs lighting system. This study gives outputs in agreement with other on daylight locus while showing low differences between results under daylight and solight.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135736478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualizing Perceptual Differences in White Color Constancy","authors":"Marco Buzzelli","doi":"10.2352/j.imagingsci.technol.2023.67.5.050404","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050404","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47383306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From the Editor","authors":"Chunghui Kuo","doi":"10.2352/j.imagingsci.technol.2023.67.5.050101","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050101","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135735519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}