IS&T International Symposium on Electronic Imaging: Latest Publications

Image Processing: Algorithms and Systems XXI Conference Overview and Papers Program
IS&T International Symposium on Electronic Imaging. Pub Date: 2023-01-16. DOI: 10.2352/ei.2023.35.9.ipas-a09
Abstract: Image Processing: Algorithms and Systems continues the tradition of its predecessor conference, Nonlinear Image Processing and Pattern Analysis, in exploring new image processing algorithms. Specifically, the conference aims to highlight the importance of the interaction between transform-, model-, and learning-based approaches for creating effective algorithms and building modern imaging systems for new and emerging applications. It also echoes the growing call for integrating theoretical research on image processing algorithms with the more applied research on image processing systems.
Citations: 0
Intelligent Robotics and Industrial Applications using Computer Vision 2023 Conference Overview and Papers Program
IS&T International Symposium on Electronic Imaging. Pub Date: 2023-01-16. DOI: 10.2352/ei.2023.35.5.iriacv-a05
Abstract: This conference brings together real-world practitioners and researchers in intelligent robots and computer vision to share recent applications and developments. Topics of interest include the integration of imaging sensors, supporting hardware, computers, and algorithms for intelligent robots, manufacturing inspection, characterization, and/or control. The decreasing cost of computational power and vision sensors has motivated the rapid proliferation of machine vision technology in a variety of industries, including aluminum, automotive, forest products, textiles, glass, steel, metal casting, aircraft, chemicals, food, fishing, agriculture, archaeological products, medical products, and artistic products. Other industries, such as semiconductor and electronics manufacturing, have employed machine vision technology for several decades. Machine vision supporting handling robots is another main topic. With respect to intelligent robotics, another approach is sensor fusion: combining multi-modal sensors in audio, location, image, and video data for signal processing, machine learning, and computer vision, along with other 3D capturing devices. There is a need for accurate, fast, and robust detection of objects and their position in space. Their surface, background, and illumination are uncontrolled, and in most cases the objects of interest sit within a bulk of many others. For both new and existing industrial users of machine vision, there are numerous innovative methods to improve productivity, quality, and compliance with product standards. Several broad problem areas have received significant attention in recent years.
For example, some industries are collecting enormous amounts of image data from product monitoring systems, and new, efficient methods are required to extract insight and perform process diagnostics based on this historical record. Regarding the physical scale of the measurements, microscopy techniques are nearing resolution limits in fields such as semiconductors, biology, and other nano-scale technologies; techniques such as resolution enhancement, model-based methods, and statistical imaging may provide the means to extend these systems beyond current capabilities. Furthermore, obtaining real-time and robust measurements in-line or at-line in harsh industrial environments remains a challenge for machine vision researchers, especially when the manufacturer cannot make significant changes to their facility or process.
Citations: 0
Immersive security personnel training module for active shooter events
IS&T International Symposium on Electronic Imaging. Pub Date: 2023-01-16. DOI: 10.2352/ei.2023.35.12.ervr-217
Sharad Sharma, JeeWoong Park, Brendan Tran Morris
Abstract: There is a need to prepare for emergencies such as active shooter events. Emergency response training drills and exercises are necessary because we cannot predict when such emergencies will occur. There has been progress in understanding human behavior, unpredictability, human motion synthesis, crowd dynamics, and their relationships with active shooter events, but challenges remain. This paper presents an immersive security personnel training module for active shooter events in an indoor building. We have created an experimental platform for conducting active shooter training drills that gives a fully immersive feel of the situation and allows trainees to perform virtual evacuation drills. The training module incorporates four sub-modules: 1) situational assessment, 2) individual officer intervention, 3) team response, and 4) rescue task force. We have developed an immersive virtual reality training module for active shooter events using an Oculus headset for course of action, visualization, and situational awareness, as shown in Fig. 1. The module's goal is to gather information about the emergency situation inside the building. The dispatched officer verifies the active shooter situation in the building. The security personnel should find a safe zone in the building, secure the people in that area, and determine the number and location of persons in possible jeopardy. Upon completing the initial assessment, the first security personnel on scene advise communications and request resources as deemed necessary. This allows them to determine whether to take immediate action alone or with another officer, or to wait until additional resources are available. After gathering this information, the personnel relay it to their officer through a communication device.
Citations: 0
Engineering Reality of Virtual Reality 2023 Conference Overview and Papers Program
IS&T International Symposium on Electronic Imaging. Pub Date: 2023-01-16. DOI: 10.2352/ei.2023.35.12.ervr-a12
Abstract: Virtual and augmented reality systems are evolving. In addition to research, the trend toward content building continues, and practitioners find that technologies and disciplines must be tailored and integrated for specific visualization and interactive applications. This conference serves as a forum where advances and practical advice toward both creative activity and scientific investigation are presented and discussed. Research results can be presented and applications demonstrated.
Citations: 0
Conditional synthetic food image generation
IS&T International Symposium on Electronic Imaging. Pub Date: 2023-01-16. DOI: 10.2352/ei.2023.35.7.image-268
Wenjin Fu, Yue Han, Jiangpeng He, Sriram Baireddy, Mridul Gupta, Fengqing Zhu
Abstract: Generative Adversarial Networks (GANs) have been widely investigated for image synthesis owing to their powerful representation learning ability. In this work, we explore StyleGAN and its application to synthetic food image generation. Despite the impressive performance of GANs for natural image generation, food images exhibit high intra-class diversity and inter-class similarity, leading to overfitting and visual artifacts in synthetic images. Therefore, we aim to explore the capability and improve the performance of GAN methods for food image generation. Specifically, we first choose StyleGAN3 as the baseline method to generate synthetic food images and analyze its performance. Then, we identify two issues that can degrade performance on food images during the training phase: (1) inter-class feature entanglement when training on multiple food classes, and (2) loss of high-resolution detail during image downsampling. To address both issues, we propose to train on one food category at a time to avoid feature entanglement, and to leverage image patches cropped from high-resolution datasets to retain fine details. We evaluate our method on the Food-101 dataset and show improved quality of generated synthetic food images compared with the baseline. Finally, we demonstrate the great potential for improving the performance of downstream tasks, such as food image classification, by including high-quality synthetic training samples in data augmentation.
Citations: 0
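The patch-cropping idea in this abstract, keeping fine detail by training on crops of high-resolution images rather than downsampled full images, can be illustrated with a minimal sketch. This is hypothetical code, not the authors' pipeline; it assumes images stored as NumPy arrays in height-width-channel layout:

```python
import numpy as np

def random_patch(image, patch_size, rng):
    """Crop one square patch at a uniformly random location.

    Cropping patches from a high-resolution image (instead of
    downsampling the whole image) keeps high-frequency texture
    available to the generator during training.
    """
    h, w = image.shape[:2]
    top = rng.integers(0, h - patch_size + 1)
    left = rng.integers(0, w - patch_size + 1)
    return image[top:top + patch_size, left:left + patch_size]

# Stand-in for one high-resolution training image.
rng = np.random.default_rng(0)
hi_res = rng.integers(0, 256, size=(1024, 1024, 3), dtype=np.uint8)
patch = random_patch(hi_res, 256, rng)
```

In a training loop, each epoch would draw fresh random crops, so the generator sees many detail-preserving views of each source image.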
Importance of OSINT/SOCMINT for modern disaster management evaluation - Australia, Haiti, Japan
IS&T International Symposium on Electronic Imaging. Pub Date: 2023-01-16. DOI: 10.2352/ei.2023.35.3.mobmu-354
Nazneen Mansoor, Klaus Schwarz, Reiner Creutzburg
Abstract: Open-source intelligence (OSINT) and social media intelligence (SOCMINT) technologies are becoming increasingly popular with investigative and government agencies, intelligence services, media companies, and corporations. These technologies use sophisticated techniques and specialized tools to efficiently analyze the continually growing sources of information. There is a great worldwide need for training and further education in the OSINT field. This report describes the importance of open-source and social media intelligence for evaluating disaster management. It also gives an overview of government disaster management work in Australia, Haiti, and Japan using various OSINT tools and platforms. Thus, decision support for using OSINT and SOCMINT tools is provided, and the necessary training needs for investigators can be better estimated.
Citations: 0
Am I safe? A preliminary examination of how everyday people interpret covid data visualizations
IS&T International Symposium on Electronic Imaging. Pub Date: 2023-01-16. DOI: 10.2352/ei.2023.35.10.hvei-251
Bernice Rogowitz, Paul Borrel
Abstract: In recent years, international COVID data have been collected by several reputable organizations and made available to the worldwide community. This has produced a wellspring of different visualizations. Many different measures can be selected (e.g., cases, deaths, hospitalizations), and for each measure, designers and policy makers can make a myriad of choices about how to represent the data. Data from individual countries may be presented on linear or log scales; daily, weekly, or cumulative; alone or in the context of other countries; scaled to a common grid or to their own range; raw or per capita; etc. It is well known that the data representation can influence the interpretation of the data. But what visual features of these different representations affect our judgments? To explore this question, we conducted an experiment in which we asked participants to look at time-series data plots and assess how safe they would feel if they were traveling to one of the countries represented, and how confident they were in their judgment. Observers rated 48 visualizations of the same data, rendered differently along six controlled dimensions. Our initial results provide insight into how characteristics of the visual representation affect human judgments of time-series data. We also discuss how these results could affect how public policy and news organizations choose to represent data to the public.
Citations: 0
Improvement of vehicle accident detection using object tracking with U-Net
IS&T International Symposium on Electronic Imaging. Pub Date: 2023-01-16. DOI: 10.2352/ei.2023.35.3.mobmu-363
Kirsnaragavan Arudpiragasam, Taraka Rama Krishna Kanth Kannuri, Klaus Schwarz, Michael Hartmann, Reiner Creutzburg
Abstract: Over the past decade, researchers have suggested many methods for finding anomalies. However, no prior study has applied frame reconstruction combined with object tracking (OT) to detect anomalies. Therefore, this study focuses on road accident detection using a combination of OT and U-Net, with variants such as skip, residual-skip, and attention connections. The U-Net is developed to reconstruct frames from the UCF-Crime dataset. Furthermore, YOLOv4 and DeepSORT are used for object detection and tracking within the frames. Finally, the Mahalanobis distance and the reconstruction error (RCE) are determined using a Kalman filter and the U-Net model.
Citations: 0
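As an illustrative aside (a minimal sketch under assumed inputs, not the paper's implementation), the Mahalanobis distance mentioned in this abstract measures how far a new detection falls from a Kalman-predicted track position, scaled by the predicted uncertainty:

```python
import numpy as np

def mahalanobis_distance(observation, prediction, covariance):
    """Distance between an observed detection and a Kalman-predicted
    position, scaled by the predicted state covariance."""
    diff = observation - prediction
    return float(np.sqrt(diff @ np.linalg.inv(covariance) @ diff))

# Hypothetical values: predicted track position vs. detected position.
pred = np.array([100.0, 50.0])
obs = np.array([112.0, 47.0])
cov = np.array([[25.0, 0.0], [0.0, 25.0]])  # predicted uncertainty

score = mahalanobis_distance(obs, pred, cov)
# A distance large relative to a gating threshold flags motion
# inconsistent with the track's history (e.g., a sudden jump).
is_anomaly = score > 3.0
```

A large distance is one signal of abnormal motion, which the paper combines with the U-Net reconstruction error to decide whether a frame shows an accident.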
Human Vision and Electronic Imaging 2023 Conference Overview and Papers Program
IS&T International Symposium on Electronic Imaging. Pub Date: 2023-01-16. DOI: 10.2352/ei.2023.35.10.hvei-a10
Abstract: The Human Vision and Electronic Imaging conference explores the role of human perception and cognition in the design, analysis, and use of electronic media systems. Over the years, it has brought together researchers, technologists, and artists from all over the world for a rich and lively exchange of ideas. We believe that understanding the human observer is fundamental to the advancement of electronic media systems, and that advances in these systems and applications drive new research into the perception and cognition of the human observer. Every year, we introduce new topics through our Special Sessions, centered on areas driving innovation at the intersection of perception and emerging media technologies.
Citations: 0
Comparison of AR and VR memory palace quality in second-language vocabulary acquisition (Invited)
IS&T International Symposium on Electronic Imaging. Pub Date: 2023-01-16. DOI: 10.2352/ei.2023.35.10.hvei-220
Nicko R. Caluya, Xiaoyang Tian, Damon M. Chandler
Abstract: The method of loci (memory palace technique) is a learning strategy that uses visualizations of spatial environments to enhance memory. One particularly popular use of the method of loci is language learning, in which the method can support long-term memory of vocabulary by allowing users to associate location and other spatial information with particular words and concepts, thus recruiting spatial memory to assist the memory typically associated with language. Augmented reality (AR) and virtual reality (VR) have been shown to potentially provide even better memory enhancement due to their superior visualization abilities. However, a direct comparison of the two techniques in terms of language-learning enhancement has not yet been conducted. In this presentation, we report the results of a study designed to compare AR and VR when using the method of loci to learn vocabulary from a second language.
Citations: 0