IS&T International Symposium on Electronic Imaging: Latest Publications

Practical OSINT investigation in Twitter utilizing AI-based aggressiveness analysis
IS&T International Symposium on Electronic Imaging Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.3.mobmu-355
Artem Sklyar, Klaus Schwarz, Reiner Creutzburg
{"title":"Practical OSINT investigation in Twitter utilizing AI-based aggressiveness analysis","authors":"Artem Sklyar, Klaus Schwarz, Reiner Creutzburg","doi":"10.2352/ei.2023.35.3.mobmu-355","DOIUrl":"https://doi.org/10.2352/ei.2023.35.3.mobmu-355","url":null,"abstract":"Open-source intelligence is gaining popularity due to the rapid development of social networks. There is more and more information in the public domain. One of the most popular social networks is Twitter. It was chosen to analyze the dependence of changes in the number of likes, reposts, quotes and retweets on the aggressiveness of the post text for a separate profile, as this information can be important not only for the owner of the channel in the social network, but also for other studies that in some way influence user accounts and their behavior in the social network. Furthermore, this work includes a detailed analysis and evaluation of the Tweety library capabilities and situations in which it can be effectively applied. Lastly, this work includes the creation and description of a compiled neural network whose purpose is to predict changes in the number of likes, reposts, quotes, and retweets from the aggressiveness of the post text for a separate profile.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
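The entry above describes a neural network that predicts engagement (likes, reposts, quotes, and retweets) from the aggressiveness of a post's text. The paper's model and its Tweety-based data collection are not reproduced here; the following is a minimal, self-contained sketch that substitutes a TF-IDF text representation and a small scikit-learn MLP regressor, with invented placeholder data, purely to illustrate the text-to-engagement regression setup.

```python
# Illustrative sketch only: the paper's "compiled neural network" and Tweety-based
# data collection are not public, so this substitutes TF-IDF features and a small
# scikit-learn MLP regressor. All tweet texts and engagement counts are made up.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

# Placeholder training data: post text -> (likes, reposts, quotes, retweets)
posts = [
    "Great day, thanks everyone for the support!",
    "This is absolutely unacceptable, what a disgrace.",
    "New blog post is up, link in bio.",
    "Stop spreading lies, you are all idiots.",
]
engagement = np.array([
    [120, 10, 2, 15],
    [300, 80, 40, 90],
    [60, 5, 1, 8],
    [450, 150, 70, 200],
], dtype=float)

# Text features feed a multi-output regression of the four engagement counts.
model = make_pipeline(
    TfidfVectorizer(),
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(posts, engagement)

print(model.predict(["You people never learn, this is pathetic."]))
```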
3D Imaging and Applications 2023 Conference Overview and Papers Program
IS&T International Symposium on Electronic Imaging Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.17.3dia-a17
{"title":"3D Imaging and Applications 2023 Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.17.3dia-a17","DOIUrl":"https://doi.org/10.2352/ei.2023.35.17.3dia-a17","url":null,"abstract":"Abstract Scientific and technological advances during the last decade in the fields of image acquisition, data processing, telecommunications, and computer graphics have contributed to the emergence of new multimedia, especially 3D digital data. Modern 3D imaging technologies allow for the acquisition of 3D and 4D (3D video) data at higher speeds, resolutions, and accuracies. With the ability to capture increasingly complex 3D/4D information, advancements have also been made in the areas of 3D data processing (e.g., filtering, reconstruction, compression). As such, 3D/4D technologies are now being used in a large variety of applications, such as medicine, forensic science, cultural heritage, manufacturing, autonomous vehicles, security, and bioinformatics. Further, with mixed reality (AR, VR, XR), 3D/4D technologies may also change the ways we work, play, and communicate with each other every day.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Computer Vision and Image Analysis of Art 2023 Conference Overview and Papers Program
IS&T International Symposium on Electronic Imaging Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.13.cvaa-a13
{"title":"Computer Vision and Image Analysis of Art 2023 Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.13.cvaa-a13","DOIUrl":"https://doi.org/10.2352/ei.2023.35.13.cvaa-a13","url":null,"abstract":"Abstract This conference on computer image analysis in the study of art presents leading research in the application of image analysis, computer vision, and pattern recognition to problems of interest to art historians, curators and conservators. A number of recent questions and controversies have highlighted the value of rigorous image analysis in the service of the analysis of art, particularly painting. Consider these examples: the fractal image analysis for the authentication of drip paintings possibly by Jackson Pollock; sophisticated perspective, shading and form analysis to address claims that early Renaissance masters such as Jan van Eyck or Baroque masters such as Georges de la Tour traced optically projected images; automatic multi-scale analysis of brushstrokes for the attribution of portraits within a painting by Perugino; and multi-spectral, x-ray and infra-red scanning and image analysis of the Mona Lisa to reveal the painting techniques of Leonardo. The value of image analysis to these and other questions strongly suggests that current and future computer methods will play an ever larger role in the scholarship of visual arts.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Using simulation to quantify the performance of automotive perception systems
IS&T International Symposium on Electronic Imaging Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-118
Zhenyi Liu, Devesh Shah, Alireza Rahimpour, Devesh Upadhyay, Joyce Farrell, Brian Wandell
{"title":"Using simulation to quantify the performance of automotive perception systems","authors":"Zhenyi Liu, Devesh Shah, Alireza Rahimpour, Devesh Upadhyay, Joyce Farrell, Brian Wandell","doi":"10.2352/ei.2023.35.16.avm-118","DOIUrl":"https://doi.org/10.2352/ei.2023.35.16.avm-118","url":null,"abstract":"The design and evaluation of complex systems can benefit from a software simulation - sometimes called a digital twin. The simulation can be used to characterize system performance or to test its performance under conditions that are difficult to measure (e.g., nighttime for automotive perception systems). We describe the image system simulation software tools that we use to evaluate the performance of image systems for object (automobile) detection. We describe experiments with 13 different cameras with a variety of optics and pixel sizes. To measure the impact of camera spatial resolution, we designed a collection of driving scenes that had cars at many different distances. We quantified system performance by measuring average precision and we report a trend relating system resolution and object detection performance. We also quantified the large performance degradation under nighttime conditions, compared to daytime, for all cameras and a COCO pre-trained network.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135693975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
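The abstract above quantifies detector performance with average precision. As a reference for how such a score is computed, here is a minimal sketch of IoU-based matching and average precision at a single IoU threshold; the boxes, scores, and threshold are invented placeholders, and this is not the paper's evaluation code.

```python
# Minimal sketch of IoU matching and average precision at one IoU threshold.
# All boxes and scores below are invented placeholders.
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(detections, ground_truth, iou_thr=0.5):
    """detections: list of (score, box); ground_truth: list of boxes."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched, tps, fps = set(), [], []
    for score, box in detections:
        best_j, best_iou = -1, 0.0
        for j, gt in enumerate(ground_truth):
            if j in matched:
                continue
            o = iou(box, gt)
            if o > best_iou:
                best_j, best_iou = j, o
        if best_iou >= iou_thr:
            matched.add(best_j)
            tps.append(1)
            fps.append(0)
        else:
            tps.append(0)
            fps.append(1)
    tp, fp = np.cumsum(tps), np.cumsum(fps)
    recall = tp / max(len(ground_truth), 1)
    precision = tp / np.maximum(tp + fp, 1)
    # AP as the area under the precision-recall points (step-wise sum).
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

dets = [(0.9, (10, 10, 50, 50)), (0.8, (60, 60, 100, 100)), (0.3, (200, 200, 240, 240))]
gts = [(12, 12, 48, 52), (300, 300, 340, 340)]
print(average_precision(dets, gts))
```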
Practical phase retrieval using double deep image priors
IS&T International Symposium on Electronic Imaging Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.14.coimg-153
Zhong Zhuang, David Yang, Felix Hofmann, David Barmherzig, Ju Sun
{"title":"Practical phase retrieval using double deep image priors","authors":"Zhong Zhuang, David Yang, Felix Hofmann, David Barmherzig, Ju Sun","doi":"10.2352/ei.2023.35.14.coimg-153","DOIUrl":"https://doi.org/10.2352/ei.2023.35.14.coimg-153","url":null,"abstract":"Phase retrieval (PR) consists of recovering complex-valued objects from their oversampled Fourier magnitudes and takes a central place in scientific imaging. A critical issue around PR is the typical nonconvexity in natural formulations and the associated bad local minimizers. The issue is exacerbated when the support of the object is not precisely known and hence must be overspecified in practice. Practical methods for PR hence involve convolved algorithms, e.g., multiple cycles of hybrid input-output (HIO) + error reduction (ER), to avoid the bad local minimizers and attain reasonable speed, and heuristics to refine the support of the object, e.g., the famous shrinkwrap trick. Overall, the convolved algorithms and the support-refinement heuristics induce multiple algorithm hyperparameters, to which the recovery quality is often sensitive. In this work, we propose a novel PR method by parameterizing the object as the output of a learnable neural network, i.e., deep image prior (DIP). For complex-valued objects in PR, we can flexibly parametrize the magnitude and phase, or the real and imaginary parts separately by two DIPs. We show that this simple idea, free from multi-hyperparameter tuning and support-refinement heuristics, can obtain superior performance than gold-standard PR methods. For the session: Computational Imaging using Fourier Ptychography and Phase Retrieval.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135694169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
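To make the double-DIP idea concrete, the sketch below parameterizes the real and imaginary parts of a complex object with two small untrained CNNs and fits them so that the magnitude of the Fourier transform matches the measured magnitudes. The network sizes, the synthetic test object, and the optimizer settings are placeholders of my own, not the authors' implementation; oversampling and support handling are omitted.

```python
# Minimal, illustrative double-DIP sketch: two small untrained CNNs parameterize
# the real and imaginary parts of the object, fit to measured Fourier magnitudes.
# Network sizes, the synthetic object, and optimizer settings are invented here.
import torch
import torch.nn as nn

def tiny_dip():
    # Deliberately small stand-in for a deep image prior network.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )

torch.manual_seed(0)
H = W = 32
# Synthetic ground-truth complex object and its Fourier magnitudes (the "data").
x_true = torch.complex(torch.rand(H, W), torch.rand(H, W))
y = torch.abs(torch.fft.fft2(x_true))

net_re, net_im = tiny_dip(), tiny_dip()
z_re, z_im = torch.randn(1, 1, H, W), torch.randn(1, 1, H, W)  # fixed noise inputs
opt = torch.optim.Adam(list(net_re.parameters()) + list(net_im.parameters()), lr=1e-3)

for step in range(200):
    re = net_re(z_re)[0, 0]
    im = net_im(z_im)[0, 0]
    pred_mag = torch.abs(torch.fft.fft2(torch.complex(re, im)))
    loss = torch.mean((pred_mag - y) ** 2)   # magnitude-only data-fitting loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(step, loss.item())
```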
Generative adversarial networks (GANs) and object tracking (OT) for vehicle accident detection
IS&T International Symposium on Electronic Imaging Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.3.mobmu-364
Taraka Rama Krishna Kanth Kannuri, Kirsnaragavan Arudpiragasam, Klaus Schwarz, Michael Hartmann, Reiner Creutzburg
{"title":"Generative adversarial networks (GANs) and object tracking (OT) for vehicle accident detection","authors":"Taraka Rama Krishna Kanth Kannuri, Kirsnaragavan Arudpiragasam, Klaus Schwarz, Michael Hartmann, Reiner Creutzburg","doi":"10.2352/ei.2023.35.3.mobmu-364","DOIUrl":"https://doi.org/10.2352/ei.2023.35.3.mobmu-364","url":null,"abstract":"Accident detection is one of the biggest challenges as there are various anomalies, occlusions, and objects in the image at different times. Therefore, this paper focuses on detecting traffic accidents through a combination of Object Tracking (OT) and image generation using GAN with variants such as skip connection, residual, and attention connection. The background removal techniques will be applied to reduce the background variation in the frame. Later, YOLO-R is used to detect objects, followed by DeepSort tracking of objects in the frame. Finally, the distance error metric and the adversarial error are determined using the Kalman filter and the GAN approach and help to decide accidents in videos.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135694714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
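One building block the abstract mentions is a Kalman-filter-based distance error between a track's predicted and observed positions. The sketch below shows that idea with a constant-velocity filter on a made-up track; the YOLO-R detector, DeepSort tracker, and GAN-based adversarial error from the paper are not included.

```python
# Illustrative sketch of the distance-error idea: a constant-velocity Kalman
# filter predicts where a tracked vehicle should be, and a large gap between
# the prediction and the observed detection is treated as an anomaly cue.
# The track and the threshold are made up; this is not the paper's code.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state: (x, y, vx, vy), constant-velocity model
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # we only observe (x, y)
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01           # process noise
R = np.eye(2) * 1.0            # measurement noise

x = np.array([0.0, 0.0, 5.0, 0.0])   # initial state estimate
P = np.eye(4)

# A made-up track that moves smoothly, then jumps (e.g., an abrupt stop/impact).
observations = [(5, 0), (10, 0), (15, 0), (20, 0), (21, 6)]

for z in observations:
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Distance error between the prediction and the new detection.
    err = np.linalg.norm(H @ x - np.array(z, dtype=float))
    print(f"distance error: {err:.2f}", "<-- possible accident" if err > 3 else "")
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array(z, dtype=float) - H @ x)
    P = (np.eye(4) - K @ H) @ P
```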
Optical flow for autonomous driving: Applications, challenges and improvements
IS&T International Symposium on Electronic Imaging Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-128
Shihao Shen, Louis Kerofsky, Senthil Yogamani
{"title":"Optical flow for autonomous driving: Applications, challenges and improvements","authors":"Shihao Shen, Louis Kerofsky, Senthil Yogamani","doi":"10.2352/ei.2023.35.16.avm-128","DOIUrl":"https://doi.org/10.2352/ei.2023.35.16.avm-128","url":null,"abstract":"Estimating optical flow presents unique challenges in AV applications: large translational motion, wide variations in depth of important objects, strong lens distortion in commonly used fisheye cameras and rolling shutter artefacts in dynamic scenes. Even simple translational motion can produce complicated optical flow fields. Lack of ground truth data also creates a challenge. We evaluate recent optical flow methods on fisheye imagery found in AV applications. We explore various training techniques in challenging scenarios and domain adaptation for transferring models trained on synthetic data where ground truth is available to real-world data. We propose novel strategies that facilitate learning robust representations efficiently to address low-light degeneracies. Finally, we discuss the main challenges and open problems in this problem domain.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135644696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
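The abstract discusses training optical-flow networks where ground truth is scarce. A common building block in such self-supervised setups (not taken from this paper) is photometric consistency: warp the second frame back with the estimated flow and measure how well it matches the first frame. Below is a minimal sketch with synthetic frames and a hand-set flow field.

```python
# Illustrative sketch (not from the paper): the photometric-consistency building
# block used by many self-/unsupervised optical-flow training schemes. Frame 2 is
# warped back toward frame 1 with the estimated flow; the remaining photometric
# error is what training would minimize. Frames and flow are synthetic placeholders.
import torch
import torch.nn.functional as F

def warp(img, flow):
    """img: (N, C, H, W); flow: (N, 2, H, W) in pixels, channel 0 = dx, 1 = dy."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    base = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(n, -1, -1, -1)
    coords = base + flow
    # Normalize pixel coordinates to [-1, 1] in the (x, y) order grid_sample expects.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)
    return F.grid_sample(img, grid, align_corners=True)

# Synthetic example: frame2 is frame1 shifted 3 pixels to the right.
frame1 = torch.rand(1, 3, 64, 64)
frame2 = torch.roll(frame1, shifts=3, dims=3)
flow = torch.zeros(1, 2, 64, 64)
flow[:, 0] = 3.0  # estimated horizontal motion of +3 px

warped = warp(frame2, flow)
photometric_error = (warped - frame1).abs().mean()
print(photometric_error.item())  # small (up to border effects) if the flow explains the motion
```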
FastPoints: A state-of-the-art point cloud renderer for Unity
IS&T International Symposium on Electronic Imaging Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.1.vda-394
Elias Neuman-Donihue, Michael Jarvis, Yuhao Zhu
{"title":"FastPoints: A state-of-the-art point cloud renderer for Unity","authors":"Elias Neuman-Donihue, Michael Jarvis, Yuhao Zhu","doi":"10.2352/ei.2023.35.1.vda-394","DOIUrl":"https://doi.org/10.2352/ei.2023.35.1.vda-394","url":null,"abstract":"In this paper, we introduce FastPoints, a state-of-the-art point cloud renderer for the Unity game development platform. Our program supports standard unprocessed point cloud formats with non-programmatic, drag-and-drop support, and creates an out-of-core data structure for large clouds without requiring an explicit preprocessing step; instead, the software renders a decimated point cloud immediately and constructs a shallow octree online, during which time the Unity editor remains fully interactive.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135694179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
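The abstract describes rendering a decimated cloud immediately while a shallow octree is built online. The actual renderer is a Unity plugin; the Python sketch below only illustrates those two ideas (random decimation for an instant preview, and bucketing points into fixed-depth octree cells) with an invented point cloud and invented depth/size parameters.

```python
# Illustrative Python sketch (the actual FastPoints renderer is a Unity plugin):
# 1) subsample for an immediate preview, 2) bucket the full cloud into a shallow
# octree for later out-of-core access. Depth, sizes, and the cloud are invented.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(100_000, 3))   # stand-in point cloud in the unit cube

# 1) Immediate preview: uniform random decimation.
preview = points[rng.choice(len(points), size=5_000, replace=False)]
print("preview points:", len(preview))

# 2) Shallow octree: assign each point a cell at a fixed depth by interleaving
#    the per-axis cell coordinates (a Morton-style key).
DEPTH = 3                                            # 8**3 = 512 leaf cells
cells = np.clip((points * (2 ** DEPTH)).astype(int), 0, 2 ** DEPTH - 1)

octree = defaultdict(list)
for idx, (cx, cy, cz) in enumerate(cells):
    key = 0
    for level in range(DEPTH):
        bit = DEPTH - 1 - level
        key = (key << 3) | (((cx >> bit) & 1) << 2) | (((cy >> bit) & 1) << 1) | ((cz >> bit) & 1)
    octree[key].append(idx)

print("occupied leaf cells:", len(octree))
print("largest cell:", max(len(v) for v in octree.values()), "points")
```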
Practical OSINT investigation - Similarity calculation using Reddit user profile data
IS&T International Symposium on Electronic Imaging Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.3.mobmu-356
Valeria Vishnevskaya, Klaus Schwarz, Reiner Creutzburg
{"title":"Practical OSINT investigation - Similarity calculation using Reddit user profile data","authors":"Valeria Vishnevskaya, Klaus Schwarz, Reiner Creutzburg","doi":"10.2352/ei.2023.35.3.mobmu-356","DOIUrl":"https://doi.org/10.2352/ei.2023.35.3.mobmu-356","url":null,"abstract":"This paper presents a practical Open Source Intelligence (OSINT) use case for user similarity measurements with the use of open profile data from the Reddit social network. This PoC work combines the open data from Reddit and the part of the state-of-the-art BERT model. Using the PRAW Python library, the project fetches comments and posts of users. Then these texts are converted into a feature vector - representation of all user posts and comments. The main idea here is to create a comparable user's pair similarity score based on their comments and posts. For example, if we fix one user and calculate scores of all mutual pairs with other users, we will produce a total order on the set of all mutual pairs with that user. This total order can be described as a degree of written similarity with this chosen user. A set of \"similar\" users for one particular user can be used to recommend to the user interesting for him people. The similarity score also has a \"transitive property\": if $user_1$ is \"similar\" to $user_2$ and $user_2$ is similar to $user_3$ then inner properties of our model guarantees that $user_1$ and $user_3$ are pretty \"similar\" too. In this way, this score can be used to cluster a set of users into sets of \"similar\" users. It could be used in some recommendation algorithms or tune already existing algorithms to consider a cluster's peculiarities. Also, we can extend our model and calculate feature vectors for subreddits. In that way, we can find similar to the user's subreddits and recommend them to him.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135694712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
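A minimal sketch of the user-similarity computation described above: in the paper the texts come from Reddit via PRAW and the embedding uses part of a BERT model, whereas here the user texts are invented placeholders and the embedding is delegated to the sentence-transformers library (the model name is my choice, not the paper's). Cosine similarity between the resulting vectors gives the pairwise score.

```python
# Minimal sketch of text-based user similarity. The comments are invented
# placeholders standing in for PRAW-fetched Reddit data, and the embedding model
# ("all-MiniLM-L6-v2") is an assumption, not the paper's exact BERT component.
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-ins for the concatenated posts/comments of three users.
users = {
    "user_1": "I mostly post about hiking trails and camping gear reviews.",
    "user_2": "Backpacking tips, tent recommendations, and national park trips.",
    "user_3": "GPU benchmarks, overclocking results, and driver issues.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
names = list(users)
vecs = model.encode([users[n] for n in names])   # one feature vector per user

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pairwise similarity scores; higher means more similar writing/topics.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(names[i], names[j], round(cosine(vecs[i], vecs[j]), 3))
```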
A qualitative study of LiDAR technologies and their application areas
IS&T International Symposium on Electronic Imaging Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.3.mobmu-368
Daniel Jaster, Reiner Creutzburg, Eberhard Hasche
{"title":"A qualitative study of LiDAR technologies and their application areas","authors":"Daniel Jaster, Reiner Creutzburg, Eberhard Hasche","doi":"10.2352/ei.2023.35.3.mobmu-368","DOIUrl":"https://doi.org/10.2352/ei.2023.35.3.mobmu-368","url":null,"abstract":"In this work, the most relevant 3D LiDAR technologies and their applications in 2022 were investigated. For this purpose, applications of LiDAR systems were classified into the typical application areas \"3D modeling\", \"smart city\", \"robotics\", \"smart automotive\" and \"consumer goods\". The investigation has shown that neither \"mechanical\" LiDAR technologies, nor so-called solid-state LiDAR technologies, nor \"hybrid\" LiDAR technologies can be evaluated as optimal for the typical application areas. In none of the application areas could all of the elaborated requirements be met. However, the \"hybrid\" LiDAR technologies such as sequential MEMS LiDAR technology and sequential flash LiDAR technology proved to be among the most suitable for most typical application areas. However, other technologies also tended to be suitable for individual typical application areas. Finally, it was found that several of the LiDAR technologies investigated are currently equally suitable for some typical application areas. To evaluate the suitability, concrete LiDAR systems - of different technologies and properties - were compared with the specific requirements of exemplary applications of an application area. The results of the investigation provide an orientation as to which LiDAR technology is promising for which application area.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135694715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0