2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS): Latest Publications

Experimental Investigation of Non-contact 3D Sensors for Marine-growth Cleaning Operations
2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS) Pub Date : 2022-12-05 DOI: 10.1109/IPAS55744.2022.10053020
Christian Mai, Jesper Liniger, A. Jensen, H. Sørensen, Simon Pedersen
Abstract: Marine growth on submerged structures causes additional mechanical loads from drag and mass increases. To ensure structural integrity, regular inspection and cleaning procedures are carried out on submerged structures, most commonly using remotely operated vehicles (ROVs). Often, the measurement methodology in these inspections is spot-checking with simple mechanical gauges, which yields only a rough estimate of marine-growth thickness. To optimize these inspection and cleaning procedures, modern methods for 3D surface measurement can be applied to increase inspection quality and to ensure that superfluous cleaning is not carried out. This work investigates three state-of-the-art sensor technologies: a time-of-flight depth camera based on modulated visible blue laser illumination, a commercial stereo-vision solution based on visible-light sensors, and a high-frequency imaging sonar. The sensors' performance was compared in a laboratory environment to assess their suitability for marine-growth measurement in terms of accuracy, resolution, and noise/artifacts. It is concluded that the measurement fidelity of all evaluated sensors shows promise for the application, pending future evaluation in a real-world test.
Citations: 2
Hybrid Watermarking Algorithm to Protect and Authenticate KhalifaSat Imagery using DWT-SVD and SHA3 Hash Key
2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS) Pub Date : 2022-12-05 DOI: 10.1109/IPAS55744.2022.10052903
A. Panthakkan, M. AnzarS., S. Al-Mansoori, Hussain Al-Ahmad
Abstract: Using DWT-SVD and the SHA3 hash function, this research develops an ownership-protection and image-authentication technique that embeds the watermark information and a hash authentication key in a hybrid domain. The experiments were conducted with multispectral images from KhalifaSat. The performance of the proposed method is evaluated using the wavelet-domain signal-to-noise ratio (WSNR), the structural similarity index (SSIM), and the peak signal-to-noise ratio (PSNR). To analyse the efficacy of the recovered watermark, two metrics are used: normalized correlation (NC) and the image quality index (IQI). The method is robust against many intentional and unintentional attacks. Without sacrificing transparency, the proposed watermarking approach meets the objectives of imperceptibility and robustness; it accurately detects the manipulated locations on the satellite image and is sensitive to even small changes.
Citations: 0
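The abstract names the building blocks (DWT, SVD, SHA3) but not the exact embedding rule, so the following is only an illustrative sketch: a hand-rolled one-level Haar DWT, the watermark's singular values added into the LL sub-band, and a SHA3-256 key computed over the marked image for later authentication. The function names, the non-blind extraction, and the strength factor `alpha` are assumptions, not taken from the paper.

```python
import hashlib
import numpy as np

def haar_dwt2(img):
    # one-level 2D Haar transform: four sub-bands (LL, LH, HL, HH)
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(LL, LH, HL, HH):
    # exact inverse of haar_dwt2
    h, w = LL.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL - LH + HL - HH) / 2
    out[1::2, 0::2] = (LL + LH - HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out

def embed_watermark(host, watermark, alpha=0.05):
    # add the watermark's singular values into the LL band of the host
    LL, LH, HL, HH = haar_dwt2(host)
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    Sw = np.linalg.svd(watermark, compute_uv=False)
    LL_mod = U @ np.diag(S + alpha * Sw) @ Vt
    marked = haar_idwt2(LL_mod, LH, HL, HH)
    # SHA3 key over the marked image enables tamper detection later
    auth_key = hashlib.sha3_256(marked.tobytes()).hexdigest()
    return marked, auth_key, S

def extract_singular_values(marked, S_host, alpha=0.05):
    # recover the watermark's singular values (non-blind: needs S_host)
    LL, _, _, _ = haar_dwt2(marked)
    Sm = np.linalg.svd(LL, compute_uv=False)
    return (Sm - S_host) / alpha
```

Authentication then amounts to recomputing the SHA3-256 digest of a received image and comparing it with the stored key; any pixel change flips the digest.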
A Benchmark Database for Animal Re-Identification and Tracking
2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS) Pub Date : 2022-12-05 DOI: 10.1109/IPAS55744.2022.10052988
L. Kuncheva, Francis Williams, Samuel L. Hennessey, Juan José Rodríguez Diez
Abstract: While there are multiple sources of annotated images and videos for human and vehicle re-identification, databases for individual animal recognition are still in demand. We present a database containing five annotated video clips, each containing between 9 and 27 identities. The overall number of individual animals is 20,490, and the total number of classes is 93. The database can be used for testing novel methods for animal re-identification, object detection, and tracking. The main challenge of the database is that multiple animals are present in the same video frame, leading to occlusion and to noisy, cluttered bounding boxes. To set up a benchmark on individual animal recognition, we trained and tested 26 classification methods on the five videos and three feature representations. We also report results with state-of-the-art deep learning methods for object detection (MMDet) and tracking (Uni-Track).
Citations: 0
Panel discussion I
2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS) Pub Date : 2022-12-05 DOI: 10.1109/ipas55744.2022.10052811
Citations: 0
A Combined Acute and Chronic Risk Assessment Rolling Window for Type 1 Diabetes
2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS) Pub Date : 2022-12-05 DOI: 10.1109/IPAS55744.2022.10052880
Faizan Munawar, J. Donovan, Etain Kiely, Konrad Mulrennan
Abstract: Monitoring the control of persons with type 1 diabetes based on their history of blood glucose levels is essential for self-management. Persons with diabetes must keep their blood glucose within a very narrow glycaemic region (70–180 mg/dl) to avoid hypoglycaemia and hyperglycaemia. Extended periods in the hypoglycaemic or hyperglycaemic region can lead to short-term and long-term complications, respectively. Many measures have been proposed for the management of diabetes, such as the Glucose Management Indicator (GMI) and the Average Daily Risk Range (ADRR). A major drawback of these measures is that they address only acute (ADRR) or chronic (GMI) complications and provide no information on the trend. This paper proposes a rolling window for calculating ADRR and GMI. Calculating ADRR and GMI over a rolling window yields new data that describe the efficacy of an individual's self-management and their risk trend. Using a rolling window for the risk analysis provides novel information about glycaemic variability and can be used for improved personal diabetes-management plans. Furthermore, ADRR and GMI are combined to propose four new risk levels, which represent the lowest to the highest probable risk of complications. The analysis was performed on 12 subjects from the OhioT1DM dataset. The results include a detailed examination and summary of all risks to the subjects, together with information about their ADRR and GMI trends.
Citations: 0
Comparative Studies on Similarity Distances for Remote Sensing Image Classification
2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS) Pub Date : 2022-12-05 DOI: 10.1109/IPAS55744.2022.10052824
Omid Ghozatlou, M. Datcu
Abstract: Scene classification is one of the most important tasks in the remote sensing field. In general, remotely sensed data comprise targets of different natures with many detailed classes, so the classification of patches in a satellite scene is a challenging problem. A preferred alternative is to transform to polar coordinates and analyse angular distances. Prior works have considered angular distances between points, ignoring the fact that the target class is not a point but a distribution. In this paper, we exploit this fact by using a point-to-probability-distribution measure rather than an ℓ_n norm. Two similarity measures (Euclidean and Mahalanobis) in two different feature spaces are experimentally investigated on several remote sensing datasets.
Citations: 0
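The point-versus-distribution argument can be made concrete: Euclidean distance treats a class as a single prototype, while Mahalanobis distance measures how far a point lies from the class distribution N(mu, cov), discounting directions in which the class naturally spreads. A minimal sketch; the nearest-class interface is an illustrative assumption, not the paper's API.

```python
import numpy as np

def euclidean(x, mu, cov=None):
    # point-to-point distance: the class is reduced to its mean; cov is ignored
    return float(np.linalg.norm(x - mu))

def mahalanobis(x, mu, cov):
    # point-to-distribution distance: sqrt((x-mu)^T cov^-1 (x-mu))
    d = x - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def nearest_class(x, class_stats, metric=mahalanobis):
    # class_stats: {label: (mean, covariance)}; returns the closest class label
    return min(class_stats, key=lambda k: metric(x, *class_stats[k]))
```

With an anisotropic class (for example cov = diag(100, 1)), a point 10 units away along the high-variance axis is Mahalanobis distance 1 from the class, but Euclidean distance 10, which is exactly the discrepancy the paper exploits.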
DUNE: Deep UNcertainty Estimation for tracked visual features
2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS) Pub Date : 2022-12-05 DOI: 10.1109/IPAS55744.2022.10052984
Katia Sousa Lillo, Andrea de Maio, S. Lacroix, Amaury Nègre, M. Rombaut, Nicolas Marchand, N. Vercier
Abstract: Uncertainty estimation for visual features is essential for vision-based systems such as visual navigation. We show that errors inherent to visual tracking, in particular with the KLT tracker, can be learned using a probabilistic loss function to estimate the covariance matrix of each tracked feature position. The proposed system is trained and evaluated on synthetic as well as real data, showing good results in comparison with the state of the art. The benefits of the tracking uncertainty estimates are illustrated for visual motion estimation.
Citations: 0
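The abstract does not spell out the probabilistic loss. A common choice for learning a covariance on a 2D feature-position error is the Gaussian negative log-likelihood, with the predicted covariance parameterised by its Cholesky factor so it stays positive-definite. A sketch under that assumption:

```python
import numpy as np

def gaussian_nll(error, L):
    # error: 2-vector of tracking error (true - predicted feature position)
    # L: lower-triangular Cholesky factor of the predicted covariance,
    #    Sigma = L @ L.T, as a network would output it
    # NLL of N(0, Sigma), dropping the constant 2*pi term:
    #   0.5 * e^T Sigma^-1 e + 0.5 * log det Sigma
    z = np.linalg.solve(L, error)  # z = L^-1 e, so z.z = e^T Sigma^-1 e
    return float(0.5 * z @ z + np.sum(np.log(np.abs(np.diag(L)))))
```

Minimising this loss trades off the two terms: a covariance that is too small is punished through the quadratic term when the error is large, while an inflated covariance is punished through the log-determinant, which is what drives the network toward calibrated per-feature uncertainties.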
Real-time Powered Wheelchair Assistive Navigation System Based on Intelligent Semantic Segmentation for Visually Impaired Users
2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS) Pub Date : 2022-12-05 DOI: 10.1109/IPAS55744.2022.10053051
Elhassan Mohamed, K. Sirlantzis, G. Howells
Abstract: People with movement disabilities may find powered wheelchair driving a challenging task because of their comorbidities. Certain visually impaired persons with mobility disabilities are not prescribed a powered wheelchair because of their sight condition; yet powered wheelchairs are essential to the majority of these users for commuting and social interaction, and vital for their independence and wellbeing. In this paper, we propose a semantic segmentation (SS) system based on deep learning to provide environmental cues and information that aid visually impaired wheelchair users during navigation. The system classifies the objects of the indoor environment and presents the annotated output on a display customised to the user's condition. The user can select a target object, for which the system displays the estimated distance from the current position of the wheelchair. The system runs in real time, using a depth camera installed on the wheelchair, and displays the scene in front of the wheelchair with every pixel annotated in a distinguishable colour representing the different components of the environment, along with the distance to the target object. The system has been designed, implemented, and deployed on a real powered wheelchair for practical evaluation. It helped users estimate the distance to target objects more accurately, with relative errors of 19.8% and 18.4% for the conditions of (a) semi-neglect and (b) short-sightedness, respectively, compared with errors of 47.8% and 5.6% without the SS system. In our experiments, healthy participants were placed in simulated conditions representing the above visual impairments, using instruments commonly used in medical research for this purpose. Finally, the system helps to visualise, on the display, hidden areas of the environment and blind spots that visually impaired users would not otherwise be able to see.
Citations: 0
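Once the segmentation mask and the depth image are pixel-aligned, the distance-to-target step described in the abstract reduces to pooling the depth over the pixels labelled as the selected object. The median pooling, the function names, and the relative-error helper below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def distance_to_target(depth_map, seg_mask, target_class):
    # median depth over the pixels the segmentation labels as the target;
    # None when the target class is not visible in the current frame
    pixels = depth_map[seg_mask == target_class]
    return float(np.median(pixels)) if pixels.size else None

def relative_error(estimated, true):
    # metric of the kind behind the 19.8% / 18.4% figures in the abstract
    return abs(estimated - true) / true
```

Median pooling is chosen here because segmentation masks at object borders often bleed onto background pixels, and the median is robust to those depth outliers in a way the mean is not.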
Liver Segmentation in Time-resolved C-arm CT Volumes Reconstructed from Dynamic Perfusion Scans using Time Separation Technique
2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS) Pub Date : 2022-12-05 DOI: 10.1109/IPAS55744.2022.10052849
S. Chatterjee, Hana Haseljić, R. Frysch, V. Kulvait, V. Semshchikov, B. Hensen, F. Wacker, Inga Brüsch, T. Werncke, O. Speck, A. Nürnberger, G. Rose
Abstract: Perfusion imaging is a valuable tool for diagnosis and treatment planning for liver tumours. The time separation technique (TST) has been used successfully for modelling C-arm cone-beam computed tomography (CBCT) perfusion data. The reconstruction can be accompanied by segmentation of the liver, for better visualisation and for generating comprehensive perfusion maps. The recently introduced Turbolift learning performs well on TST reconstructions but has not been explored for the time-resolved volumes (TRVs) estimated from TST reconstructions. Segmentation of the TRVs can be useful for tracking the movement of the liver over time. This research explores this possibility by training the multi-scale attention UNet of Turbolift learning at its third stage on the TRVs, and shows the robustness of Turbolift learning: it works efficiently even on the TRVs, resulting in a Dice score of 0.864 ± 0.004.
Citations: 0
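The reported 0.864 ± 0.004 is the standard Dice overlap between a predicted and a ground-truth segmentation mask, which can be computed directly:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    # Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|);
    # eps keeps the ratio defined when both masks are empty
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1.0 means the masks coincide exactly, and 0.0 means they are disjoint, so 0.864 indicates substantial but imperfect overlap with the reference liver contours.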
Continual Learning in an Industrial Scenario: Equipment Classification on Edge Devices
2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS) Pub Date : 2022-12-05 DOI: 10.1109/IPAS55744.2022.10053047
A. Morgado, R. Carvalho, Catarina Andrade, Telmo Barbosa, Gonçalo Santos, M.J.M. Vasconcelos
Abstract: The ability to incrementally learn to categorize objects is a key feature of a personalized system in real-world applications. The major constraint in such a scenario is the catastrophic forgetting problem, which degrades the performance of models on previously learned representations. In this work, we developed an equipment classification model for deployment on edge devices by applying regularization and memory-based class-incremental strategies, such that it can detect new classes while preserving its ability to detect previously known classes, mitigating the forgetting phenomenon. The strategies were tested on three datasets: CIFAR100 to validate the implementation, Stanford Dogs to ensure the reliability of the results on a more representative dataset, and SINATRA, the work's industrial dataset for equipment recognition. Experimental results on these datasets show that the Experience Replay strategy performed best. For the SINATRA dataset, average accuracy values of 95.57% and 100% were achieved on the Águas e Energias do Porto and Plastaze subsets, respectively. The outcomes of this work show that, by retaining only a limited number of exemplars from old classes, a pre-existing system can be updated to classify new devices in a shorter period while avoiding catastrophic forgetting.
Citations: 0
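Experience Replay, the best-performing strategy here, keeps a small fixed memory of old-class exemplars and mixes them into every batch of the new task, so gradients keep touching the old classes. A minimal sketch; the reservoir-sampling buffer policy and the names below are illustrative assumptions, not necessarily the paper's implementation.

```python
import random

class ExemplarMemory:
    # fixed-capacity store of old-class samples, filled by reservoir sampling
    # so each sample ever seen has an equal chance of being retained
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = sample

    def sample(self, k):
        return self.rng.sample(self.items, min(k, len(self.items)))

def replay_batch(current_batch, memory, replay_k):
    # mix current-task samples with replayed exemplars to fight forgetting
    return list(current_batch) + memory.sample(replay_k)
```

On an edge device the appeal is exactly the paper's conclusion: the memory footprint is bounded by the buffer capacity, not by the number of classes learned so far.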