Signal and image processing : an international journal - Latest Articles

Target Detection and Classification Improvements using Contrast Enhanced 16-bit Infrared Videos
Signal and image processing : an international journal Pub Date : 2021-02-28 DOI: 10.5121/SIPIJ.2021.12103
C. Kwan, David Gribben
Abstract: In our earlier target detection and classification papers, we used 8-bit infrared videos from the Defense Systems Information Analysis Center (DSIAC) video dataset. In this paper, we focus on how the detection and classification results can be improved using 16-bit videos. One problem with the 16-bit videos is that some image frames have very low contrast. Two methods were explored to improve upon previous results. The first contrast-improvement method was effectively the same as the baseline 8-bit pipeline, but applied to the 16-bit raw data rather than the 8-bit data taken from the AVI files. The second was a second-order histogram matching algorithm that preserves the 16-bit nature of the videos while providing normalization and contrast enhancement. Results showed that the second-order histogram matching algorithm improved both target detection using You Only Look Once (YOLO) and classification using a Residual Network (ResNet): the average precision (AP) of YOLO improved by 8% and the overall accuracy (OA) of ResNet improved by 12%, both significant gains.
Pages: 23-38
Citations: 3
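The second-order histogram matching algorithm itself is not detailed in the abstract. As a rough illustration of the underlying idea of matching a low-contrast 16-bit frame to a well-exposed reference, standard (first-order) histogram matching can be sketched in NumPy as follows; the function name and the use of `np.unique` instead of fixed 256-bin histograms (so the full 16-bit depth is preserved) are choices of this sketch, not of the paper.

```python
import numpy as np

def histogram_match(source, reference):
    """Map `source` grey levels so its histogram matches `reference`.

    First-order histogram matching on integer arrays; it operates on
    the actual grey levels present, not a fixed 256-bin histogram, so
    16-bit frames keep their dynamic range.
    """
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Empirical CDFs of both images.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # For each source grey level, find the reference level with the
    # closest CDF value, then map every pixel through that lookup table.
    mapped_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped_vals[src_idx].reshape(source.shape).astype(source.dtype)
```

A low-contrast frame passed as `source` with a high-contrast frame as `reference` comes back stretched into the reference's intensity range.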
Modelling, Conception and Simulation of a Digital Watermarking System based on Hyperbolic Geometry
Signal and image processing : an international journal Pub Date : 2021-01-01 DOI: 10.5121/sipij.2021.12401
Coulibaly Cheick Yacouba Rachid, Tiendrebeogo B. Telesphore
Abstract: The digital revolution has increased the production and exchange of high-value documents between institutions, businesses and the general public. In order to secure these exchanges, it is essential to guarantee the authenticity, integrity and ownership of these documents. Digital watermarking is a possible solution to this challenge, as it has already been used for copyright protection, source tracking and video authentication. It also provides integrity protection, which is useful for many types of documents (official documents, medical images). In this paper, we propose a new watermarking solution applicable to images and based on hyperbolic geometry. Our solution builds on existing work in the field of digital watermarking.
Citations: 0
A Novel Graph Representation for Skeleton-based Action Recognition
Signal and image processing : an international journal Pub Date : 2020-12-30 DOI: 10.5121/SIPIJ.2020.11605
Tingwei Li, Ruiwen Zhang, Qing Li
Abstract: Graph convolutional networks (GCNs) have been proven effective for processing structured data: they capture the features of related nodes and improve model performance. Increasing attention is being paid to employing GCNs in skeleton-based action recognition, but existing GCN-based methods face some challenges. First, the consistency of temporal and spatial features is ignored because features are extracted node by node and frame by frame. We design a generic representation of skeleton sequences for action recognition and propose a novel model called Temporal Graph Networks (TGN), which obtains spatiotemporal features simultaneously. Second, the adjacency matrix of the graph describing the relations between joints mostly depends on the physical connections between joints. We propose a multi-scale graph strategy to appropriately describe these relations, adopting a full-scale graph, a part-scale graph and a core-scale graph to capture the local features of each joint and the contour features of important joints. Extensive experiments conducted on two large datasets, NTU RGB+D and Kinetics Skeleton, show that TGN with our graph strategy outperforms other state-of-the-art methods.
Citations: 0
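The full-, part- and core-scale graphs are only named in the abstract, but every GCN variant is built on a propagation matrix derived from a joint adjacency matrix. A minimal sketch of the standard symmetrically normalised adjacency over a toy 5-joint skeleton (the joint layout here is hypothetical, not the paper's) is:

```python
import numpy as np

def normalized_adjacency(edges, num_nodes):
    """Symmetrically normalised adjacency D^(-1/2) (A + I) D^(-1/2),
    the standard propagation matrix of a GCN layer."""
    A = np.zeros((num_nodes, num_nodes))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    A += np.eye(num_nodes)                 # self-loops
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

# Toy 5-joint "skeleton": a chain 0-1-2 with two leaf joints 3, 4 on joint 2.
edges = [(0, 1), (1, 2), (2, 3), (2, 4)]
A_hat = normalized_adjacency(edges, 5)

# One GCN propagation step: joint features mixed along physical connections.
X = np.arange(10, dtype=float).reshape(5, 2)   # 5 joints, 2 features each
H = A_hat @ X
```

A multi-scale strategy in the spirit of the abstract would build several such matrices from different edge sets (all joints, body parts, a few core joints) and combine their outputs.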
Facial Age Estimation using Transfer Learning and Bayesian Optimization based on Gender Information
Signal and image processing : an international journal Pub Date : 2020-12-30 DOI: 10.5121/SIPIJ.2020.11604
Marwa Ahmed, Serestina Viriri
Abstract: Age estimation under unrestricted imaging conditions has attracted growing attention, as it is applicable in several real-world settings such as surveillance, face recognition, age synthesis, access control, and electronic customer relationship management. Current deep learning-based methods have displayed encouraging performance in the age estimation field. Males and females exhibit different appearance aging patterns and thus age differently; this suggests that using gender information may improve an age estimator's performance. We propose a novel model based on gender classification: a Convolutional Neural Network (CNN) is used to obtain gender information, then Bayesian optimization is applied to this pre-trained CNN when it is fine-tuned for the age estimation task. Bayesian optimization reduces the classification error on the validation set for the pre-trained model. Extensive experiments assess the proposed model on two datasets, FERET and FG-NET. The results indicate that a pre-trained CNN containing gender information, tuned with Bayesian optimization, outperforms the state of the art on the FERET and FG-NET datasets with a Mean Absolute Error (MAE) of 1.2 and 2.67 respectively.
Citations: 2
Neighbour Local Variability for Multi-Focus Images Fusion
Signal and image processing : an international journal Pub Date : 2020-12-30 DOI: 10.5121/SIPIJ.2020.11603
I. Wahyuni, R. Sabre
Abstract: The goal of multi-focus image fusion is to integrate images with different focused objects in order to obtain a single image in which all objects are in focus. In this paper, we give a new method based on neighbour local variability (NLV) to fuse multi-focus images. At each pixel, the method uses the local variability calculated from the quadratic difference between the value of the pixel and the values of all pixels in its neighbourhood; it expresses the behaviour of the pixel with respect to its neighbours. The variability preserves edges because it detects sharp intensity changes in the image. The proposed fusion weights each pixel by the exponential of its local variability. The quality of this fusion depends on the size of the neighbourhood region considered, which in turn depends on the variance and the size of the blur filter. We therefore model the neighbourhood region size as a function of the variance and the size of the blur filter. We compare our method to other methods in the literature and show that it gives better results.
Citations: 2
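The fusion rule described above (quadratic differences to all neighbours, exponential weighting) can be sketched directly in NumPy. The window radius and the exponential scale below are free parameters of this sketch, whereas the paper models the neighbourhood size from the variance and size of the blur filter:

```python
import numpy as np

def local_variability(img, radius=1):
    """Sum of squared differences between each pixel and every pixel
    in its (2*radius+1)^2 neighbourhood, computed via shifted copies."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode='reflect')
    v = np.zeros_like(img)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            shifted = pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            v += (img - shifted) ** 2
    return v

def nlv_fuse(images, radius=1, scale=1e-3):
    """Weight each source pixel by exp(scale * variability) and
    normalise, so the sharper (higher-variability) source dominates."""
    imgs = [np.asarray(im, dtype=float) for im in images]
    weights = [np.exp(scale * local_variability(im, radius)) for im in imgs]
    wsum = np.sum(weights, axis=0)
    return np.sum([w * im for w, im in zip(weights, imgs)], axis=0) / wsum
```

Where both sources are equally blurred the weights tie and the fusion averages them; where one source is sharper its exponential weight dominates.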
Further Improvements of CFA 3.0 by Combining Inpainting and Pansharpening Techniques
Signal and image processing : an international journal Pub Date : 2020-12-30 DOI: 10.5121/SIPIJ.2020.11601
C. Kwan, Jude Larkin
Abstract: Color Filter Arrays (CFAs) are widely used in digital cameras, and many CFA variants exist in the literature. Recently, we proposed a new CFA known as CFA 3.0, which has been shown to yield reasonable performance compared to some standard patterns. In this paper, we investigate the use of inpainting algorithms to further improve the demosaicing performance of CFA 3.0. Six conventional and deep learning-based inpainting algorithms were compared, and extensive experiments demonstrated that one of them improved over the other approaches.
Citations: 1
Eye Gaze Estimation in Visible and IR Spectrum for Driver Monitoring System
Signal and image processing : an international journal Pub Date : 2020-10-30 DOI: 10.5121/sipij.2020.11501
Susmitha Mohan, M. Phirke
Abstract: Driver monitoring systems have gained a lot of popularity in the automotive sector as a means of ensuring safety while driving. Collisions due to driver inattentiveness, driver fatigue or over-reliance on autonomous driving features are major causes of road accidents and fatalities. Driver monitoring systems aim to monitor various aspects of driving and provide appropriate warnings whenever required. Eye gaze estimation is a key element in almost all driver monitoring systems: it aims to find the point of gaze, i.e. where the driver is looking, which helps determine whether the driver is attentively watching the road or is distracted. Estimating the gaze point also plays an important role in many other applications such as retail shopping, online marketing, psychological testing and healthcare. This paper covers various aspects of eye gaze estimation for a driver monitoring system, including sensor choice and sensor placement. There are multiple ways to estimate eye gaze; this paper presents a detailed comparative study of two popular methods based on eye features, using data captured with an infrared camera. Method 1 tracks the corneal reflection centre with respect to the pupil centre, and method 2 tracks the pupil centre with respect to the eye centre. The advantages and disadvantages of both methods are examined. This paper can act as a reference for researchers working in the same field to understand the possibilities and limitations of eye gaze estimation for driver monitoring systems.
Pages: 1-20
Citations: 0
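Method 2 above (pupil centre tracked with respect to the eye centre) can be illustrated with a toy linear mapping from the pixel offset to gaze angles. The per-axis gains below are hypothetical calibration values invented for this sketch, not figures from the paper:

```python
def gaze_angles(pupil_xy, eye_centre_xy, gain_deg_per_px=(0.9, 0.9)):
    """Map the pupil-centre offset from the eye centre (in pixels) to
    approximate horizontal/vertical gaze angles in degrees, using a
    per-axis linear gain obtained from calibration (the default gains
    are illustrative placeholders)."""
    dx = pupil_xy[0] - eye_centre_xy[0]
    dy = pupil_xy[1] - eye_centre_xy[1]
    yaw = dx * gain_deg_per_px[0]      # positive = looking right
    pitch = -dy * gain_deg_per_px[1]   # image y grows downward
    return yaw, pitch
```

A real system would fit the gains (or a richer polynomial) per driver from calibration targets; method 1 replaces the eye centre with the corneal glint, which is less sensitive to head motion.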
Face Verification Across Age Progression using Enhanced Convolution Neural Network
Signal and image processing : an international journal Pub Date : 2020-10-30 DOI: 10.5121/sipij.2020.11504
A. M. Osman, Serestina Viriri
Abstract: This paper proposes a deep learning method for facial verification of aging subjects. Facial aging causes texture and shape variations in the human face as time progresses; accordingly, there is a demand for robust methods to verify facial images as they age. In this paper, a deep learning method based on the GoogLeNet pre-trained convolutional network, fused with Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) feature descriptors, is applied for feature extraction and classification. The experiments are based on facial images collected from the MORPH and FG-NET benchmark datasets. Euclidean distance is used to measure the similarity between pairs of feature vectors across the age gap. Experimental results show an improvement in validation accuracy on the FG-NET database, where it reached 100%, while on the MORPH database the validation accuracy is 99.8%. The proposed method performs better and achieves higher accuracy than current state-of-the-art methods.
Citations: 0
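The verification step described above, Euclidean distance between a pair of feature vectors, reduces to a threshold test. A minimal sketch follows; the L2 normalisation and the threshold value are choices of this sketch (the abstract does not state them):

```python
import numpy as np

def verify(feat_a, feat_b, threshold=0.8):
    """Declare a match when the Euclidean distance between the two
    (L2-normalised) feature vectors falls below `threshold`.
    The threshold is illustrative; in practice it is tuned on a
    validation set of same-person / different-person pairs."""
    a = feat_a / np.linalg.norm(feat_a)
    b = feat_b / np.linalg.norm(feat_b)
    dist = float(np.linalg.norm(a - b))
    return dist < threshold, dist
```

Identical embeddings give distance 0 (match); orthogonal unit embeddings give distance sqrt(2) (no match at this threshold).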
Batch Normalized Convolution Neural Network for Liver Segmentation
Signal and image processing : an international journal Pub Date : 2020-10-30 DOI: 10.5121/sipij.2020.11502
Fatima Abdalbagi, Serestina Viriri, M. T. Mohammed
Abstract: With the huge technological improvements in all areas of life, it has become important to develop the clinical fields, including diagnosis, on which treatment depends; successful treatment relies on preoperative planning, for example to understand the complex internal structure of the liver and to precisely localize the liver surface and its tumors. Various algorithms have been proposed for automatic liver segmentation. In this paper, we propose a Batch Normalization After All Convolutional Neural Network (BATA-Convnet) model to segment liver CT images using deep learning. The proposed liver segmentation model consists of four main steps: pre-processing, training the BATA-Convnet, liver segmentation, and a post-processing step to maximize the result efficiency. The Medical Image Computing and Computer Assisted Intervention (MICCAI) dataset and the 3D Image Reconstruction for Comparison of Algorithm Database (3D-IRCAD) were used in the experiments. The average results on MICCAI are a Dice of 0.91, a VOE of 13.44%, an RVD of 0.23%, an ASD of 0.29 mm, an RMSSD of 1.35 mm and a MaxASD of 0.36 mm. The average results on 3D-IRCAD are a Dice of 0.84, a VOE of 13.24%, an RVD of 0.16%, an ASD of 0.32 mm, an RMSSD of 1.17 mm and a MaxASD of 0.33 mm.
Pages: 21-35
Citations: 0
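For reference, the two overlap metrics reported above, Dice and VOE, can be computed from binary segmentation masks as follows (a generic sketch of the standard definitions, not the authors' evaluation code):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient: 2|P & G| / (|P| + |G|); 1.0 is a perfect match."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def voe(pred, gt):
    """Volumetric Overlap Error in percent: 100 * (1 - |P & G| / |P | G|);
    0.0 is a perfect match."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 100.0 * (1.0 - inter / union)
```

The surface-distance metrics in the abstract (ASD, RMSSD, MaxASD) additionally require extracting mask boundaries and computing nearest-neighbour distances between them.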
Gender Discrimination based on the Thermal Signature of the Face and the External Ear
Signal and image processing : an international journal Pub Date : 2020-08-31 DOI: 10.5121/sipij.2020.11402
G. Koukiou, V. Anastassopoulos
Abstract: Simple features extracted from thermal infrared images of a person's face are proposed for gender discrimination. Two types of thermal features are used. The first is based on the mean value of the pixels at specific locations on the face; all persons in the database used, male and female, are correctly distinguished with this feature. The classification results are verified with two conventional approaches: (a) the simplest possible neural network, so that generalization is achieved along with successful discrimination between all persons, and (b) the leave-one-out approach, to demonstrate classification performance on unknown persons using the simplest classifiers possible. The second type of feature takes advantage of the temperature distribution on the person's ear: for men, the cooler region of the ear is larger, as a percentage, than for women.
Pages: 13-23
Citations: 1