2013 IEEE International Conference on Signal and Image Processing Applications: Latest Publications

A video steganography attack using multi-dimensional Discrete Spring Transform
2013 IEEE International Conference on Signal and Image Processing Applications Pub Date: 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708000
Aaron T. Sharp, Qilin Qi, Yaoqing Yang, D. Peng, H. Sharif
{"title":"A video steganography attack using multi-dimensional Discrete Spring Transform","authors":"Aaron T. Sharp, Qilin Qi, Yaoqing Yang, D. Peng, H. Sharif","doi":"10.1109/ICSIPA.2013.6708000","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6708000","url":null,"abstract":"Video steganography is fast emerging as a next-generation steganographic medium that offers many advantages over traditional steganographic cover media such as audio and images. Various schemes have recently emerged which take advantage of video specific properties for information hiding, most notably through the use of motion vectors. Although many steganographic schemes have been proposed which exploit several possible steganographic domains within video sequences, few attacks have been proposed to combat such schemes, and no current attacks have been shown to be capable of defeating multiple schemes at once. In this paper, we will further expand upon our proposed Discrete Spring Transform (DST) steganographic attack. We will explore further applications of the transform and how it may be used to defeat multiple steganographic schemes, specifically current video steganography schemes. The effectiveness of the proposed algorithm will be shown by attacking a multi-dimensional steganographic algorithm embedded in video sequences, where the scheme operates in two different dimensions of the video. The attack is successful in defeating multiple steganographic schemes verified by determining the BER after DST attack which always remains approximately 0.5. Furthermore, the attack preserves the integrity of the video sequence which is verified by determining the PSNR which always remains approximately above 30dB.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121565189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
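The abstract above judges the attack by two standard measures: the bit error rate of the extracted payload (near 0.5 means the hidden message is destroyed) and the PSNR of the attacked frames (above roughly 30 dB means the video remains visually intact). A minimal sketch of those two checks is shown below; the function names and the example data are illustrative, not taken from the paper.

```python
import numpy as np

def bit_error_rate(embedded_bits: np.ndarray, extracted_bits: np.ndarray) -> float:
    """Fraction of payload bits flipped by the attack (0.5 ~ random guessing)."""
    return float(np.mean(embedded_bits != extracted_bits))

def psnr(original: np.ndarray, attacked: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between an original and an attacked frame."""
    mse = np.mean((original.astype(np.float64) - attacked.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: after a successful attack the extracted bits are effectively random
# with respect to the embedded ones, so the BER sits near 0.5.
rng = np.random.default_rng(0)
embedded = rng.integers(0, 2, size=1024)
extracted = rng.integers(0, 2, size=1024)
print(bit_error_rate(embedded, extracted))  # close to 0.5
```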
A multi-agent mobile robot system with environment perception and HMI capabilities
2013 IEEE International Conference on Signal and Image Processing Applications Pub Date: 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708013
M. Tornow, A. Al-Hamadi, Vinzenz Borrmann
{"title":"A multi-agent mobile robot system with environment perception and HMI capabilities","authors":"M. Tornow, A. Al-Hamadi, Vinzenz Borrmann","doi":"10.1109/ICSIPA.2013.6708013","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6708013","url":null,"abstract":"A multi-agent robot system can speed up exploration or search and rescue operations in dangerous environments by working as a distributed sensor network. Each robot (e.g. Eddi Robot) equipped with a combined 2D/3D sensor (MS Kinect) and additional sensors needs to efficiently exchange its collected data with the other group members for task planning. For environment perception a 2D/3D panorama is generated from a sequence of images which were obtained while the robot was rotating. Furthermore the 2D/3D sensor data is used for a Human-Machine Interaction based on hand postures and gestures. The hand posture classification is realized by an Artificial Neural Network (ANN) which is processing a feature vector composed of Cosine-Descriptors (COD), Hu-moments and geometric features extracted of the hand shape. The System achieves an overall classification rate of more than 93%. It is used within the hand posture and gesture based human machine interface to control the robot team.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121537111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
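Two of the feature families named in the abstract above, Hu moments and simple geometric measures, can be computed directly from a binary hand silhouette. The sketch below is an illustrative guess at such a feature extractor (it omits the Cosine-Descriptors and the paper's actual ANN topology); the particular geometric features and scaling are assumptions, not the authors' specification.

```python
import cv2
import numpy as np

def hand_features(mask: np.ndarray) -> np.ndarray:
    """Return a small feature vector from a binary (0/255) hand silhouette."""
    m = cv2.moments(mask, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    # Log-scale the Hu moments so they occupy a comparable numeric range.
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    area = float(len(ys))
    geometric = np.array([w / h, area / (w * h)])  # aspect ratio, extent

    return np.concatenate([hu, geometric]).astype(np.float32)

# The resulting vector would then be fed to a classifier such as a small
# multilayer perceptron trained on labelled posture examples.
```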
Robust reversible watermarking scheme based on wavelet-like transform
2013 IEEE International Conference on Signal and Image Processing Applications Pub Date: 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708032
R. T. Mohammed, B. Khoo
{"title":"Robust reversible watermarking scheme based on wavelet-like transform","authors":"R. T. Mohammed, B. Khoo","doi":"10.1109/ICSIPA.2013.6708032","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6708032","url":null,"abstract":"Watermarking reversibility is one of the basic requirements for medical imaging, military imaging, and remote sensing applications. In these fields a slight change in the original image can lead to a significant difference in the final decision making process. However, the reversibility alone is not enough for practical applications because the hidden data must be extracted even after unintentional attacks (e.g., noise addition, JPEG compression) so a robust (i.e., semi-fragile) reversible watermarking methods became required. In this paper, we present a new robust reversible watermarking method that utilizes the Slantlet transform (SLT) to transform image blocks and modifying the SLT coefficients to embed the watermark bits. If the watermarked image is not attacked, the method is completely reversible (i.e., the watermark and the original image will be recovered correctly). After JPEG compression, the hidden data can be extracted without error. Experimental results prove that the presented scheme achieves high visual quality, complete reversibility, and better robustness in comparison with the previous methods.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129720239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Depth error concealment based on decision making
2013 IEEE International Conference on Signal and Image Processing Applications Pub Date: 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708002
M. Ranjbari, A. Sali, H. A. Karim, F. Hashim
{"title":"Depth error concealment based on decision making","authors":"M. Ranjbari, A. Sali, H. A. Karim, F. Hashim","doi":"10.1109/ICSIPA.2013.6708002","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6708002","url":null,"abstract":"One of the common form of representing stereoscopic video is combination of 2D video with its corresponding depth map which is made by a laser camera to illustrate depth in the video. When this type of video is transmitted over error prone channels, the packet loss leads to frame loss; and mostly this frame lost occur in depth frames. Thus, a depth error concealment based on decision making termed as DM-PV, which exploits high correlation of 2-D image and its corresponding depth map. The 2D image provide information about the missing frame in the depth sequence to assist the decision making process in order to conceal the lost frames. The process involves inserting proper blank frame and duplication of previous frames instead of missing frames in depth sequence. PSNR performance improves over frame copy method has no decision making. Furthermore, subjective quality of stereoscopic video is better using DM-PV.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130945448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
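The abstract above does not spell out the exact decision rule, but it implies choosing between duplicating the previous depth frame and inserting a blank frame, guided by the received 2-D texture video. The sketch below is one plausible reading of that idea; the similarity measure, the threshold, and the "blank" value of 128 are all our assumptions.

```python
import numpy as np

def conceal_depth_frame(prev_texture, cur_texture, prev_depth, threshold=10.0):
    """Return a replacement for a lost depth frame, using the texture frames as a guide."""
    # Mean absolute difference between consecutive texture frames.
    mad = np.mean(np.abs(cur_texture.astype(np.float64) -
                         prev_texture.astype(np.float64)))
    if mad < threshold:
        # Scene barely changed: duplicating the previous depth frame is safe.
        return prev_depth.copy()
    # Large change (e.g. a scene cut): a flat mid-range depth frame avoids
    # propagating a now-wrong depth map.
    return np.full_like(prev_depth, 128)
```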
Beamspace bearing estimation based on wavelet transform
2013 IEEE International Conference on Signal and Image Processing Applications Pub Date: 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708022
Jinxiang Du, Yan Ma
{"title":"Beamspace bearing estimation based on wavelet transform","authors":"Jinxiang Du, Yan Ma","doi":"10.1109/ICSIPA.2013.6708022","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6708022","url":null,"abstract":"In this paper, we propose a new wideband bearing estimation method based on wavelet transform. By analyzing the relationship between the wavelet transform of the frequency invariant beam's output and the array's beampattern, we derived spatial power spectrum based on wavelet transform (SPS-WT). The method has good performance on noise suppression by utilizing the statistical uncorrelation character between signals and noise, and also has high resolution on bearing estimation. The performance of the proposed method is illustrated in simulation results.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131337665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
On the effects of pre- and post-processing in video cartoonization with bilateral filters
2013 IEEE International Conference on Signal and Image Processing Applications Pub Date: 2013-10-01 DOI: 10.1109/ICSIPA.2013.6707974
Zoya Shahcheraghi, John See
{"title":"On the effects of pre- and post-processing in video cartoonization with bilateral filters","authors":"Zoya Shahcheraghi, John See","doi":"10.1109/ICSIPA.2013.6707974","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6707974","url":null,"abstract":"In recent years, advances in image-based artistic rendering have grown steadily with the additional leverage of image and video processing techniques. Video cartoonization or stylization is the process of artificially incorporating cartoon-like effects to photorealistic input videos. This paper investigates the effects of integrating relevant pre- and post-processing tasks to significantly improve the quality of cartoonized videos processed with bilateral filters (BF). Our video cartoonization framework work extends the original Winnemöller's real-time video abstraction framework, which applies the edge-preserving BF with additional use of edge maps, luminance quantization and frame temporal coherency. In our work, we propose a contrast enhancement option by intensity stretching and Laplacian filtering to finetune the contrast levels of the pre-BF frames. For the post-BF recombined frames, an unsharp masking procedure is proposed to accentuate feature details in the final output video. Results from extensive experiments conducted by qualitative user evaluation underline the essentiality of pre- and post-processing tasks for improved video cartoonization.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115152966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
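A hedged per-frame sketch of the pipeline the abstract above describes: intensity stretching and Laplacian-based contrast adjustment before the bilateral filter, then unsharp masking afterwards. The parameter values and the exact way the Laplacian is combined with the frame are our assumptions, not the paper's settings, and the edge-map and luminance-quantization stages of the Winnemöller framework are omitted.

```python
import cv2
import numpy as np

def cartoonize_frame(frame: np.ndarray) -> np.ndarray:
    """Pre-process, bilateral-filter, and post-process one BGR video frame."""
    # Pre-BF: stretch intensities to the full range, then boost contrast by
    # subtracting a fraction of the Laplacian.
    stretched = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX)
    lap = cv2.Laplacian(stretched, cv2.CV_32F, ksize=3)
    pre = np.clip(stretched.astype(np.float32) - 0.3 * lap, 0, 255).astype(np.uint8)

    # Edge-preserving smoothing (the abstraction step).
    smoothed = cv2.bilateralFilter(pre, d=9, sigmaColor=75, sigmaSpace=75)

    # Post-BF: unsharp masking to accentuate feature details.
    blur = cv2.GaussianBlur(smoothed, (0, 0), sigmaX=3)
    return cv2.addWeighted(smoothed, 1.5, blur, -0.5, 0)
```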
A non-destructive technique using 3D X-ray Computed Tomography to reveal semiconductor internal physical defects
2013 IEEE International Conference on Signal and Image Processing Applications Pub Date: 2013-10-01 DOI: 10.1109/ICSIPA.2013.6707977
C. H. Tan, C. Lau
{"title":"A non-destructive technique using 3D X-ray Computed Tomography to reveal semiconductor internal physical defects","authors":"C. H. Tan, C. Lau","doi":"10.1109/ICSIPA.2013.6707977","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6707977","url":null,"abstract":"This paper focuses on the application of 3D X-ray Computed Tomography (CT) to precisely detect and confirm semiconductor internal physical defects without the need to decapsulate the sample. Equipped with advanced technologies and innovations, today's X-ray machine is capable of reconstructing the two-dimension (2D) sliced images to form 3D images and videos in much shorter time. With the introduction of 3D X-ray CT designed for electronics field, failure mechanisms once only visible after destructive analysis can now be revealed in non-destructive way. The technique not only saves cost, it shortens the turnaround time tremendously and allows customer's response and relevant improvement actions to be taken more efficiently.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133981456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Use an efficient neural network to improve the Arabic handwriting recognition
2013 IEEE International Conference on Signal and Image Processing Applications Pub Date: 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708016
H. Hamad
{"title":"Use an efficient neural network to improve the Arabic handwriting recognition","authors":"H. Hamad","doi":"10.1109/ICSIPA.2013.6708016","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6708016","url":null,"abstract":"Using an efficient neural network for recognition and segmentation will definitely improve the performance and accuracy of the results; in addition to reduce the efforts and costs. This paper investigates and compares between results of four different artificial neural network models. The same algorithm has been applied for all with applying two major techniques, first, neural-segmentation technique, second, apply a new fusion equation. The neural techniques calculate the confidence values for each Prospective Segmentation Points (PSP) using the proposed classifiers in order to recognize the better model, this will enhance the overall recognition results of the handwritten scripts. The fusion equation evaluates each PSP by obtaining a fused value from three neural confidence values. CPU times and accuracies are also reported. Experiments that were performed of classifiers will be compared with each other and with the literature.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125740410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
Watermarking schemes to secure the face database and test images in a biometric system
2013 IEEE International Conference on Signal and Image Processing Applications Pub Date: 2013-10-01 DOI: 10.1109/ICSIPA.2013.6707990
Himanshu Agarwal, B. Raman, P. Atrey
{"title":"Watermarking schemes to secure the face database and test images in a biometric system","authors":"Himanshu Agarwal, B. Raman, P. Atrey","doi":"10.1109/ICSIPA.2013.6707990","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6707990","url":null,"abstract":"This paper attempts to solve the integrity issues of a compromised face biometric system using two watermarking schemes. Two new blind watermarking schemes, namely S1 and S2, are proposed to ensure the integrity of the training face database and of the test images, respectively. Scheme S1 is fragile spatial-domain based and scheme S2 works in the discrete cosine transformation (DCT) domain and is robust to channel noise. The novelty of S1 lies in the fact that it is lossless and the ratio of watermark bits to the size of the host image is 2.67, while S2 has better robustness than existing blind watermarking schemes. The performance of both schemes is evaluated on a subset of the Indian face database and the results show that both schemes verify the integrity with very high accuracy without affecting the performance of the biometric system.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130725516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
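The abstract above does not detail how S2 embeds bits in the DCT domain, so the snippet below shows a generic blind DCT-domain scheme instead: one bit per 8x8 block, encoded by ordering two mid-frequency coefficients, a common way to gain robustness to mild noise or compression. The coefficient positions and embedding strength are illustrative assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

P1, P2 = (3, 4), (4, 3)   # mid-frequency coefficient pair (assumed)
STRENGTH = 8.0            # embedding margin (assumed)

def embed_bit(block: np.ndarray, bit: int) -> np.ndarray:
    """Embed one bit into an 8x8 grayscale block; returns the watermarked block."""
    c = cv2.dct(block.astype(np.float32))
    a, b = c[P1], c[P2]
    # Enforce c[P1] > c[P2] for bit 1 and the reverse for bit 0, with a margin.
    if bit == 1 and a - b < STRENGTH:
        mid = (a + b) / 2
        c[P1], c[P2] = mid + STRENGTH / 2, mid - STRENGTH / 2
    elif bit == 0 and b - a < STRENGTH:
        mid = (a + b) / 2
        c[P1], c[P2] = mid - STRENGTH / 2, mid + STRENGTH / 2
    return cv2.idct(c)

def extract_bit(block: np.ndarray) -> int:
    """Blind extraction: only the coefficient positions are needed, not the original image."""
    c = cv2.dct(block.astype(np.float32))
    return int(c[P1] > c[P2])
```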
High security image steganography using IWT and graph theory
2013 IEEE International Conference on Signal and Image Processing Applications Pub Date: 2013-10-01 DOI: 10.1109/ICSIPA.2013.6708029
V. Thanikaiselvan, P. Arulmozhivarman
{"title":"High security image steganography using IWT and graph theory","authors":"V. Thanikaiselvan, P. Arulmozhivarman","doi":"10.1109/ICSIPA.2013.6708029","DOIUrl":"https://doi.org/10.1109/ICSIPA.2013.6708029","url":null,"abstract":"Steganography conceals the secret information inside the cover medium. There are two types of steganography techniques available practically. They are spatial domain steganography and Transform domain steganography. The objectives to be considered in the steganography methods are high capacity, imperceptibility and robustness. In this paper, a Color image steganography in transform domain is proposed. Reversible Integer Haar wavelet transform is applied to the R, G and B planes separately and the data is embedded in a random manner. Random selection of wavelet coefficients is based on the graph theory. This proposed system uses three different keys for embedding and extraction of the secret data, where key1(Subband Selection - SB) is used to select the Wavelet subband for embedding, key2(Selection of Co-effecients-SC) is used to select the co-efficients randomly and key3 (Selection of Bit length-SB) is used to select the number of bits to be embedded in the selected co-efficients. This method shows good imperceptibility, High capacity and Robustness.","PeriodicalId":440373,"journal":{"name":"2013 IEEE International Conference on Signal and Image Processing Applications","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115215978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
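The embedding domain named in the abstract above is the reversible (integer) Haar wavelet transform. A minimal 1-D lifting implementation is sketched below to show why the transform is exactly invertible on integer pixel data; the key-driven random coefficient selection and the graph-theoretic traversal of the paper are not reproduced here.

```python
import numpy as np

def int_haar_forward(x: np.ndarray):
    """Integer Haar lifting on a 1-D signal of even length; returns (approx, detail)."""
    even, odd = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = odd - even          # detail (high-pass) coefficients
    s = even + (d >> 1)     # approximation (low-pass), using floor division
    return s, d

def int_haar_inverse(s: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Exact inverse of int_haar_forward."""
    even = s - (d >> 1)
    odd = d + even
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

# Round-trip check on a small pixel row: reconstruction is bit-exact,
# which is what makes the scheme reversible.
row = np.array([52, 55, 61, 66, 70, 61, 64, 73])
s, d = int_haar_forward(row)
assert np.array_equal(int_haar_inverse(s, d), row)
```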