2017 International Conference on Systems, Signals and Image Processing (IWSSIP): Latest Publications

An analysis of the applicability of the TFD IP option for QoS assurance of multiple video streams in a congested network
2017 International Conference on Systems, Signals and Image Processing (IWSSIP) Pub Date : 2017-05-01 DOI: 10.1109/IWSSIP.2017.7965597
R. Chodorek, A. Chodorek
{"title":"An analysis of the applicability of the TFD IP option for QoS assurance of multiple video streams in a congested network","authors":"R. Chodorek, A. Chodorek","doi":"10.1109/IWSSIP.2017.7965597","DOIUrl":"https://doi.org/10.1109/IWSSIP.2017.7965597","url":null,"abstract":"The Trafïîc Flow Description (TFD) option is an experimental option of the IP protocol, designed by the Authors, able to assure signaling for QoS purposes. The option is used as a carrier of knowledge about forthcoming traffic. If planning horizons are short enough, this knowledge can be used for dynamic bandwidth allocation. In the paper an analysis of QoS assurance using the TFD option is presented The analysis was made for a case of QoS protection of multiple video streams. Results show that dynamic bandwidth allocation using the TFD option gives better QoS-related parameters than the typical approach to QoS assurance, based on the RSVP protocol.","PeriodicalId":302860,"journal":{"name":"2017 International Conference on Systems, Signals and Image Processing (IWSSIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128525387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
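To make the dynamic-allocation idea concrete, here is a minimal Python sketch of a per-interval allocator driven by per-flow announcements of forthcoming traffic, which is the role the abstract assigns to the TFD option. The announcement format, flow names and capacity figures are invented for illustration and do not reflect the actual experimental option layout.

```python
def allocate(link_capacity, announced_bits):
    """Toy dynamic allocation driven by per-flow announcements of the traffic
    expected in the next planning interval. The announcement content is a
    made-up stand-in for the knowledge carried by the TFD option."""
    total = sum(announced_bits.values())
    if total <= link_capacity:
        # everything fits; leftover capacity could serve best-effort traffic
        return dict(announced_bits)
    # congestion: scale each stream proportionally to its announcement
    scale = link_capacity / total
    return {flow: bits * scale for flow, bits in announced_bits.items()}

# three video streams announcing the bits they intend to send in the next 100 ms
announcements = {"cam1": 2.0e6, "cam2": 3.5e6, "cam3": 1.5e6}
print(allocate(link_capacity=5.0e6, announced_bits=announcements))
```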
Hierarchical co-segmentation of 3D point clouds for indoor scene
2017 International Conference on Systems, Signals and Image Processing (IWSSIP) Pub Date : 2017-05-01 DOI: 10.1109/IWSSIP.2017.7965590
Yan-Ting Lin
{"title":"Hierarchical co-segmentation of 3D point clouds for indoor scene","authors":"Yan-Ting Lin","doi":"10.1109/IWSSIP.2017.7965590","DOIUrl":"https://doi.org/10.1109/IWSSIP.2017.7965590","url":null,"abstract":"Segmentation of point clouds has been studied under a variety of scenarios. However, the segmentation of scanned point clouds for a clustered indoor scene remains significantly challenging due to noisy and incomplete data, as well as scene complexity. Based on the observation that objects in an indoor scene vary largely in scale but are typically supported by planes, we propose a co-segmentation approach. This technique utilizes the mutual agency between the point clouds captured at different times after the objects' poses change due to human actions. Hence, we hierarchically segment scenes from different times into patches and generate tree structures to store their relations. By iteratively clustering patches and co-analyzing them based on the relations between patches, we modify the tree structures and generate our results. To test the robustness of our method, we evaluate it on imperfectly scanned point clouds from a childroom, a bedroom, and two offices scenes.","PeriodicalId":302860,"journal":{"name":"2017 International Conference on Systems, Signals and Image Processing (IWSSIP)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126014652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
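The observation that indoor objects are typically supported by planes suggests first isolating the dominant supporting plane and treating the remaining points as object candidates. The sketch below is a generic RANSAC plane detector on a synthetic cloud, not the paper's hierarchical co-analysis; all parameters and the test data are invented.

```python
import numpy as np

def ransac_plane(points, iters=200, dist_thresh=0.02, rng=None):
    """Detect the dominant supporting plane (e.g. floor or desk top) with
    RANSAC; the remaining points are candidates for object patches."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                       # degenerate sample, skip
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# synthetic "desk + object" cloud: a plane at z = 0 plus a small box above it
rng = np.random.default_rng(1)
plane = np.c_[rng.uniform(0, 1, (500, 2)), rng.normal(0, 0.005, 500)]
box = rng.uniform([0.4, 0.4, 0.05], [0.5, 0.5, 0.15], (100, 3))
cloud = np.vstack([plane, box])
support = ransac_plane(cloud)
print("plane points:", support.sum(), "object points:", (~support).sum())
```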
CrowdSync: User generated videos synchronization using crowdsourcing
2017 International Conference on Systems, Signals and Image Processing (IWSSIP) Pub Date : 2017-05-01 DOI: 10.1109/IWSSIP.2017.7965599
R. M. C. Segundo, M. N. Amorim, Celso A. S. Santos
{"title":"CrowdSync: User generated videos synchronization using crowdsourcing","authors":"R. M. C. Segundo, M. N. Amorim, Celso A. S. Santos","doi":"10.1109/IWSSIP.2017.7965599","DOIUrl":"https://doi.org/10.1109/IWSSIP.2017.7965599","url":null,"abstract":"User Generated Videos are contents created by heterogeneous users around an event. Each user films the event with his point of view, and according to his limitations. In this scenario, it is impossible to guarantee that all the videos will be stable, focused on a point of the event or other characteristics that turn the automatic video synchronization process possible. Focused on this scenario we propose the use of crowdsourcing techniques in video synchronization (CrowdSync). The crowd is not affected by heterogeneous videos as the automatic processes are, so it is possible to use them to process videos and find the synchronization points. In order to make this process possible, a structure is described that can manage both crowd and video synchronization: the Dynamic Alignment List (DAL). Therefore, we carried out two experiments to verify that the crowd can perform the proposed approach through two experiments: a crowd simulator and a small task based experiment.","PeriodicalId":302860,"journal":{"name":"2017 International Conference on Systems, Signals and Image Processing (IWSSIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134285874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
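A crowdsourced synchronizer ultimately has to turn noisy pairwise offset answers into a consistent timeline. The sketch below aggregates redundant crowd answers with a median and propagates offsets from a reference video; it is a toy stand-in for the Dynamic Alignment List, whose internal design the abstract does not detail, and the answer format is assumed.

```python
from collections import defaultdict
from statistics import median

def aggregate_offsets(answers):
    """Crowd answers are (video_a, video_b, offset_seconds) meaning
    'video_b starts offset_seconds after video_a'. Redundant answers for
    the same pair are reduced with the median to tolerate careless workers."""
    grouped = defaultdict(list)
    for a, b, off in answers:
        grouped[(a, b)].append(off)
    return {pair: median(vals) for pair, vals in grouped.items()}

def align(reference, pairwise):
    """Propagate pairwise offsets from a reference video over the graph of
    synchronized pairs to obtain a start time for every reachable video."""
    timeline = {reference: 0.0}
    frontier = [reference]
    while frontier:
        cur = frontier.pop()
        for (a, b), off in pairwise.items():
            if a == cur and b not in timeline:
                timeline[b] = timeline[a] + off
                frontier.append(b)
            elif b == cur and a not in timeline:
                timeline[a] = timeline[b] - off
                frontier.append(a)
    return timeline

answers = [("v1", "v2", 3.1), ("v1", "v2", 2.9), ("v1", "v2", 3.0),
           ("v2", "v3", -1.2), ("v2", "v3", -1.0)]
print(align("v1", aggregate_offsets(answers)))   # {'v1': 0.0, 'v2': 3.0, 'v3': 1.9}
```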
Non-contact signal detection and processing techniques for cardio-respiratory thoracic activity
2017 International Conference on Systems, Signals and Image Processing (IWSSIP) Pub Date : 2017-05-01 DOI: 10.1109/IWSSIP.2017.7965579
D. Shahu, Ismail Baxhaku, Alban Rakipi
{"title":"Non-contact signal detection and processing techniques for cardio-respiratory thoracic activity","authors":"D. Shahu, Ismail Baxhaku, Alban Rakipi","doi":"10.1109/IWSSIP.2017.7965579","DOIUrl":"https://doi.org/10.1109/IWSSIP.2017.7965579","url":null,"abstract":"The possibility to monitor the cardio-respiratory activity in a non-invasive way, by measuring thoracic displacement using cost-effective microwave Doppler technology, has been studied. Several laboratory tests were performed in order to demonstrate the feasibility of the proposed electromagnetic measurement method. Time domain signal processing techniques have been used in order to evaluate the main frequency and rate variability of the heartbeat and respiration signals. They have shown a very good correlation coefficient with the electrocardiogram and spirometer results, used as reference instruments. On the other hand, a simple electromagnetic model was developed in order to analyze the scattering problem, using appropriate analytical and numerical techniques.","PeriodicalId":302860,"journal":{"name":"2017 International Conference on Systems, Signals and Image Processing (IWSSIP)","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123350111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
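A standard way to separate heartbeat and respiration from a single thoracic-displacement signal is band-pass filtering into the two physiological bands followed by peak counting. The sketch below applies that generic recipe to a synthetic signal; the sampling rate, band edges and signal model are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

fs = 100.0                                # assumed sampling rate of the demodulated signal, Hz
t = np.arange(0, 60, 1 / fs)
# synthetic thoracic displacement: respiration (~0.25 Hz) + heartbeat (~1.2 Hz) + noise
signal = (1.0 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)
          + 0.05 * np.random.default_rng(0).normal(size=t.size))

def bandpass(x, lo, hi, fs, order=2):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

resp = bandpass(signal, 0.1, 0.5, fs)     # respiration band
heart = bandpass(signal, 0.8, 2.0, fs)    # heartbeat band

def rate_bpm(x, fs, min_distance_s):
    """Count peaks separated by at least min_distance_s and convert to per-minute rate."""
    peaks, _ = find_peaks(x, distance=int(min_distance_s * fs))
    return 60.0 * len(peaks) / (len(x) / fs)

print("respiration rate approx.", rate_bpm(resp, fs, 2.0), "breaths/min")
print("heart rate approx.", rate_bpm(heart, fs, 0.4), "beats/min")
```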
An effective consistency correction and blending method for camera-array-based microscopy imaging
2017 International Conference on Systems, Signals and Image Processing (IWSSIP) Pub Date : 2017-05-01 DOI: 10.1109/IWSSIP.2017.7965602
J. Bao, Jingtao Fan, Xiaowei Hu, Jinnan Wang, Lei Wang
{"title":"An effective consistency correction and blending method for camera-array-based microscopy imaging","authors":"J. Bao, Jingtao Fan, Xiaowei Hu, Jinnan Wang, Lei Wang","doi":"10.1109/IWSSIP.2017.7965602","DOIUrl":"https://doi.org/10.1109/IWSSIP.2017.7965602","url":null,"abstract":"Camera-array-based microscopy imaging is an effective scheme to satisfy the requirements of wide field of view, high spatial resolution and real-time imaging simultaneously. However, with the increasing number of cameras and expansion of field of view, the nonlinear camera response, vignetting of camera lens, ununiformity of illumination system, and the low overlapping ratio all lower the quality of microscopic image stitching and blending. In this paper, we propose an image consistency correction and blending method for 5 × 7 camera-array-based 0.17-gigapixel microscopic images. Firstly, we establish an image consistency correction model. Then, we obtain the response functions and compensation factors. Next, we restore captured images based on above model. Finally, we adopt an improved alpha-blending method to stitch and blend images of multiple fields of view. Experimental results show that our proposed method eliminates the inconsistency and seams among stitched images effectively.","PeriodicalId":302860,"journal":{"name":"2017 International Conference on Systems, Signals and Image Processing (IWSSIP)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124932410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
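The two ingredients named in the abstract, a consistency (compensation) correction between tiles and alpha blending across the overlap, can be illustrated with a heavily reduced one-dimensional-gain version. The sketch below uses a single multiplicative gain per tile and plain linear alpha blending; the paper's full response-function model is not reproduced, and the data and overlap width are synthetic.

```python
import numpy as np

def gain_compensate(left, right, overlap):
    """Estimate a multiplicative factor for the right tile so that the two
    tiles agree in their shared 'overlap'-pixel-wide strip -- a much-reduced
    stand-in for the paper's consistency-correction model."""
    strip_l = left[:, -overlap:].astype(float)
    strip_r = right[:, :overlap].astype(float)
    gain = strip_l.mean() / max(strip_r.mean(), 1e-6)
    return right.astype(float) * gain

def alpha_blend(left, right, overlap):
    """Linear alpha blending across the overlap, then simple concatenation."""
    alpha = np.linspace(1.0, 0.0, overlap)            # weight of the left tile
    blended = alpha * left[:, -overlap:] + (1 - alpha) * right[:, :overlap]
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

rng = np.random.default_rng(0)
scene = rng.uniform(80, 160, (64, 120))
left, right = scene[:, :70], scene[:, 50:] * 0.85     # right camera is 15% darker
right = gain_compensate(left, right, overlap=20)
mosaic = alpha_blend(left.astype(float), right, overlap=20)
print("mosaic shape:", mosaic.shape)                   # (64, 120)
```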
Evaluation of background noise for significance level identification
2017 International Conference on Systems, Signals and Image Processing (IWSSIP) Pub Date : 2017-05-01 DOI: 10.1109/IWSSIP.2017.7965614
J. Poměnková, E. Klejmova, T. Malach
{"title":"Evaluation of background noise for significance level identification","authors":"J. Poměnková, E. Klejmova, T. Malach","doi":"10.1109/IWSSIP.2017.7965614","DOIUrl":"https://doi.org/10.1109/IWSSIP.2017.7965614","url":null,"abstract":"The paper deals with the identification of the significance level for testing the time-frequency transform of the data. The usual procedure of time-frequency significance testing is based on the knowledge of background spectrum. Very often, we have certain expectations about the character the background noise (White noise, Red noise, etc.). Our paper deals with the case when the character of the noise is unknown and may not be Gaussian despite our assumptions. Thus, we propose how to identify our own critical values for testing time-frequency transform significance with respect to the data character. We compare our findings with the critical quantile of χ22.","PeriodicalId":302860,"journal":{"name":"2017 International Conference on Systems, Signals and Image Processing (IWSSIP)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122651389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
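The general idea of data-driven critical values can be sketched with a Monte Carlo surrogate approach: fit a simple noise model to the data, generate surrogates, and take an empirical quantile of their spectral power instead of the theoretical χ²₂-based value. This is only an illustration of the concept under an assumed AR(1) surrogate model and a plain periodogram; the paper's actual procedure for time-frequency transforms may differ.

```python
import numpy as np
from scipy.stats import chi2

def empirical_critical_value(x, n_surrogates=500, alpha=0.95, seed=0):
    """Pooled empirical alpha-quantile of normalized periodogram power over
    AR(1) surrogates fitted to the data -- a simplified, data-driven
    alternative to a chi-square-based significance level."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float) - np.mean(x)
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]          # lag-1 autocorrelation of the data
    n = len(x)
    powers = []
    for _ in range(n_surrogates):
        e = rng.normal(0, 1, n)
        s = np.empty(n)
        s[0] = e[0]
        for k in range(1, n):                       # AR(1) surrogate series
            s[k] = r1 * s[k - 1] + e[k]
        p = np.abs(np.fft.rfft(s - s.mean())) ** 2
        powers.append(p / p.mean())                 # normalize by mean power
    return np.quantile(np.concatenate(powers), alpha)

x = np.cumsum(np.random.default_rng(1).normal(size=512)) * 0.1   # strongly non-white data
print("empirical 95% critical value:", empirical_critical_value(x))
print("chi-square(2)/2 benchmark   :", chi2.ppf(0.95, df=2) / 2)
```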
Encoding mode selection in HEVC with the use of noise reduction
2017 International Conference on Systems, Signals and Image Processing (IWSSIP) Pub Date : 2017-05-01 DOI: 10.1109/IWSSIP.2017.7965589
O. Stankiewicz, K. Wegner, D. Karwowski, J. Stankowski, K. Klimaszewski, T. Grajek
{"title":"Encoding mode selection in HEVC with the use of noise reduction","authors":"O. Stankiewicz, K. Wegner, D. Karwowski, J. Stankowski, K. Klimaszewski, T. Grajek","doi":"10.1109/IWSSIP.2017.7965589","DOIUrl":"https://doi.org/10.1109/IWSSIP.2017.7965589","url":null,"abstract":"This paper concerns optimization of encoding in HEVC. A novel method is proposed in which encoding modes, e.g. coding block structure, prediction types and motion vectors, are selected basing on noise-reduced version of the input sequence, while the content, e.g. transform coefficients, are coded basing on the unaltered input sequence. Although the proposed scheme involves encoding of two versions of the input sequence, the proposed realization ensures that the complexity is only negligibly larger than complexity of a single encoder. The proposal has been implemented and assessed. The experimental results show that the proposal provides up to 1.5% bitrate reduction while preserving the same video quality.","PeriodicalId":302860,"journal":{"name":"2017 International Conference on Systems, Signals and Image Processing (IWSSIP)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122155031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 46
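The core idea, decide the coding structure on a denoised frame but code the original samples, can be shown with a toy quadtree: a variance-driven split decision replaces the real HEVC rate-distortion mode decision, and a count of non-zero quantized DCT coefficients stands in for the bitrate. Everything here (thresholds, block sizes, the cost proxy) is an assumption for illustration, not the paper's encoder.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.fft import dctn

def split_blocks(frame, y, x, size, min_size=8, thresh=100.0):
    """Recursive variance-driven splitting: a crude stand-in for the HEVC
    coding-tree decision, applied to the (denoised) decision frame."""
    block = frame[y:y + size, x:x + size]
    if size <= min_size or block.var() <= thresh:
        return [(y, x, size)]
    h = size // 2
    leaves = []
    for dy in (0, h):
        for dx in (0, h):
            leaves.extend(split_blocks(frame, y + dy, x + dx, h, min_size, thresh))
    return leaves

def coded_cost(frame, leaves, q=20.0):
    """Bit-cost proxy: number of non-zero quantized DCT coefficients when the
    *original* frame is coded with the given block structure."""
    nz = 0
    for y, x, size in leaves:
        coeffs = dctn(frame[y:y + size, x:x + size], norm="ortho")
        nz += int(np.count_nonzero(np.round(coeffs / q)))
    return nz

rng = np.random.default_rng(0)
original = rng.normal(128, 5, (64, 64)) + 40 * (np.indices((64, 64)).sum(0) > 64)
denoised = gaussian_filter(original, sigma=1.5)

leaves_from_denoised = []
for y in range(0, 64, 32):
    for x in range(0, 64, 32):
        leaves_from_denoised.extend(split_blocks(denoised, y, x, 32))
print("blocks:", len(leaves_from_denoised),
      "cost proxy:", coded_cost(original, leaves_from_denoised))
```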
On buffer overflow duration in WSN with a vacation-type power saving mechanism
2017 International Conference on Systems, Signals and Image Processing (IWSSIP) Pub Date : 2017-05-01 DOI: 10.1109/IWSSIP.2017.7965620
W. Kempa
{"title":"On buffer overflow duration in WSN with a vacation-type power saving mechanism","authors":"W. Kempa","doi":"10.1109/IWSSIP.2017.7965620","DOIUrl":"https://doi.org/10.1109/IWSSIP.2017.7965620","url":null,"abstract":"A queue-based model of WSN's node with power saving mechanism described by the single vacation policy is considered. Whenever the queue of packets directed to the node becomes empty, the radio transmitter/receiver is being switched off for a random and generally distributed time period. During the vacation the processing is blocked and the incoming packets are buffered. Modelling the transient behavior of the node by an M/G/1/N-type system with single server vacations, the CDF (cumulative distribution function) of buffer overflow duration, conditioned by the initial buffer state, is investigated. Applying analytical approach based on the idea of embedded Markov chain, integral equations and linear algebra, the compact-form representation for the first buffer overflow duration CDF is found. Hence, the formula for the CDF of next such periods is derived. Moreover, probability distributions of the number of packet losses in successive buffer overflow periods are found. Numerical example is attached as well.","PeriodicalId":302860,"journal":{"name":"2017 International Conference on Systems, Signals and Image Processing (IWSSIP)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126753848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
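The analytical results can be cross-checked empirically with a discrete-event simulation of a finite-buffer single-server queue with a single vacation taken each time the queue empties. The sketch below measures how long the buffer stays full (here defined as the time from the buffer becoming full until the next departure frees a slot); arrival rate, service and vacation distributions, and the buffer size are arbitrary illustrative choices, not the paper's numerical example.

```python
import random
import statistics

def simulate(lam=1.1, service=lambda: random.expovariate(1.2),
             vacation=lambda: random.uniform(0.5, 1.5),
             capacity=10, horizon=100_000.0, seed=1):
    """Toy event-driven simulation of an M/G/1/N-type queue with a single
    server vacation taken whenever the queue empties."""
    random.seed(seed)
    t = 0.0
    queue = 0                              # packets waiting or in service
    next_arrival = random.expovariate(lam)
    next_departure = float("inf")
    vacation_end = vacation()              # assume the node starts empty and on vacation
    overflow_start = None
    overflow_durations = []

    while t < horizon:
        t = min(next_arrival, next_departure, vacation_end)
        if t == next_arrival:
            if queue < capacity:
                queue += 1
                # start service only if the server is awake and idle
                if next_departure == float("inf") and vacation_end == float("inf"):
                    next_departure = t + service()
            if queue == capacity and overflow_start is None:
                overflow_start = t         # buffer just became full
            next_arrival = t + random.expovariate(lam)
        elif t == next_departure:
            queue -= 1
            if overflow_start is not None:
                overflow_durations.append(t - overflow_start)
                overflow_start = None
            if queue > 0:
                next_departure = t + service()
            else:
                next_departure = float("inf")
                vacation_end = t + vacation()   # single vacation upon emptying
        else:                               # vacation ends
            vacation_end = float("inf")
            if queue > 0:
                next_departure = t + service()

    return overflow_durations

durations = simulate()
print("observed overflow periods:", len(durations))
if durations:
    print("mean overflow duration:", statistics.mean(durations))
```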
Improving matching performance of the keypoints in images of 3D scenes by using depth information
2017 International Conference on Systems, Signals and Image Processing (IWSSIP) Pub Date : 2017-05-01 DOI: 10.1109/IWSSIP.2017.7965571
K. Matusiak, P. Skulimowski, P. Strumiłło
{"title":"Improving matching performance of the keypoints in images of 3D scenes by using depth information","authors":"K. Matusiak, P. Skulimowski, P. Strumiłło","doi":"10.1109/IWSSIP.2017.7965571","DOIUrl":"https://doi.org/10.1109/IWSSIP.2017.7965571","url":null,"abstract":"Keypoint detection is a basic step in many computer vision algorithms aimed at recognition of objects, automatic navigation, medicine and other application fields. Successful implementation of higher level image analysis tasks, however, is conditioned by reliable detection of characteristic image local regions termed keypoints. A large number of keypoint detection algorithms has been proposed and verified. The main part of this work is devoted to description of an original keypoint detection algorithm that incorporates depth information computed from stereovision cameras or other depth sensing devices. It was shown that filtering out keypoints that are context dependent, e.g. located on object boundaries can improve the matching performance of the keypoints which is the basis for object recognition tasks. This improvement was shown quantitatively by comparing the proposed algorithm to the widely accepted SIFT keypoint detector algorithm. Our study is motivated by a development of a system aimed at aiding the visually impaired in space perception and object identification.","PeriodicalId":302860,"journal":{"name":"2017 International Conference on Systems, Signals and Image Processing (IWSSIP)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131869623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
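The boundary-filtering idea can be sketched independently of any particular detector: given keypoint positions and a dense depth map, keep only keypoints whose local depth neighborhood is smooth, since keypoints on depth discontinuities mix foreground and background and tend to match poorly. The window size and depth-jump threshold below are illustrative assumptions.

```python
import numpy as np

def filter_boundary_keypoints(keypoints, depth, window=3, depth_jump=0.15):
    """Discard keypoints sitting on depth discontinuities (object boundaries).
    'keypoints' is an (N, 2) array of (row, col) positions; 'depth' is a
    dense depth map in metres."""
    kept = []
    h, w = depth.shape
    for r, c in np.asarray(keypoints, int):
        r0, r1 = max(r - window, 0), min(r + window + 1, h)
        c0, c1 = max(c - window, 0), min(c + window + 1, w)
        patch = depth[r0:r1, c0:c1]
        if np.ptp(patch) < depth_jump:     # locally smooth depth -> keep
            kept.append((r, c))
    return np.array(kept)

# toy scene: background at 3 m with a box at 1 m; keypoints on and off its edge
depth = np.full((100, 100), 3.0)
depth[30:70, 30:70] = 1.0
keypoints = [(50, 50), (50, 30), (20, 20), (70, 69)]
print(filter_boundary_keypoints(keypoints, depth))   # boundary keypoints are dropped
```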
Fast cloud image segmentation with superpixel analysis based convolutional networks
2017 International Conference on Systems, Signals and Image Processing (IWSSIP) Pub Date : 2017-05-01 DOI: 10.1109/IWSSIP.2017.7965591
Lifang Wu, Jiaoyu He, Meng Jian, Jianan Zhang, Yunzhen Zou
{"title":"Fast cloud image segmentation with superpixel analysis based convolutional networks","authors":"Lifang Wu, Jiaoyu He, Meng Jian, Jianan Zhang, Yunzhen Zou","doi":"10.1109/IWSSIP.2017.7965591","DOIUrl":"https://doi.org/10.1109/IWSSIP.2017.7965591","url":null,"abstract":"Due to the various noises, the cloud image segmentation becomes a big challenge for atmosphere prediction. CNN is capable of learning discriminative features from complex data, but this may be quite time-consuming in pixel-level segmentation problems. In this paper we propose superpixel analysis based CNN (SP-CNN) for high efficient cloud image segmentation. SP-CNN employs image over-segmentation of superpixels as basic entities to preserve local consistency. SP-CNN takes the image patches centered at representative pixels in every superpixel as input, and all superpixels are classified as cloud or non-cloud part by voting of the representative pixels. It greatly reduces the computational burden on CNN learning. In order to avoid the ambiguity from superpixel boundaries, SP-CNN selects the representative pixels uniformly from the eroded superpixels. Experimental analysis demonstrates that SP-CNN guarantees both the effectiveness and efficiency in cloud segmentation.","PeriodicalId":302860,"journal":{"name":"2017 International Conference on Systems, Signals and Image Processing (IWSSIP)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115779098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
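The superpixel side of the pipeline (over-segment, erode each superpixel away from its ambiguous boundary, sample representative pixels, vote) can be sketched without the trained network. The sketch assumes scikit-image for SLIC superpixels and uses a brightness threshold as a stand-in for the patch-wise CNN classifier; segment counts, erosion depth and the number of representatives are illustrative choices.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from skimage.segmentation import slic

def representative_pixels(image, n_segments=200, n_reps=5, seed=0):
    """Over-segment into superpixels, erode each mask to stay away from
    ambiguous boundaries, and sample a few representative pixels per
    superpixel (the pixels whose patches SP-CNN would classify)."""
    rng = np.random.default_rng(seed)
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    reps = {}
    for lab in np.unique(labels):
        mask = binary_erosion(labels == lab, iterations=2)
        ys, xs = np.nonzero(mask if mask.any() else (labels == lab))
        pick = rng.choice(len(ys), size=min(n_reps, len(ys)), replace=False)
        reps[int(lab)] = list(zip(ys[pick], xs[pick]))
    return labels, reps

def vote_labels(reps, classify_pixel):
    """Majority vote of the per-pixel classifier over each superpixel."""
    return {lab: int(np.round(np.mean([classify_pixel(p) for p in pixels])))
            for lab, pixels in reps.items()}

image = np.random.default_rng(1).random((128, 128, 3))
labels, reps = representative_pixels(image)
# stand-in classifier: bright pixels count as "cloud" (1), dark as "non-cloud" (0)
cloud_map = vote_labels(reps, lambda p: image[p[0], p[1]].mean() > 0.5)
print("superpixels labelled as cloud:", sum(cloud_map.values()), "of", len(cloud_map))
```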