2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI): Latest Publications

Space-invariant signature algorithm processing of ultrasound images for the detection and localization of early abnormalities in animal tissues
Kushagra Parolia, M. Gupta, P. Babyn, W. Zhang, D. Bitner
DOI: 10.1109/CISP-BMEI.2017.8302204
Abstract: In this paper we present an innovative space-variance approach, named the "Space-Invariant Signature Algorithm (SISA)", for processing images of active systems for the detection and localization of abnormalities, such as cancer cells, tumor growth, and dead cells, at an early stage. A SISA processing algorithm is developed and tested on animal tissues such as pig and chicken tissue. An abnormality in an active system can be defined as an obstacle or failure that impedes activities in the tissue, such as the smooth flow of blood or electrical signals. Because of this impeding nature, an abnormality induces some parameter perturbations. Using the SISA approach, these perturbations were detected in a preliminary experiment on animal tissues. The degree and position of the space-variance help in detecting and localizing an abnormality even at an early (incipient) stage. The space-variance signature pattern is named the 'SISA signature pattern'. In the absence of any abnormality the signature pattern is space-invariant, whereas in the presence of an abnormality the SISA signature pattern varies in space (space-variant). These basic experimental studies on animal tissues using ultrasound imaging strongly suggest the SISA approach as a possible non-invasive method for the detection and localization of abnormalities, such as cancer cells, in biological tissues.
pp. 1-7, 2017-10-01
Citations: 0
Security mechanism of video content integrated broadcast control platform under triple play
YU Peng, Nenghuan Zhang, Shengyan Zhang, Qi Wang
DOI: 10.1109/CISP-BMEI.2017.8301930
Abstract: Under the new format of triple play, video content interacts in the integrated broadcast control platform, which brings security risks and the problem of tracing liability among participants in a dispute. We propose a security mechanism for the integrated broadcast control platform under triple play. The mechanism builds a bit commitment protocol on a hash function to prevent tampering with video during public-network transport, and uses an exchanged-ElGamal encryption mechanism to carry out a confidential comparison in the event of a dispute, tracing the responsible party. Theoretical analysis shows that the method is secure and feasible, and can effectively protect video content in an integrated broadcast control environment.
pp. 1-5, 2017-10-01
Citations: 2
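The hash-based bit commitment the abstract describes can be sketched as a simple commit/reveal pair. This is a minimal illustration, not the authors' implementation: the function names, the SHA-256 choice, and the nonce scheme are assumptions, and a real platform would commit per video segment and pair this with the ElGamal comparison step for dispute resolution.

```python
import hashlib
import os

def commit(video_bytes, nonce=None):
    """Commit phase: publish the digest, keep the nonce secret until reveal."""
    nonce = nonce or os.urandom(16)
    digest = hashlib.sha256(nonce + video_bytes).hexdigest()
    return digest, nonce

def verify(video_bytes, nonce, digest):
    """Reveal phase: any party recomputes the digest to detect tampering."""
    return hashlib.sha256(nonce + video_bytes).hexdigest() == digest

segment = b"\x00\x01video-payload"
d, n = commit(segment)
assert verify(segment, n, d)            # untampered content verifies
assert not verify(segment + b"x", n, d) # any modification breaks the commitment
```

The commitment is binding (changing the video after committing breaks verification) and hiding (the digest alone reveals nothing usable about the content), which is what makes tampering during public-network transport detectable.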
An image patch matching method based on multi-feature fusion
Xiangru Yu, Yukun Guo, Jinping Li, Fudong Cai
DOI: 10.1109/CISP-BMEI.2017.8302044
Abstract: Appropriate features are very important for the robustness and effectiveness of matching algorithms. Current algorithms generally depend on descriptors such as SIFT and SURF, so they consider only the information around key points while ignoring knowledge of the whole image, which makes them prone to false matches. We propose a novel matching method, multi-feature fusion, which takes full advantage of geometric, gray-level, color, and texture features. We validate the method on images captured from practical applications. Experiments show that it can effectively complete the task of image patch matching.
pp. 1-6, 2017-10-01
Citations: 2
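One common way to fuse several feature channels, sketched here as an assumption since the abstract does not give the fusion rule, is to compute a per-feature distance between the query patch and each candidate and combine the distances with a weighted sum. Everything below (function names, absolute-difference distances, equal weights) is illustrative, not the paper's method.

```python
def fused_distance(distances, weights):
    """Combine per-feature distances (e.g. geometric, gray, color, texture)
    into one matching score via a normalized weighted sum; lower is better."""
    assert len(distances) == len(weights)
    return sum(d * w for d, w in zip(distances, weights)) / sum(weights)

def best_match(query_feats, candidates, weights):
    """Return the index of the candidate patch with the smallest fused
    distance to the query, using absolute difference per feature channel."""
    def dist_vector(cand):
        return [abs(q - c) for q, c in zip(query_feats, cand)]
    return min(range(len(candidates)),
               key=lambda i: fused_distance(dist_vector(candidates[i]), weights))

query = [0.2, 0.5, 0.1, 0.9]
candidates = [[0.25, 0.5, 0.1, 0.9], [0.9, 0.9, 0.9, 0.9]]
assert best_match(query, candidates, [1, 1, 1, 1]) == 0
```

The weights let a single unreliable channel (for example, color under changing illumination) be down-weighted without discarding it entirely.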
Beating heart motion prediction using iterative optimal sine filtering
Bo Yang, Tingting Cao, Wenfeng Zheng
DOI: 10.1109/CISP-BMEI.2017.8302214
Abstract: A novel motion prediction algorithm is proposed to robustly track the beating heart in minimally invasive surgery. The movement of points of interest (POI) on heart tissue is modeled with a Dual Time-Varying Fourier Series (DTVFS). The Fourier coefficients and the frequencies of the DTVFS model are estimated separately using dual Kalman filtering. An iterative optimal sine filtering algorithm is developed that accurately measures the instantaneous frequencies of the breathing cycle and the heartbeat from the motion curves of the POI. The proposed method is verified on a simulated dataset and on real datasets captured by the da Vinci surgical system.
pp. 1-5, 2017-10-01
Citations: 1
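The "dual" series idea can be sketched as the sum of two harmonic series, one driven by the respiratory phase and one by the cardiac phase, where each phase is the accumulated integral of a time-varying instantaneous frequency. This is a minimal sketch of the model form only; the coefficient layout and function names are assumptions, and the paper's dual Kalman estimation of the coefficients and frequencies is not shown.

```python
import math

def accumulate_phase(freqs_hz, dt):
    """Integrate instantaneous frequency samples into a phase trajectory (rad),
    so a time-varying frequency still yields a well-defined oscillation phase."""
    phase, out = 0.0, []
    for f in freqs_hz:
        phase += 2.0 * math.pi * f * dt
        out.append(phase)
    return out

def dtvfs_sample(coeffs_resp, coeffs_card, phi_r, phi_c):
    """One sample of a dual Fourier series: the POI displacement is the sum of
    a respiratory harmonic series and a cardiac harmonic series. coeffs_* are
    lists of (a_k, b_k) pairs for harmonics k = 1, 2, ..."""
    def series(coeffs, phi):
        return sum(a * math.cos(k * phi) + b * math.sin(k * phi)
                   for k, (a, b) in enumerate(coeffs, start=1))
    return series(coeffs_resp, phi_r) + series(coeffs_card, phi_c)

# With a single cosine harmonic in each series, both peak at zero phase.
assert abs(dtvfs_sample([(1.0, 0.0)], [(1.0, 0.0)], 0.0, 0.0) - 2.0) < 1e-9
```

Separating the two phases is what lets a filter track breathing and heartbeat independently even when their frequencies drift during surgery.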
A miniaturized phased high intensity focused ultrasound transducer-driven system with MR compatibility
Li Gong, G. Shen, S. Qiao, Wenjie Liu, Yuchi Yi
DOI: 10.1109/CISP-BMEI.2017.8302248
Abstract: A new transducer-driving system is designed for three key characteristics: lower harmonic energy, higher integration, and a more flexible wave output mode. Following the signal flow, the system has three main parts: a controlling module based on an FPGA, a wave generator group, and a signal processing group. The FPGA concurrently controls the wave generator groups through parallel digital outputs. The wave generator, which can produce continuous sine waves and pulse waves at a low harmonic energy level, uses a programmable Direct Digital Synthesizer (DDS) with a 12-bit Digital-to-Analog Converter (DAC) sampling at up to 180 MSPS. The signal processing group contains the amplification module and the matching network group for the transducers. The sine wave is amplified by an integrated operational amplifier (op amp). The matching network is designed for a transducer with a frequency of 1.36 MHz. Measurement of one channel's output shows a frequency accuracy of 1.36 MHz ± 0.04 MHz, and the wave mode can be configured as a continuous sine wave or a pulse wave. In an FFT analysis, the amplitude of each harmonic is more than 50 dB below the fundamental component. The acoustic power of one circular transducer with a radius of 5 mm exceeds 2.8 W.
pp. 1-5, 2017-10-01
Citations: 0
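A phase-accumulator DDS is programmed with a frequency tuning word FTW = round(f_out · 2^N / f_clk), where N is the accumulator width. The sketch below assumes a common 32-bit accumulator (the abstract does not state the width) and uses the 180 MSPS clock and 1.36 MHz target from the abstract; it is an illustration of the standard DDS tuning arithmetic, not the authors' register map.

```python
def dds_tuning_word(f_out_hz, f_clk_hz, acc_bits=32):
    """Frequency tuning word for a phase-accumulator DDS:
    FTW = round(f_out * 2^N / f_clk)."""
    return round(f_out_hz * (1 << acc_bits) / f_clk_hz)

def dds_actual_freq(ftw, f_clk_hz, acc_bits=32):
    """Frequency actually synthesized for a given tuning word:
    f = FTW * f_clk / 2^N."""
    return ftw * f_clk_hz / (1 << acc_bits)

ftw = dds_tuning_word(1.36e6, 180e6)
f_actual = dds_actual_freq(ftw, 180e6)
# With a 32-bit accumulator the frequency resolution is 180e6 / 2^32,
# about 0.042 Hz, so quantization error is negligible next to the
# ±0.04 MHz accuracy reported above.
assert abs(f_actual - 1.36e6) < 1.0
```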
Web-based training for radiologists of breast ultrasound
Xianhai Huang, L. Ling, Qinghua Huang, Yidi Lin, Xingzhang Long, Longzhong Liu
DOI: 10.1109/CISP-BMEI.2017.8302273
Abstract: Breast cancer remains the most common form of cancer and a leading cause of cancer death among women worldwide. Fortunately, its mortality can be significantly reduced through early detection and diagnosis. As one of the most commonly used diagnostic tools, ultrasonography (US) plays an important role in the detection and classification of breast tumors. In this paper, we introduce a large breast ultrasound image database that stores breast ultrasound images and pathology results from breast tumor patients together with their clinical diagnostic information. On top of this database we design a web-based training system with a feature scoring scheme based on the fifth edition of the Breast Imaging Reporting and Data System (BI-RADS) lexicon for US. The online training system automatically creates case-based exercises to train and guide newly employed or resident sonographers in diagnosing breast cancer from breast ultrasound images according to BI-RADS.
pp. 1-6, 2017-10-01
Citations: 1
Object identification and location used by the fruit and vegetable picking robot based on human-decision making
Yu Chen, Binbin Chen, Haitao Li
DOI: 10.1109/CISP-BMEI.2017.8302010
Abstract: The key to a picking robot is accurate identification and location at the fruit and vegetable picking site. This paper presents a method based on human decision-making, which can overcome difficulties caused by the light environment, leaf shading, fruit ripening, fruit overlapping, etc. First, a binocular vision system captures close-range pictures of the picking site; second, the picking points are chosen by human decision; then the corresponding points of the picking points are clicked on the screen, guided by epipolar geometry; finally, a coordinate transformation calculates the spatial coordinates of the picking points. A laboratory simulation of cucumber picking (4 groups, 10 picking points per group) showed maximum errors of 15.1 mm in the depth direction and 8.7 mm in the horizontal direction. Neither error showed a regular pattern, which was caused by pixel-level inaccuracy when the researchers clicked the picking points. Light conditions, whether sunny or cloudy, had little effect on the accuracy of identification and location. The research shows that the method satisfies the robot's need for accurate identification and location of picking points, so it can be applied in the design of fruit and vegetable picking robots, improving their simplicity and accuracy in practice.
pp. 1-5, 2017-10-01
Citations: 0
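Recovering a 3-D picking point from two clicked image points can be sketched with the standard rectified pinhole stereo model: disparity d = u_left − u_right, depth Z = f·B/d. This is the textbook triangulation formula under assumed calibration values, not the paper's actual coordinate transformation or camera parameters.

```python
def triangulate(u_left, u_right, v, focal_px, baseline_mm, cx, cy):
    """3-D position of a clicked point from a rectified stereo pair.
    disparity d = u_left - u_right (pixels); depth Z = f * B / d;
    X, Y follow from back-projecting through the left camera centre."""
    d = u_left - u_right
    if d <= 0:
        raise ValueError("point must have positive disparity")
    z = focal_px * baseline_mm / d          # depth along the optical axis
    x = (u_left - cx) * z / focal_px        # horizontal offset from centre
    y = (v - cy) * z / focal_px             # vertical offset from centre
    return x, y, z

# Hypothetical calibration: 800 px focal length, 60 mm baseline,
# principal point (320, 240). A 40 px disparity puts the point 1200 mm away.
assert triangulate(360, 320, 240, 800, 60, 320, 240) == (60.0, 0.0, 1200.0)
```

The formula also explains why the reported depth error (15.1 mm) exceeds the horizontal error (8.7 mm): a one-pixel click error in disparity perturbs Z by roughly Z²/(f·B), which grows quadratically with distance, while X and Y errors grow only linearly.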
Convolutional neural networks based transfer learning for diabetic retinopathy fundus image classification
Xiaogang Li, Tiantian Pang, B. Xiong, Weixiang Liu, Ping Liang, Tianfu Wang
DOI: 10.1109/CISP-BMEI.2017.8301998
Abstract: Convolutional Neural Networks (CNNs) have achieved remarkable success in computer vision, largely because they can learn rich image representations from large-scale annotated data. In medical image analysis, large amounts of annotated data are not always available; the quantity of ground-truth data is sometimes insufficient to train CNNs from scratch without overfitting and convergence problems. Applying deep CNNs is therefore a challenge in the medical imaging domain, but transfer learning techniques have been shown to address it. In this paper, our target task is diabetic retinopathy fundus image classification using CNN-based transfer learning. Experiments are performed on 1014 and 1200 fundus images from the two publicly available DR1 and MESSIDOR datasets. We complete the target task with three different methods: 1) fine-tuning all network layers of each of several pre-trained CNN models; 2) fine-tuning a pre-trained CNN model layer-wise; 3) using pre-trained CNN models to extract features from fundus images and then training support vector machines on those features. Experimental results show that CNN-based transfer learning achieves better classification results on our small target-domain datasets by taking advantage of knowledge learned from related tasks with larger source-domain datasets. Transfer learning is a promising technique for bringing deep CNNs to the medical field when data is limited.
pp. 1-11, 2017-10-01
Citations: 116
Video copy detection using histogram based spatio-temporal features
Feifei Lee, Junjie Zhao, K. Kotani, Qiu Chen
DOI: 10.1109/CISP-BMEI.2017.8301917
Abstract: We propose a robust video copy detection method that uses combined histogram-based spatio-temporal features for massive video databases. The first feature is based on the Histogram of Oriented Gradients (HOG) descriptor, an effective descriptor for object detection, and describes the global appearance of a frame in the video sequence. The second, the temporal feature, is based on an ordinal measure representation, which is robust to size variation and color shifting. Furthermore, by adding an active search algorithm, the spatio-temporal features are combined to achieve fast and accurate video copy detection. Experiments show that our approach outperforms traditional algorithms in running time and detection accuracy.
pp. 1-5, 2017-10-01
Citations: 6
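The ordinal measure the abstract relies on replaces each block's mean intensity with its rank among all blocks of the frame, so the signature survives exactly the transformations copies undergo: brightness/contrast shifts and resizing. A minimal sketch of the rank computation (the block partitioning itself is omitted, and the function name is an assumption):

```python
def ordinal_measure(block_means):
    """Ordinal measure of one frame: replace each block's mean intensity
    by its rank among all blocks. Any monotonic intensity change (global
    brightness or contrast shift) leaves the ranks unchanged."""
    order = sorted(range(len(block_means)), key=lambda i: block_means[i])
    ranks = [0] * len(block_means)
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return ranks

# A brightened, contrast-stretched copy keeps the same ordinal signature.
frame = [10.0, 80.0, 30.0, 55.0]
brighter = [v * 1.5 + 20 for v in frame]
assert ordinal_measure(frame) == ordinal_measure(brighter) == [0, 3, 1, 2]
```

Matching then reduces to comparing short integer vectors frame by frame, which is what makes the feature cheap enough for massive databases.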
A novel infrared and visible face fusion recognition method based on non-subsampled contourlet transform
Guodon Liu, Shuai Zhang, Zhihua Xie
DOI: 10.1109/CISP-BMEI.2017.8301965
Abstract: Near-infrared and visible face fusion recognition is an important direction in unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible face fusion recognition. First, NSCT is applied to the infrared and visible face images, exploiting image information at multiple scales, orientations, and frequency bands. Then, to extract effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the Local Gabor Binary Pattern (LGBP) and Local Binary Pattern (LBP) are applied to different frequency parts to obtain a robust representation of the infrared and visible face images. Finally, score-level fusion combines all the features for classification. The proposed method is tested on the HITSZ Lab2 visible and near-infrared face database. Experimental results show that the method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
pp. 1-6, 2017-10-01
Citations: 4
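The LBP operator applied to the NSCT sub-bands can be sketched in its basic 3×3 form: each of the eight neighbours is thresholded against the centre pixel and the resulting bits are packed into one byte. This is the textbook operator only; the bit ordering below is an arbitrary choice, and the paper's LGBP variant (Gabor filtering before LBP) is not shown.

```python
def lbp_code(img, r, c):
    """Basic 8-neighbour local binary pattern at pixel (r, c): threshold
    each neighbour against the centre value and pack the bits clockwise
    from the top-left neighbour. img is a 2-D list of intensities."""
    centre = img[r][c]
    neighbours = [img[r - 1][c - 1], img[r - 1][c], img[r - 1][c + 1],
                  img[r][c + 1],     img[r + 1][c + 1], img[r + 1][c],
                  img[r + 1][c - 1], img[r][c - 1]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= centre:
            code |= 1 << bit
    return code

# Top and left neighbours are brighter than the centre (4), the rest darker:
# bits 0, 1, 2 and 7 are set, giving 0b10000111 = 135.
patch = [[5, 5, 5],
         [5, 4, 3],
         [3, 3, 3]]
assert lbp_code(patch, 1, 1) == 135
```

Because the code depends only on sign comparisons with the centre pixel, it is invariant to monotonic illumination changes, which is why LBP-style descriptors pair well with infrared/visible fusion.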