2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI): Latest Publications

Image Contrast Enhancement Using Weighted Uniform Histogram for Maximum Information
N. Kwok, Haiyan Shi, Ying-Hao Yu, Yeping Peng, Shilong Liu, Ruowei Li, Hongkun Wu
{"title":"Image Contrast Enhancement Using Weighted Uniform Histogram for Maximum Information","authors":"N. Kwok, Haiyan Shi, Ying-Hao Yu, Yeping Peng, Shilong Liu, Ruowei Li, Hongkun Wu","doi":"10.1109/CISP-BMEI.2018.8633178","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2018.8633178","url":null,"abstract":"Contrast enhancement is a fundamental and important process in image based applications, which often require the image to provide the viewer with the maximum amount of details from the captured scene. This objective can be accomplished by maximizing the information content of the image. Instead of modifying the image histogram as in most equalization methods, we propose to firstly process the image intensity to follow an exact uniform distribution for information maximization. Then this intermediate image is combined with the original input in a weighted sum to preserve the features. A resultant image of high information content, while maintaining the original features, can then be produced. Experimental results had shown the effectiveness of our developed method with regard to maximizing the information content.","PeriodicalId":117227,"journal":{"name":"2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123827371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
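The core idea in this abstract, mapping intensities to an exactly uniform distribution and then blending the result with the original image, can be illustrated with a short sketch. This is a minimal illustration of the general technique rather than the authors' implementation; the rank-based mapping and the fixed blending weight `alpha` are assumptions.

```python
import numpy as np

def exact_uniform_mapping(image):
    """Map intensities so the output histogram is (near-)exactly uniform.

    Ranks every pixel and spreads the ranks evenly over [0, 255]. Ties are
    broken by pixel order, which is one simple way to force a uniform histogram.
    """
    flat = image.ravel()
    order = np.argsort(flat, kind="stable")      # pixel indices sorted by intensity
    ranks = np.empty_like(order)
    ranks[order] = np.arange(flat.size)          # rank of each pixel
    uniform = ranks / (flat.size - 1) * 255.0    # evenly spread intensities
    return uniform.reshape(image.shape)

def weighted_enhancement(image, alpha=0.6):
    """Blend the exactly-uniform image with the original to keep features.

    alpha = 1.0 gives the fully uniform (maximum-entropy) image,
    alpha = 0.0 returns the original input.
    """
    uniform = exact_uniform_mapping(image.astype(np.float64))
    blended = alpha * uniform + (1.0 - alpha) * image
    return np.clip(blended, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    low_contrast = rng.integers(100, 140, size=(64, 64)).astype(np.uint8)
    enhanced = weighted_enhancement(low_contrast, alpha=0.6)
    print("input range: ", low_contrast.min(), low_contrast.max())
    print("output range:", enhanced.min(), enhanced.max())
```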
Top-Down Saliency Object Localization Based on Deep-Learned Features
Duzhen Zhang, Shu Liu
{"title":"Top-Down Saliency Object Localization Based on Deep-Learned Features","authors":"Duzhen Zhang, Shu Liu","doi":"10.1109/CISP-BMEI.2018.8633218","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2018.8633218","url":null,"abstract":"How to accurately and efficiently localize objects in images is a challenging computer vision problem. In this article, a novel top-down fine-grained saliency object localization method based on deep-learned features is proposed, which can localize the same object in input image as the query image. The query image and its three subsample images are used as top-down cues to guide saliency detection. We ameliorate Convolutional Neural Network (CNN) using the fast VGG network (VGG-f) and retrained on the Pascal VOC 2012 dataset. Experiment on the FiFA dataset demonstrates that the proposed algorithm can effectively localize the saliency region and find the same object (human face) as the query. Experiments on the David1 and Face1 sequences conclusively prove that the proposed algorithm is able to effectively deal with different challenging factors including appearance and scale variations, shape deformation and partial occlusion.","PeriodicalId":117227,"journal":{"name":"2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131462548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
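As a rough illustration of top-down saliency from deep-learned features, the sketch below correlates a pooled query descriptor with every spatial location of a scene feature map. It uses a generic pretrained VGG-16 from torchvision (requires torchvision >= 0.13 and downloads weights) because VGG-f retrained on Pascal VOC 2012 is not bundled there; the pooling and cosine-similarity choices are assumptions, not the paper's pipeline.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Generic pretrained backbone used only for illustration; the paper uses VGG-f
# retrained on Pascal VOC 2012, which is not what is loaded here.
backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def feature_map(image):
    """Return a (C, h, w) convolutional feature map for a (3, H, W) image tensor."""
    with torch.no_grad():
        return backbone(image.unsqueeze(0)).squeeze(0)

def topdown_saliency(query, scene):
    """Cosine similarity between the pooled query descriptor and every spatial
    location of the scene feature map, upsampled back to the scene size."""
    q = feature_map(query).mean(dim=(1, 2))                  # (C,) pooled query descriptor
    s = feature_map(scene)                                   # (C, h, w)
    sim = F.cosine_similarity(s, q[:, None, None], dim=0)    # (h, w) similarity map
    sal = sim[None, None]                                    # (1, 1, h, w) for interpolation
    sal = F.interpolate(sal, size=scene.shape[1:], mode="bilinear", align_corners=False)
    return sal.squeeze()

if __name__ == "__main__":
    query = torch.rand(3, 128, 128)    # stand-ins for real query / scene images
    scene = torch.rand(3, 256, 256)
    saliency = topdown_saliency(query, scene)
    print(saliency.shape, float(saliency.max()))
```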
PoseLab: A Levenberg-Marquardt Based Prototyping Environment for Camera Pose Estimation
M. Darcis, W. Swinkels, A. E. Guzel, L. Claesen
{"title":"PoseLab: A Levenberg-Marquardt Based Prototyping Environment for Camera Pose Estimation","authors":"M. Darcis, W. Swinkels, A. E. Guzel, L. Claesen","doi":"10.1109/CISP-BMEI.2018.8633112","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2018.8633112","url":null,"abstract":"A pose identifies the 3D position and 3D rotation of an object with respect to a reference coordinate system. Cameras can be used to determine the pose of objects in a scene. Fiducial markers on a object make it easier to segment the object. For applications where either high-accuracy, high speed and/or very low latency are required, dedicated hardware architectures are needed as regular processors are not performant enough. The dedicated hardware needs to be designed such that the constraints are taken into account in order to meet the requirements. A software prototyping environment PoseLab is presented which can be used to evaluate the effects of various design trade-offs.","PeriodicalId":117227,"journal":{"name":"2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131315880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
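PoseLab itself is not documented in this listing, so the block below is only a generic sketch of Levenberg-Marquardt camera pose estimation: it minimizes the reprojection error of known 3D marker points with SciPy's LM solver. The intrinsics, marker layout, and noise level are made-up illustration values.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[800.0, 0.0, 320.0],     # illustrative pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points_3d, pose):
    """Project 3D points with pose = [rx, ry, rz, tx, ty, tz] (rotation vector + translation)."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = points_3d @ R.T + pose[3:]
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def reprojection_residuals(pose, points_3d, observed_2d):
    return (project(points_3d, pose) - observed_2d).ravel()

def estimate_pose(points_3d, observed_2d, initial_pose):
    # method="lm" selects the Levenberg-Marquardt algorithm in SciPy.
    result = least_squares(reprojection_residuals, initial_pose,
                           args=(points_3d, observed_2d), method="lm")
    return result.x

if __name__ == "__main__":
    # Four coplanar fiducial corners (illustrative), a ground-truth pose, noisy observations.
    pts = np.array([[-0.1, -0.1, 0.0], [0.1, -0.1, 0.0], [0.1, 0.1, 0.0], [-0.1, 0.1, 0.0]])
    true_pose = np.array([0.1, -0.2, 0.05, 0.02, -0.01, 0.8])
    obs = project(pts, true_pose) + np.random.default_rng(1).normal(0, 0.3, (4, 2))
    est = estimate_pose(pts, obs, initial_pose=np.array([0, 0, 0, 0, 0, 1.0]))
    print("estimated pose:", np.round(est, 3))
```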
Complexity Analysis of EEG Signals for Fatigue Driving Based on Sample Entropy
Chunxiao Han, Yaru Yang, Xiaozhou Sun, Yingmei Qin
{"title":"Complexity Analysis of EEG Signals for Fatigue Driving Based on Sample Entropy","authors":"Chunxiao Han, Yaru Yang, Xiaozhou Sun, Yingmei Qin","doi":"10.1109/CISP-BMEI.2018.8633011","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2018.8633011","url":null,"abstract":"Sample entropy describes the complexity of time series by information growth rate, which has been widely used in EEG signal analysis. In order to explore the brain activity during fatigue driving, we built a simulated automobile driving experimental platform based on Unity3D software, and designed an experiment that simulating fatigue driving process to collects EEG signals of the brain from 17 healthy subjects. The changes of the complexity of the EEG signals in different rhythms are studied by comparing the sample entropy of different regions during the sober and fatigue states, respectively. The results show that the sample entropy of the EEG signals of the brain in the delta, theta, alpha, beta and gamma rhythms decrease during fatigue in which the beta rhythm and gamma rhythm decrease significantly. The sample entropy of frontal region of the brain in beta rhythm decrease significantly during fatigue state, and alpha, beta and gamma rhythm of central region of brain also decrease significantly during fatigue state, while there is no significant change in other brain regions. This experiment shows that the randomness of nerve cell activity is small and the complexity of brain decreases during fatigue state, which mainly manifest in that the beta rhythm of frontal and central regions is significantly decreased, which can provide a theoretical support for fatigue driving detection.","PeriodicalId":117227,"journal":{"name":"2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126297323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
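Sample entropy is a standard quantity, so a compact NumPy sketch is given below for reference. The embedding dimension m=2 and tolerance r equal to 0.2 times the standard deviation are common defaults, not values taken from the paper, and the synthetic signals only stand in for real EEG rhythms.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1D series: -ln(A/B), where B counts template matches
    of length m and A counts matches of length m+1 (self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)

    def count_matches(dim):
        # All overlapping templates of length `dim`.
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        n = len(templates)
        # Pairs within tolerance, excluding self-comparisons, counted once.
        return (np.sum(dist <= r) - n) / 2

    B = count_matches(m)
    A = count_matches(m + 1)
    if A == 0 or B == 0:
        return np.inf
    return -np.log(A / B)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(1000)
    regular = np.sin(0.1 * t)                 # highly regular signal -> low SampEn
    noisy = rng.standard_normal(1000)         # white noise -> high SampEn
    print("SampEn(sine): ", round(sample_entropy(regular), 3))
    print("SampEn(noise):", round(sample_entropy(noisy), 3))
```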
3D Head Shape Analysis of Suspected Zika Infected Infants
X. Ju, I. G. Junior, Leonardo de Freitas Silva, P. Mossey, Dhelal Al-Rudainy, A. Ayoub, Adriana Marques De Mattos
{"title":"3D Head Shape Analysis of Suspected Zika Infected Infants","authors":"X. Ju, I. G. Junior, Leonardo de Freitas Silva, P. Mossey, Dhelal Al-Rudainy, A. Ayoub, Adriana Marques De Mattos","doi":"10.1109/CISP-BMEI.2018.8633125","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2018.8633125","url":null,"abstract":"The babies infected from Zika before they are born are at risk for problems with brain development and microcephaly. 3D head images of 43 Zika cases and 43 controls were collected aiming to extract shape characteristics for diagnosis purposes. Principal component analysis (PCA) has been applied on the vaults and faces of the collected 3D images and the scores on the second principal components of the vaults and faces showed significant differences between the control and Zika groups. The shape variations from $-2sigma text{to} 2sigma$ illustrated the typical characteristics of microcephaly of the Zika babies. Canonical correlation analysis (CCA) showed a significant correlation in the first CCA variates of face and vault which indicated the potential of 3D facial imaging for Zika surveillance. Further head circumferences and distances from ear to ear were measured from the 3D images and preliminary results showed the adding ear to ear distances for classifying control and Zika children strengthened the abilities of tested classification models.","PeriodicalId":117227,"journal":{"name":"2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125590375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
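The statistical pipeline described here (PCA per region, a group comparison on the second principal component scores, then CCA between face and vault) maps directly onto scikit-learn and SciPy. The sketch below runs that pipeline on random stand-in landmark matrices; only the group sizes come from the abstract, and nothing else reflects the actual 3D head data.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Stand-ins for flattened 3D landmark coordinates: 43 controls + 43 cases per region.
n_per_group, n_coords = 43, 300
vaults = rng.normal(size=(2 * n_per_group, n_coords))
faces = rng.normal(size=(2 * n_per_group, n_coords))
labels = np.array([0] * n_per_group + [1] * n_per_group)   # 0 = control, 1 = Zika

# PCA per region; compare the groups on the second principal component score.
vault_scores = PCA(n_components=5).fit_transform(vaults)
face_scores = PCA(n_components=5).fit_transform(faces)
_, p_vault = ttest_ind(vault_scores[labels == 0, 1], vault_scores[labels == 1, 1])
_, p_face = ttest_ind(face_scores[labels == 0, 1], face_scores[labels == 1, 1])
print(f"PC2 group difference  vault: p={p_vault:.3f}   face: p={p_face:.3f}")

# CCA between face and vault PC scores; correlation of the first pair of variates.
cca = CCA(n_components=1)
face_c, vault_c = cca.fit_transform(face_scores, vault_scores)
r = np.corrcoef(face_c[:, 0], vault_c[:, 0])[0, 1]
print(f"first canonical correlation (face vs. vault): r={r:.3f}")
```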
Challenges and Preprocessing Recommendations for MADCAT Dataset of Handwritten Arabic Documents
Gheith A. Abandah, Ahmad S. Al-Hourani
{"title":"Challenges and Preprocessing Recommendations for MADCAT Dataset of Handwritten Arabic Documents","authors":"Gheith A. Abandah, Ahmad S. Ai-Hourani","doi":"10.1109/CISP-BMEI.2018.8633103","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2018.8633103","url":null,"abstract":"In this paper, we analyze the dataset often used in training and testing Arabic handwritten document recognition systems, the Multilingual Automatic Document Classification Analysis and Translation dataset (MADCAT). We report here the main challenges present in MADCAT that the preprocessing stage of any recognition algorithm faces and affect the performance of the systems that use it for training and testing. MADCAT is a representative dataset of Arabic handwritten documents and investigating its challenges helps to identify the requirements of the preprocessing stage. After presenting these challenges, we review the literature and recommend preprocessing algorithms suitable to preprocess this dataset for handwritten Arabic word recognition systems such as JU-OCR2.","PeriodicalId":117227,"journal":{"name":"2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130576508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
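The abstract does not name the specific preprocessing algorithms it recommends, so the sketch below only illustrates two steps that are typical for handwritten line images, Otsu binarization and moment-based slant correction, using OpenCV; it is not taken from the paper, and the synthetic test image is a stand-in for a MADCAT line image.

```python
import cv2
import numpy as np

def binarize(gray):
    """Otsu binarization; ink becomes white (255) on a black background."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return binary

def correct_slant(binary):
    """Slant correction via second-order image moments (the classic OpenCV
    digits-deskew recipe): shear the image so the ink's principal slant is removed."""
    m = cv2.moments(binary, binaryImage=True)
    if abs(m["mu02"]) < 1e-2:
        return binary
    skew = m["mu11"] / m["mu02"]
    h, w = binary.shape
    M = np.float32([[1, skew, -0.5 * h * skew], [0, 1, 0]])
    return cv2.warpAffine(binary, M, (w, h),
                          flags=cv2.WARP_INVERSE_MAP | cv2.INTER_NEAREST)

if __name__ == "__main__":
    # A synthetic slanted stroke stands in for a handwritten line image.
    img = np.full((120, 400), 255, dtype=np.uint8)
    cv2.line(img, (100, 110), (160, 10), 0, thickness=7)
    clean = correct_slant(binarize(img))
    print("ink pixels before/after:", int((binarize(img) > 0).sum()), int((clean > 0).sum()))
```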
Segmentation of Mammography Images Based on Spectrum Clustering Method
Silin Liu, Y. Wang
{"title":"Segmentation of Mammography Images Based on Spectrum Clustering Method","authors":"Silin Liu, Y. Wang","doi":"10.1109/CISP-BMEI.2018.8633062","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2018.8633062","url":null,"abstract":"Mass segmentation in mammography images is one of the effective ways to screen breast cancer. The accurate segmentation of the pectoral muscle can improve the accuracy of mass recognition. However, the results of traditional mammography image segmentation methods often appear incomplete segmentation and over-segmentation, the accuracy is low, which directly affects the accuracy of breast cancer screening. To solve these problems, a segmentation method of mammography images based on spectral clustering is proposed in this paper. Firstly, we use the spectral clustering to segment the pectoral muscle preliminarily. In view of the stratification of pectoral muscle and the unclear boundary of breast muscle and breast tissue, we use the maximum grayscale difference constraint and shape constraint to achieve accurate breast muscle segmentation. The mass is recognized accurately with the segmented image. The experimental results of the MIAS breast image database show that the proposed method can effectively segment the uneven grayscale pectoral muscle caused by the overlap of the pectoral muscle tissues, and it is robust to the segmentation of tumors of different sizes.","PeriodicalId":117227,"journal":{"name":"2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115013063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
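The following is a generic spectral-clustering segmentation sketch with scikit-learn, clustering the pixels of a small synthetic image on intensity and position features. The feature choice, RBF affinity, and number of clusters are illustrative assumptions and do not reproduce the paper's grayscale-difference and shape constraints.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def spectral_segment(image, n_clusters=3, gamma=1.0):
    """Cluster pixels with spectral clustering on (intensity, row, col) features.

    An RBF affinity over these features groups pixels that are both similar in
    grayscale and close together spatially.
    """
    h, w = image.shape
    rows, cols = np.mgrid[0:h, 0:w]
    features = np.column_stack([
        image.ravel() / 255.0,        # normalized intensity
        rows.ravel() / h,             # normalized position
        cols.ravel() / w,
    ])
    model = SpectralClustering(n_clusters=n_clusters, affinity="rbf",
                               gamma=gamma, assign_labels="kmeans", random_state=0)
    labels = model.fit_predict(features)
    return labels.reshape(h, w)

if __name__ == "__main__":
    # Small synthetic image: dark background, a bright corner wedge (mock pectoral
    # muscle region), and a mid-gray square (mock mass).
    img = np.full((40, 40), 40, dtype=np.uint8)
    for i in range(40):
        img[i, :max(0, 25 - i)] = 220
    img[25:33, 25:33] = 140
    seg = spectral_segment(img, n_clusters=3)
    print("cluster sizes:", np.bincount(seg.ravel()))
```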
Detection and Analysis of Breathing Sounds from Trachea
Xiong Li, Seung H. Lee, Soo M. Chae, Kyoung H. Hwang, Moo J. Cho, J. Im
{"title":"Detection and Analysis of Breathing Sounds from Trachea","authors":"Xiong Li, Seung H. Lee, Soo M. Chae, Kyoung H. Hwang, Moo J. Cho, J. Im","doi":"10.1109/CISP-BMEI.2018.8633198","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2018.8633198","url":null,"abstract":"A major component of risk management during deep sedation or anesthesia involves the monitoring of various physiological signals. The aim of this study was to develop a sensing module for monitoring breathing sounds from trachea in order to minimize the risk factor of hypopnea and hypoxia during intervention under deep sedation. PVDF(polyvinylidene fluoride) film as a sensing material and wearable sensing module were developed in house for detecting tracheal sounds during respiration. Analysis algorithm for removing environmental noise and calculating respiratory rate was proved to be accurate. Results show that the proposed sensor, simple in design, noninvasive, patient friendly, detects breathing sounds easily. It could be used to alert clinicians about the patient's respiratory deterioration earlier than the currently existing methods.","PeriodicalId":117227,"journal":{"name":"2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116436189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
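A common way to turn a breathing-sound recording into a respiratory rate is band-pass filtering, envelope extraction, and peak counting. The sketch below implements that generic pipeline with SciPy on a synthetic signal; the filter band, smoothing window, and peak rules are assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def respiratory_rate(sound, fs, band=(100.0, 1200.0), min_breath_interval=2.0):
    """Estimate breaths per minute from a tracheal sound recording.

    1. Band-pass the raw audio to the (assumed) breath-sound band.
    2. Take the Hilbert envelope and smooth it to expose breath bursts.
    3. Count envelope peaks separated by at least `min_breath_interval` seconds.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, sound)
    envelope = np.abs(hilbert(filtered))
    win = int(0.2 * fs)                               # 0.2 s moving-average smoothing
    envelope = np.convolve(envelope, np.ones(win) / win, mode="same")
    peaks, _ = find_peaks(envelope, distance=int(min_breath_interval * fs),
                          height=0.5 * envelope.max())
    duration_min = len(sound) / fs / 60.0
    return len(peaks) / duration_min

if __name__ == "__main__":
    fs = 4000
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(0)
    # Synthetic recording: noise bursts every 4 s (15 breaths/min) plus background noise.
    burst = (np.sin(2 * np.pi * 0.25 * t) > 0.7).astype(float)
    sound = burst * rng.normal(0, 1, t.size) + rng.normal(0, 0.05, t.size)
    print("estimated respiratory rate (breaths/min):", round(respiratory_rate(sound, fs), 1))
```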
Research of EEG Signal Based on Permutation Entropy and Limited Penetrable Visibility Graph
Honghong Xu, Jiafei Dai, Jin Li, Jun Wang, F. Hou
{"title":"Research of EEG Signal Based on Permutation Entropy and Limited Penetrable Visibility Graph","authors":"Honghong Xu, Jiafei Dai, Jin Li, Jun Wang, F. Hou","doi":"10.1109/CISP-BMEI.2018.8633121","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2018.8633121","url":null,"abstract":"Complex networks can be seen as a way of describing complex systems. Starting from the end of the twentieth Century, the theory of complex network has gradually penetrated into all fields of social science, and it has become one of the most important tools for people to solve the problem. The complex network theory is helpful in studying the interaction between different brain regions, topology structure and the dynamic information, as well as the relationship between disease and physiological function. Electroencephalogram(EEG) is an important tool for the disease diagnosis and prediction. The paper adopts Permutation Entropy(PE) and Limited Penetrable Visibility Graph(LPVG) algorithm to construct the complex networks and implement networks visualization. Using this method to research 21 normal people and 21 epilepsy EEG signal, in addition compare statistical characteristics of different brain networks. The results verify the validity of the PE and LPVG algorithm for analyzing brain functional networks and show that the properties of the different attention EEG are different. This method provides important reference for further study of the brain function network dynamics of epileptic EEG signals.","PeriodicalId":117227,"journal":{"name":"2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132974276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
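Both measures used in this paper can be sketched compactly: permutation entropy as the Shannon entropy of ordinal patterns, and the limited penetrable visibility graph as a visibility graph that tolerates a bounded number of blocking samples. The order, delay, and penetrable distance below are common defaults, not the paper's settings, and the synthetic signal stands in for real EEG.

```python
import math
import numpy as np
from itertools import permutations

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy: Shannon entropy of ordinal-pattern
    frequencies divided by log(order!), so the result lies in [0, 1]."""
    x = np.asarray(x, dtype=float)
    patterns = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        patterns[tuple(np.argsort(window))] += 1
    counts = np.array([c for c in patterns.values() if c > 0], dtype=float)
    probs = counts / counts.sum()
    return -np.sum(probs * np.log(probs)) / math.log(math.factorial(order))

def limited_penetrable_visibility_graph(x, penetrable=1):
    """Adjacency matrix of the LPVG: nodes i and j are linked if the straight
    line between (i, x_i) and (j, x_j) is blocked by at most `penetrable`
    intermediate samples (penetrable=0 gives the ordinary visibility graph)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            line = x[j] + (x[i] - x[j]) * (j - k) / (j - i)   # visibility line at k
            blocked = int(np.sum(x[k] >= line))
            if blocked <= penetrable:
                adj[i, j] = adj[j, i] = 1
    return adj

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = np.sin(0.2 * np.arange(300)) + 0.3 * rng.standard_normal(300)
    pe = permutation_entropy(signal, order=3, delay=1)
    adj = limited_penetrable_visibility_graph(signal[:100], penetrable=1)
    print("permutation entropy:", round(pe, 3))
    print("mean LPVG degree:  ", round(adj.sum(axis=1).mean(), 2))
```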
A 3D Fully Convolutional Network Based Semantic Segmentation for Ear Computed Tomography Images
Zhaopeng Gong, Xiaoguang Li, Li Zhou, Hui Zhang
{"title":"A 3D Fully Convolutional Network Based Semantic Segmentation for Ear Computed Tomography Images","authors":"Zhaopeng Gong, Xiaoguang Li, Li Zhou, Hui Zhang","doi":"10.1109/CISP-BMEI.2018.8633242","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2018.8633242","url":null,"abstract":"Ear computed tomography (CT) has become an important means of diagnosing ear diseases, which provides doctors with a chance of observing the shape and components of the key anatomical structures of the auditory system. Therefore, it is helpful to diagnose the ear diseases early. However, the anatomical structures of the auditory system are characterized by complexity, sophisticated, and large individual differences, meanwhile, they are small and difficult to segment. Most of the existing medical image segmentation algorithms fail in segmenting the ear anatomical structures. To address the problem, a 3D fully convolutional network (3D- FCN) based semantic segmentation method is proposed for the key anatomical structures of ear CT Images. We evaluated our approach on the ear CT dataset. Compared to the 2D fully convolutional network (2D-FCN), the mean Dice-Serensen Coefficient (DSC) of our method is improved significantly in the task of segmentation for six key anatomical structures of the ear. The experimental results show that our method can effectively improve the segmentation accuracy of key anatomical structures of ear CT images.","PeriodicalId":117227,"journal":{"name":"2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131130455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
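As an architectural illustration only, the block below defines a tiny 3D fully convolutional network in PyTorch together with the Dice-Sørensen coefficient used for evaluation. The abstract does not specify the real network's depth, channels, class count, or training setup, so every concrete value here (including treating the task as six structures plus background) is an assumption.

```python
import torch
import torch.nn as nn

class Tiny3DFCN(nn.Module):
    """Minimal 3D fully convolutional encoder-decoder producing voxel-wise logits."""
    def __init__(self, in_channels=1, n_classes=7):   # 6 structures + background (assumed)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2), nn.ReLU(inplace=True),
            nn.Conv3d(16, n_classes, kernel_size=1),   # per-voxel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def dice_coefficient(pred_labels, target, n_classes):
    """Mean Dice-Sørensen coefficient over the foreground classes."""
    scores = []
    for c in range(1, n_classes):          # skip background class 0
        p = (pred_labels == c).float()
        t = (target == c).float()
        denom = p.sum() + t.sum()
        if denom > 0:
            scores.append((2 * (p * t).sum() / denom).item())
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    net = Tiny3DFCN()
    volume = torch.randn(1, 1, 32, 32, 32)     # one CT patch (batch, channel, D, H, W)
    logits = net(volume)                       # (1, 7, 32, 32, 32)
    pred = logits.argmax(dim=1)
    target = torch.randint(0, 7, (1, 32, 32, 32))
    print("output shape:", tuple(logits.shape))
    print("mean DSC vs. random target:", round(dice_coefficient(pred, target, 7), 3))
```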