2007 IEEE International Workshop on Imaging Systems and Techniques — Latest Publications

A blade coating inspection method based on an electromagnetic inverse scattering approach
2007 IEEE International Workshop on Imaging Systems and Techniques Pub Date : 2007-05-05 DOI: 10.1109/IST.2007.379582
A. Randazzo, E. Pignone
{"title":"A blade coating inspection method based on an electromagnetic inverse scattering approach","authors":"A. Randazzo, E. Pignone","doi":"10.1109/IST.2007.379582","DOIUrl":"https://doi.org/10.1109/IST.2007.379582","url":null,"abstract":"The quantitative evaluation of blade coatings is a key task for evaluating the structural integrity in power generation plants. Several conventional techniques are available, e.g., the ones based on micrographic analysis and eddy current techniques. However, the small thickness (of the order of hundreds of microns) of the highly conductive coatings of the blades allows the use of electromagnetic fields to inspect those structures. In this paper, an innovative multi-frequency approach based on electromagnetic waves is preliminarily proposed. In this method, the coating characterization problem is formulated as an inverse scattering problem, in which the measured scattered-field data are inverted in order to retrieve the information on the relevant diagnostic parameters. A new and efficient inversion technique, which exhibit high regularization capabilities, is employed. In this paper, the proposed approach is preliminary validated by means of a simplified structure, in order to study its feasibility for the solution of the coating characterization problem.","PeriodicalId":329519,"journal":{"name":"2007 IEEE International Workshop on Imaging Systems and Techniques","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116376820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
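To make the inversion step concrete, the following is a minimal sketch of a Tikhonov-regularized least-squares inversion of simulated multi-frequency scattered-field data. The forward operator, noise level, and regularization weight are illustrative assumptions; the paper's own inversion technique is not reproduced here.

```python
# Minimal Tikhonov-regularized inversion sketch (NOT the authors' method):
# a linear(ized) forward operator A maps coating parameters x to
# multi-frequency scattered-field samples y; recover x from noisy data.
import numpy as np

rng = np.random.default_rng(0)

n_freq, n_param = 40, 10          # hypothetical problem sizes
A = rng.standard_normal((n_freq, n_param)) + 1j * rng.standard_normal((n_freq, n_param))
x_true = rng.standard_normal(n_param)                  # "true" diagnostic parameters
y = A @ x_true + 0.01 * rng.standard_normal(n_freq)    # simulated scattered-field data

lam = 1e-2                                             # regularization weight (assumed)
AhA = A.conj().T @ A
x_hat = np.linalg.solve(AhA + lam * np.eye(n_param), A.conj().T @ y)

# relative reconstruction error of the retrieved parameters
print(np.linalg.norm(x_hat.real - x_true) / np.linalg.norm(x_true))
```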
Marker-less Intra-Fraction Organ Motion Tracking - A Hybrid ASM Approach
2007 IEEE International Workshop on Imaging Systems and Techniques Pub Date : 2007-05-05 DOI: 10.1109/IST.2007.379608
Y. Su, M. H. Fisher, R. Rowland
{"title":"Marker-less Intra-Fraction Organ Motion Tracking - A Hybrid ASM Approach","authors":"Y. Su, M. H. Fisher, R. Rowland, I. Introduction","doi":"10.1109/IST.2007.379608","DOIUrl":"https://doi.org/10.1109/IST.2007.379608","url":null,"abstract":"External beam radiation therapy attempts to deliver a high dose of ionizing radiation to destroy cancerous tissue, while sparing healthy tissues and organs at risk (OAR). Advances in intensity modulated radiotherapy treatment (IMRT) call for a greater understanding of uncertainties in the treatment process and more rigorous protocols leading to greater precision in treatment delivery. The degree to which this can be achieved depends largely on the cancer site. The treatment of organs comprised of soft tissue (e.g. in the abdomen) and those subject to rhythmic movements (e.g. lungs) cause inter and intra-fraction motion artifacts that are particularly problematic. Various methods have been developed to tackle the problems caused by organ motion during radiotherapy treatment, e.g. real-time position management (RPM) respiratory gating (varian) and synchronized moving aperture radiation therapy (SMART), developed by researchers at Harvard medical school. The majority of the work focuses on tracking the position of the pathologic region, with the intra-fraction shape variation of the region being largely ignored. This paper proposes a novel method that addresses both the position and shape variation caused by the intra-fraction movement. This approach is seen able to reduce the margin of clinical treatment volume (CTV), hence, spare yet more surrounding healthy tissues from being exposed to radiation and limiting irradiation of OAR.","PeriodicalId":329519,"journal":{"name":"2007 IEEE International Workshop on Imaging Systems and Techniques","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116574033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
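As background for the ASM approach named in the title, the sketch below builds a generic point-distribution shape model (mean shape plus principal modes, with clamped mode weights) from synthetic contours. It illustrates the standard ASM idea only, not the paper's hybrid tracking method; all data and names are assumptions.

```python
# Generic Active Shape Model (point distribution model) sketch on synthetic,
# already-aligned organ contours.
import numpy as np

rng = np.random.default_rng(1)
n_shapes, n_points = 30, 20
# hypothetical training contours, each flattened to (x1, y1, ..., xN, yN)
shapes = rng.standard_normal((n_shapes, 2 * n_points))

mean_shape = shapes.mean(axis=0)
cov = np.cov(shapes - mean_shape, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
P = eigvec[:, order[:5]]                 # first 5 modes of shape variation
lam = eigval[order[:5]]

def fit_shape(target):
    """Project a contour onto the model and clamp mode weights to +/- 3*sqrt(lambda)."""
    b = P.T @ (target - mean_shape)
    b = np.clip(b, -3 * np.sqrt(lam), 3 * np.sqrt(lam))
    return mean_shape + P @ b

reconstructed = fit_shape(shapes[0])     # model-constrained version of one contour
print(np.linalg.norm(reconstructed - shapes[0]))
```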
A Log-Polar Interpolation Applied to Image Scaling
2007 IEEE International Workshop on Imaging Systems and Techniques Pub Date : 2007-05-05 DOI: 10.1109/IST.2007.379610
A. Amanatiadis, I. Andreadis, A. Gasteratos
{"title":"A Log-Polar Interpolation Applied to Image Scaling","authors":"A. Amanatiadis, I. Andreadis, A. Gasteratos","doi":"10.1109/IST.2007.379610","DOIUrl":"https://doi.org/10.1109/IST.2007.379610","url":null,"abstract":"This paper proposes a bio-inspired interpolation algorithm suitable for image scaling. A log-polar neighbor model is adopted, utilizing the feature of applying larger weights to pixels at the center of the interpolation region and logarithmically decreasing weights to pixels away from the center. The interpolation is performed in the Cartesian plane without requiring the full transformation of the image to the log-polar plane. Experiments show that in both visual comparisons and quantitative analysis, the results extracted by the proposed log-polar neighbor model are better than those extracted from pixel repetition, bilinear and bicubic interpolation.","PeriodicalId":329519,"journal":{"name":"2007 IEEE International Workshop on Imaging Systems and Techniques","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128100543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
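A rough numerical sketch of the weighting idea described above: neighbouring pixels are weighted so that the weight is largest at the interpolation point and decays logarithmically with radial distance, computed directly in the Cartesian plane. The specific kernel and normalization used in the paper are not reproduced; the weight function below is an assumption for illustration.

```python
import numpy as np

def log_weight(r):
    """Weight that is largest at the centre and falls off logarithmically with radius r."""
    return 1.0 / (1.0 + np.log1p(r))

def interpolate(img, x, y, radius=2):
    """Estimate the intensity at non-integer (x, y) from a (2*radius+1)^2 neighbourhood."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    acc, norm = 0.0, 0.0
    for j in range(y0 - radius, y0 + radius + 1):
        for i in range(x0 - radius, x0 + radius + 1):
            if 0 <= j < img.shape[0] and 0 <= i < img.shape[1]:
                w = log_weight(np.hypot(i - x, j - y))   # distance from interpolation point
                acc += w * img[j, i]
                norm += w
    return acc / norm

img = np.arange(25, dtype=float).reshape(5, 5)
print(interpolate(img, 2.3, 1.7))
```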
Discriminant Analysis Diagram for Pattern Recognition
2007 IEEE International Workshop on Imaging Systems and Techniques Pub Date : 2007-05-05 DOI: 10.1109/IST.2007.379584
W. Skarbek
{"title":"Discriminant Analysis Diagram for Pattern Recognition","authors":"W. Skarbek","doi":"10.1109/IST.2007.379584","DOIUrl":"https://doi.org/10.1109/IST.2007.379584","url":null,"abstract":"In this lecture notes a novel point of view onto discriminant models used for biometric verification is presented. Various linear discriminant algorithms based on Fisher-like class separation measures are incorporated into discriminant analysis diagram (DAD). This new methodology can be used for design of special class of pattern recognition systems. Namely, pattern recognition embracing verification, identification, and indexing of patterns are based on intra-class errors when pattern classes used in training time are different than classes recognized in system exploiting time. This is typical case in biometric identity verification. The point is illustrated well by analysis of recent advances in development efface recognition algorithms.","PeriodicalId":329519,"journal":{"name":"2007 IEEE International Workshop on Imaging Systems and Techniques","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132281237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
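For orientation, the snippet below computes a plain two-class Fisher discriminant direction and the associated class-separation ratio on synthetic data, the kind of Fisher-like measure the DAD methodology builds on. It is generic LDA, not the authors' diagram construction.

```python
import numpy as np

rng = np.random.default_rng(2)
# two synthetic classes of 4-D feature vectors with a mean offset
X0 = rng.standard_normal((100, 4))
X1 = rng.standard_normal((100, 4)) + np.array([2.0, 1.0, 0.0, 0.0])

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)   # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)                           # Fisher discriminant direction

# Fisher criterion: between-class separation over within-class spread along w
fisher_ratio = (w @ (m1 - m0)) ** 2 / (w @ Sw @ w)
print(fisher_ratio)
```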
Digital Imaging Based Measurement of Combustion Flame Characteristics
2007 IEEE International Workshop on Imaging Systems and Techniques Pub Date : 2007-05-05 DOI: 10.1109/IST.2007.379607
G. Lu, A. Stasiak, J. Shao, Yong Yan
{"title":"Digital Imaging Based Measurement of Combustion Flame Characteristics","authors":"G. Lu, A. Stasiak, J. Shao, Yong Yan","doi":"10.1109/IST.2007.379607","DOIUrl":"https://doi.org/10.1109/IST.2007.379607","url":null,"abstract":"This paper presents the system software development and the experimental calibration of a flame imaging system. The new system software is developed based on a unique architecture through a careful combination of Visual Basic and Visual C programming strategies for better system stability and response time. The imaging system is calibrated using high standard blackbody sources as temperature/light references over the temperature range from 800degC to 1650degC for an improved measurement accuracy of flame temperature. The new system software and calibration curves are evaluated through off-line analysis where flame images captured in previous tests on industry-scale coal combustion test facilities are used. Test results and their comparisons with online data are presented and discussed.","PeriodicalId":329519,"journal":{"name":"2007 IEEE International Workshop on Imaging Systems and Techniques","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132497676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
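A hedged illustration of building a grey-level-to-temperature calibration curve from blackbody reference points over the stated 800°C to 1650°C range. The grey-level values below are invented for the example and are not the paper's calibration data.

```python
import numpy as np

# hypothetical blackbody calibration points over the 800-1650 degC range
temps_c = np.array([800.0, 1000.0, 1200.0, 1400.0, 1650.0])
grey = np.array([12.0, 45.0, 110.0, 190.0, 250.0])        # assumed mean camera grey levels

coeffs = np.polyfit(grey, temps_c, deg=2)                  # grey level -> temperature fit

def grey_to_temperature(g):
    """Map a measured flame grey level to temperature via the fitted calibration curve."""
    return np.polyval(coeffs, g)

print(grey_to_temperature(150.0))
```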
Localization of Endoscopic Capsule in the GI Tract Based on MPEG-7 Visual Descriptors
2007 IEEE International Workshop on Imaging Systems and Techniques Pub Date : 2007-05-05 DOI: 10.1109/IST.2007.379580
K. Duda, T. Zieliński, R. Frączek, J. Bulat, M. Duplaga
{"title":"Localization of Endoscopic Capsule in the GI Tract Based on MPEG-7 Visual Descriptors","authors":"K. Duda, T. Zieliński, R. Frączek, J. Bulat, M. Duplaga","doi":"10.1109/IST.2007.379580","DOIUrl":"https://doi.org/10.1109/IST.2007.379580","url":null,"abstract":"The paper addresses the problem of localization of video endoscopic capsule in the gastrointestinal (GI) tract on the base of appropriate classification of images received from it. In this context usefulness of MPEG-7 image descriptors as classification features has been verified. For classification purpose various state of the art tools were used including Neural Networks and Vector Quantization. The dimension of the problem was also reduced by the Principal Component Analysis. Novelty of the presented approach consists in joint application of mentioned above techniques for recognition of the GI region inspected by the capsule by means of classification of MPEG-7 features to different parts of GI tract. In this research recognition of the upper part organs of the GI tract has been performed.","PeriodicalId":329519,"journal":{"name":"2007 IEEE International Workshop on Imaging Systems and Techniques","volume":"81 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130854444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
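A compact sketch of the classification chain described above: reduce MPEG-7-style feature vectors with PCA, then assign each frame to a GI region with a nearest-centroid rule as a simple stand-in for the neural-network and vector-quantization classifiers used in the paper. Features and labels here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n_frames, n_feat, n_regions = 300, 60, 3
X = rng.standard_normal((n_frames, n_feat))               # stand-in descriptor vectors
labels = rng.integers(0, n_regions, n_frames)             # stand-in GI-region labels
X += labels[:, None] * 0.8                                 # make the classes separable

# PCA: project onto the top principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T

centroids = np.array([Z[labels == k].mean(axis=0) for k in range(n_regions)])

def classify(z):
    """Nearest-centroid assignment of a reduced feature vector to a GI region."""
    return int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

accuracy = np.mean([classify(z) == y for z, y in zip(Z, labels)])
print(accuracy)
```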
Velocity extraction from spin-tagging MRI images using a weighted least-squares optical flow method
2007 IEEE International Workshop on Imaging Systems and Techniques Pub Date : 2007-05-05 DOI: 10.1109/IST.2007.379577
J. Stoitsis, E. Bastouni, D. Karampinos, J. Bosshard, Jiaxi Lu, S. Golemati, S. Wright, J. Georgiadis, K. Nikita
{"title":"Velocity extraction from spin-tagging MRI images using a weighted least-squares optical flow method","authors":"J. Stoitsis, E. Bastouni, D. Karampinos, J. Bosshard, Jiaxi Lu, S. Golemati, S. Wright, J. Georgiadis, K. Nikita","doi":"10.1109/IST.2007.379577","DOIUrl":"https://doi.org/10.1109/IST.2007.379577","url":null,"abstract":"Magnetic resonance imaging (MRI) can provide truly non-invasive measurements of internal flow fields. The extraction of velocity from spin-tagging images requires the quantitative tracking of grid nodes. A weighted least-squares optical flow method was used in this work to estimate the displacements of the grid nodes and tags from synthetic and real spin-tagging MRI images. To investigate the accuracy of the proposed method, synthetic spin-tagging images were generated using the Poiseuille law analytical profile. Three synthetic sequences with different levels of noise were generated and the average and maximum absolute errors were estimated for points corresponding to grid nodes and tags. Different sizes and shapes of region of interest (ROI) were investigated to determine the optimal size and shape for reliable extraction of velocity both for synthetic and real spin-tagging MRI images. The optimal ROI size was found to be 13x13 pixels2 . The average and maximum absolute error for the velocity in vertical direction for synthetic data using the optimal ROI size ranged from 5.46% to 14.42% and from 6.39% to 31.96% respectively.","PeriodicalId":329519,"journal":{"name":"2007 IEEE International Workshop on Imaging Systems and Techniques","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130946529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
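A minimal weighted least-squares (Lucas-Kanade-style) optical-flow estimate for a single ROI, with weights emphasizing pixels near the ROI centre. This is the generic formulation of such an estimator applied to a synthetic 13×13 patch; it is not the exact weighting or tracking pipeline of the paper.

```python
import numpy as np

def wls_flow(prev, curr, weights):
    """Estimate a single (vx, vy) for an ROI from two consecutive frames."""
    Iy, Ix = np.gradient(prev)             # spatial derivatives (rows = y, cols = x)
    It = curr - prev                        # temporal derivative
    w = weights.ravel()
    ix, iy, it = Ix.ravel(), Iy.ravel(), It.ravel()
    A = np.array([[np.sum(w * ix * ix), np.sum(w * ix * iy)],
                  [np.sum(w * ix * iy), np.sum(w * iy * iy)]])
    b = -np.array([np.sum(w * ix * it), np.sum(w * iy * it)])
    return np.linalg.solve(A, b)            # weighted normal equations

# synthetic 13x13 ROI: a quadratic intensity pattern shifted by +1 pixel in x
n = 13
yy, xx = np.mgrid[0:n, 0:n]
prev = (xx - 6.0) ** 2 + (yy - 6.0) ** 2
curr = (xx - 1.0 - 6.0) ** 2 + (yy - 6.0) ** 2
weights = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * 3.0 ** 2))

print(wls_flow(prev, curr, weights))        # approximately [1, 0]
```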
A Rotational and Translational Image Stabilization System for Remotely Operated Robots
2007 IEEE International Workshop on Imaging Systems and Techniques Pub Date : 2007-05-05 DOI: 10.1109/IST.2007.379602
A. Amanatiadis, I. Andreadis, A. Gasteratos, N. Kyriakoulis
{"title":"A Rotational and Translational Image Stabilization System for Remotely Operated Robots","authors":"A. Amanatiadis, I. Andreadis, A. Gasteratos, N. Kyriakoulis","doi":"10.1109/IST.2007.379602","DOIUrl":"https://doi.org/10.1109/IST.2007.379602","url":null,"abstract":"Remotely operated robots equipped with on board cameras, apart from providing video input to operators, perform optical measurements to assist their navigation as well. Such image processing algorithms require image sequences, free of high frequency unwanted movements, in order to generate their optimal results. Image stabilization is the process which removes the undesirable position fluctuations of a video sequence improving, therefore, its visual quality. In this paper, we introduce the implementation of an image stabilization system that utilizes input from an on board camera and a gyrosensor. The frame sequence is processed by an optic flow algorithm and the inertial data is processed by a discrete Kalman filter. The compensation is performed using two servo motors for the pan and tilt movements and frame shifting for the vertical and horizontal movements. Experimental results of the robot head, have shown fine stabilized image sequences and a system capable of processing 320 times 240 pixel image sequences at approximately 10 frames/sec, with a maximum acceleration of A deg/sec2.","PeriodicalId":329519,"journal":{"name":"2007 IEEE International Workshop on Imaging Systems and Techniques","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121266773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
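A bare-bones scalar discrete Kalman filter of the kind used to smooth a gyro-sensor signal before compensation. The random-walk state model, noise variances, and test signal below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.05):
    """Filter a noisy 1-D rotation signal with a random-walk state model."""
    x, p = 0.0, 1.0                  # state estimate and its variance
    out = []
    for z in measurements:
        p = p + q                    # predict: state unchanged, uncertainty grows
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the new gyro measurement
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(4)
true_angle = np.sin(np.linspace(0, 2 * np.pi, 100))
noisy = true_angle + 0.2 * rng.standard_normal(100)
smoothed = kalman_1d(noisy)
# mean absolute error before and after filtering
print(float(np.mean(np.abs(noisy - true_angle))), float(np.mean(np.abs(smoothed - true_angle))))
```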
Improvement in Minutiae Detection by Single Ridge Local Analysis for Fingerprint Image Processing
2007 IEEE International Workshop on Imaging Systems and Techniques Pub Date : 2007-05-05 DOI: 10.1109/IST.2007.379585
M. Jedryka, Z. Wawrzyniak
{"title":"Improvement in Minutiae Detection by Single Ridge Local Analysis for Fingerprint Image Processing","authors":"M. Jedryka, Z. Wawrzyniak","doi":"10.1109/IST.2007.379585","DOIUrl":"https://doi.org/10.1109/IST.2007.379585","url":null,"abstract":"This paper concerns algorithms related to analysis of fingerprint images and can be useful in the areas of image preprocessing and fingerprint matching based on extraction of minutiae. Algorithms are based on efficient application of detailed information about ridges in fingerprint patterns. The main goal of this algorithm is to optimize the so called ridge-following algorithm in grayscale images for detection of minutiae. Proposed modifications make use of local characteristic features from patterns of neighboring parallel ridges in order to minimize probability of skipping over ridges in a process of segmentation of a single ridge. This is a crucial problem in analysis of low quality images acquisited from a real sensor. The method shows to be efficient in proper minutiae detection together with other methods in fingerprint image processing. The solution of the use of local characteristic features proved to be useful also for filtration and segmentation of fingerprint images.","PeriodicalId":329519,"journal":{"name":"2007 IEEE International Workshop on Imaging Systems and Techniques","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126093494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
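A simplified grey-scale ridge-following step in the spirit of classic ridge-tracing algorithms: advance along the (here fixed) ridge direction, then re-centre on the maximum of a short cross-section taken perpendicular to it. The paper's modifications based on neighbouring parallel ridges are not implemented; this is a baseline sketch only, on a synthetic image.

```python
import numpy as np

def follow_ridge(img, start, direction, step=3, n_steps=20, half_width=2):
    """Trace a bright ridge in a grey-scale image as a list of (row, col) points."""
    pts = [np.asarray(start, dtype=float)]
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    normal = np.array([-d[1], d[0]])           # perpendicular to the ridge direction
    for _ in range(n_steps):
        p = pts[-1] + step * d                  # advance along the ridge
        offsets = np.arange(-half_width, half_width + 1)
        samples = []
        for o in offsets:                       # sample a short perpendicular cross-section
            q = np.round(p + o * normal).astype(int)
            if not (0 <= q[0] < img.shape[0] and 0 <= q[1] < img.shape[1]):
                return pts
            samples.append(img[q[0], q[1]])
        p = p + offsets[int(np.argmax(samples))] * normal   # re-centre on the ridge crest
        pts.append(p)
    return pts

# synthetic image with one bright vertical ridge at column 10
img = np.zeros((40, 20))
img[:, 10] = 1.0
trace = follow_ridge(img, start=(2, 9), direction=(1, 0))
print(np.round(trace[-1]))                      # the trace locks onto column 10
```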
Colon Cleansing for Virtual Colonoscopy Using Non-linear Transfer Function and Morphological Operations
2007 IEEE International Workshop on Imaging Systems and Techniques Pub Date : 2007-05-05 DOI: 10.1109/IST.2007.379575
A. Skalski, M. Socha, T. Zielinski, M. Duplaga
{"title":"Colon Cleansing for Virtual Colonoscopy Using Non-linear Transfer Function and Morphological Operations","authors":"A. Skalski, M. Socha, T. Zielinski, M. Duplaga","doi":"10.1109/IST.2007.379575","DOIUrl":"https://doi.org/10.1109/IST.2007.379575","url":null,"abstract":"The virtual colonoscopy (VC) techniques try to simulate a real colonoscopy. A doctor who makes real colonoscopy examination does not have optimal information about anatomical structures which he looks at. He sees the inner colon structure only. 3D visualization of the colon segmented from computed tomography (CT) data allows him to see the whole organ, its inner and outer part. The VC helps doctors during diagnostic processes in identification and localization of pathological changes and offers computer support for endoscopic procedures. In this paper we present new colon cleansing method based on non-linear transfer function and morphological operations. Colon cleansing is required when we receive non-clean CT data or when a patient had administered contrast before the CT scan. It allows to see the whole colon even this one lying under fluid and to compute colon centerline correctly.","PeriodicalId":329519,"journal":{"name":"2007 IEEE International Workshop on Imaging Systems and Techniques","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127996648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
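A toy illustration of the cleansing idea on a synthetic CT slice: threshold bright, contrast-tagged fluid, tidy the mask with morphological opening and closing, and re-label the fluid voxels as air. The threshold values are rough assumptions, not the paper's non-linear transfer function.

```python
import numpy as np
from scipy import ndimage

def cleanse(ct, fluid_hu=200, air_hu=-1000):
    """Remove contrast-tagged fluid from a CT image (values in Hounsfield units)."""
    fluid = ct > fluid_hu                                  # candidate fluid voxels
    fluid = ndimage.binary_opening(fluid, iterations=1)    # drop isolated bright noise
    fluid = ndimage.binary_closing(fluid, iterations=2)    # fill small gaps in the pool
    cleaned = ct.copy()
    cleaned[fluid] = air_hu                                # re-label fluid as air (lumen)
    return cleaned

# tiny synthetic slice: soft tissue (40 HU), air lumen (-1000 HU), fluid pool (300 HU)
ct = np.full((32, 32), 40.0)
ct[8:24, 8:24] = -1000.0
ct[18:24, 8:24] = 300.0
print(np.unique(cleanse(ct)))                              # fluid values are gone
```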