Segmentation of Defocus Blur using Local Triplicate Co-Occurrence Patterns (LTCoP)

Awais Khan, Syed Aun Irtaza, A. Javed, Muhammad Ammar Khan
DOI: 10.1109/MACS48846.2019.9024808
Published in: 2019 13th International Conference on Mathematics, Actuarial Science, Computer Science and Statistics (MACS), December 2019
Citations: 2

Abstract

Many digital images contain blurred regions caused by motion or defocus. Defocus blur reduces the contrast and sharpness of an image. Automatic blur detection and segmentation is an important and challenging task in computer vision applications such as object recognition and scene interpretation, which require the extraction and processing of large amounts of data from the sharp areas of an image. The sharp and blurred areas must therefore be segmented separately to ensure that information is extracted only from the sharp regions. Existing blur detection and segmentation techniques expend considerable effort and time designing metric maps of local clarity, and they suffer from several limitations: low accuracy on noisy images, difficulty distinguishing blurred-smooth from sharp-smooth regions, and high execution cost. There is thus a pressing need for a defocus blur detection and segmentation method that is robust to these limitations. In this paper, we present a novel defocus blur detection and segmentation algorithm, Local Triplicate Co-occurrence Patterns (LTCoP), for separating in-focus and out-of-focus regions. We observe that fusing the extracted higher and lower LTCoP patterns produces far better results than either pattern alone. To test the effectiveness of our algorithm, we compare the proposed method with several state-of-the-art techniques on a large number of sample images. The experimental results show that the proposed technique achieves results comparable to state-of-the-art methods while offering a significant speed advantage over them. We therefore argue that the proposed method can reliably be used for defocus blur detection and segmentation in high-density images.
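The abstract does not specify the LTCoP construction itself, but the "higher and lower patterns" it fuses follow the ternary-thresholding idea familiar from Local Ternary Patterns: each 8-neighbourhood is split into an upper code (neighbours clearly brighter than the centre) and a lower code (neighbours clearly darker), and the two maps are then combined. The sketch below is a hypothetical illustration of that upper/lower split only, not the authors' descriptor; the function name, the threshold `t`, and the neighbour ordering are all assumptions for demonstration.

```python
import numpy as np

def upper_lower_patterns(img, t=5):
    """Compute upper and lower ternary pattern codes for each interior
    pixel of a grayscale image.

    For every pixel, each of its 8 neighbours contributes one bit:
      - upper bit set if neighbour >= centre + t (clearly sharper/brighter edge)
      - lower bit set if neighbour <= centre - t (clearly darker edge)
    Near-flat (blurred) regions set few bits in either map, which is the
    intuition behind fusing the two maps for blur segmentation.
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    # Offsets of the 8 neighbours, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:h - 1, 1:w - 1]
    upper = np.zeros((h - 2, w - 2), dtype=np.uint8)
    lower = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper |= (nb >= center + t).astype(np.uint8) << bit
        lower |= (nb <= center - t).astype(np.uint8) << bit
    return upper, lower
```

In a full pipeline one would then accumulate co-occurrence statistics over these code maps and threshold the resulting local-clarity measure to segment in-focus from out-of-focus regions; those steps are omitted here because the paper's exact formulation is not given in the abstract.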