ABLUR: An FPGA-based adaptive deblurring core for real-time applications
Giuseppe Airò Farulla, Marco Indaco, P. Prinetto, Daniele Rolfo, Pascal Trotta
2014 NASA/ESA Conference on Adaptive Hardware and Systems (AHS), published 2014-07-14
DOI: 10.1109/AHS.2014.6880165 (https://doi.org/10.1109/AHS.2014.6880165)
Citations: 1
Abstract
If a camera moves while taking a picture, motion blur is induced. Mechanical techniques exist to prevent this effect from occurring, but they are cumbersome and expensive. Consider, for example, an Unmanned Aerial Vehicle (UAV) engaged in a search and rescue mission, where frames of the scene must be recorded to identify people and animals to rescue. In such cases, the weight of the equipment is of absolute importance, and no extra hardware can be used. Vibrations are then unavoidably transmitted to the camera, and the recorded frames are affected by blur. It therefore becomes necessary to deblur every frame in real time, so that post-processing algorithms can extract the largest possible amount of information from them. For more than 40 years, numerous researchers have developed theories and algorithms for this purpose; these work quite well, but very often require multiple different versions of the input image, huge amounts of computational resources, long execution times, or intensive parameter tuning.
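As background to the problem the abstract describes, motion blur is commonly modeled as the convolution of the sharp image with a point-spread function (PSF) describing the camera motion, and deblurring amounts to inverting that convolution. The sketch below is not the ABLUR core described in the paper; it is a minimal software illustration, under the assumption of a known horizontal-motion PSF, using a standard Wiener filter. All function names and parameter values are illustrative.

```python
# Minimal deblurring sketch (NOT the paper's FPGA core): blur as convolution
# with a linear-motion PSF, restoration via a frequency-domain Wiener filter.
import numpy as np

def motion_psf(length, shape):
    """Horizontal linear-motion PSF, zero-padded to the image shape."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length  # camera sweeps `length` pixels during exposure
    return psf

def wiener_deblur(blurred, psf, k=0.01):
    """Wiener filter; `k` (assumed noise-to-signal ratio) trades noise amplification vs. sharpness."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))

# Synthetic usage: blur a random test image with a known PSF, then restore it.
rng = np.random.default_rng(0)
image = rng.random((256, 256))
psf = motion_psf(length=9, shape=image.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))
restored = wiener_deblur(blurred, psf, k=0.01)
```

Even this simple formulation hints at the costs the abstract mentions: the filter assumes the PSF is known, operates on full-frame FFTs, and its single parameter `k` must be tuned per scene, which is exactly the kind of burden a real-time, adaptive hardware implementation aims to avoid.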