Explicitly-Decoupled Text Transfer With Minimized Background Reconstruction for Scene Text Editing

Jianqun Zhou; Pengwen Dai; Yang Li; Manjiang Hu; Xiaochun Cao

IEEE Transactions on Image Processing, vol. 33, pp. 5921-5935, 2024. DOI: 10.1109/TIP.2024.3477355
Scene text editing aims to replace the source text in an image with target text while preserving the original background. Its practical applications span domains such as data generation and privacy protection, and its importance has grown in recent years. In this study, we propose a novel Scene Text Editing network with Explicitly-decoupled text transfer and Minimized background reconstruction, called STEEM. Unlike existing methods, which usually fuse text style, text content, and background, our approach decouples text style and content from the background and uses minimized background reconstruction to reduce the impact of text replacement on the background. Specifically, the text-background separation module predicts the text mask of the scene text image, separating the source text from the background. Subsequently, the style-guided text transfer decoding module transfers the geometric and stylistic attributes of the source text to the content text, yielding the target text. Next, the background and target text are combined to determine the minimal reconstruction area. Finally, the context-focused background reconstruction module is applied to this area, producing the editing result. To ensure stable joint optimization of the four modules, we further devise a task-adaptive training optimization strategy. Experimental evaluations on two popular datasets demonstrate the effectiveness of our approach: STEEM outperforms state-of-the-art methods, reducing FID from 29.48 to 24.67 and increasing text recognition accuracy from 76.8% to 78.8%.
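The abstract describes a four-stage pipeline: separate text from background, transfer style onto the new content, compute a minimal reconstruction area, and inpaint only that area. The following is a minimal sketch of how that decomposition could be wired together; all class names, channel widths, and the soft-mask compositing are illustrative assumptions made for this sketch, not the authors' architecture, and the task-adaptive training strategy is omitted.

```python
# Hypothetical sketch of the STEEM-style four-module decomposition.
# Every module here is a stub; the paper defines the real networks.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Small conv + ReLU block shared by the stub modules below."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())


class TextBackgroundSeparation(nn.Module):
    """Predicts a soft text mask that splits source text from background."""
    def __init__(self):
        super().__init__()
        self.body = conv_block(3, 16)
        self.head = nn.Sequential(nn.Conv2d(16, 1, 1), nn.Sigmoid())

    def forward(self, image):
        return self.head(self.body(image))  # (B, 1, H, W), values in [0, 1]


class StyleGuidedTextTransfer(nn.Module):
    """Transfers geometric/stylistic attributes of the source text onto the
    rendered content text, producing the target text and its mask."""
    def __init__(self):
        super().__init__()
        self.body = conv_block(3 + 3 + 1, 16)  # source + content + source mask
        self.text_head = nn.Conv2d(16, 3, 1)
        self.mask_head = nn.Sequential(nn.Conv2d(16, 1, 1), nn.Sigmoid())

    def forward(self, source_img, content_img, src_mask):
        feat = self.body(torch.cat([source_img, content_img, src_mask], dim=1))
        return self.text_head(feat), self.mask_head(feat)


class ContextFocusedReconstruction(nn.Module):
    """Inpaints only inside the minimal reconstruction area."""
    def __init__(self):
        super().__init__()
        self.body = conv_block(3 + 1, 16)
        self.head = nn.Conv2d(16, 3, 1)

    def forward(self, composite, area):
        filled = self.head(self.body(torch.cat([composite, area], dim=1)))
        # Write only inside the area; pixels outside pass through unchanged.
        return area * filled + (1.0 - area) * composite


class STEEMSketch(nn.Module):
    """End-to-end sketch: separate, transfer, composite, reconstruct."""
    def __init__(self):
        super().__init__()
        self.separate = TextBackgroundSeparation()
        self.transfer = StyleGuidedTextTransfer()
        self.reconstruct = ContextFocusedReconstruction()

    def forward(self, source_img, content_img):
        src_mask = self.separate(source_img)
        target_text, tgt_mask = self.transfer(source_img, content_img, src_mask)
        # Minimal reconstruction area (an assumption of this sketch): pixels
        # covered by either the erased source text or the new target text.
        area = torch.clamp(src_mask + tgt_mask, 0.0, 1.0)
        composite = tgt_mask * target_text + (1.0 - tgt_mask) * source_img
        return self.reconstruct(composite, area)


if __name__ == "__main__":
    model = STEEMSketch()
    src = torch.rand(1, 3, 64, 256)      # scene text image
    content = torch.rand(1, 3, 64, 256)  # rendered target-content text
    print(model(src, content).shape)     # torch.Size([1, 3, 64, 256])
```

The design point the abstract emphasizes is visible in ContextFocusedReconstruction: the inpainting step writes only inside the reconstruction area, so background pixels outside the union of the source and target text masks are untouched by text replacement.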