{"title":"InferEdit: An instruction-based system with a multimodal LLM for complex multi-target image editing","authors":"Zhiyong Huang, Yali She, MengLi Xiang, TuoJun Ding","doi":"10.1016/j.visinf.2025.100265","DOIUrl":null,"url":null,"abstract":"<div><div>To address the limitations of existing instruction-based image editing methods in handling complex Multi-target instructions and maintaining semantic consistency, we present InferEdit, a training-free image editing system driven by a Multimodal Large Language Model (MLLM). The system parses complex multi-target instructions into sequential subtasks and performs editing iteratively through target localization and semantic reasoning. Furthermore, to adaptively select the most suitable editing models, we construct the evaluation dataset InferDataset to evaluate various editing models on three types of tasks: object removal, object replacement, and local editing. Based on a comprehensive scoring mechanism, we build Binary Search Trees (BSTs) for different editing types to facilitate model scheduling. Experiments demonstrate that InferEdit outperforms existing methods in handling complex instructions while maintaining semantic consistency and visual quality.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 3","pages":"Article 100265"},"PeriodicalIF":3.8000,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Visual Informatics","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2468502X25000488","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Abstract
To address the limitations of existing instruction-based image editing methods in handling complex multi-target instructions and maintaining semantic consistency, we present InferEdit, a training-free image editing system driven by a Multimodal Large Language Model (MLLM). The system parses complex multi-target instructions into sequential subtasks and performs editing iteratively through target localization and semantic reasoning. Furthermore, to adaptively select the most suitable editing model for each subtask, we construct InferDataset, an evaluation dataset for assessing editing models on three types of tasks: object removal, object replacement, and local editing. Based on a comprehensive scoring mechanism, we build Binary Search Trees (BSTs) for the different editing types to facilitate model scheduling. Experiments demonstrate that InferEdit outperforms existing methods in handling complex instructions while maintaining semantic consistency and visual quality.
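The abstract gives no implementation details, but the decompose-then-iterate loop it describes can be sketched. Below is a minimal Python sketch, assuming the MLLM-backed steps (instruction parsing, target localization) and the scheduled editing model are supplied as callables; the `Subtask` fields and all names here are hypothetical illustrations, not the paper's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subtask:
    edit_type: str  # e.g. "removal" | "replacement" | "local_edit"
    target: str     # object or region the instruction refers to
    detail: str     # free-form description of the desired edit

def edit_iteratively(
    image,
    instruction: str,
    parse: Callable[[str], list[Subtask]],    # MLLM: instruction -> ordered subtasks
    locate: Callable[[object, str], object],  # MLLM: (image, target) -> mask/region
    schedule: Callable[[str], Callable],      # edit type -> best editing model
):
    """Run each subtask in order; each edit's output feeds the next subtask."""
    for task in parse(instruction):
        mask = locate(image, task.target)       # target localization
        editor = schedule(task.edit_type)       # model scheduling (BST sketch below)
        image = editor(image, mask, task.detail)
    return image
```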
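Likewise, a hedged sketch of the score-keyed model scheduling: assuming one BST per editing type, nodes keyed by each model's comprehensive score, and selection of the highest-scoring (rightmost) node. The node layout, placeholder scores, and editor names are all assumptions; the abstract only states that BSTs are built per editing type from a comprehensive scoring mechanism.

```python
class Node:
    """BST node keyed by a model's comprehensive score."""
    def __init__(self, score: float, model_name: str):
        self.score = score
        self.model = model_name
        self.left = None
        self.right = None

def insert(root, score, model_name):
    if root is None:
        return Node(score, model_name)
    if score < root.score:
        root.left = insert(root.left, score, model_name)
    else:
        root.right = insert(root.right, score, model_name)
    return root

def best_model(root):
    """Highest comprehensive score = rightmost node."""
    while root.right is not None:
        root = root.right
    return root.model

# One BST per editing type; scores and editor names are placeholders.
trees = {}
for edit_type, entries in {
    "removal":     [(0.71, "editor_a"), (0.83, "editor_b")],
    "replacement": [(0.65, "editor_b"), (0.78, "editor_c")],
    "local_edit":  [(0.80, "editor_a"), (0.74, "editor_c")],
}.items():
    root = None
    for score, name in entries:
        root = insert(root, score, name)
    trees[edit_type] = root

print(best_model(trees["removal"]))  # -> "editor_b"
```

One plausible reason for a tree rather than a flat argmax is graceful fallback: if the top-scoring editor fails on a subtask, an in-order walk yields the next-best candidates in score order, though the abstract does not confirm this motivation.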