ChefFusion: Multimodal Foundation Model Integrating Recipe and Food Image Generation
Peiyu Li, Xiaobao Huang, Yijun Tian, Nitesh V. Chawla
arXiv:2409.12010 [cs.CV], 18 September 2024
Abstract
Significant work has been conducted in the domain of food computing, yet these studies typically focus on single tasks such as t2t (instruction generation from food titles and ingredients), i2t (recipe generation from food images), or t2i (food image generation from recipes). None of these approaches integrates all modalities simultaneously. To address this gap, we introduce a novel food computing foundation model that achieves true multimodality, encompassing the tasks t2t, t2i, i2t, it2t, and t2ti (where "t" and "i" denote text and image inputs and outputs, respectively). By leveraging large language models (LLMs) and pre-trained image encoder and decoder models, our model can perform a diverse array of food computing tasks, including food understanding, food recognition, recipe generation, and food image generation. Compared to previous models, our foundation model demonstrates a significantly broader range of capabilities and exhibits superior performance, particularly on food image generation and recipe generation. We have open-sourced ChefFusion on GitHub.
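
To make the task taxonomy concrete, the sketch below shows how a single interface could route the five task types (t2t, t2i, i2t, it2t, t2ti) through an LLM backbone with a pre-trained image encoder and decoder, as the abstract describes. This is a minimal illustration, not the actual ChefFusion implementation: every class, function, and parameter name here (ChefFusionSketch, generate, want_image, the toy stand-in models) is a hypothetical placeholder; the authors' real code is in their GitHub repository.

    # Hypothetical sketch only -- all names are illustrative placeholders,
    # not the real ChefFusion API.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Output:
        text: Optional[str] = None     # generated recipe / instructions
        image: Optional[bytes] = None  # generated food image

    class ChefFusionSketch:
        """Routes text and/or image inputs to the appropriate generation head."""

        def __init__(self, llm, image_encoder, image_decoder):
            self.llm = llm                      # language-model backbone
            self.image_encoder = image_encoder  # maps an image into the LLM's input space
            self.image_decoder = image_decoder  # maps LLM output to an image

        def generate(self, text: Optional[str] = None,
                     image: Optional[bytes] = None,
                     want_image: bool = False) -> Output:
            # Fold whichever modalities are present into one prompt:
            # image-only covers i2t, text+image covers it2t, text-only
            # covers t2t / t2i / t2ti.
            parts = []
            if image is not None:
                parts.append(self.image_encoder(image))
            if text is not None:
                parts.append(text)
            hidden = self.llm(" ".join(map(str, parts)))

            out = Output(text=hidden)
            if want_image:  # t2i / t2ti paths also decode an image
                out.image = self.image_decoder(hidden)
            return out

    # Toy stand-ins so the sketch runs end to end without real models.
    llm = lambda prompt: f"[recipe for: {prompt}]"
    enc = lambda img: f"<{len(img)}-byte image>"
    dec = lambda txt: txt.encode()

    model = ChefFusionSketch(llm, enc, dec)
    print(model.generate(text="tomato soup", want_image=True).text)  # t2ti

The point of the sketch is the dispatch structure: one model entry point handles all five task signatures by conditioning the LLM on encoded image tokens when an image is given and attaching an image-decoding head when an image output is requested.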