Large language models for ingredient substitution in food recipes using supervised fine-tuning and direct preference optimization

Thevin Senath , Kumuthu Athukorala , Ransika Costa , Surangika Ranathunga , Rishemjit Kaur
{"title":"Large language models for ingredient substitution in food recipes using supervised fine-tuning and direct preference optimization","authors":"Thevin Senath ,&nbsp;Kumuthu Athukorala ,&nbsp;Ransika Costa ,&nbsp;Surangika Ranathunga ,&nbsp;Rishemjit Kaur","doi":"10.1016/j.nlp.2025.100177","DOIUrl":null,"url":null,"abstract":"<div><div>In this paper, we address the challenge of recipe personalization through ingredient substitution. We make use of Large Language Models (LLMs) to build an ingredient substitution system designed to predict plausible substitute ingredients within a given recipe context. Given that the use of LLMs for this task has been barely done, we carry out an extensive set of experiments to determine the best LLM, prompt, and the fine-tuning setups. We further experiment with methods such as multi-task learning, two-stage fine-tuning, and Direct Preference Optimization (DPO). The experiments are conducted using the publicly available Recipe1MSub corpus. The best results are produced by the Mistral7-Base LLM after fine-tuning and DPO. This result outperforms the strong baseline available for the same corpus with a Hit@1 score of 22.04. 
Although LLM results lag behind the baseline with respect to other metrics such as Hit@3 and Hit@10, we believe that this research represents a promising step towards enabling personalized and creative culinary experiences by utilizing LLM-based ingredient substitution.</div></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"12 ","pages":"Article 100177"},"PeriodicalIF":0.0000,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Natural Language Processing Journal","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949719125000536","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

In this paper, we address the challenge of recipe personalization through ingredient substitution. We use Large Language Models (LLMs) to build an ingredient substitution system that predicts plausible substitute ingredients within a given recipe context. Because LLMs have rarely been applied to this task, we carry out an extensive set of experiments to determine the best LLM, prompt, and fine-tuning setup. We further experiment with methods such as multi-task learning, two-stage fine-tuning, and Direct Preference Optimization (DPO). The experiments are conducted on the publicly available Recipe1MSub corpus. The best results are produced by the Mistral7-Base LLM after fine-tuning and DPO, which outperforms the strong baseline available for the same corpus with a Hit@1 score of 22.04. Although the LLM results lag behind the baseline on other metrics such as Hit@3 and Hit@10, we believe this research represents a promising step towards personalized and creative culinary experiences enabled by LLM-based ingredient substitution.
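The DPO stage mentioned in the abstract presumably follows the standard direct preference optimization objective (Rafailov et al., 2023); the paper's exact preference-pair construction is not given here, so the generic form is shown for reference. Given a prompt <em>x</em> (the recipe context), a preferred completion <em>y<sub>w</sub></em> (e.g., the gold substitute) and a dispreferred one <em>y<sub>l</sub></em>, DPO fine-tunes the policy against a frozen reference model without training an explicit reward model:

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim\mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]$$

where $\pi_\theta$ is the fine-tuned model, $\pi_{\mathrm{ref}}$ the (frozen) supervised fine-tuned model, $\sigma$ the logistic function, and $\beta$ a hyperparameter controlling the strength of the implicit KL constraint toward the reference.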
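The Hit@k scores reported above can be computed as follows — a minimal sketch, assuming the system returns a ranked list of candidate substitutes per query and the corpus provides one gold substitute per query; all names here are illustrative, not taken from the paper's code.

```python
def hit_at_k(ranked_candidates, gold, k):
    """Fraction of queries whose gold substitute appears in the top-k ranked candidates."""
    hits = sum(1 for cands, g in zip(ranked_candidates, gold) if g in cands[:k])
    return hits / len(gold)


# Toy example: two substitution queries with ranked predictions each.
preds = [
    ["margarine", "olive oil", "coconut oil"],  # gold "margarine" ranks 1st
    ["honey", "agave syrup", "maple syrup"],    # gold "maple syrup" ranks 3rd
]
gold = ["margarine", "maple syrup"]

print(hit_at_k(preds, gold, 1))  # 0.5 (only the first query hits at rank 1)
print(hit_at_k(preds, gold, 3))  # 1.0 (both gold substitutes are in the top 3)
```

A Hit@1 of 22.04 thus means the model's single top-ranked substitute matches the gold substitute for about 22% of test queries.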