Developing large language models for display industrial knowledge: Data augmentation, training techniques, and evaluation strategies
Bingqian Wang, Lixin Wang, Qingqing Sun, Yulan Hu, Yuyu Liu, Xingqun Jiang
Journal of the Society for Information Display, 33(5), 380–389, published 2025-03-31
DOI: 10.1002/jsid.2064 (https://sid.onlinelibrary.wiley.com/doi/10.1002/jsid.2064)
Impact factor: 2.2; JCR Q3 (Engineering, Electrical & Electronic)
Citations: 0
Abstract
Large language models (LLMs) can be applied to many fields in the display industry. However, general-purpose LLMs lack domain-specific knowledge and an understanding of specialized terminology, which leads to inaccurate responses in industrial question-answering (Q&A) scenarios. To address this issue, this work introduces an LLM training framework that effectively incorporates display-industry knowledge. The framework enhances LLMs' comprehension of display-industry knowledge through improved specialized data governance, knowledge distillation techniques, data augmentation strategies, and continual pre-training mechanisms. This approach not only significantly improves the model's performance in Q&A applications within the display industry but also prevents catastrophic forgetting of general knowledge. Experimental results demonstrate the effectiveness of these techniques. We hope this work can also be helpful for customizing LLMs in other specialized domains.
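The abstract does not give implementation details, but one common way continual pre-training avoids catastrophic forgetting is to replay a fraction of general-domain data alongside the new domain corpus. A minimal sketch of such a data-mixing sampler is shown below; the function name `mix_corpora` and the 0.25 replay ratio are illustrative assumptions, not the authors' actual setup:

```python
import random

def mix_corpora(domain_docs, general_docs, replay_ratio=0.25, n=None, seed=0):
    """Build a training stream in which roughly `replay_ratio` of the
    documents are replayed general-domain text and the rest are
    domain-specific text. Mixing in general data during continual
    pre-training helps the model retain its general knowledge while
    it absorbs the new domain corpus."""
    rng = random.Random(seed)
    n = n or len(domain_docs)
    stream = []
    for _ in range(n):
        if rng.random() < replay_ratio:
            stream.append(rng.choice(general_docs))  # replayed general-domain sample
        else:
            stream.append(rng.choice(domain_docs))   # new domain-specific sample
    return stream
```

In a real pipeline the documents would be tokenized corpora and the resulting stream would be fed to a standard causal-LM trainer; the replay ratio would be tuned empirically rather than fixed at 0.25.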
Journal description:
The Journal of the Society for Information Display publishes original works dealing with the theory and practice of information display. Coverage includes materials, devices, and systems; the underlying chemistry, physics, physiology, and psychology; measurement techniques and manufacturing technologies; and all aspects of the interaction between equipment and its users. Review articles are also published in all of these areas. Occasional special issues or sections consist of collections of papers on specific topical areas, or collections of full-length papers based in part on oral or poster presentations given at SID-sponsored conferences.