T. R. Mahesh; R. Sivakami; Arastu Thakur; Achyut Shankar; Fayez Alqahtani
IEEE Transactions on Consumer Electronics, vol. 71, no. 2, pp. 3531-3539. DOI: 10.1109/TCE.2025.3571010. Published 2025-06-04.
Fine Tuned LLM With Lora-Q for Enhanced Health Literacy
This study describes the implementation of sophisticated parameter-efficient strategies for fine-tuning the LLaMA-2-7b model on a carefully selected, Web-scraped medical dataset targeted at increasing health literacy. Designed to improve the contextual accuracy of medical responses, the dataset consists of four important fields: “question,” “answer,” “source,” and “focus area.” Using 4-bit quantization and Low-Rank Adaptation (LoRA), the model was tuned for low computational overhead and high-performance deployment. Post-optimization, the model showed a notable rise in linguistic metrics: the BLEU score rose from 0.1397 to 0.1486, the ROUGE score improved from 0.0510 to 0.0599, and the Translation Edit Rate (TER) dropped from 0.8714 to 0.8440, highlighting the model’s increased capacity for producing accurate and contextually relevant medical information. The results highlight the effectiveness of using innovative NLP techniques to increase the accessibility and understanding of medical knowledge, thereby supporting the main objective of higher global health literacy.
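The two techniques the abstract names, 4-bit quantization and LoRA, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (which would use a library stack such as Hugging Face PEFT and bitsandbytes on LLaMA-2-7b); it only shows the underlying idea: the frozen base weights are stored in a compact 4-bit form, while a small trainable low-rank correction (alpha/r) · B·A is added on top. The symmetric per-tensor quantizer, matrix sizes, rank, and scaling values below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight matrix (stand-in for one projection layer of the LLM).
d_out, d_in = 16, 16
W = rng.normal(size=(d_out, d_in)).astype(np.float32)

# --- 4-bit quantization (simple symmetric per-tensor scheme, for illustration;
# QLoRA-style pipelines use a per-block NF4 format instead) ---
def quantize_4bit(w):
    scale = np.abs(w).max() / 7.0               # map weights into the int4 range
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

q, scale = quantize_4bit(W)
W_deq = dequantize(q, scale)                    # what the forward pass actually uses

# --- LoRA: trainable low-rank update; the quantized base stays frozen ---
r, alpha = 4, 8                                 # rank and scaling, illustrative values
A = rng.normal(scale=0.01, size=(r, d_in)).astype(np.float32)
B = np.zeros((d_out, r), dtype=np.float32)      # B starts at zero, so the update starts at 0

def lora_forward(x):
    # y = x W_deq^T + (alpha/r) * x (B A)^T : dequantized base plus low-rank delta
    return x @ W_deq.T + (alpha / r) * (x @ (B @ A).T)

x = rng.normal(size=(2, d_in)).astype(np.float32)
y = lora_forward(x)
print(y.shape)  # (2, 16)
```

During fine-tuning only A and B (2 · r · 16 values here, versus 256 in W) receive gradients, which is what makes the approach parameter-efficient; the 4-bit storage of W is what keeps memory overhead low.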
Journal introduction:
The main focus of the IEEE Transactions on Consumer Electronics is the engineering and research aspects of the theory, design, construction, manufacture, and end use of mass-market electronics, systems, software, and services for consumers.