Mohamad Saleh Torkestani, Ali Alameer, Shivakumara Palaiahnakote, Taha Manosuri
{"title":"大型语言模型的包容性提示工程:道德、结构化和自适应人工智能的模块化框架","authors":"Mohamad Saleh Torkestani, Ali Alameer, Shivakumara Palaiahnakote, Taha Manosuri","doi":"10.1007/s10462-025-11330-7","DOIUrl":null,"url":null,"abstract":"<div><p>Large language models have achieved impressive results across various tasks but remain limited in their ability to adapt ethically and structurally across diverse domains without retraining. This paper presents the Inclusive Prompt Engineering Model (IPEM), a modular framework designed to enhance LLM performance, adaptability, and ethical alignment through prompt-level strategies alone. IPEM integrates four components: Memory-of-Thought for multi-turn consistency, Enhanced Chain-of-Thought prompting for logical verification, Structured and Analogical Reasoning modules for tabular and cross-domain tasks, and Evaluation and Feedback Loops that incorporate uncertainty-aware selection and bias mitigation mechanisms. Evaluated across tasks in arithmetic reasoning, healthcare triage, financial forecasting, and inclusive question answering, IPEM consistently improves model outputs over a GPT-4 baseline. Notable outcomes include up to twenty percentage points in accuracy gains, a 25 percent reduction in logical errors, and nearly 20 percent reduction in social bias scores, all without modifying model weights. Moreover, IPEM reduces annotation demands by one-third while preserving performance, demonstrating its utility in low-resource environments. By unifying ethical safeguards and reasoning mechanisms in a prompt-based system, IPEM offers a reproducible and auditable pathway for deploying adaptable and fair AI systems. The framework contributes both practical solutions and theoretical insights to the evolving field of prompt engineering.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 11","pages":""},"PeriodicalIF":13.9000,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11330-7.pdf","citationCount":"0","resultStr":"{\"title\":\"Inclusive prompt engineering for large language models: a modular framework for ethical, structured, and adaptive AI\",\"authors\":\"Mohamad Saleh Torkestani, Ali Alameer, Shivakumara Palaiahnakote, Taha Manosuri\",\"doi\":\"10.1007/s10462-025-11330-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Large language models have achieved impressive results across various tasks but remain limited in their ability to adapt ethically and structurally across diverse domains without retraining. This paper presents the Inclusive Prompt Engineering Model (IPEM), a modular framework designed to enhance LLM performance, adaptability, and ethical alignment through prompt-level strategies alone. IPEM integrates four components: Memory-of-Thought for multi-turn consistency, Enhanced Chain-of-Thought prompting for logical verification, Structured and Analogical Reasoning modules for tabular and cross-domain tasks, and Evaluation and Feedback Loops that incorporate uncertainty-aware selection and bias mitigation mechanisms. Evaluated across tasks in arithmetic reasoning, healthcare triage, financial forecasting, and inclusive question answering, IPEM consistently improves model outputs over a GPT-4 baseline. 
Notable outcomes include up to twenty percentage points in accuracy gains, a 25 percent reduction in logical errors, and nearly 20 percent reduction in social bias scores, all without modifying model weights. Moreover, IPEM reduces annotation demands by one-third while preserving performance, demonstrating its utility in low-resource environments. By unifying ethical safeguards and reasoning mechanisms in a prompt-based system, IPEM offers a reproducible and auditable pathway for deploying adaptable and fair AI systems. The framework contributes both practical solutions and theoretical insights to the evolving field of prompt engineering.</p></div>\",\"PeriodicalId\":8449,\"journal\":{\"name\":\"Artificial Intelligence Review\",\"volume\":\"58 11\",\"pages\":\"\"},\"PeriodicalIF\":13.9000,\"publicationDate\":\"2025-08-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s10462-025-11330-7.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence Review\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10462-025-11330-7\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-025-11330-7","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Inclusive prompt engineering for large language models: a modular framework for ethical, structured, and adaptive AI
Large language models have achieved impressive results across various tasks but remain limited in their ability to adapt ethically and structurally across diverse domains without retraining. This paper presents the Inclusive Prompt Engineering Model (IPEM), a modular framework designed to enhance LLM performance, adaptability, and ethical alignment through prompt-level strategies alone. IPEM integrates four components: Memory-of-Thought for multi-turn consistency, Enhanced Chain-of-Thought prompting for logical verification, Structured and Analogical Reasoning modules for tabular and cross-domain tasks, and Evaluation and Feedback Loops that incorporate uncertainty-aware selection and bias mitigation mechanisms. Evaluated across tasks in arithmetic reasoning, healthcare triage, financial forecasting, and inclusive question answering, IPEM consistently improves model outputs over a GPT-4 baseline. Notable outcomes include accuracy gains of up to 20 percentage points, a 25 percent reduction in logical errors, and a nearly 20 percent reduction in social bias scores, all without modifying model weights. Moreover, IPEM reduces annotation demands by one-third while preserving performance, demonstrating its utility in low-resource environments. By unifying ethical safeguards and reasoning mechanisms in a prompt-based system, IPEM offers a reproducible and auditable pathway for deploying adaptable and fair AI systems. The framework contributes both practical solutions and theoretical insights to the evolving field of prompt engineering.
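To make the abstract's description of the four prompt-level components more concrete, the following is a minimal, hypothetical sketch of how such a modular pipeline could be composed. The component names mirror the abstract, but all internals (class names, templates, the toy uncertainty score) are illustrative assumptions and not the authors' implementation.

```python
# Hypothetical sketch of a modular, prompt-level pipeline in the spirit of IPEM.
# All names and heuristics below are illustrative assumptions, not the paper's code.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class DialogueMemory:
    """Memory-of-Thought: carry prior turns into each new prompt for multi-turn consistency."""
    turns: List[str] = field(default_factory=list)

    def remember(self, turn: str) -> None:
        self.turns.append(turn)

    def as_context(self, last_n: int = 3) -> str:
        return "\n".join(self.turns[-last_n:])


def enhanced_cot(question: str, context: str) -> str:
    """Enhanced Chain-of-Thought: request step-by-step reasoning plus a self-verification pass."""
    return (
        f"Context from earlier turns:\n{context}\n\n"
        f"Question: {question}\n"
        "Reason step by step, then verify each step before giving a final answer."
    )


def structured_reasoning(prompt: str, table_rows: List[dict]) -> str:
    """Structured reasoning: serialise tabular data so the model can reference rows and columns."""
    header = " | ".join(table_rows[0].keys()) if table_rows else ""
    rows = "\n".join(" | ".join(str(v) for v in r.values()) for r in table_rows)
    return f"{prompt}\n\nTable:\n{header}\n{rows}"


def select_least_uncertain(candidates: List[str], score: Callable[[str], float]) -> str:
    """Evaluation and feedback loop: keep the candidate with the lowest uncertainty score.
    The scoring function is an assumption; it could be self-consistency voting,
    log-probability, or a bias-screening heuristic."""
    return min(candidates, key=score)


if __name__ == "__main__":
    memory = DialogueMemory()
    memory.remember("User asked about Q3 revenue trends.")

    prompt = enhanced_cot("Which region grew fastest?", memory.as_context())
    prompt = structured_reasoning(
        prompt,
        [{"region": "EMEA", "growth": 0.12}, {"region": "APAC", "growth": 0.18}],
    )

    # Placeholder strings standing in for multiple sampled model outputs.
    candidates = ["APAC grew fastest (18%).", "EMEA grew fastest (12%)."]
    answer = select_least_uncertain(candidates, score=lambda s: len(s))  # toy uncertainty proxy
    print(prompt)
    print(answer)
```

In this sketch, each stage only rewrites or filters text, which matches the abstract's claim that the framework operates at the prompt level without modifying model weights.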
About the journal:
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.