{"title":"用GPT模型求解有关方程组的数学问题","authors":"Mingyu Zong, Bhaskar Krishnamachari","doi":"10.1016/j.mlwa.2023.100506","DOIUrl":null,"url":null,"abstract":"<div><p>Researchers have been interested in developing AI tools to help students learn various mathematical subjects. One challenging set of tasks for school students is learning to solve math word problems. We explore how recent advances in natural language processing, specifically the rise of powerful transformer based models, can be applied to help math learners with such problems. Concretely, we evaluate the use of GPT-3, GPT-3.5, and GPT-4, all transformer models with billions of parameters recently released by OpenAI, for three related challenges pertaining to math word problems corresponding to systems of two linear equations. The three challenges are classifying word problems, extracting equations from word problems, and generating word problems. For the first challenge, we define a set of problem classes and find that GPT models generally result in classifying word problems with an overall accuracy around 70%. There is one class that all models struggle about, namely the “item and property” class, which significantly lowered the value. For the second challenge, our findings align with researchers’ expectation: newer models are better at extracting equations from word problems. The highest accuracy we get from fine-tuning GPT-3 with 1000 examples (78%) is surpassed by GPT-4 given only 20 examples (79%). For the third challenge, we again find that GPT-4 outperforms the other two models. It is able to generate problems with accuracy ranging from 76.7% to 100%, depending on the problem type.</p></div>","PeriodicalId":74093,"journal":{"name":"Machine learning with applications","volume":"14 ","pages":"Article 100506"},"PeriodicalIF":0.0000,"publicationDate":"2023-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666827023000592/pdfft?md5=3e9a69094ef1d1b06354c3533f164953&pid=1-s2.0-S2666827023000592-main.pdf","citationCount":"1","resultStr":"{\"title\":\"Solving math word problems concerning systems of equations with GPT models\",\"authors\":\"Mingyu Zong, Bhaskar Krishnamachari\",\"doi\":\"10.1016/j.mlwa.2023.100506\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Researchers have been interested in developing AI tools to help students learn various mathematical subjects. One challenging set of tasks for school students is learning to solve math word problems. We explore how recent advances in natural language processing, specifically the rise of powerful transformer based models, can be applied to help math learners with such problems. Concretely, we evaluate the use of GPT-3, GPT-3.5, and GPT-4, all transformer models with billions of parameters recently released by OpenAI, for three related challenges pertaining to math word problems corresponding to systems of two linear equations. The three challenges are classifying word problems, extracting equations from word problems, and generating word problems. For the first challenge, we define a set of problem classes and find that GPT models generally result in classifying word problems with an overall accuracy around 70%. There is one class that all models struggle about, namely the “item and property” class, which significantly lowered the value. 
For the second challenge, our findings align with researchers’ expectation: newer models are better at extracting equations from word problems. The highest accuracy we get from fine-tuning GPT-3 with 1000 examples (78%) is surpassed by GPT-4 given only 20 examples (79%). For the third challenge, we again find that GPT-4 outperforms the other two models. It is able to generate problems with accuracy ranging from 76.7% to 100%, depending on the problem type.</p></div>\",\"PeriodicalId\":74093,\"journal\":{\"name\":\"Machine learning with applications\",\"volume\":\"14 \",\"pages\":\"Article 100506\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-10-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2666827023000592/pdfft?md5=3e9a69094ef1d1b06354c3533f164953&pid=1-s2.0-S2666827023000592-main.pdf\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Machine learning with applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666827023000592\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning with applications","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666827023000592","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Solving math word problems concerning systems of equations with GPT models
Researchers have been interested in developing AI tools to help students learn various mathematical subjects. One challenging set of tasks for school students is learning to solve math word problems. We explore how recent advances in natural language processing, specifically the rise of powerful transformer-based models, can be applied to help math learners with such problems. Concretely, we evaluate GPT-3, GPT-3.5, and GPT-4, all recently released transformer models from OpenAI with billions of parameters, on three related challenges involving math word problems that correspond to systems of two linear equations. The three challenges are classifying word problems, extracting equations from word problems, and generating word problems. For the first challenge, we define a set of problem classes and find that the GPT models classify word problems with an overall accuracy of around 70%. All models struggle with one class in particular, the “item and property” class, which significantly lowers the overall accuracy. For the second challenge, our findings match the expectation that newer models are better at extracting equations from word problems: the highest accuracy we obtain by fine-tuning GPT-3 with 1000 examples (78%) is surpassed by GPT-4 given only 20 examples (79%). For the third challenge, we again find that GPT-4 outperforms the other two models. It generates problems with accuracy ranging from 76.7% to 100%, depending on the problem type.
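To make the second challenge concrete, the sketch below shows one plausible way to prompt a GPT model with a handful of in-context examples to translate a word problem into a system of two linear equations. This is not the authors' actual setup: the prompt wording, the example problems, the helper name `extract_equations`, and the choice of the OpenAI Python client are assumptions for illustration only.

```python
# Hedged sketch: few-shot equation extraction with a GPT chat model via the
# OpenAI Python client (openai>=1.0). Model name, prompt, and examples are
# illustrative assumptions, not the paper's experimental configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two in-context examples (the abstract reports giving GPT-4 only 20 such examples).
FEW_SHOT = [
    ("A farm has chickens and cows, 20 heads and 56 legs in total. "
     "How many chickens and how many cows are there?",
     "x + y = 20; 2x + 4y = 56"),
    ("Adult tickets cost $5 and child tickets cost $3. Ten tickets were sold "
     "for $38 in total. How many of each were sold?",
     "x + y = 10; 5x + 3y = 38"),
]

def extract_equations(word_problem: str, model: str = "gpt-4") -> str:
    """Ask the model to rewrite a word problem as a system of two linear equations."""
    messages = [{
        "role": "system",
        "content": ("Translate each math word problem into a system of two linear "
                    "equations in x and y. Answer with the equations only."),
    }]
    # Prepend the few-shot demonstrations as prior user/assistant turns.
    for problem, equations in FEW_SHOT:
        messages.append({"role": "user", "content": problem})
        messages.append({"role": "assistant", "content": equations})
    messages.append({"role": "user", "content": word_problem})

    response = client.chat.completions.create(
        model=model, messages=messages, temperature=0
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(extract_equations(
        "A notebook costs $2 more than a pen. Three pens and two notebooks "
        "cost $19. What is the price of each item?"
    ))
```

The model's free-text answer would still need to be parsed and checked (e.g., solved and compared against a reference solution) to compute accuracy figures like those reported in the abstract.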