Hannah E. Shear, Logan L. Britton, K. Aleks Schaefer, Bhawna Thapa, Jason S. Bergtold
{"title":"人工智能与农业和应用经济学学习和评估的未来","authors":"Hannah E. Shear, Logan L. Britton, K. Aleks Schaefer, Bhawna Thapa, Jason S. Bergtold","doi":"10.1002/jaa2.98","DOIUrl":null,"url":null,"abstract":"<p>Using a repository of historical student responses to an actual course-assigned essay prompt and a series of artificial intelligence (AI)-generated responses to the same prompt, we conduct a single-blind, randomized experiment to evaluate the performance of AI in agricultural and applied economics education. Further, we assess instructors' ability to detect the use of AI. We find that AI-generated responses to the essay received statistically significantly higher scores than those of the average student. Instructors who had previous exposure to dialog-based AI were 13 times more likely to accurately detect AI-generated essays than instructors without previous exposure to the technology.</p>","PeriodicalId":93789,"journal":{"name":"Journal of the Agricultural and Applied Economics Association","volume":"2 4","pages":"838-850"},"PeriodicalIF":0.0000,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jaa2.98","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence and the future of learning and assessment in agricultural and applied economics\",\"authors\":\"Hannah E. Shear, Logan L. Britton, K. Aleks Schaefer, Bhawna Thapa, Jason S. Bergtold\",\"doi\":\"10.1002/jaa2.98\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Using a repository of historical student responses to an actual course-assigned essay prompt and a series of artificial intelligence (AI)-generated responses to the same prompt, we conduct a single-blind, randomized experiment to evaluate the performance of AI in agricultural and applied economics education. Further, we assess instructors' ability to detect the use of AI. We find that AI-generated responses to the essay received statistically significantly higher scores than those of the average student. Instructors who had previous exposure to dialog-based AI were 13 times more likely to accurately detect AI-generated essays than instructors without previous exposure to the technology.</p>\",\"PeriodicalId\":93789,\"journal\":{\"name\":\"Journal of the Agricultural and Applied Economics Association\",\"volume\":\"2 4\",\"pages\":\"838-850\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-12-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jaa2.98\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of the Agricultural and Applied Economics Association\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/jaa2.98\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the Agricultural and Applied Economics Association","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/jaa2.98","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Artificial intelligence and the future of learning and assessment in agricultural and applied economics
Using a repository of historical student responses to an actual course-assigned essay prompt and a series of artificial intelligence (AI)-generated responses to the same prompt, we conduct a single-blind, randomized experiment to evaluate the performance of AI in agricultural and applied economics education. Further, we assess instructors' ability to detect the use of AI. We find that AI-generated responses to the essay received statistically significantly higher scores than those of the average student. Instructors who had previous exposure to dialog-based AI were 13 times more likely to accurately detect AI-generated essays than instructors without previous exposure to the technology.
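The paper does not publish its code or data, and the abstract does not specify the exact statistical procedures. Purely as an illustration, the sketch below uses synthetic data and two commonly used tools that fit the reported comparisons: a Welch t-test for the score gap between AI-generated and student essays, and a logistic regression whose exponentiated coefficient gives an odds ratio analogous to the "13 times more likely" detection result. All variable names, sample sizes, and distributions are hypothetical assumptions.

```python
# Illustrative sketch only; not the authors' analysis. Data are synthetic.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)

# (1) Hypothetical graded scores (0-100) for student vs. AI-generated essays.
student_scores = rng.normal(loc=78, scale=8, size=200)   # historical student responses
ai_scores = rng.normal(loc=88, scale=5, size=30)         # AI-generated responses
t_stat, p_value = stats.ttest_ind(ai_scores, student_scores, equal_var=False)
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# (2) Hypothetical instructor detection data:
#     detected = 1 if the instructor correctly flagged an AI-generated essay,
#     prior_exposure = 1 if the instructor had used dialog-based AI before.
prior_exposure = rng.integers(0, 2, size=120)
detected = rng.binomial(1, np.where(prior_exposure == 1, 0.70, 0.15))

X = sm.add_constant(prior_exposure)
logit = sm.Logit(detected, X).fit(disp=False)
odds_ratio = np.exp(logit.params[1])  # odds ratio on prior exposure
print(f"Odds ratio for prior exposure: {odds_ratio:.1f}")
```

With synthetic inputs like these, the printed odds ratio is only a demonstration of the calculation, not a reproduction of the paper's estimate.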