The ChatGPT Artificial Intelligence Chatbot: How Well Does It Answer Accounting Assessment Questions?

Authors: David A. Wood, M. P. Achhpilia, Mollie T. Adams, Sanaz Aghazadeh, Kazeem O. Akinyele, Mfon Akpan, Kristian D. Allee, Abigail Allen, E. D. Almer, Daniel Ames, Viktor Arity, Dereck Barr‐Pulliam, K. A. Basoglu, Andrew Belnap, Jeremiah W. Bentley, T. Berg, Nathan R. Berglund, Erica Berry, Avishek Bhandari, Md Nazmul Hasan Bhuyan, Paulette Black, Eva Blondeel, David Bond, Annika Bonrath, A. F. Borthick, E. S. Boyle, M. Bradford, D. M. Brandon, Joseph F. Brazel, Bryan G. Brockbank, Marcus Burger, Dmitri Byzalov, James N. Cannon, Cecile Q. Caro, Abraham H. Carr, Jack M. Cathey, Ryan Cating, K. Charron, Stacy Chavez, Jason Chen, Jennifer C. Chen, Jennifer W. Chen, Christine Cheng, Xu Cheng, Brant E. Christensen, K. Church, N. J. Cicone, Patience Constance, Lauren A. Cooper, Candice L. Correia, Joshua G. Coyne, W. Cram, Asher Curtis, Ronald J. Daigle, Steven Dannemiller, Stephan A. Davenport, Gregory S. Dawson, Karen J. De Meyst, Scott Dell, Sebahattin Demirkan, Christine A. Denison, Hrishikesh Desai, S. DeSimone,

Journal: ISSUES IN ACCOUNTING EDUCATION (JCR Q3, Business, Finance)
Publication date: 2023-04-18 (Journal Article)
DOI: https://doi.org/10.2308/issues-2023-013
Citations: 7
Abstract
ChatGPT, a language-learning model chatbot, has garnered considerable attention for its ability to respond to users’ questions. Using data from 14 countries and 186 institutions, we compare ChatGPT and student performance for 28,085 questions from accounting assessments and textbook test banks. As of January 2023, ChatGPT provides correct answers for 56.5 percent of questions and partially correct answers for an additional 9.4 percent of questions. When considering point values for questions, students significantly outperform ChatGPT with a 76.7 percent average on assessments compared to 47.5 percent for ChatGPT if no partial credit is awarded and 56.5 percent if partial credit is awarded. Still, ChatGPT performs better than the student average for 15.8 percent of assessments when we include partial credit. We provide evidence of how ChatGPT performs on different question types, accounting topics, class levels, open/closed assessments, and test bank questions. We also discuss implications for accounting education and research.
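The abstract reports ChatGPT's score two ways: 47.5 percent with no partial credit and 56.5 percent with partial credit. The scoring idea can be sketched as a small function that weights each question's point value by its outcome. This is an illustrative sketch only: the function name, the half-credit weight, and the sample assessment below are assumptions, not details from the study.

```python
def score_percentage(questions, partial_credit=0.0):
    """Score an assessment given (points, outcome) pairs.

    outcome is 'correct', 'partial', or 'incorrect'.
    partial_credit is the fraction of points awarded for a partially
    correct answer; the study's exact weighting is not stated in the
    abstract, so 0.5 below is purely illustrative.
    """
    earned = 0.0
    total = 0.0
    for points, outcome in questions:
        total += points
        if outcome == "correct":
            earned += points
        elif outcome == "partial":
            earned += points * partial_credit
    return 100.0 * earned / total

# Hypothetical five-question assessment, 2 points per question.
qs = [(2, "correct"), (2, "correct"), (2, "partial"),
      (2, "incorrect"), (2, "incorrect")]

print(score_percentage(qs))                      # no partial credit: 40.0
print(score_percentage(qs, partial_credit=0.5))  # half credit: 50.0
```

Under this toy scoring, awarding partial credit lifts the score from 40.0 to 50.0 percent, mirroring how ChatGPT's reported score rises from 47.5 to 56.5 percent when partially correct answers count.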
Journal Description:
The mission of Issues in Accounting Education is to publish research, commentaries, instructional resources, and book reviews that assist accounting faculty in teaching and that address important issues in accounting education. The journal will consist of two major sections, “Research and Commentary” and “Instructional Resources”.