Quantifying learning from web-based course materials using different pre and post tests
P. Steif, M. Lovett, A. Dollár
2012 Frontiers in Education Conference Proceedings, published 2012-10-03
DOI: 10.1109/FIE.2012.6462255 (https://doi.org/10.1109/FIE.2012.6462255)
Citations: 0
Abstract
Engineering instructors seek to gauge the effectiveness of their instruction. One common gauge has been to use standardized tests, such as concept inventories, and to quantify learning as the change in score over the semester. Here we question whether that approach is always the best practice for gauging the effect of instruction, and we propose an alternative: administering different tests at the start and end of the semester. In particular, to gauge the influence of one aspect of instruction, the use of interactive web-based course materials that had been developed for Statics, we administered the Force Concept Inventory at the start of the course and the Statics Concept Inventory at the end of the course. Correlations and then linear regression were applied to study how conceptual knowledge measured at the end of the course depended on conceptual knowledge measured at the start and on the amount of use of the web-based courseware. Usage of the web-based courseware was found to promote conceptual knowledge at the end of the course in a statistically significant way only after accounting for initial knowledge, as judged by the different conceptual test administered at the start of the course. Thus, it is not necessary to measure gain on a single test; instead, each test should capture well the variation in relevant ability across students at the time it is administered.
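The analysis described above, regressing an end-of-course concept-inventory score on a start-of-course score plus courseware usage, can be sketched as an ordinary least-squares fit. The sketch below is illustrative only, not the authors' actual analysis: all data are synthetic, and the variable names (`fci_pre`, `sci_post`, `usage`) and coefficient values are assumptions chosen for the example.

```python
# Illustrative sketch (not the paper's analysis): OLS regression of a
# post-test score on a pre-test score and courseware usage. All data
# below are synthetic; names and coefficients are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 120
fci_pre = rng.normal(20, 5, n)    # start-of-course score (e.g., FCI)
usage = rng.uniform(0, 40, n)     # hours of web-based courseware use
noise = rng.normal(0, 2, n)
# Synthetic "true" model for the end-of-course score (e.g., SCI)
sci_post = 2.0 + 0.6 * fci_pre + 0.15 * usage + noise

# Fit: sci_post ~ intercept + fci_pre + usage
X = np.column_stack([np.ones(n), fci_pre, usage])
beta, *_ = np.linalg.lstsq(X, sci_post, rcond=None)
intercept, b_pre, b_usage = beta
print(f"pre-test coeff: {b_pre:.2f}, usage coeff: {b_usage:.2f}")
```

The point of controlling for `fci_pre` is that the usage coefficient then estimates the association between courseware use and the final score among students with similar initial knowledge, which is the sense in which the abstract reports usage as significant only after accounting for initial knowledge.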