Title: A contemporary quantitative model for continuous choice under reinforcing and punishing contingencies
Authors: Bryan Klapes, J. J. McDowell
DOI: 10.1002/jeab.70009
Journal: Journal of the Experimental Analysis of Behavior, 123(3), 435-454
Publication date: 2025-04-22
URL: https://onlinelibrary.wiley.com/doi/10.1002/jeab.70009
Citations: 0
Abstract
We developed five novel quantitative models of punishment based on the generalized matching law (GML). Two of the new models were based on Deluty's additive theory of punishment, two were based on de Villiers's subtractive theory of punishment, and the last was based on the concatenated GML (cGML). Using information criteria, we compared the descriptive accuracies of these models against each other and against the GML. To obtain a data set that fairly compared these complex models, we exposed 30 human participants to 36 concurrent random-interval random-interval reinforcement schedules via a recently developed rapid-acquisition operant procedure (procedure for rapidly establishing steady-state behavior). This experimental design allowed us to fit the models to 30 data sets ranging from 22 to 36 data points each, comparing the models' descriptive accuracy using the Akaike information criterion corrected for small samples (AICc). The punishment model based on the cGML had the lowest AICc value of the set, with an Akaike weight of 0.99. Thus, this cGML-based punishment model is presumed to be the best contemporary quantitative model of punishment. We discuss the theoretical strengths and weaknesses of these models and future directions of GML-based punishment model development.
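The model-comparison procedure described above (AICc scores with an Akaike weight for each candidate model) follows a standard recipe. The sketch below is illustrative only, using hypothetical function names and made-up input values, not the paper's data or fitting code; it assumes each model's maximized log-likelihood, parameter count, and sample size are already known.

```python
import math

def aicc(log_likelihood, k, n):
    # AIC with the small-sample correction term:
    # AICc = AIC + 2k(k+1)/(n - k - 1), where k is the number of
    # free parameters and n is the number of data points.
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

def akaike_weights(aicc_values):
    # Akaike weight of model i: exp(-delta_i / 2) normalized over the
    # candidate set, where delta_i = AICc_i - min(AICc). A weight near
    # 1 (e.g., the 0.99 reported for the cGML-based model) means that
    # model carries nearly all the evidence within the set.
    best = min(aicc_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aicc_values]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AICc values for three competing models.
weights = akaike_weights([100.0, 110.0, 130.0])
```

A delta of 10 AICc units already shrinks a model's weight to under 1% of the best model's, which is why a single weight of 0.99 across a six-model set is decisive.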
About the journal:
Journal of the Experimental Analysis of Behavior is primarily for the original publication of experiments relevant to the behavior of individual organisms.