A contemporary quantitative model for continuous choice under reinforcing and punishing contingencies

IF 1.4 · JCR Q4 (Behavioral Sciences) · CAS Zone 3 (Psychology)
Bryan Klapes, J. J McDowell
Journal: Journal of the Experimental Analysis of Behavior, 123(3), 435-454
DOI: 10.1002/jeab.70009 (https://onlinelibrary.wiley.com/doi/10.1002/jeab.70009)
Published: 2025-04-22
Citations: 0

Abstract

We developed five novel quantitative models of punishment based on the generalized matching law (GML). Two of the new models were based on Deluty's additive theory of punishment, two were based on de Villiers's subtractive theory of punishment, and the last was based on the concatenated GML (cGML). Using information criteria, we compared the descriptive accuracies of these models against each other and against the GML. To obtain a data set that fairly compared these complex models, we exposed 30 human participants to 36 concurrent random-interval random-interval reinforcement schedules via a recently developed rapid-acquisition operant procedure (procedure for rapidly establishing steady-state behavior). This experimental design allowed us to fit the models to 30 data sets ranging from 22 to 36 data points each, comparing the models' descriptive accuracy using the Akaike information criterion corrected for small samples (AICc). The punishment model based on the cGML had the lowest AICc value of the set, with an Akaike weight of 0.99. Thus, this cGML-based punishment model is presumed to be the best contemporary quantitative model of punishment. We discuss the theoretical strengths and weaknesses of these models and future directions of GML-based punishment model development.
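The model comparison described above rests on two standard quantities: the small-sample-corrected Akaike information criterion (AICc = AIC + 2k(k+1)/(n-k-1), with AIC computed for least-squares fits as n·ln(RSS/n) + 2k) and Akaike weights, which normalize the relative likelihoods exp(-Δi/2) of each candidate model. The sketch below illustrates both computations; the AICc values fed to `akaike_weights` are hypothetical and for illustration only, not values from the study.

```python
import math

def aicc(n, k, rss):
    """AICc for a least-squares fit with n data points, k parameters, residual sum of squares rss."""
    # AIC for least-squares estimation: n * ln(RSS / n) + 2k
    aic = n * math.log(rss / n) + 2 * k
    # Small-sample correction term 2k(k + 1) / (n - k - 1)
    return aic + (2 * k * (k + 1)) / (n - k - 1)

def akaike_weights(aicc_values):
    """Akaike weight of each model: exp(-delta_i / 2) normalized over the candidate set."""
    best = min(aicc_values)
    rel_likelihoods = [math.exp(-(v - best) / 2) for v in aicc_values]
    total = sum(rel_likelihoods)
    return [r / total for r in rel_likelihoods]

# Hypothetical AICc values for three candidate models (illustration only)
weights = akaike_weights([120.0, 135.2, 128.7])
print([round(w, 3) for w in weights])  # the lowest-AICc model receives most of the weight
```

An Akaike weight near 0.99, as reported for the cGML-based model, means essentially all of the evidential support within the candidate set falls on that one model.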

Source journal

CiteScore: 3.90
Self-citation rate: 14.80%
Annual publications: 83
Review time: >12 weeks

Journal description: Journal of the Experimental Analysis of Behavior is primarily for the original publication of experiments relevant to the behavior of individual organisms.