{"title":"从基于判断的模拟反应到基于分析的分数:一个过程模型,案例研究,以及管理者样本中管理者反应的实证评估","authors":"Diana R. Sanchez, Saar Van Lysebetten, A. Gibbons","doi":"10.1037/mgr0000049","DOIUrl":null,"url":null,"abstract":"Workplace simulations, often used to assess or train employees, historically rely on human raters who use judgment to evaluate and score the behavior they observe (judgment-based scoring). Such judgments are often complex and holistic, raising concerns about their reliability and susceptibility to bias. Human raters are also resource-intensive; thus, organizations are interested in strategies for reducing the role of human judgment in simulations. For example, using a checklist of discrete, clearly observable behaviors with predefined point values (analytic scoring) might be expected to simplify the rating process and produce more consistent scores. With the use of good text- or voice-recognition software, such a checklist might even be amenable to automation, eliminating the need for human raters altogether. Although the possibility of such potential benefits may appeal to organizations, it is unclear how changing the scoring method in this way may affect the meaning of scores. The authors developed a framework for converting judgment-based scores to analytic scores, using the automated scoring and qualitative content analysis literatures, and applied this framework to the original constructed responses of 84 managers in a workplace simulation. The responses were adapted into discrete behaviors and scored analytically. Results indicated that responses could be adequately summarized using a reasonable number of discrete behaviors, and that analytic scores converged significantly but not strongly with the original judgment-based scores from human raters. We discuss implications for future research and provide recommendations for practitioners considering automated scores in workplace simulations.","PeriodicalId":44734,"journal":{"name":"Psychologist-Manager Journal","volume":null,"pages":null},"PeriodicalIF":0.6000,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Adapting Simulation Responses From Judgment-Based to Analytic-Based Scores: A Process Model, Case Study, and Empirical Evaluation of Managers’ Responses Among a Sample of Managers\",\"authors\":\"Diana R. Sanchez, Saar Van Lysebetten, A. Gibbons\",\"doi\":\"10.1037/mgr0000049\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Workplace simulations, often used to assess or train employees, historically rely on human raters who use judgment to evaluate and score the behavior they observe (judgment-based scoring). Such judgments are often complex and holistic, raising concerns about their reliability and susceptibility to bias. Human raters are also resource-intensive; thus, organizations are interested in strategies for reducing the role of human judgment in simulations. For example, using a checklist of discrete, clearly observable behaviors with predefined point values (analytic scoring) might be expected to simplify the rating process and produce more consistent scores. With the use of good text- or voice-recognition software, such a checklist might even be amenable to automation, eliminating the need for human raters altogether. Although the possibility of such potential benefits may appeal to organizations, it is unclear how changing the scoring method in this way may affect the meaning of scores. 
The authors developed a framework for converting judgment-based scores to analytic scores, using the automated scoring and qualitative content analysis literatures, and applied this framework to the original constructed responses of 84 managers in a workplace simulation. The responses were adapted into discrete behaviors and scored analytically. Results indicated that responses could be adequately summarized using a reasonable number of discrete behaviors, and that analytic scores converged significantly but not strongly with the original judgment-based scores from human raters. We discuss implications for future research and provide recommendations for practitioners considering automated scores in workplace simulations.\",\"PeriodicalId\":44734,\"journal\":{\"name\":\"Psychologist-Manager Journal\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.6000,\"publicationDate\":\"2017-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Psychologist-Manager Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1037/mgr0000049\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Business, Management and Accounting\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Psychologist-Manager Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1037/mgr0000049","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Business, Management and Accounting","Score":null,"Total":0}
Adapting Simulation Responses From Judgment-Based to Analytic-Based Scores: A Process Model, Case Study, and Empirical Evaluation of Managers’ Responses Among a Sample of Managers
Workplace simulations, often used to assess or train employees, have historically relied on human raters who use judgment to evaluate and score the behavior they observe (judgment-based scoring). Such judgments are often complex and holistic, raising concerns about their reliability and susceptibility to bias. Human raters are also resource-intensive; thus, organizations are interested in strategies for reducing the role of human judgment in simulations. For example, using a checklist of discrete, clearly observable behaviors with predefined point values (analytic scoring) might be expected to simplify the rating process and produce more consistent scores. With good text- or voice-recognition software, such a checklist might even be amenable to automation, eliminating the need for human raters altogether. Although such potential benefits may appeal to organizations, it is unclear how changing the scoring method in this way affects the meaning of the resulting scores. Drawing on the automated scoring and qualitative content analysis literatures, we developed a framework for converting judgment-based scores to analytic scores and applied it to the original constructed responses of 84 managers in a workplace simulation. The responses were adapted into discrete behaviors and scored analytically. Results indicated that responses could be adequately summarized with a reasonable number of discrete behaviors, and that the analytic scores converged significantly, but not strongly, with the original judgment-based scores from human raters. We discuss implications for future research and offer recommendations for practitioners considering automated scoring in workplace simulations.
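To make the analytic-scoring idea concrete, the sketch below matches a constructed text response against a checklist of discrete behaviors, each carrying a predefined point value, and sums the points for every behavior detected. This is a minimal hypothetical illustration, not the authors’ instrument: the checklist entries, point values, names (BEHAVIOR_CHECKLIST, score_response), and the naive keyword matching are all assumptions; a production system would rely on proper text- or voice-recognition software rather than substring matching.

```python
# Hypothetical sketch of analytic (checklist-based) scoring of a constructed
# text response. The behaviors, keywords, and point values below are
# illustrative assumptions only, not the instrument used in the study.

BEHAVIOR_CHECKLIST = [
    # (behavior label, keywords that signal the behavior, predefined points)
    ("acknowledges the problem", ("understand", "i see", "acknowledge"), 1.0),
    ("asks a clarifying question", ("could you", "can you tell", "what exactly"), 1.0),
    ("proposes a concrete next step", ("let's schedule", "i will", "next step"), 2.0),
]

def score_response(response: str) -> float:
    """Sum the point values of all checklist behaviors detected in the response."""
    text = response.lower()
    total = 0.0
    for behavior, keywords, points in BEHAVIOR_CHECKLIST:
        # Credit the behavior once if any of its signal keywords appears.
        if any(keyword in text for keyword in keywords):
            total += points
    return total

if __name__ == "__main__":
    reply = ("I understand your concern. Could you tell me more about the delays? "
             "I will follow up with the team as a next step.")
    print(score_response(reply))  # 4.0: all three behaviors detected
```

Because each checklist item is discrete and its point value is fixed in advance, two raters (or an automated matcher) applying the same checklist to the same transcript should produce the same total, which is the consistency advantage the abstract attributes to analytic scoring.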