Journal of Applied Measurement: Latest Articles

Psychometric Properties and Convergent Validity of the Chinese Version of the Rosenberg Self-Esteem Scale.
Journal of applied measurement. Pub Date: 2018-01-01. Vol. 19(4), pp. 413-427.
Meng-Ting Lo, Ssu-Kuang Chen, Ann A O'Connell
Abstract: The present study used the Rasch rating scale model (RSM) to reassess the psychometric properties of the Chinese version of the Rosenberg Self-Esteem Scale (RSES) among 501 Grade 10 students in Taiwan. Reliability, dimensionality, and differential item functioning were examined. The dimensionality assumption was met after excluding item 8 ("I wish I could have more respect for myself."). The successive response categories for item 7 ("I feel that I am a person of worth, at least on an equal plane with others.") were not located in the expected order. After eliminating items 7 and 8 from the analysis, the remaining 8-item RSES had acceptable fit statistics, good content coverage, and high categorical omega and Rasch person and item reliability. The five response categories performed well, and evidence for convergent validity was established through the high correlation between RSES and psychological well-being scores. Implications and recommendations for instrument users are discussed.
Citations: 0
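The rating scale model used in this study assigns each response category a probability driven by the person measure, the item difficulty, and a set of category thresholds shared across items. Below is a minimal sketch of those category probabilities in Python; the function name and parameter values are illustrative, not estimates from the study.

```python
import numpy as np

def rsm_category_probs(theta, delta, taus):
    """Rasch rating scale model: probability of each response category
    0..m for one person (theta) on one item (difficulty delta), with a
    common set of Andrich thresholds taus (length m) shared by all items."""
    steps = theta - delta - np.asarray(taus, dtype=float)
    # Numerator for category k is exp of the cumulative sum of the first
    # k step terms; the empty sum for category 0 is exp(0) = 1.
    numerators = np.exp(np.concatenate(([0.0], np.cumsum(steps))))
    return numerators / numerators.sum()

# Illustrative values: a person slightly above the item's difficulty and
# five response categories (four thresholds), matching the RSES format.
probs = rsm_category_probs(theta=0.5, delta=0.0, taus=[-1.5, -0.5, 0.5, 1.5])
print(probs.round(3))  # probabilities over categories 0..4, summing to 1
```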
A Rasch Model Analysis of the Emotion Regulation Questionnaire.
Journal of applied measurement. Pub Date: 2018-01-01. Vol. 19(3), pp. 258-270.
Michael J Ireland, Hong Eng Goh, Ida Marais
Abstract: The 10-item Emotion Regulation Questionnaire (ERQ) was developed to measure individual differences in the tendency to use two common emotion regulation strategies: cognitive reappraisal and suppression. The current study examined the psychometric properties of the ERQ in a heterogeneous sample of 713 community residents (64.9% female) using the polytomous Rasch model. The results showed that the 10-item ERQ was multidimensional and supported two distinct factors. The reappraisal and suppression subscales were both found to be unidimensional and to fit the Rasch model. No evidence of local dependence was observed, and the five response categories functioned as intended. Differential item functioning (DIF) was assessed across sub-samples defined by gender, self-reported symptoms of mental illness, regular meditation practice, and age group; no evidence emerged of items functioning differently across any of these groups. Using Rasch measure scores, a number of meaningful group differences in person location emerged: less use of reappraisal was reported by younger adults, non-meditators, and those reporting symptoms of mental illness. Non-meditators also reported greater use of suppression compared with regular meditators; no other age, gender, or symptom-group differences emerged for suppression.
Citations: 0
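The local-dependence check mentioned above is often carried out by correlating item residuals (observed minus model-expected scores) and flagging pairs whose residual correlation stands out, a Q3-style screen. A minimal sketch, assuming a persons-by-items matrix of standardized residuals is already available; the variable names and the 0.2 flagging margin are illustrative assumptions.

```python
import numpy as np

def flag_local_dependence(std_residuals, margin=0.2):
    """Q3-style screen: correlate standardized item residuals and return
    item pairs whose residual correlation exceeds the average off-diagonal
    residual correlation by more than `margin`."""
    resid_corr = np.corrcoef(std_residuals, rowvar=False)
    n_items = resid_corr.shape[0]
    baseline = resid_corr[~np.eye(n_items, dtype=bool)].mean()
    return [(i, j, round(resid_corr[i, j], 3))
            for i in range(n_items) for j in range(i + 1, n_items)
            if resid_corr[i, j] - baseline > margin]

# Illustrative use with random residuals (no real dependence expected,
# so the flagged list should usually be empty).
rng = np.random.default_rng(0)
print(flag_local_dependence(rng.standard_normal((713, 10))))
```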
Equating Errors and Scale Drift in Linked-Chain IRT Equating with Mixed-Format Tests.
Journal of applied measurement. Pub Date: 2018-01-01. Vol. 19(1), pp. 41-58.
Bo Hu
Abstract: In linked-chain equating, equating errors may accumulate and cause scale drift. This simulation study extends the investigation of scale drift in linked-chain equating to mixed-format tests. Specifically, the impact of the equating method and of the characteristics of the anchor test and equating chain on equating errors and scale drift in IRT true-score equating is examined. To evaluate equating results, a new method is used to derive true linking coefficients. The results indicate that the characteristic curve methods produce more accurate and reliable equating results than the moment methods. Although using more anchor items or an anchor test configuration with more IRT parameters can lower the variability of equating results, neither helps control equating bias. Additionally, scale drift increases when an equating chain runs longer or when poorly calibrated test forms are added to the chain. The role of calibration precision in evaluating equating results is highlighted.
Citations: 0
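The moment methods evaluated in this study place a new form's item parameters onto the base scale with a linear transformation whose coefficients are derived from the anchor items. Below is a minimal mean/sigma sketch; the anchor difficulties are illustrative, and in linked-chain equating the resulting transformations are composed link by link, which is how errors can accumulate.

```python
import numpy as np

def mean_sigma_linking(b_new, b_base):
    """Mean/sigma moment method: find slope A and intercept B so that
    A * b_new + B has the same mean and SD as the base-form anchor
    difficulties b_base."""
    A = np.std(b_base, ddof=1) / np.std(b_new, ddof=1)
    B = np.mean(b_base) - A * np.mean(b_new)
    return A, B

# Anchor-item difficulties on the two scales (illustrative values).
b_new = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
b_base = np.array([-1.0, -0.2, 0.3, 1.0, 1.8])
A, B = mean_sigma_linking(b_new, b_base)
print(A, B)           # linking coefficients for this link of the chain
print(A * b_new + B)  # new-form anchors placed on the base scale
```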
Validation of Response Similarity Analysis for the Detection of Academic Cheating: An Experimental Study.
Journal of applied measurement. Pub Date: 2018-01-01. Vol. 19(1), pp. 59-75.
Georgios D Sideridis, Cengiz Zopluoglu
Abstract: The purpose of the present study was to evaluate various analytical means of detecting academic cheating in an experimental setting. The omega index was evaluated against a gold criterion of academic cheating, defined as a discrepant score between two administrations, in an experimental study with real test takers. Participants were 164 elementary school students who were administered a mathematics exam followed by an equivalent mock exam, under conditions of strict and relaxed invigilation, respectively. Discrepant scores were defined as those exceeding 7 responses in either direction (correct or incorrect) beyond what was expected due to chance. Results indicated that the omega index successfully captured more than 39% of the cases that exceeded the plus-or-minus-7 discrepancy criterion. It is concluded that response similarity analysis may be an important tool for detecting academic cheating.
Citations: 0
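The omega index works by comparing the observed number of matching responses in an examinee pair with the number expected if the two responded independently, standardized by its model-implied standard deviation. Below is a simplified sketch in that spirit, assuming per-item match probabilities have already been derived from an item response model; all names and values are illustrative, not the study's implementation.

```python
import numpy as np

def similarity_index(responses_a, responses_b, match_probs):
    """Standardized response-similarity statistic: (observed matches -
    expected matches) / SD of matches, where match_probs[i] is the
    model-implied probability that the pair agrees on item i if they
    respond independently."""
    observed = np.sum(np.asarray(responses_a) == np.asarray(responses_b))
    expected = np.sum(match_probs)
    sd = np.sqrt(np.sum(match_probs * (1.0 - match_probs)))
    return (observed - expected) / sd

# Illustrative pair: 10 multiple-choice items, highly similar answer strings.
a = [1, 3, 2, 4, 1, 2, 3, 1, 4, 2]
b = [1, 3, 2, 4, 1, 2, 3, 2, 4, 2]
p = np.full(10, 0.35)             # assumed model-based match probabilities
print(similarity_index(a, b, p))  # large positive values suggest copying
```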
Person-Level Analysis of the Effect of Cognitive Loading by Question Difficulty and Question Time Intensity on Didactic Examination Fluency (Speed-Accuracy Tradeoff).
Journal of applied measurement. Pub Date: 2018-01-01. Vol. 19(3), pp. 229-242.
James J Thompson
Abstract: Fluency may be considered as a conjoint measure of work product quality and speed. It is especially useful in educational and medical settings to evaluate expertise and/or competence. In this paper, didactic exams were used to model fluency. Binned propensity matching with question difficulty and time intensity was used to define a 'load' variable and construct fluency (sum correct / elapsed response time). Response surfaces as speed-accuracy tradeoffs resulted from the analysis. Person-by-load fluency matrices behaved well in Rasch analysis and warranted the definition of a person fluency variable ('skill'). A path model with skill and load as mediators substantially described the fluency data. The indirect paths through skill and load dominated direct variable effects. This is supportive evidence that skill and load have stand-alone merit. Therefore, it appears that the constructs of skill, load, and fluency could provide psychometrically defensible descriptors when utilized in appropriate contexts.
Citations: 0
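The fluency measure defined above is the ratio of correct work to elapsed response time, computed within bins of the load variable to trace a speed-accuracy surface. A minimal sketch follows; the scores, times, load values, and bin edges are illustrative assumptions.

```python
import numpy as np

def fluency(correct, response_times):
    """Fluency for one person on one set of questions: number answered
    correctly divided by total elapsed response time."""
    return np.sum(correct) / np.sum(response_times)

def binned_fluency(correct, response_times, load, bin_edges):
    """Fluency within bins of a 'load' variable (e.g. question difficulty
    combined with time intensity), giving one speed-accuracy point per bin."""
    bins = np.digitize(load, bin_edges)
    return {b: fluency(correct[bins == b], response_times[bins == b])
            for b in np.unique(bins)}

# Illustrative data: 6 questions scored 0/1, response times in minutes.
correct = np.array([1, 1, 0, 1, 0, 1])
times = np.array([0.8, 1.2, 2.5, 1.0, 3.0, 1.5])
load = np.array([0.2, 0.4, 0.9, 0.3, 1.1, 0.5])
print(fluency(correct, times))                           # overall fluency
print(binned_fluency(correct, times, load, [0.5, 1.0]))  # fluency by load bin
```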
Developing and Validating a Scientific Multi-Text Reading Comprehension Assessment: In the Text Case of the Dispute of whether to Continue the Fourth Nuclear Power Plant Construction in Taiwan.
Journal of applied measurement. Pub Date: 2018-01-01. Vol. 19(3), pp. 320-337.
Lin Hsiao-Hui, Yuh-Tsuen Tzeng
Abstract: This study aimed to advance the Scientific Multi-Text Reading Comprehension Assessment (SMTRCA) by developing a rubric consisting of four subscales: information retrieval, information generalization, information interpretation, and information integration. The assessment tool comprised 11 closed-ended items, 8 open-ended items, and the accompanying rubric. Two texts describing opposing views of the dispute over whether to continue the Fourth Nuclear Power Plant construction in Taiwan were developed, and 1535 grade 5-9 students read the two texts in a counterbalanced order and answered the test items. First, the results showed that Cronbach's alpha values were above .9, indicating very good intra-rater consistency, and the Kendall coefficient of concordance for inter-rater reliability was larger than .8, denoting a consistent scoring pattern between raters. Second, many-facet Rasch measurement showed significant differences in rater severity, and both severe and lenient raters could distinguish high- versus low-ability students effectively. A comparison of the rating scale model and the partial credit model indicated that each rater had a unique rating scale structure, meaning that the rating procedures involve human interpretation and evaluation during scoring, so a machine-like level of consistency is difficult to reach; this is in line with expectations for typical human judgment processes. Third, the Cronbach's alpha coefficient for the full assessment was above .85, denoting that the SMTRCA has high internal consistency. Finally, confirmatory factor analysis showed an acceptable goodness of fit for the SMTRCA. These results suggest that the SMTRCA is a useful tool for measuring multi-text reading comprehension abilities.
Citations: 0
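Two of the reliability figures reported above, Cronbach's alpha for internal consistency and Kendall's coefficient of concordance for inter-rater agreement, can be computed directly from the score and rating matrices. A minimal sketch follows (the ties correction for Kendall's W is omitted); the simulated matrices are illustrative only.

```python
import numpy as np
from scipy.stats import rankdata

def cronbach_alpha(scores):
    """Cronbach's alpha for an (examinees x items) score matrix."""
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    k = scores.shape[1]
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def kendalls_w(ratings):
    """Kendall's coefficient of concordance for an (objects x raters)
    rating matrix: 1 = perfect agreement, 0 = no agreement."""
    n, m = ratings.shape
    ranks = np.column_stack([rankdata(ratings[:, j]) for j in range(m)])
    rank_sums = ranks.sum(axis=1)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

rng = np.random.default_rng(1)
scores = rng.integers(0, 4, size=(1535, 19)).astype(float)  # illustrative item scores
ratings = rng.integers(0, 4, size=(30, 3)).astype(float)    # 30 scripts, 3 raters
print(cronbach_alpha(scores))
print(kendalls_w(ratings))
```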
Psychometric Properties and Differential Item Functioning of a Web-Based Assessment of Children's Social Perspective-Taking.
Journal of applied measurement. Pub Date: 2018-01-01. Vol. 19(1), pp. 93-105.
Beyza Aksu Dunya, Clark McKown, Everett V Smith
Abstract: Social perspective-taking (SPT), which involves the ability to infer others' intentions, is a consequential social cognitive process. The purpose of this study is to evaluate the psychometric properties of a web-based social perspective-taking (SELweb SPT) assessment designed for children in kindergarten through third grade. Data were collected from two separate samples of children: the first sample included 3224 children and the second included 4419 children. Data were calibrated using the Rasch dichotomous model (Rasch, 1960), and differential item and test functioning were evaluated across gender and ethnicity groups. Across both samples, we found evidence of consistent item fit, a unidimensional item structure, and adequate item targeting overall; however, weaker targeting at high and low ability levels suggests that more items are needed to distinguish low- and high-ability respondents. Analyses of DIF found some significant item-level DIF across gender, but no DIF across ethnicity. Analyses of person measure calibrations with and without DIF items evidenced negligible differential test functioning (DTF) across gender and ethnicity groups in both samples.
Citations: 0
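The calibration above uses the dichotomous Rasch model, and a simple way to screen for the gender DIF it reports is to compare item difficulties calibrated separately in the two groups. A minimal sketch, assuming difficulty estimates and standard errors are already available for each group; all values are illustrative.

```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability of a correct response for a
    person at ability theta on an item of difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def dif_z(b_group1, se_group1, b_group2, se_group2):
    """Simple DIF screen: z statistic for the difference between item
    difficulties calibrated separately in two groups (e.g. boys vs girls)."""
    return (b_group1 - b_group2) / np.sqrt(se_group1**2 + se_group2**2)

# Illustrative separate calibrations for 4 items in two gender groups.
b_g1 = np.array([-0.8, 0.1, 0.6, 1.2]); se_g1 = np.array([0.05, 0.05, 0.06, 0.07])
b_g2 = np.array([-0.7, 0.4, 0.6, 1.1]); se_g2 = np.array([0.05, 0.05, 0.06, 0.07])
z = dif_z(b_g1, se_g1, b_g2, se_g2)
print(z)                      # |z| well above ~2 would flag an item for DIF
print(rasch_prob(0.0, b_g1))  # expected scores for a person of average ability
```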
Development and Calibration of Chemistry Items to Create an Item Bank, using the Rasch Measurement Model.
Journal of applied measurement. Pub Date: 2018-01-01. Vol. 19(2), pp. 192-200.
Joseph N Njiru, Joseph T Romanoski
Abstract: This article describes the development and calibration of items from the 1997 to 2006 Tertiary Entrance Exams (TEE) in Chemistry conducted by the Curriculum Council of Western Australia for the purposes of establishing a Chemistry item bank. Only items that met the strict Rasch measurement criterion of ordered thresholds were included. Item residuals and chi-square conformity of the items were likewise scrutinized. Further, specialist experts in chemistry were employed to ascertain the qualitative properties of the items, particularly the item wording, so as to provide accurate item descriptors. An item bank of 174 items was created. This item bank may now be accurately used by teachers in their classrooms for the purposes of developing class assessments in Chemistry and/or for classroom diagnostic purposes.
Citations: 0
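The inclusion rule described above, ordered category thresholds, is easy to automate once threshold estimates are available. A minimal sketch follows; the item names and threshold values are illustrative, not taken from the TEE calibration.

```python
import numpy as np

def has_ordered_thresholds(thresholds):
    """True if an item's estimated category thresholds are strictly
    increasing -- the inclusion criterion used for the item bank."""
    return bool(np.all(np.diff(np.asarray(thresholds, dtype=float)) > 0))

# Illustrative threshold estimates for three polytomous items.
bank_candidates = {
    "item_01": [-1.4, -0.2, 0.9],   # ordered: keep
    "item_02": [-0.5, 0.7, 0.3],    # disordered: exclude
    "item_03": [-2.0, -0.1, 1.6],   # ordered: keep
}
kept = [name for name, t in bank_candidates.items() if has_ordered_thresholds(t)]
print(kept)
```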
The Impact of Missing Values and Single Imputation upon Rasch Analysis Outcomes: A Simulation Study.
Journal of applied measurement. Pub Date: 2018-01-01. Vol. 19(1), pp. 1-25.
Carolina Saskia Fellinghauer, Birgit Prodinger, Alan Tennant
Abstract: Imputation has become common practice through the availability of easy-to-use algorithms and software. This study aims to determine whether different imputation strategies are robust to the extent and type of missingness, local item dependencies (LID), differential item functioning (DIF), and misfit when doing a Rasch analysis. Four samples were simulated, representing a sample with good metric properties, a sample with LID, a sample with DIF, and a sample with both LID and DIF. Missing values were generated in increasing proportions and were either missing at random or missing completely at random. Four imputation techniques were applied before Rasch analysis, and the deviation of the results and the quality of fit were compared. Imputation strategies showed good performance with less than 15% missingness, but the analysis with missing values performed best in recovering statistical estimates. The best strategy when doing a Rasch analysis is therefore the analysis with missing values; if for some reason imputation is necessary, we recommend using the expectation-maximization algorithm.
Citations: 0
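The simulation above crosses two missingness mechanisms with increasing proportions of missing values. Below is a minimal sketch of how such patterns can be injected into a complete response matrix; the MAR rule, which ties the chance of missingness to an observed covariate, is an illustrative assumption rather than the study's design.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_mcar(data, proportion):
    """Set entries to NaN completely at random with the given proportion."""
    out = data.astype(float).copy()
    out[rng.random(out.shape) < proportion] = np.nan
    return out

def make_mar(data, covariate, proportion):
    """Missing at random: rows with larger covariate values are more likely
    to have missing entries, with roughly the requested overall proportion."""
    out = data.astype(float).copy()
    span = covariate.max() - covariate.min() + 1e-12
    row_weight = (covariate - covariate.min()) / span
    prob = np.clip(proportion * row_weight[:, None] / row_weight.mean(), 0.0, 1.0)
    out[rng.random(out.shape) < prob] = np.nan
    return out

data = rng.integers(0, 2, size=(500, 20))   # illustrative dichotomous responses
covariate = rng.standard_normal(500)
for p in (0.05, 0.15, 0.30):                # increasing proportions of missingness
    print(p,
          np.isnan(make_mcar(data, p)).mean(),
          np.isnan(make_mar(data, covariate, p)).mean())
```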
The Impact of Levels of Discrimination on Vertical Equating in the Rasch Model.
Journal of applied measurement. Pub Date: 2018-01-01. Vol. 19(3), pp. 216-228.
Stephen N Humphrey
Abstract: Aligning scales in vertical equating carries a number of challenges for practitioners in contexts such as large-scale testing. This paper examines the impact of high and low discrimination on the results of vertical equating when the Rasch model is applied. A simulation study is used to show that different levels of discrimination introduce systematic error into estimates. A second simulation study shows that, for the purpose of vertical equating, items with high or low discrimination contribute information about translation constants that carries systematic error. The impact of differential item discrimination on vertical equating is examined and subsequently illustrated with a real data set from a large-scale testing program, with vertical links between grade 3 and grade 5 numeracy tests. Implications of the results for practitioners conducting vertical equating with the Rasch model are identified, including for monitoring progress over time. Implications for other item response models are also discussed.
Citations: 0
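The source of the systematic error studied above is visible in the item characteristic curves themselves: when items differ in discrimination, a common-slope Rasch curve cannot match all of them, so difficulty estimates, and with them the translation constants used for vertical equating, absorb bias. A minimal sketch contrasting 2PL curves of different slopes with a common-slope curve; the parameter values are illustrative.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Two-parameter logistic ICC with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
rasch = icc_2pl(theta, a=1.0, b=0.0)       # common-slope (Rasch-style) curve
high_disc = icc_2pl(theta, a=2.0, b=0.0)   # high-discrimination item
low_disc = icc_2pl(theta, a=0.5, b=0.0)    # low-discrimination item

# The gap between each 2PL curve and the common-slope curve changes sign
# across the ability range, so forcing a common slope shifts apparent
# difficulty differently for lower- and higher-grade groups -- one way
# systematic error can enter the vertical translation constant.
for t, r, h, l in zip(theta, rasch, high_disc, low_disc):
    print(f"theta={t:+.1f}  common={r:.3f}  high_a={h:.3f}  low_a={l:.3f}")
```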