Methodology: European Journal of Research Methods for The Behavioral and Social Sciences — Latest Articles

The Cognitive Interviewing Reporting Framework (CIRF): towards the harmonization of cognitive testing reports.
IF 3.1 | CAS Zone 3 | Psychology
H. Boeije, Gordon B. Willis
Methodology, 2013, 9(1), 87-95. DOI: 10.1027/1614-2241/A000075
Abstract: Cognitive interviewing is an important qualitative tool for the testing, development, and evaluation of survey questionnaires. Despite the widespread adoption of cognitive testing, there remain large variations in the manner in which specific procedures are implemented, and it is not clear from reports and publications that have utilized cognitive interviewing exactly what procedures have been used, as critical details are often missing. Especially for establishing the effectiveness of procedural variants, it is essential that cognitive interviewing reports contain a comprehensive description of the methods used. One approach to working toward more complete reporting would be to develop and adhere to a common framework for reporting these results. In this article we introduce the Cognitive Interviewing Reporting Framework (CIRF), which applies a checklist approach, and which is based on several existing checklists for reviewing and reporting qualitative research. We propose that researchers apply the CIRF in order to test its usability and to suggest potential adjustments. Over the longer term, the CIRF can be evaluated with respect to its utility in improving the quality of cognitive interviewing reports.
Citations: 82
Reflections on the Cognitive Interviewing Reporting Framework: Efficacy, expectations, and promise for the future.
IF 3.1 | CAS Zone 3 | Psychology
Gordon B. Willis, H. Boeije
Methodology, 2013, 9(1), 123-128. DOI: 10.1027/1614-2241/A000074
Abstract: Based on the experiences of three research groups using and evaluating the Cognitive Interviewing Reporting Framework (CIRF), we draw conclusions about the utility of the CIRF as a guide to creating cognitive testing reports. Authors generally found the CIRF checklist to be usable, and that it led to a more complete description of key steps involved. However, despite the explicit direction by the CIRF to include a full explanation of major steps and features (e.g., research objectives and research design), the three cognitive testing reports tended to simply state what was done, without further justification. Authors varied in their judgments concerning whether the CIRF requires the appropriate level of detail. Overall, we believe that current cognitive interviewing practice will benefit from including, within cognitive testing reports, the 10 categories of information specified by the CIRF. Future use of the CIRF may serve to direct the overall research project from the start, and to further the goal of ...
Citations: 11
Analyzing observed composite differences across groups: Is partial measurement invariance enough?
IF 3.1 | CAS Zone 3 | Psychology
Holger Steinmetz
Methodology, 2013, 9(1), 1-12. DOI: 10.1027/1614-2241/A000049
Abstract: Although the use of structural equation modeling has increased during the last decades, the typical procedure to investigate mean differences across groups is still to create an observed composite score from several indicators and to compare the composite's mean across the groups. Whereas the structural equation modeling literature has emphasized that a comparison of latent means presupposes equal factor loadings and indicator intercepts for most of the indicators (i.e., partial invariance), it is still unknown if partial invariance is sufficient when relying on observed composites. This Monte Carlo study investigated whether one or two unequal factor loadings and indicator intercepts in a composite can lead to wrong conclusions regarding latent mean differences. Results show that unequal indicator intercepts substantially affect the composite mean difference and the probability of a significant composite difference. In contrast, unequal factor loadings demonstrate only small effects. It is concluded that...
Citations: 228
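The mechanism behind Steinmetz's main result can be illustrated in a few lines of simulation. This is a minimal sketch under assumed parameter values (loadings, error variances, and the size of the intercept difference are all illustrative), not the study's original design: when one indicator's intercept differs between groups, a unit-weighted composite shows a mean difference even though the latent means are identical.

```python
# Minimal Monte Carlo sketch: two groups share the same latent mean, but one
# of four indicators has a non-invariant intercept (+0.5) in group 2. The
# observed composite then differs across groups purely due to the intercept.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
loadings = np.array([0.7, 0.7, 0.7, 0.7])          # illustrative, equal loadings
intercepts_g1 = np.array([0.0, 0.0, 0.0, 0.0])
intercepts_g2 = np.array([0.0, 0.0, 0.0, 0.5])     # one non-invariant intercept

def composite_mean(intercepts):
    eta = rng.normal(0.0, 1.0, n)                  # same latent distribution in both groups
    eps = rng.normal(0.0, 0.5, (n, 4))             # indicator-specific error
    items = intercepts + np.outer(eta, loadings) + eps
    return items.mean(axis=1).mean()               # unit-weighted composite mean

diff = composite_mean(intercepts_g2) - composite_mean(intercepts_g1)
print(f"spurious composite mean difference ≈ {diff:.3f}")  # expected ≈ 0.5/4 = 0.125
```

With a unit-weighted composite of four items, the single +0.5 intercept shift propagates as roughly 0.5/4 = 0.125 into the composite mean, even though the latent means are equal in both groups.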
Non-Graphical Solutions for Cattell's Scree Test
IF 3.1 | CAS Zone 3 | Psychology
Gilles Raîche, Theodore A. Walls, D. Magis, Martin Riopel, J. Blais
Methodology, 2013, 9(1), 23-29. DOI: 10.1027/1614-2241/A000051
Abstract: Most of the strategies that have been proposed to determine the number of components that account for the most variation in a principal components analysis of a correlation matrix rely on the analysis of the eigenvalues and on numerical solutions. Cattell's scree test is a graphical strategy with a nonnumerical solution for determining the number of components to retain. Like Kaiser's rule, this test is one of the most frequently used strategies for determining the number of components to retain. However, the graphical nature of the scree test does not definitively establish the number of components to retain. To circumvent this issue, some numerical solutions are proposed, one in the spirit of Cattell's work and dealing with the scree part of the eigenvalues plot, and one focusing on the elbow part of this plot. A simulation study compares the efficiency of these solutions to those of other previously proposed methods. Extensions to factor analysis are possible and may be particularly useful with many low-dimensional components.
Citations: 280
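The elbow-locating idea the abstract describes can be sketched numerically. The following is an illustrative approximation in the spirit of the paper's acceleration-factor solution (locating the elbow at the maximum second difference of the eigenvalue plot), not the authors' exact estimator; the simulated three-factor data and all parameter values are assumptions for the demo.

```python
# Two non-graphical retention rules on simulated 3-factor data:
# Kaiser's eigenvalue > 1 rule, and an elbow locator that takes the maximum
# second difference ("acceleration") of the descending eigenvalues.
import numpy as np

rng = np.random.default_rng(1)
n, p = 1000, 12
factors = rng.normal(size=(n, 3))
loadmat = np.zeros((3, p))
for f in range(3):
    loadmat[f, f * 4:(f + 1) * 4] = 0.8            # 4 indicators per factor
data = factors @ loadmat + rng.normal(scale=0.6, size=(n, p))

# Descending eigenvalues of the correlation matrix
eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

kaiser = int(np.sum(eigvals > 1.0))                # Kaiser's rule
accel = np.diff(eigvals, 2)                        # second differences
elbow = int(np.argmax(accel)) + 1                  # components before the elbow
print("Kaiser rule:", kaiser, "| elbow (max acceleration):", elbow)
```

Both rules recover the three simulated components here; the paper's point is that such numerical criteria make the scree decision reproducible, where visual inspection of the plot is not.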
The Survey Field Needs a Framework for the Systematic Reporting of Questionnaire Development and Pretesting
IF 3.1 | CAS Zone 3 | Psychology
Gordon B. Willis, H. Boeije
Methodology, 2013, 9(1), 85-86. DOI: 10.1027/1614-2241/A000070
Citations: 4
An Improved Model for Evaluating Change in Randomized Pretest, Posttest, Follow-Up Designs
IF 3.1 | CAS Zone 3 | Psychology
C. Mara, R. Cribbie, D. Flora, Cathy Labrish, Laura Mills, L. Fiksenbaum
Methodology, 2012, 8(1), 97-103. DOI: 10.1027/1614-2241/A000041
Abstract: Randomized pretest, posttest, follow-up (RPPF) designs are often used for evaluating the effectiveness of an intervention. These designs typically address two primary research questions: (1) Do the treatment and control groups differ in the amount of change from pretest to posttest? and (2) Do the treatment and control groups differ in the amount of change from posttest to follow-up? This study presents a model for answering these questions and compares it, using Monte Carlo simulation, to models for analyzing RPPF designs recently proposed by Mun, von Eye, and White (2009). The proposed model provides increased power over previous models for evaluating group differences in RPPF designs.
Citations: 11
Estimation of and Confidence Interval Formation for Reliability Coefficients of Homogeneous Measurement Instruments
IF 3.1 | CAS Zone 3 | Psychology
Ken Kelley, Ying Cheng
Methodology, 2012, 8(1), 39-50. DOI: 10.1027/1614-2241/A000036
Abstract: The reliability of a composite score is a fundamental and important topic in the social and behavioral sciences. The most commonly used reliability estimate of a composite score is coefficient α. However, under regularity conditions, the population value of coefficient α is only a lower bound on the population reliability, unless the items are essentially τ-equivalent, an assumption that is likely violated in most applications. A generalization of coefficient α, termed ω, is discussed and generally recommended. Furthermore, a point estimate itself almost certainly differs from the population value. Therefore, it is important to provide confidence interval limits so as not to overinterpret the point estimate. Analytic and bootstrap methods are described in detail for confidence interval construction for ω. We go on to recommend the bias-corrected bootstrap approach for ω and provide open-source and freely available R functions via the MBESS package to implement the methods discussed.
Citations: 35
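The paper's recommended methods are implemented in the R package MBESS. As a language-neutral illustration of the bootstrap idea only, here is a Python sketch that computes coefficient α with a simple percentile bootstrap interval; note the paper itself recommends ω (which requires fitting a factor model) and the bias-corrected bootstrap, so this is a simplified stand-in, and the simulated parallel-items data are an assumption for the demo.

```python
# Coefficient alpha with a percentile bootstrap CI (illustrative sketch only;
# the paper recommends omega and the bias-corrected bootstrap via R's MBESS).
import numpy as np

def cronbach_alpha(items):
    """items: (n_persons, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
n, k = 300, 6
true_score = rng.normal(size=(n, 1))
items = true_score + rng.normal(scale=1.0, size=(n, k))  # parallel items

alpha = cronbach_alpha(items)
# Resample persons (rows) with replacement and re-estimate alpha each time
boot = [cronbach_alpha(items[rng.integers(0, n, n)]) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"alpha = {alpha:.3f}, 95% percentile CI = [{lo:.3f}, {hi:.3f}]")
```

Reporting the interval rather than the point estimate alone is precisely the practice the authors argue for: the point estimate almost certainly differs from the population value.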
Assessing Content Validity Through Correlation and Relevance Tools: A Bayesian Randomized Equivalence Experiment
IF 3.1 | CAS Zone 3 | Psychology
B. Gajewski, Valorie Coffland, D. Boyle, M. Bott, L. Price, Jamie Leopold, N. Dunton
Methodology, 2012, 8(1), 81-96. DOI: 10.1027/1614-2241/A000040
Abstract: Content validity elicits expert opinion regarding items of a psychometric instrument. Expert opinion can be elicited in many forms: for example, how essential an item is, or its relevancy to a domain. This study developed an alternative tool that elicits expert opinion regarding correlations between each item and its respective domain. With 109 Registered Nurse (RN) site coordinators from the National Database of Nursing Quality Indicators, we implemented a randomized Bayesian equivalence trial with coordinators completing "relevance" or "correlation" content tools regarding the RN Job Enjoyment Scale. We confirmed our hypothesis that the two tools would result in equivalent content information. A Bayesian ordered analysis model supported the results, suggesting that evidence for traditional content validity indices can be justified using correlation arguments.
Citations: 18
Exploiting Prior Information in Stochastic Knowledge Assessment
IF 3.1 | CAS Zone 3 | Psychology
J. Heller, Claudia Repitsch
Methodology, 2012, 8(1), 12-22. DOI: 10.1027/1614-2241/A000035
Abstract: Various adaptive procedures for efficiently assessing the knowledge state of an individual have been developed within the theory of knowledge structures. These procedures set out to draw a detailed picture of an individual's knowledge in a certain field by posing a minimal number of questions. While research so far has mostly emphasized theoretical issues, the present paper focuses on an empirical evaluation of probabilistic assessment. It reports on simulation data showing that both efficiency and accuracy of the assessment exhibit considerable sensitivity to the choice of parameters and prior information as captured by the initial likelihood of the knowledge states. In order to deal with problems that arise from incorrect prior information, an extension of the probabilistic assessment is proposed. Systematic simulations provide evidence for the efficiency and robustness of the proposed extension, as well as its feasibility in terms of computational costs.
Citations: 12
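The role played by the "initial likelihood of the knowledge states" can be illustrated with a toy Bayesian update. This is a hypothetical sketch of the general idea in knowledge space theory (a prior over states, updated by each observed response), not the authors' procedure; the state space, item labels, and error rates are all invented for the demo.

```python
# Toy probabilistic knowledge assessment: a prior over knowledge states is
# updated after each answer, using a careless-error rate (beta) and a
# lucky-guess rate (eta). Hypothetical sketch, not the authors' procedure.
states = [frozenset(), frozenset("a"), frozenset("ab"), frozenset("abc")]
likelihood = {s: 1 / len(states) for s in states}   # uniform prior over states
beta, eta = 0.1, 0.2                                # careless error, lucky guess

def update(likelihood, item, correct):
    post = {}
    for state, l in likelihood.items():
        # P(correct answer | state): mastered items fail only by carelessness,
        # unmastered items succeed only by guessing.
        p_correct = (1 - beta) if item in state else eta
        post[state] = l * (p_correct if correct else 1 - p_correct)
    z = sum(post.values())                          # renormalize to a distribution
    return {state: v / z for state, v in post.items()}

likelihood = update(likelihood, "b", True)          # "b" answered correctly
likelihood = update(likelihood, "c", False)         # "c" answered incorrectly
best = max(likelihood, key=likelihood.get)
print(sorted(best), round(likelihood[best], 3))
```

A misspecified prior enters exactly here: if the initial `likelihood` puts little mass on the true state, more questions are needed before the posterior concentrates on it, which is the sensitivity the paper quantifies.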
The Impact of Controlling for Extreme Responding on Measurement Equivalence in Cross-Cultural Research
IF 3.1 | CAS Zone 3 | Psychology
M. Morren, J. Gelissen, J. Vermunt
Methodology, 2012, 8(1), 159-170. DOI: 10.1027/1614-2241/A000048
Abstract: Prior research has shown that extreme response style can seriously bias responses to survey questions, and that this response style may differ across culturally diverse groups. Consequently, cross-cultural differences in extreme responding may yield incomparable responses when not controlled for. To examine how extreme responding affects the cross-cultural comparability of survey responses, we propose and apply a multiple-group latent class approach in which groups are compared on the basis of the factor loadings, intercepts, and factor means in a Latent Class Factor Model. In this approach, a latent factor measuring the response style is explicitly included as an explanation for group differences found in the data. Findings from two empirical applications examining the cross-cultural comparability of measurements show that group differences in responding introduce inequivalence in measurements among groups. Controlling for the response style yields more equivalent measurements. This finding emphasizes the importance ...
Citations: 34