Applied Psychological Measurement: Latest Articles

Impact of Sampling Variability When Estimating the Explained Common Variance
IF 1.2 · Psychology (CAS Tier 4)
Applied Psychological Measurement · Pub Date: 2022-04-15 · DOI: 10.1177/01466216221084215
Björn Andersson, Hao Luo
Abstract: Assessing the multidimensionality of a scale or test is a staple of educational and psychological measurement. One approach to evaluating approximate unidimensionality is to fit a bifactor model, in which the subfactors are determined by substantive theory, and to estimate the explained common variance (ECV) of the general factor. The ECV indicates the extent to which the explained variance is dominated by the general factor over the specific factors, and it has been used, together with other methods and statistics, to determine whether a single-factor model is sufficient for analyzing a scale or test (Rodriguez et al., 2016). In addition, the individual item ECV (I-ECV) has been used to assess the approximate unidimensionality of individual items (Carnovale et al., 2021; Stucky et al., 2013). However, the ECV and I-ECV are subject to random estimation error, which previous studies have not considered. Failing to account for estimation error can lead to inaccurate conclusions regarding the dimensionality of a scale or item, especially when an estimate of the ECV or I-ECV is compared to a pre-specified cut-off value to evaluate unidimensionality. The objective of the present study is to derive standard errors of the estimators of the ECV and I-ECV under linear confirmatory factor analysis (CFA) models, enabling the assessment of random estimation error and the computation of confidence intervals for these parameters. We use Monte Carlo simulation to assess the accuracy of the derived standard errors and to evaluate the impact of sampling variability on the estimation of the ECV and I-ECV. In a bifactor model for J items, denote the observed variables by X_j, j = 1, ..., J, and let G denote the general factor. We define the S subfactors F_s, s ∈ {1, ..., S}, and let J_s be the set of indicators for each subfactor. Each observed indicator X_j is then defined by the multiple-factor model (McDonald, 2013).
Vol. 46(1), pp. 338-341
Citations: 1
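The ECV and I-ECV discussed above have simple closed forms in terms of standardized bifactor loadings: the ECV is the general factor's share of the total common variance, and the I-ECV is the same ratio computed for a single item. A minimal sketch of the point estimates follows (the loadings are hypothetical, and the article's standard-error derivation is not reproduced here):

```python
import numpy as np

def ecv(general_loadings, specific_loadings):
    """Explained common variance of the general factor in a bifactor model:
    ECV = sum(lambda_G^2) / (sum(lambda_G^2) + sum(lambda_S^2))."""
    g2 = np.sum(np.square(general_loadings))
    s2 = np.sum(np.square(specific_loadings))
    return g2 / (g2 + s2)

def i_ecv(lambda_g, lambda_s):
    """Item-level ECV for a single item (Stucky et al., 2013)."""
    return lambda_g**2 / (lambda_g**2 + lambda_s**2)

# Hypothetical standardized loadings for a 6-item scale
g = np.array([0.7, 0.6, 0.65, 0.7, 0.55, 0.6])   # general-factor loadings
s = np.array([0.3, 0.4, 0.35, 0.2, 0.45, 0.4])   # specific-factor loadings
print(round(float(ecv(g, s)), 3))  # 0.758
```

The article's point is that these are only point estimates: they carry sampling error, so comparing them to a fixed cut-off without a standard error or confidence interval can mislead.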
Standard Errors of Kernel Equating: Accounting for Bandwidth Estimation
IF 1.2 · Psychology (CAS Tier 4)
Applied Psychological Measurement · Pub Date: 2022-03-07 · DOI: 10.1177/01466216211066601
Kseniia Marcq, Björn Andersson
Abstract: In standardized testing, equating is used to ensure the comparability of test scores across multiple test administrations. One equipercentile observed-score equating method is kernel equating, in which an essential step is to obtain continuous approximations to the discrete score distributions by applying a kernel with a smoothing bandwidth parameter. Estimating the bandwidth introduces additional variability that is currently not accounted for when calculating the standard errors of equating, which poses a threat to their accuracy. In this study, the asymptotic variance of the bandwidth parameter estimator is derived, and a modified method for calculating the standard error of equating that accounts for bandwidth estimation variability is introduced for the equivalent groups design. A simulation study is used to verify the derivations and to confirm the accuracy of the modified method across several sample sizes and test lengths, compared with the existing method and the Monte Carlo standard error of equating estimates. The results show that the modified standard errors of equating are accurate under the considered conditions. Furthermore, the modified and existing methods produce similar results, which suggests that the impact of bandwidth variability on the standard error of equating is minimal.
Vol. 46(1), pp. 200-218
Citations: 0
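The continuization step this abstract refers to can be illustrated with a Gaussian kernel. The sketch below follows the standard kernel-equating form, in which a shrinkage constant preserves the mean and variance of the discrete score distribution for any bandwidth; the score distribution and bandwidth are hypothetical, and the bandwidth is fixed rather than estimated, which is exactly the extra variability the article accounts for:

```python
import numpy as np
from math import erf, sqrt

def phi_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def kernel_cdf(x, scores, probs, h):
    """Gaussian-kernel continuization of a discrete score distribution.
    The constant a shrinks the kernel so that the continuous distribution
    keeps the mean and variance of the discrete one."""
    mu = float(np.dot(probs, scores))
    var = float(np.dot(probs, (scores - mu) ** 2))
    a = sqrt(var / (var + h ** 2))
    z = (x - a * scores - (1 - a) * mu) / (a * h)
    return float(np.dot(probs, [phi_cdf(v) for v in z]))

# Toy 5-point score distribution, symmetric about 2, and a fixed bandwidth
scores = np.arange(5.0)
probs = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
print(kernel_cdf(2.0, scores, probs, h=0.6))  # ≈ 0.5 by symmetry
```

Equipercentile equating then maps a score on one form to the score on the other form with the same continuized percentile rank.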
SEMsens: An R Package for Sensitivity Analysis of Structural Equation Models With the Ant Colony Optimization Algorithm
IF 1.0 · Psychology (CAS Tier 4)
Applied Psychological Measurement · Pub Date: 2022-03-01 · Epub Date: 2022-01-09 · DOI: 10.1177/01466216211063233
Zuchao Shen, Walter L Leite
Vol. 46(2), pp. 159-161
Citations: 0
Predictive Fit Metrics for Item Response Models
IF 1.0 · Psychology (CAS Tier 4)
Applied Psychological Measurement · Pub Date: 2022-03-01 · Epub Date: 2022-02-13 · DOI: 10.1177/01466216211066603
Benjamin A Stenhaug, Benjamin W Domingue
Abstract: The fit of an item response model is typically conceptualized as whether a given model could have generated the data. This study advocates an alternative view, "predictive fit," based on the model's ability to predict new data. The authors define two prediction tasks: "missing responses prediction," where the goal is to predict an in-sample person's response to an in-sample item, and "missing persons prediction," where the goal is to predict an out-of-sample person's string of responses. Based on these tasks, two predictive fit metrics are derived that assess how well an estimated item response model fits the data-generating model. These metrics are based on long-run out-of-sample predictive performance (i.e., if the data-generating model produced infinite amounts of data, what is the quality of the model's predictions on average?). Simulation studies are conducted to identify the prediction-maximizing model across a variety of conditions. For example, when prediction is defined in terms of missing responses, greater average person ability and greater item discrimination are both associated with the 3PL model producing relatively worse predictions, and thus lead to greater minimum sample sizes for the 3PL model. In each simulation, the prediction-maximizing model is compared to the models selected by Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and likelihood ratio tests. The performance of these methods depends on the prediction task of interest; in general, likelihood ratio tests often select overly flexible models, while BIC selects overly parsimonious models. The authors use Programme for International Student Assessment data to demonstrate how cross-validation can be used to estimate the predictive fit metrics directly in practice. Implications for item response model selection in operational settings are discussed.
Vol. 46(2), pp. 136-155
Citations: 0
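The "missing responses" prediction task can be sketched directly: hold out a random subset of response cells, then score candidate models by their mean log-likelihood on the held-out cells. The 2PL parameterization and all values below are hypothetical, and the paper's metrics are long-run averages over the data-generating model rather than a single sample:

```python
import numpy as np

rng = np.random.default_rng(7)

def p_2pl(theta, a, b):
    """2PL response probability: P(X=1 | theta) = 1 / (1 + exp(-a(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))

def holdout_loglik(resp, mask, theta, a, b):
    """Mean log-likelihood over held-out cells only (the "missing responses"
    task); higher values indicate better predictive fit."""
    p = p_2pl(theta, a, b)
    ll = resp * np.log(p) + (1 - resp) * np.log(1 - p)
    return float(ll[mask].mean())

# Simulate responses from a 2PL model (hypothetical parameters)
n, j = 500, 20
theta = rng.normal(size=n)
a_true = rng.uniform(0.8, 2.0, size=j)
b_true = rng.normal(size=j)
resp = (rng.random((n, j)) < p_2pl(theta, a_true, b_true)).astype(float)

# Hold out 20% of cells at random and score two candidate models on them
mask = rng.random((n, j)) < 0.2
fit_true = holdout_loglik(resp, mask, theta, a_true, b_true)     # generating model
fit_1pl = holdout_loglik(resp, mask, theta, np.ones(j), b_true)  # constrained discriminations
print(round(fit_true, 3), round(fit_1pl, 3))  # generating model typically scores higher
```

In practice neither the abilities nor the item parameters are known, so the paper estimates these quantities via cross-validation over refitted models.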
Considerations for Fitting Dynamic Bayesian Networks With Latent Variables: A Monte Carlo Study
IF 1.2 · Psychology (CAS Tier 4)
Applied Psychological Measurement · Pub Date: 2022-03-01 · DOI: 10.1177/01466216211066609
Ray E Reichenberg, Roy Levy, Adam Clark
Abstract: Dynamic Bayesian networks (DBNs; Reye, 2004) are a promising tool for modeling student proficiency under rich measurement scenarios (Reichenberg, 2018). These scenarios often present assessment conditions far more complex than those seen with more traditional assessments and require assessment arguments and psychometric models capable of integrating those complexities. Unfortunately, DBNs remain understudied and their psychometric properties relatively unknown. The current work explored the properties of DBNs under a variety of realistic psychometric conditions. A Monte Carlo simulation study was conducted to evaluate parameter recovery for DBNs using maximum likelihood estimation. Manipulated factors included sample size, measurement quality, test length, and the number of measurement occasions. Results suggested that measurement quality has the most prominent impact on estimation quality, with more distinct performance categories yielding better estimation. From a practical perspective, parameter recovery appeared to be sufficient with samples as low as N = 400 as long as measurement quality was not poor and at least three items were present at each measurement occasion. Tests consisting of only a single item required exceptional measurement quality to adequately recover model parameters.
Vol. 46(2), pp. 116-135
Citations: 1
Bayesian Approaches for Detecting Differential Item Functioning Using the Generalized Graded Unfolding Model
IF 1.2 · Psychology (CAS Tier 4)
Applied Psychological Measurement · Pub Date: 2022-03-01 · DOI: 10.1177/01466216211066606
Seang-Hwane Joo, Philseok Lee, Stephen Stark
Abstract: Differential item functioning (DIF) analysis is one of the most important applications of item response theory (IRT) in psychological assessment. This study examined the performance of two Bayesian DIF methods, the Bayes factor (BF) and the deviance information criterion (DIC), with the generalized graded unfolding model (GGUM). Type I error and power were investigated in a Monte Carlo simulation that manipulated sample size, DIF source, DIF size, DIF location, subpopulation trait distribution, and type of baseline model. We also examined the performance of two likelihood-based methods, the likelihood ratio (LR) test and the Akaike information criterion (AIC), using marginal maximum likelihood (MML) estimation, for comparison with past DIF research. The results indicated that the proposed BF and DIC methods provided well-controlled Type I error and high power using a free-baseline model implementation, and their performance was superior to LR and AIC in terms of Type I error rates when the reference and focal group trait distributions differed. Implications and recommendations for applied research are discussed.
Vol. 46(2), pp. 98-115
Citations: 2
Scale Linking for the Testlet Item Response Theory Model
IF 1.2 · Psychology (CAS Tier 4)
Applied Psychological Measurement · Pub Date: 2022-03-01 · DOI: 10.1177/01466216211063234
Seonghoon Kim, Michael J Kolen
Abstract: In their 2005 paper, Li and her colleagues proposed a test response function (TRF) linking method for a two-parameter testlet model and used a genetic algorithm to find minimization solutions for the linking coefficients. In the present paper, the linking task for a three-parameter testlet model is formulated from the perspective of bi-factor modeling, and three linking methods for the model are presented: the TRF, mean/least squares (MLS), and item response function (IRF) methods. Simulations are conducted to compare the TRF method using a genetic algorithm with the TRF and IRF methods using a quasi-Newton algorithm and with the MLS method. The results indicate that the IRF, MLS, and TRF methods perform very well, well, and poorly, respectively, in estimating the linking coefficients associated with testlet effects; that the use of genetic algorithms offers little improvement to the TRF method; and that the minimization function for the TRF method is not as well-structured as that for the IRF method.
Vol. 46(2), pp. 79-97
Citations: 0
Multi-Battery Factor Analysis in R
IF 1.2 · Psychology (CAS Tier 4)
Applied Psychological Measurement · Pub Date: 2022-03-01 · DOI: 10.1177/01466216211066604
Niels G Waller, Casey Giordano
Abstract: Inter-battery factor analysis (IBFA) is a multivariate technique for evaluating the stability of common factors across two test batteries administered to the same individuals. Tucker (1958) introduced the model in the late 1950s and derived the least squares solution for estimating model parameters. Two decades later, Browne (1979) extended Tucker's work by (a) deriving the maximum-likelihood (ML) model estimates and (b) enabling the model to accommodate two or more test batteries (Browne, 1980). Browne's extended model is called multiple-battery factor analysis (MBFA). Influenced by Browne's ideas, Cudeck (1980) produced a FORTRAN program for MBFA (Cudeck, 1982) and a readable account of the method's underlying logic. For many years, this program was the primary vehicle for conducting MBFA in a Windows environment (Brown, 2007; Finch & West, 1997; Finch et al., 1999; Waller et al., 1991). Unfortunately, until now, open-source software for conducting IBFA and MBFA on Windows, Mac OS, Linux, and Unix operating systems was not available. To introduce the ideas of Tucker (1958) and Browne (1979, 1980) to the broader research community, two open-source programs were developed in R (R Core Team, 2021) for obtaining ML estimates for the inter-battery and MBFA models. The programs are called faIB and faMB. Both programs are included in the R fungible (Waller, 2021) library and can be freely downloaded from the Comprehensive R Archive Network (CRAN; https://cran.r-project.org/package=fungible). faIB and faMB include a number of features that make them attractive choices for extracting common factors from two or more batteries. For instance, both programs include a wide range of rotation options by building upon functionality from the GPArotation package (Bernaards & Jennrich, 2005). This package provides routines for rotating factors by oblimin, geomin (orthogonal and oblique), infomax, simplimax, varimax, promax, and many other rotation algorithms. Both programs also allow users to initiate factor rotations from random starting configurations to facilitate the location of global and local solutions (for a discussion of why this feature is important, see Rozeboom, 1992). Prior to rotation, factors can be preconditioned (i.e., row standardized) by methods described by Kaiser (1958) or Cureton and Mulaik (1975). After rotation, factor loadings can be sorted within batteries to elucidate the structure of the …
Vol. 46(2), pp. 156-158
Citations: 0
Examining the Performance of the Trifactor Model for Multiple Raters
IF 1.2 · Psychology (CAS Tier 4)
Applied Psychological Measurement · Pub Date: 2022-01-01 · DOI: 10.1177/01466216211051728
James Soland, Megan Kuhfeld
Abstract: Researchers in the social sciences often obtain ratings of a construct of interest from multiple raters. While using multiple raters helps avoid the subjectivity of any one person's responses, rater disagreement can be a problem. A variety of models exist to address rater disagreement in both structural equation modeling and item response theory frameworks. Recently, Bauer et al. (2013) developed the "trifactor model" to provide applied researchers with a straightforward way of estimating scores that are purged of variance that is idiosyncratic by rater. Although the model is intended to be usable and interpretable, little is known about the circumstances under which it performs well and those under which it does not. We conduct simulation studies to examine the performance of the trifactor model under a range of sample sizes and model specifications and then compare model fit, bias, and convergence rates.
Vol. 46(1), pp. 53-67
Citations: 1
Quantifying the Distorting Effect of Rapid Guessing on Estimates of Coefficient Alpha
IF 1.2 · Psychology (CAS Tier 4)
Applied Psychological Measurement · Pub Date: 2022-01-01 · Epub Date: 2021-10-11 · DOI: 10.1177/01466216211051719
Joseph A Rios, Jiayi Deng
Abstract: An underlying threat to the validity of reliability measures is the introduction of systematic variance into examinee scores from unintended constructs that differ from those assessed. One construct-irrelevant behavior that has gained increased attention in the literature is rapid guessing (RG), which occurs when examinees answer quickly with intentional disregard for item content. To examine the degree of distortion in coefficient alpha due to RG, this study compared alpha estimates between conditions in which simulees engaged in full solution behavior (i.e., no RG) versus partial RG behavior. A simulation study was conducted in which the percentage and ability characteristics of rapid responders, as well as the percentage and pattern of RG, were manipulated. After controlling for test length and difficulty, the average degree of distortion in estimates of coefficient alpha due to RG ranged from -.04 to .02 across 144 conditions. Although slight differences were noted between conditions differing in RG pattern and RG responder ability, the findings suggest that estimates of coefficient alpha are largely robust to the presence of RG arising from cognitive fatigue and a low perceived probability of success.
Vol. 46(1), pp. 40-52
Citations: 0
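The simulation design described here can be miniaturized: generate solution-behavior responses from an IRT model, overwrite a subset of cells with chance-level responses for a fraction of simulees, and compare coefficient alpha across the two datasets. All parameters below are hypothetical and only echo the flavor of the manipulation, not the 144-condition design:

```python
import numpy as np

rng = np.random.default_rng(42)

def cronbach_alpha(x):
    """Coefficient alpha: (k/(k-1)) * (1 - sum of item variances / total-score variance)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Simulate dichotomous solution-behavior responses from a Rasch-type model
n, k = 2000, 30
theta = rng.normal(size=n)
b = rng.normal(size=k)
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
solution = (rng.random((n, k)) < p).astype(float)

# Inject rapid guessing: 10% of simulees answer 30% of items at chance (p = .25)
rg_rows = rng.random(n) < 0.10
rg_cells = rg_rows[:, None] & (rng.random((n, k)) < 0.30)
guesses = (rng.random((n, k)) < 0.25).astype(float)
rapid = np.where(rg_cells, guesses, solution)

print(round(float(cronbach_alpha(solution)), 3),
      round(float(cronbach_alpha(rapid)), 3))
```

Comparing the two printed values for many replications and conditions is, in miniature, how the degree of distortion reported in the abstract is quantified.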