Applied Psychological Measurement: Latest Articles

Multistage Testing in Heterogeneous Populations: Some Design and Implementation Considerations.
IF 1.2, Tier 4 (Psychology)
Applied Psychological Measurement, Pub Date: 2022-09-01, DOI: 10.1177/01466216221108123
Leslie Rutkowski, Yuan-Ling Liaw, Dubravka Svetina, David Rutkowski
Abstract: A central challenge in international large-scale assessments is adequately measuring dozens of highly heterogeneous populations, many of which are low performing. To that end, multistage adaptive testing offers one possibility for better assessment across the achievement continuum. This study examines how several multistage test design and implementation choices can affect measurement performance in this setting. To address gaps in the knowledge base, we extended previous research to include multiple linked panels, more appropriate estimates of achievement, and multiple populations of varied proficiency. Using achievement distributions from varied populations and associated item parameters, we designed and executed a simulation study that mimics an established international assessment, comparing several routing schemes and varied module lengths in terms of item and person parameter recovery. Our findings suggest that multistage testing offers precision advantages, particularly for low performing populations. Further, equal module lengths (desirable for controlling position effects) and classical routing methods (which lower the technological burden of implementing such a design) produce good results. Finally, probabilistic misrouting offers advantages over merit routing for controlling bias in item and person parameters. Overall, multistage testing shows promise for extending the scope of international assessments. We discuss the implications of our findings for operational work in the international assessment domain.
Applied Psychological Measurement, 46(6), 494-508. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9382094/pdf/10.1177_01466216221108123.pdf
Citations: 2

Characterizing Sampling Variability for Item Response Theory Scale Scores in a Fixed-Parameter Calibrated Projection Design.
IF 1.2, Tier 4 (Psychology)
Applied Psychological Measurement, Pub Date: 2022-09-01, DOI: 10.1177/01466216221108136
Shuangshuang Xu, Yang Liu
Abstract: A common practice in linking uses estimated item parameters to calculate projected scores. This procedure fails to account for carry-over sampling variability, which can lead to understated uncertainty in Item Response Theory (IRT) scale scores. To address the issue, we apply a Multiple Imputation (MI) approach to adjust the posterior standard deviations of IRT scale scores. The MI procedure draws multiple sets of plausible values from an approximate sampling distribution of the estimated item parameters. When the two scales to be linked were previously calibrated, item parameters can be fixed at their originally published scales, and the latent variable means and covariances of the two scales can then be estimated conditional on the fixed item parameters. This conditional estimation procedure is a special case of Restricted Recalibration (RR), in which the asymptotic sampling distribution of the estimated parameters follows from the general theory of pseudo Maximum Likelihood (ML) estimation. We evaluate the combination of RR and MI in a simulation study examining the impact of carry-over sampling variability under various conditions, and we illustrate how to apply the proposed method to real data by revisiting Thissen et al. (2015).
Applied Psychological Measurement, 46(6), 509-528. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9382091/pdf/10.1177_01466216221108136.pdf
Citations: 0

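The MI adjustment described in this abstract can be sketched in outline. The Python fragment below is an illustrative toy, not the authors' implementation: the item parameters and their standard errors are hypothetical, scoring is EAP under a 2PL model on a quadrature grid, and the M plausible parameter draws are combined with Rubin's rules so that between-draw variance inflates the reported posterior SD.

```python
import numpy as np

rng = np.random.default_rng(0)

def eap_theta(responses, a, b, grid=np.linspace(-4, 4, 81)):
    """EAP estimate and posterior SD of theta for one response pattern
    under a 2PL model with a standard-normal prior."""
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (grid[None, :] - b[:, None])))
    like = np.prod(np.where(responses[:, None] == 1, p, 1 - p), axis=0)
    post = like * np.exp(-grid ** 2 / 2)
    post /= post.sum()
    mean = np.sum(grid * post)
    sd = np.sqrt(np.sum((grid - mean) ** 2 * post))
    return mean, sd

# Hypothetical calibrated item parameters and their sampling SEs
a_hat = np.array([1.2, 0.8, 1.5, 1.0])
b_hat = np.array([-0.5, 0.0, 0.5, 1.0])
se_a, se_b = 0.08, 0.10
resp = np.array([1, 1, 0, 1])

# Multiple imputation: M plausible item-parameter draws -> M scale scores
M = 50
means, sds = [], []
for _ in range(M):
    a_m = a_hat + rng.normal(0, se_a, size=a_hat.shape)
    b_m = b_hat + rng.normal(0, se_b, size=b_hat.shape)
    m, s = eap_theta(resp, a_m, b_m)
    means.append(m)
    sds.append(s)

# Rubin's rules: total variance = within + (1 + 1/M) * between
within = np.mean(np.square(sds))
between = np.var(means, ddof=1)
total_sd = np.sqrt(within + (1 + 1 / M) * between)
print(round(float(np.mean(means)), 3), round(float(total_sd), 3))
```

The adjusted SD is never smaller than the naive within-draw SD, which is the point of the correction: ignoring item-parameter uncertainty understates score uncertainty.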
Application of Sampling Variance of Item Response Theory Parameter Estimates in Detecting Outliers in Common Item Equating.
IF 1.2, Tier 4 (Psychology)
Applied Psychological Measurement, Pub Date: 2022-09-01, DOI: 10.1177/01466216221108122
Chunyan Liu, Daniel Jurich
Abstract: In common item equating, the existence of item outliers may impact the accuracy of equating results and bring significant ramifications for the validity of test score interpretations. Common item equating should therefore involve a screening process that flags outlying items and excludes them from the common item set before equating is conducted. The current simulation study demonstrates that the sampling variance associated with item response theory (IRT) item parameter estimates can help detect outliers among the common items under the 2-PL and 3-PL IRT models. The results show that the proposed sampling variance statistic (SV) outperformed the traditional displacement method with cutoff values of 0.3 and 0.5 across a variety of evaluation criteria. Based on these favorable results, item outlier detection statistics based on estimated sampling variability warrant further consideration in both research and practice.
Applied Psychological Measurement, 46(6), 529-547. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9382092/pdf/10.1177_01466216221108122.pdf
Citations: 0

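The abstract does not spell out the SV statistic, so the sketch below illustrates only the general idea of variance-based outlier screening: scale each common item's difficulty displacement by its combined sampling standard error and compare against a critical value, next to the traditional fixed 0.3 cutoff. The z-type rule and all parameter values are assumptions for illustration, not the authors' statistic.

```python
import numpy as np

def flag_outliers(b_old, b_new, se_old, se_new, z_crit=2.58):
    """Flag common items whose difficulty displacement is large relative
    to its combined sampling SE (a z-type rule), alongside the
    traditional fixed-cutoff displacement check."""
    disp = np.asarray(b_new) - np.asarray(b_old)
    se = np.sqrt(np.asarray(se_old) ** 2 + np.asarray(se_new) ** 2)
    z = disp / se
    return {
        "displacement": disp,
        "z": z,
        "flag_sv": np.abs(z) > z_crit,   # sampling-variance-based rule
        "flag_03": np.abs(disp) > 0.3,   # traditional 0.3 cutoff
    }

# Hypothetical b-parameter estimates on old and new forms; item 3 drifted
b_old = [0.10, -0.40, 1.20, 0.55]
b_new = [0.12, -0.35, 1.90, 0.50]
se = [0.05, 0.06, 0.07, 0.05]
out = flag_outliers(b_old, b_new, se, se)
print(out["flag_sv"])  # only the drifted item is flagged
```

Scaling by the sampling SE lets the cutoff adapt to how precisely each item was estimated, which a fixed displacement threshold cannot do.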
Two New Models for Item Preknowledge.
IF 1.2, Tier 4 (Psychology)
Applied Psychological Measurement, Pub Date: 2022-09-01, DOI: 10.1177/01466216221108130
Kylie Gorney, James A Wollack
Abstract: To evaluate preknowledge detection methods, researchers often conduct simulation studies in which they use models to generate the data. In this article, we propose two new models to represent item preknowledge. Contrary to existing models, we allow the impact of preknowledge to vary across persons and items in order to better represent situations encountered in practice. We use three real data sets to evaluate the fit of the new models with respect to two types of preknowledge: items only, and items together with the correct answer key. Results show that the two new models provide the best fit compared to several existing preknowledge models. Furthermore, model parameter estimates varied substantially depending on the type of preknowledge being considered, indicating that answer key disclosure has a profound impact on testing behavior.
Applied Psychological Measurement, 46(6), 447-461. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9382093/pdf/10.1177_01466216221108130.pdf
Citations: 2

Item-Fit Statistic Based on Posterior Probabilities of Membership in Ability Groups.
IF 1.2, Tier 4 (Psychology)
Applied Psychological Measurement, Pub Date: 2022-09-01, DOI: 10.1177/01466216221108061
Bartosz Kondratek
Abstract: A novel approach to item-fit analysis based on an asymptotic test is proposed. The new test statistic, χ²_w, compares pseudo-observed and expected item mean scores over a set of ability bins. The item mean scores are computed as weighted means, with weights based on test-takers' a posteriori density of ability within the bin. This article explores the properties of χ²_w for dichotomously scored items under unidimensional IRT models. Monte Carlo experiments were conducted to analyze the performance of χ²_w. Its Type I error was acceptably close to the nominal level, and it had greater power than Orlando and Thissen's S-X². Under some conditions, the power of χ²_w also exceeded that reported for the computationally more demanding Stone's χ²*.
Applied Psychological Measurement, 46(6), 462-478. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9382089/pdf/10.1177_01466216221108061.pdf
Citations: 3

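As a rough illustration of the binned item-fit testing that χ²_w builds on, the sketch below compares observed and model-expected proportions correct across ability bins for one 2PL item. It is a plain Pearson-type version under assumed parameters: the posterior-density weighting that distinguishes χ²_w is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def binned_item_fit(theta_hat, y, a, b, n_bins=10):
    """Binned item-fit statistic for one dichotomous 2PL item: a
    Pearson-type sum over ability bins of squared differences between
    observed and model-expected proportions correct."""
    edges = np.quantile(theta_hat, np.linspace(0, 1, n_bins + 1))
    stat = 0.0
    for k in range(n_bins):
        lo, hi = edges[k], edges[k + 1]
        # last bin is closed on the right so the maximum is included
        in_bin = (theta_hat >= lo) & ((theta_hat < hi) | (k == n_bins - 1))
        n_k = in_bin.sum()
        if n_k == 0:
            continue
        p_exp = np.mean(1 / (1 + np.exp(-a * (theta_hat[in_bin] - b))))
        p_obs = y[in_bin].mean()
        stat += n_k * (p_obs - p_exp) ** 2 / (p_exp * (1 - p_exp))
    return stat

# Simulate a well-fitting item: responses generated from the same 2PL
theta = rng.normal(size=2000)
a_true, b_true = 1.3, 0.2
p = 1 / (1 + np.exp(-a_true * (theta - b_true)))
y = (rng.random(2000) < p).astype(int)

print(binned_item_fit(theta, y, a_true, b_true))
```

For a well-fitting item the statistic stays modest (on the order of the number of bins); gross misfit inflates it sharply.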
Item Response Theory True Score Equating for the Bifactor Model Under the Common-Item Nonequivalent Groups Design.
IF 1.0, Tier 4 (Psychology)
Applied Psychological Measurement, Pub Date: 2022-09-01, Epub Date: 2022-06-17, DOI: 10.1177/01466216221108995
Kyung Yong Kim
Abstract: Applying item response theory (IRT) true score equating to multidimensional IRT models is not straightforward due to the one-to-many relationship between a true score and the latent variables. Under the common-item nonequivalent groups design, the current study introduces two IRT true score equating procedures that adopt different dimension reduction strategies for the bifactor model. The first, referred to as the integration procedure, links the latent variable scales for the bifactor model and integrates the specific factors out of the item response function; IRT true score equating is then applied to the marginalized bifactor model. The second, referred to as the PIRT-based procedure, projects the specific dimensions onto the general dimension to obtain a locally dependent unidimensional IRT (UIRT) model and links the scales of that model, followed by the application of IRT true score equating to the locally dependent UIRT model. Equating results from the two procedures, along with those from the unidimensional three-parameter logistic (3PL) model, were compared using both simulated and real data. In general, the integration and PIRT-based procedures produced results that were not practically different. Furthermore, the results from the two bifactor-based procedures became more accurate than those from the 3PL model as tests became more multidimensional.
Applied Psychological Measurement, 46(6), 479-493. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9382090/pdf/10.1177_01466216221108995.pdf
Citations: 0

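For readers unfamiliar with the unidimensional baseline the article compares against, IRT true score equating can be sketched compactly: invert the (monotone) true score function of Form X by bisection, then evaluate Form Y's true score at the recovered θ. The 2PL item parameters below are hypothetical.

```python
import numpy as np

def p2pl(theta, a, b):
    """2PL item response probabilities at ability theta."""
    return 1 / (1 + np.exp(-a * (theta - b)))

def true_score(theta, a, b):
    """Test true score: sum of item probabilities at theta."""
    return p2pl(theta, a, b).sum()

def equate_true_score(t_x, ax, bx, ay, by, lo=-6.0, hi=6.0, tol=1e-8):
    """IRT true score equating: find theta with tau_X(theta) = t_x by
    bisection (tau is monotone in theta), then return the Form Y true
    score at that theta."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if true_score(mid, ax, bx) < t_x:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return true_score((lo + hi) / 2, ay, by)

# Hypothetical parameters for two three-item forms
ax, bx = np.array([1.0, 1.2, 0.8]), np.array([-0.5, 0.0, 0.6])
ay, by = np.array([0.9, 1.1, 1.0]), np.array([-0.3, 0.1, 0.4])
print(round(equate_true_score(1.5, ax, bx, ay, by), 3))
```

Bisection works because the true score function is strictly increasing in θ; the bifactor procedures in the article reduce the multidimensional model to exactly this kind of one-dimensional inversion.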
Factor Retention Using Machine Learning With Ordinal Data.
IF 1.2, Tier 4 (Psychology)
Applied Psychological Measurement, Pub Date: 2022-07-01, Epub Date: 2022-05-04, DOI: 10.1177/01466216221089345
David Goretzko, Markus Bühner
Abstract: Determining the number of factors in exploratory factor analysis is probably the most crucial decision when conducting the analysis, as it clearly influences the meaningfulness of the results (i.e., factorial validity). A recently developed method called the Factor Forest combines data simulation and machine learning. Trained on simulated data, it reached very high accuracy for multivariate normal data, but it had not yet been tested with ordinal data. Hence, in this simulation study, we evaluated the Factor Forest with ordinal data based on different numbers of categories (2-6 categories) and compared it to common factor retention criteria. It showed higher overall accuracy for all types of ordinal data than all common factor retention criteria used for comparison (Parallel Analysis, Comparison Data, the Empirical Kaiser Criterion, and the Kaiser-Guttman Rule). The results indicate that the Factor Forest is applicable to ordinal data with at least five categories (the typical scale in questionnaire research) in the majority of conditions, and to binary or ordinal data with fewer categories when the sample size is large.
Applied Psychological Measurement, pp. 406-421. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/ff/4b/10.1177_01466216221089345.PMC9265486.pdf
Citations: 6

glca: An R Package for Multiple-Group Latent Class Analysis.
IF 1.0, Tier 4 (Psychology)
Applied Psychological Measurement, Pub Date: 2022-07-01, Epub Date: 2022-05-11, DOI: 10.1177/01466216221084197
Youngsun Kim, Saebom Jeon, Chi Chang, Hwan Chung
Abstract: Group similarities and differences may manifest themselves in a variety of ways in multiple-group latent class analysis (LCA). Sometimes the measurement models are identical across groups; in other situations they differ, suggesting that the latent structure itself varies between groups. Tests of measurement invariance shed light on this distinction. We created an R package, glca, that implements procedures for exploring differences in latent class structure between populations while taking multilevel data structure into account. The package handles both fixed-effect LCA and nonparametric random-effect LCA: the former applies when populations are segmented by the observed group variable itself, whereas the latter applies when the group variable has too many levels for meaningful group comparisons and a group-level latent variable must be identified instead. The glca package provides functions for statistical testing procedures that explore group differences in various LCA models under a multilevel data structure.
Applied Psychological Measurement, 46(5), 439-441. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9265491/pdf/10.1177_01466216221084197.pdf
Citations: 0

Bridging Models of Biometric and Psychometric Assessment: A Three-Way Joint Modeling Approach of Item Responses, Response Times, and Gaze Fixation Counts.
IF 1.2, Tier 4 (Psychology)
Applied Psychological Measurement, Pub Date: 2022-07-01, DOI: 10.1177/01466216221089344
Kaiwen Man, Jeffrey R Harring, Peida Zhan
Abstract: Recently, joint models of item response data and response times have been proposed to better assess and understand test takers' learning processes. This article demonstrates how biometric information, such as gaze fixation counts obtained from an eye tracker, can be integrated into the measurement model. The proposed joint modeling framework accommodates the relations among a test taker's latent ability, working speed, and test engagement level via a person-side variance-covariance structure, while simultaneously permitting the modeling of item difficulty, time intensity, and engagement intensity through an item-side variance-covariance structure. A Bayesian estimation scheme is used to fit the proposed model to data. Posterior predictive model checking based on three discrepancy measures, corresponding to the various model components, is introduced to assess model-data fit. Findings from a Monte Carlo simulation and results from analyzing experimental data demonstrate the utility of the model.
Applied Psychological Measurement, 46(5), 361-381. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9265489/pdf/10.1177_01466216221089344.pdf
Citations: 3

Bayesian Item Response Theory Models With Flexible Generalized Logit Links.
IF 1.0, Tier 4 (Psychology)
Applied Psychological Measurement, Pub Date: 2022-07-01, Epub Date: 2022-05-20, DOI: 10.1177/01466216221089343
Jiwei Zhang, Ying-Ying Zhang, Jian Tao, Ming-Hui Chen
Abstract: In educational and psychological research, the logit and probit links are often used to fit binary item response data. The appropriateness and importance of the choice of link within the item response theory (IRT) framework has not yet been investigated. In this paper, we present a family of IRT models with generalized logit links, which includes the traditional logistic and normal ogive models as special cases. This family of models is flexible enough not only to adjust the item characteristic curve tail probability through two shape parameters, but also to allow fitting the same link or different links to different items within the IRT model framework. The proposed models are implemented in the Stan software to sample from the posterior distributions, and four Bayesian model selection criteria are computed from readily available Stan outputs to guide the choice of links. Extensive simulation studies examine the empirical performance of the proposed models and the model fittings in terms of "in-sample" and "out-of-sample" predictions based on the deviance. Finally, a detailed analysis of real reading assessment data illustrates the proposed methodology.
Applied Psychological Measurement, 46(5), 382-405. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9265488/pdf/10.1177_01466216221089343.pdf
Citations: 0

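The article's specific two-shape-parameter link family is not reproduced here. As a hedged illustration of how a shape parameter skews an item characteristic curve away from the symmetric logistic, the sketch below uses the simpler logistic-positive-exponent form (a logistic curve raised to a power ξ), a one-shape-parameter relative of such generalized links; all parameter values are made up.

```python
import numpy as np

def icc_lpe(theta, a, b, xi):
    """Logistic positive exponent ICC: a 2PL logistic curve raised to
    the power xi. xi = 1 recovers the ordinary 2PL; xi != 1 makes the
    curve asymmetric, changing the behavior of the tails."""
    return (1.0 / (1.0 + np.exp(-a * (theta - b)))) ** xi

theta = np.linspace(-3, 3, 7)
std = icc_lpe(theta, 1.0, 0.0, 1.0)   # symmetric logistic baseline
skew = icc_lpe(theta, 1.0, 0.0, 2.0)  # asymmetric variant, xi = 2
print(np.round(std, 3))
print(np.round(skew, 3))
```

At θ = b the symmetric curve passes through 0.5 while the ξ = 2 variant passes through 0.25, which is exactly the kind of tail and midpoint behavior that link-selection criteria can detect.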