Educational and Psychological Measurement: Latest Articles

Linear and Nonlinear Indices of Score Accuracy and Item Effectiveness for Measures That Contain Locally Dependent Items
IF 2.7 | CAS Zone 3 | Psychology
Educational and Psychological Measurement | Pub Date: 2024-06-13 | DOI: 10.1177/00131644241257602
P. J. Ferrando, D. Navarro-González, F. Morales-Vives

The problem of local item dependencies (LIDs) is very common in personality and attitude measures, particularly in those that measure narrow-bandwidth dimensions. At the structural level, these dependencies can be modeled by using extended factor-analytic (FA) solutions that include correlated residuals. However, the effects that LIDs have on the scores based on these extended solutions have received little attention so far. Here, we propose an approach to simple sum scores, designed to assess the impact of LIDs on the accuracy and effectiveness of the scores derived from extended FA solutions with correlated residuals. The proposal is structured at three levels (total score, bivariate doublet, and item-by-item deletion) and considers two types of FA models: the standard linear model and the nonlinear model for ordered-categorical item responses. The proposal is implemented in SINRELEF.LD, an R package available through CRAN. Its usefulness for item analysis is illustrated with data from 928 participants who completed the Family Involvement Questionnaire-High School Version (FIQ-HS). The results show not only the distortion that the doublets cause in the omega reliability estimate when local independence is assumed, but also the loss of information/efficiency due to the local dependencies.

Citations: 0
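The omega distortion described in this abstract follows from the sum-score variance decomposition: under a one-factor model, omega = (Σλ)² / [(Σλ)² + Σθ + 2Σσ], where σ are the residual covariances. A minimal numpy sketch with made-up loadings and a single doublet (not the SINRELEF.LD implementation) shows how assuming local independence inflates the estimate:

```python
import numpy as np

# Hypothetical one-factor model with one residual doublet (items 0 and 1).
loadings = np.array([0.7, 0.7, 0.6, 0.5, 0.6])
resid_var = 1 - loadings**2          # residual variances of standardized items
doublet_cov = 0.20                   # residual covariance between items 0 and 1

common = loadings.sum() ** 2
# Omega computed as if items were locally independent (covariance ignored).
omega_li = common / (common + resid_var.sum())
# Omega acknowledging the doublet: sum-score variance gains 2 * covariance.
omega_ld = common / (common + resid_var.sum() + 2 * doublet_cov)

print(round(omega_li, 3), round(omega_ld, 3))
```

With a positive residual covariance the independence-based estimate is always the larger of the two, which is the direction of distortion the abstract refers to.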
Why Forced-Choice and Likert Items Provide the Same Information on Personality, Including Social Desirability
IF 2.7 | CAS Zone 3 | Psychology
Educational and Psychological Measurement | Pub Date: 2024-06-01 | Epub: 2023-06-12 | DOI: 10.1177/00131644231178721
Martin Bäckström, Fredrik Björklund

The forced-choice response format is often considered superior to the standard Likert-type format for controlling social desirability in personality inventories. We performed simulations and found that the trait information based on the two formats converges when the number of items is high and forced-choice items are mixed with regard to positively and negatively keyed items. Given that forced-choice items extract the same personality information as Likert-type items do, including socially desirable responding, other means are needed to counteract social desirability. We propose using evaluatively neutralized items in personality measurement, as they can counteract social desirability regardless of response format.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095325/pdf/
Citations: 0
Using Multiple Imputation to Account for the Uncertainty Due to Missing Data in the Context of Factor Retention
IF 2.7 | CAS Zone 3 | Psychology
Educational and Psychological Measurement | Pub Date: 2024-06-01 | Epub: 2023-06-12 | DOI: 10.1177/00131644231178800
Yan Xia, Selim Havan

Although parallel analysis has been found to be an accurate method for determining the number of factors in many conditions with complete data, its application under missing data is limited. The existing literature recommends that, after using an appropriate multiple imputation method, researchers either apply parallel analysis to every imputed data set and use the number of factors suggested by most of the data copies, or average the correlation matrices across all data copies and then apply parallel analysis to the average correlation matrix. Both approaches for pooling the results provide a single suggested number without reflecting the uncertainty introduced by the missing values. The present study proposes an alternative approach, which calculates the proportion of imputed data sets that result in k (k = 1, 2, 3, ...) factors. This approach informs applied researchers of the degree of uncertainty due to the missingness. Results from a simulation experiment show that the proposed method is more likely to suggest the correct number of factors when missingness contributes a large amount of uncertainty.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095323/pdf/
Citations: 0
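The proposed proportion-based pooling can be sketched in a few lines: run parallel analysis on each imputed data set and tabulate the proportion suggesting each k. The sketch below uses a bare-bones parallel analysis (95th-percentile criterion) and, in place of real imputed copies of an incomplete data set, simply draws ten complete samples from a hypothetical two-factor population; all numbers are illustrative assumptions, not the study's design:

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_analysis(data, n_sims=50, rng=rng):
    """Number of factors whose observed eigenvalue exceeds the 95th
    percentile of eigenvalues from random normal data of the same size."""
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for s in range(n_sims):
        r = rng.standard_normal((n, p))
        sims[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    thresh = np.percentile(sims, 95, axis=0)
    exceed = obs > thresh
    return p if exceed.all() else int(np.argmin(exceed))

def sample_two_factor(n=300):
    """Stand-in for one imputed copy: a clean two-factor data set."""
    f = rng.standard_normal((n, 2))
    load = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
    return f @ load.T + rng.standard_normal((n, 6)) * 0.6

imputations = [sample_two_factor() for _ in range(10)]
suggested = [parallel_analysis(d) for d in imputations]
# The proposed summary: proportion of imputed data sets suggesting each k.
ks, counts = np.unique(suggested, return_counts=True)
proportions = dict(zip(ks.tolist(), (counts / len(suggested)).tolist()))
print(proportions)
```

The returned dictionary is exactly the kind of uncertainty summary the abstract argues for: instead of one pooled answer, it reports how the suggested number of factors varies across imputations.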
Evaluating Equating Methods for Varying Levels of Form Difference
IF 2.7 | CAS Zone 3 | Psychology
Educational and Psychological Measurement | Pub Date: 2024-06-01 | Epub: 2023-06-08 | DOI: 10.1177/00131644231176989
Ting Sun, Stella Yun Kim

Equating is a statistical procedure used to adjust for differences in form difficulty so that scores on those forms can be used and interpreted comparably. In practice, however, equating methods are often implemented without considering the extent to which two forms differ in difficulty. This study examines the effect of the magnitude of the form difficulty difference on equating results under random groups (RG) and common-item nonequivalent groups (CINEG) designs. Specifically, it evaluates the performance of six equating methods under a set of simulation conditions that includes varying levels of form difference. Results revealed that, under the RG design, mean equating was the most accurate method when there was no or only a small form difference, whereas equipercentile equating was the most accurate when the difficulty difference was medium or large. Under the CINEG design, Tucker linear equating was the most accurate method when the difficulty difference was small or medium, and either chained equipercentile or frequency estimation was preferred when it was large. The study provides practitioners with evidence-based guidance on choosing equating methods for varying levels of form difference; because a condition with no form difficulty difference is also included, it likewise informs testing companies of appropriate equating methods when two forms are similar in difficulty.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095324/pdf/
Citations: 0
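Two of the methods compared here, mean equating and equipercentile equating, can be illustrated with a small random-groups sketch. The score distributions below are simulated binomials invented for illustration; the study's actual conditions and the four remaining methods are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical random-groups design: Form Y is harder than Form X.
x = rng.binomial(40, 0.62, size=2000)   # observed scores on Form X
y = rng.binomial(40, 0.55, size=2000)   # observed scores on harder Form Y

def mean_equate(score_y):
    """Shift a Form Y score by the difference in form means."""
    return score_y + (x.mean() - y.mean())

def equipercentile_equate(score_y):
    """Map a Form Y score to the Form X score at the same percentile rank."""
    pct = np.mean(y <= score_y)          # percentile rank on Form Y
    return np.quantile(x, pct)           # Form X score at that percentile

print(mean_equate(22), equipercentile_equate(22))
```

Mean equating applies one constant shift everywhere, which is why it suffices when the forms differ little; equipercentile equating matches whole score distributions, which pays off when the difficulty gap (and hence any shape difference) is larger.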
Can People With Higher Versus Lower Scores on Impression Management or Self-Monitoring Be Identified Through Different Traces Under Faking?
IF 2.7 | CAS Zone 3 | Psychology
Educational and Psychological Measurement | Pub Date: 2024-06-01 | Epub: 2023-07-02 | DOI: 10.1177/00131644231182598
Jessica Röhner, Philipp Thoss, Liad Uziel

According to faking models, personality variables and faking are related. Most prominently, people's tendency to try to make an appropriate impression (impression management; IM) and their tendency to adjust the impression they make (self-monitoring; SM) have been suggested to be associated with faking. Nevertheless, empirical findings connecting these personality variables to faking have been contradictory, partly because different studies have given individuals different tests to fake and different faking directions (faking low vs. high scores). Importantly, whereas past research has examined faking through test scores, recent advances suggest that the faking process can be better understood by analyzing individuals' responses at the item level (response pattern). Using machine learning (elastic net and random forest regression), we reanalyzed a data set (N = 260) to investigate whether individuals' faked response patterns on extraversion (features, i.e., input variables) could reveal their IM and SM scores. We found that individuals had similar response patterns when they faked, irrespective of their IM scores (excluding the faking of high scores when random forest regression was used). Elastic net and random forest regression converged in revealing that individuals higher on SM differed from individuals lower on SM in how they faked. Thus, response patterns were able to reveal individuals' SM, but not their IM. Feature importance analyses showed that whereas some items were faked differently by individuals with higher versus lower SM scores, others were faked similarly. Our results imply that analyses of response patterns offer valuable new insights into the faking process.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095321/pdf/
Citations: 0
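The study itself used elastic net and random forest regression; as a rough stand-in for the general idea of predicting a personality score from an item-level faked response pattern, here is a closed-form ridge regression on simulated data. The sample size echoes the study's N = 260, but the faking mechanism, item count, and the choice of ridge instead of the paper's learners are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated stand-in: 260 respondents faking 10 extraversion items upward.
# Respondents higher on "self-monitoring" (sm) fake some items more than others.
n, p = 260, 10
sm = rng.standard_normal(n)
base = rng.normal(3.0, 0.7, size=(n, p))
faking_weight = np.linspace(0.8, 0.0, p)     # items differ in how much they get faked
responses = base + np.outer(sm.clip(min=0), faking_weight)

# Ridge regression: features = item-level responses, target = SM score.
X = np.c_[np.ones(n), responses]
lam = 1.0
beta = np.linalg.solve(X.T @ X + lam * np.eye(p + 1), X.T @ sm)
pred = X @ beta
r = np.corrcoef(pred, sm)[0, 1]
print(round(r, 2))   # predictive correlation between recovered and true SM
```

The point of the sketch is only the framing: because the simulated faking is SM-dependent and item-specific, the response pattern carries recoverable information about SM, which is the qualitative result the study reports for its learners.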
An Item Response Theory Model for Incorporating Response Times in Forced-Choice Measures
IF 2.7 | CAS Zone 3 | Psychology
Educational and Psychological Measurement | Pub Date: 2024-06-01 | Epub: 2023-06-04 | DOI: 10.1177/00131644231171193
Zhichen Guo, Daxun Wang, Yan Cai, Dongbo Tu

Forced-choice (FC) measures, which employ comparative rather than absolute judgments, have been widely used in many personality and attitude tests as an alternative to rating scales. They can effectively reduce several response biases, such as social desirability, response styles, and acquiescence bias. Another type of data linked with comparative judgments is response time (RT), which contains potential information about respondents' decision-making processes. Incorporating RT into FC measures to better reveal respondents' behaviors and preferences is challenging but promising. This study therefore proposes a new item response theory (IRT) model that incorporates RT into FC measures to improve personality assessment. Simulation studies show that the proposed model effectively improves the estimation accuracy of personality traits by using the ancillary information contained in RT. An application to a real data set also shows that the proposed model yields similar but not identical parameter estimates compared with the conventional Thurstonian IRT model; the RT information explains these differences.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095319/pdf/
Citations: 0
Measuring Unipolar Traits With Continuous Response Items: Some Methodological and Substantive Developments
IF 2.7 | CAS Zone 3 | Psychology
Educational and Psychological Measurement | Pub Date: 2024-06-01 | Epub: 2023-06-26 | DOI: 10.1177/00131644231181889
Pere J Ferrando, Fabia Morales-Vives, Ana Hernández-Dorado

In recent years, some models for binary and graded response formats have been proposed to assess unipolar variables or "quasi-traits." These studies have mainly focused on clinical variables that have traditionally been treated as bipolar traits. In the present study, we propose a model for unipolar traits measured with continuous response items. The proposed log-logistic continuous unipolar model (LL-C) is remarkably simple and is more similar to the original binary formulation than the graded extensions, which is an advantage. Furthermore, considering that irrational, extreme, or polarizing beliefs could be another domain of unipolar variables, we apply this proposal to an empirical example of superstitious beliefs. The results suggest that, in certain cases, the standard linear model can be a good approximation to the LL-C model in terms of parameter estimation and goodness of fit, but not in terms of trait estimates and their accuracy. The results also show the importance of considering the unipolar nature of this kind of trait when predicting criterion variables, since the validity results were clearly different.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095320/pdf/
Citations: 0
Wald χ² Test for Differential Item Functioning Detection with Polytomous Items in Multilevel Data
IF 2.7 | CAS Zone 3 | Psychology
Educational and Psychological Measurement | Pub Date: 2024-06-01 | Epub: 2023-07-11 | DOI: 10.1177/00131644231181688
Sijia Huang, Dubravka Svetina Valdivia

Identifying items with differential item functioning (DIF) in an assessment is a crucial step toward equitable measurement. One critical issue that has not been fully addressed in existing studies is how DIF items can be detected when data are multilevel. In the present study, we introduce a Lord's Wald χ² test-based procedure for detecting both uniform and nonuniform DIF with polytomous items in the presence of the ubiquitous multilevel data structure. The proposed approach is a multilevel extension of a two-stage procedure, which identifies anchor items in its first stage and formally evaluates candidate items in the second. We applied the Metropolis-Hastings Robbins-Monro (MH-RM) algorithm to estimate multilevel polytomous item response theory (IRT) models and to obtain accurate covariance matrices. To evaluate the performance of the proposed approach, we conducted a preliminary simulation study covering various conditions that mimic real-world scenarios. The simulation results indicate that the proposed approach has high power for identifying DIF items and controls the Type I error rate well. Limitations and future research directions are also discussed.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095326/pdf/
Citations: 0
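The Wald test at the core of such procedures has the generic form W = dᵀ(V_ref + V_foc)⁻¹d, where d is the difference between a focal and a reference group's item-parameter estimates and the V's are their covariance matrices, referred to a χ² distribution with df equal to the number of constrained parameters. A sketch with hypothetical two-parameter estimates (df = 2, so the χ² survival function is simply exp(-W/2)); the numbers are invented, and the paper's multilevel estimation details are not modeled:

```python
import numpy as np
from math import exp

# Hypothetical estimates of an item's (a, b) parameters in two groups,
# with their covariance matrices (all values made up for illustration).
beta_ref = np.array([1.20, 0.35])
beta_foc = np.array([1.05, 0.80])
V_ref = np.array([[0.010, 0.001], [0.001, 0.015]])
V_foc = np.array([[0.012, 0.002], [0.002, 0.020]])

# Wald statistic for H0: item parameters are equal across groups.
d = beta_foc - beta_ref
W = float(d @ np.linalg.solve(V_ref + V_foc, d))
p_value = exp(-W / 2)        # chi-square survival function for df = 2
print(round(W, 2), round(p_value, 4))
```

A significant W flags the item as functioning differently across groups; in the two-stage procedure this formal test is applied only to candidate items, after DIF-free anchor items have been identified.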
An Evaluation of Fit Indices Used in Model Selection of Dichotomous Mixture IRT Models
IF 2.7 | CAS Zone 3 | Psychology
Educational and Psychological Measurement | Pub Date: 2024-06-01 | Epub: 2023-06-26 | DOI: 10.1177/00131644231180529
Sedat Sen, Allan S Cohen

A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct number of latent classes in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike's information criterion (AIC), the corrected AIC (AICc), the Bayesian information criterion (BIC), the consistent AIC (CAIC), Draper's information criterion (DIC), the sample-size-adjusted BIC (SABIC), relative entropy, the integrated classification likelihood criterion (ICL-BIC), the adjusted Lo-Mendell-Rubin test (LMR), and the Vuong-Lo-Mendell-Rubin test (VLMR). The accuracy of the fit indices in detecting the correct number of latent classes was assessed across simulation conditions including sample size (2,500 and 5,000), test length (15, 30, and 45), mixture proportions (equal and unequal), number of latent classes (2, 3, and 4), and latent class separation (no separation and small separation). Results indicated that as the number of examinees or items increased, correct identification rates also increased for most of the indices. Correct identification rates decreased, however, as the number of estimated latent classes or parameters (i.e., model complexity) increased. Results were good for BIC, CAIC, DIC, SABIC, ICL-BIC, LMR, and VLMR, and the relative entropy index tended to select correct models most of the time. Consistent with previous studies, AIC and AICc performed poorly. Most of these indices had limited utility for three-class and four-class mixture 3PL model conditions.

Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095322/pdf/
Citations: 0
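Most of the information criteria compared above are simple functions of the maximized log-likelihood, the parameter count k, and the sample size n. The sketch below computes several of them for two invented mixture fits, with the log-likelihoods chosen so that the weakly penalized AIC-type indices and the strongly penalized BIC-type indices disagree, which is exactly why penalty strength drives the performance differences the study reports:

```python
import math

def fit_indices(loglik, k, n):
    """Common information criteria for model selection (smaller = better)."""
    aic   = -2 * loglik + 2 * k
    aicc  = aic + (2 * k * (k + 1)) / (n - k - 1)
    bic   = -2 * loglik + k * math.log(n)
    caic  = -2 * loglik + k * (math.log(n) + 1)
    sabic = -2 * loglik + k * math.log((n + 2) / 24)
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "CAIC": caic, "SABIC": sabic}

# Hypothetical 2-class vs. 3-class mixture IRT fits on n = 2500 examinees:
# the extra class buys 50 log-likelihood points at the cost of 31 parameters.
two_class   = fit_indices(loglik=-41200.0, k=62, n=2500)
three_class = fit_indices(loglik=-41150.0, k=93, n=2500)

best = {name: min(("2-class", "3-class"),
                  key=lambda m: (two_class if m == "2-class" else three_class)[name])
        for name in two_class}
print(best)
```

Here AIC rewards the modest likelihood gain enough to pick the larger model, while BIC's log(n) penalty keeps the smaller one, mirroring the over-extraction tendency of AIC that the study (and earlier work) documents.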
Enhancing the Detection of Social Desirability Bias Using Machine Learning: A Novel Application of Person-Fit Indices
IF 2.7 | CAS Zone 3 | Psychology
Educational and Psychological Measurement | Pub Date: 2024-05-30 | DOI: 10.1177/00131644241255109
Sanaz Nazari, Walter L. Leite, A. Corinne Huggins-Manley

Social desirability bias (SDB) is a common threat to the validity of conclusions drawn from responses to a scale or survey. A wide range of person-fit statistics in the literature can be employed to detect SDB. In addition, machine learning classifiers, such as logistic regression and random forest, have the potential to distinguish between biased and unbiased responses. This study proposes a new application of these classifiers to detect SDB by using several person-fit indices as features (predictors) in the machine learning methods. The results of a Monte Carlo simulation study showed that, for a single feature, applying person-fit indices directly and logistic regression led to similar classification results, whereas the random forest classifier improved the classification of biased and unbiased responses substantially. Classification improved under both logistic regression and random forest when multiple features were considered simultaneously. Moreover, cross-validation indicated stable areas under the curve (AUCs) across the machine learning classifiers. A didactic illustration of applying random forest to detect SDB is presented.

Citations: 0
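One of the most widely used person-fit statistics of the kind that serve as features here is the standardized log-likelihood index lz. A numpy sketch under a Rasch model (hypothetical item difficulties, a person at θ = 0) shows how lz separates a model-consistent response pattern from an aberrant one; which indices the study actually used as features is not assumed here:

```python
import numpy as np

def lz_statistic(responses, p):
    """Standardized log-likelihood person-fit index for one response vector,
    given the model-implied success probabilities p for that person."""
    u = np.asarray(responses, dtype=float)
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))      # observed log-lik
    e  = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))      # its expectation
    v  = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)       # its variance
    return (l0 - e) / np.sqrt(v)

# Rasch probabilities for a person with theta = 0 on items of increasing
# difficulty (hypothetical difficulties): P(correct) = logistic(theta - b).
b = np.linspace(-2, 2, 9)
p = 1 / (1 + np.exp(b))

typical  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0])   # passes easy, misses hard
aberrant = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0])   # misses easy, passes hard
print(round(lz_statistic(typical, p), 2), round(lz_statistic(aberrant, p), 2))
```

Large negative lz values flag improbable patterns; feeding several such indices jointly into a classifier, rather than thresholding one of them, is the multi-feature idea the abstract describes.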