HOW DID SPAIN PERFORM IN PISA 2018? NEW ESTIMATES OF CHILDREN’S PISA READING SCORES

IF 1.7 | Education (CAS Tier 3) | JCR Q2 | EDUCATION & EDUCATIONAL RESEARCH
John Jerrim, Luis Alejandro Lopez-Agudo, Oscar David Marcenaro-Gutierrez
{"title":"西班牙在2018年Pisa测试中的表现如何?对儿童Pisa阅读成绩的最新估计","authors":"John Jerrim, Luis Alejandro Lopez-Agudo, Oscar David Marcenaro-Gutierrez","doi":"10.1080/00071005.2023.2258184","DOIUrl":null,"url":null,"abstract":"ABSTRACTInternational large-scale assessments have gained much attention since the beginning of the twenty-first century, influencing education legislation in many countries. This includes Spain, where they have been used by successive governments to justify education policy change. Unfortunately, there was a problem with the PISA 2018 reading scores for this country, meaning the OECD refused to initially release the results. Therefore, in this paper we attempt to estimate the likely PISA 2018 reading scores for Spain, and for each region within. The figure finally published by the OECD for Spain – in terms of reading scores – was 476.5 points, which is between the lower and upper bound of the interval we find (475 to 483 test points in 2018). Additionally, we report some robustness checks for the OCED countries participating in PISA 2018, which show that the difference between the actual scores and the ones we found with the imputation methods are quite close.Keywords: PISAmultiple imputationinternational large-scale assessmentsreading2018 7. Disclosure StatementNo potential conflict of interest was reported by the author(s).8. Supplementary DataSupplemental data for this article can be accessed online at https://doi.org/10.1080/00071005.2023.2258184Notes1 There are many other examples of problems with PISA data in specific countries; for example, in PISA 2012 Albania presented some serious irregularity (OECD, Citation2014a; Annex A4), in PISA 2015 Albania, Argentina, Kazakhstan and Malaysia (OECD, Citation2016; Annex A4) and, in PISA 2018, Viet Nam and Spain (OECD, Citation2019c; Annex A4).2 The list of countries participating on paper-based assessment in PISA 2018 can be found in OECD (Citation2019c; Annex A5).3 Many more competences (such as financial literacy, problem-solving skills or the global competence) are assessed by PISA, together with other background questionnaires (parental, teacher, ICT, well-being, educational career questionnaires); nevertheless, their administration has been performed irregularly by PISA cycles and not all countries took them, so we focus on the competences and student information which remain fixed through PISA cycles.4 Official information on other previous PISA subjects such as sample design and weighting can be found at OECD (Citation2009, Citation2012, Citation2014b, Citation2017, Citation2020a). A summary of this topic can be found in Jerrim et al. (Citation2017).5 Due to the change from a paper- to a computer-based assessment since PISA 2015 some of these PISA procedures changed from one cycle to the following; hence, we focus here on the last cycle (2018), but more information on this subject for PISA 2009, 2012 and 2015 can be found at OECD (Citation2012, Citation2014b, Citation2017).6 This global competence was new in PISA 2018 and it ‘“examines students” ability to consider local, global and intercultural issues, understand and appreciate different perspectives and world views, interact respectfully with others, and take responsible action towards sustainability and collective well-being’ (OECD, Citation2019c, p. 
29).7 More information on this procedure can be found in OECD (Citation2019a).8 The software employed by the OECD to perform these IRT models is mdltm (von Davier, Citation2005).9 This background information was incorporated by, first, coding variables so that refused responses could be included (i.e., contrast coding); then, a principal component analysis was performed, so that background information can be summarised and information from students with missing values can be kept, satisfying the linearity assumption for the model (OECD, Citation2020a).10 The OECD employed the software DGROUP (Rogers et al., Citation2006) to estimate the multivariate latent regression model and obtain the plausible values to estimate this model, fixing the parameters of the cognitive items obtained from the multi-group IRT models.11 PISA technical reports have widely shown these high correlations between the three domains (e.g., OECD, Citation2020a). Although these reports do not analyse much the underlying mechanism behind these high correlations (Ding and Homer, Citation2020), some authors such as Ashkenazi et al. (Citation2017) indicated that there might be shared cognitive processes (e.g., memory) or a general ability (e.g., intelligence) which may contribute to the three of them simultaneously. Additional explanations might be that reading ability may act as a proxy of some other constructs that influence mathematics and science performance (Grimm, Citation2008) or the relatively high reading demands in PISA’s cognitive tests for all domains (Wu, Citation2010).12 These correlations are similar once demographic characteristics and school composition have been controlled.13 The ESCS index was created by the OECD using the highest level of education of parents, highest parental occupation, and home possessions by the use of principal component analysis (OECD, Citation2020a).14 These variables have been consistently found in the literature to be very relevant in the definition of the education production function (Hanushek, Citation1979; Hyde et al., Citation1990; Karadag, Citation2017; Reilly et al., Citation2015; Sirin, Citation2005; Wößmann, Citation2005).15 These results make sense. For PISA 2018, in Ireland reading scores are high compared to mathematics and science scores (518, 500, 496, respectively; OECD, Citation2019c), meaning we get the largest imputed value for Spain when using this nation as the donor country. On the other hand, in Japan reading scores are lower than mathematics and science scores (504, 527, 529, respectively; OECD, Citation2019c), meaning that our predicted score for Spain is very low.16 In order to check the capacity of our model to predict gender differences in scores for PISA 2018, we have run a similar specification for Spain in mathematics, considering this domain as missing completely at random for PISA 2009 to 2018. This model has accurately estimated PISA 2009, 2012 and 2015 mathematics scores by gender and has also predicted a mathematics score for boys of 489 (being the real one 485) and 479 for girls (being the real one 478), so we can be quite confident on the results of our multiple imputation model also by gender. 
Results for mathematics scores in previous PISA cycles will be provided upon request to the authors.17 These differences in terms of standard deviations have been obtained by calculating the absolute difference between the previous and predicted reading scores for that region and dividing the result by 100 (which is plausible values’ standard deviation).Additional informationFundingThis work has been partly supported by FEDER funding [under Research Project PY20-00228]; Ministerio de Ciencia e Innovación [under Research Project PID2020-119471RB-I00]; by the Andalusian Regional Government (SEJ-645) and the Universidad de Málaga under Research Project B1-2022_23.","PeriodicalId":47509,"journal":{"name":"British Journal of Educational Studies","volume":"231 1","pages":"0"},"PeriodicalIF":1.7000,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"HOW DID SPAIN PERFORM IN PISA 2018? NEW ESTIMATES OF CHILDREN’S PISA READING SCORES\",\"authors\":\"John Jerrim, Luis Alejandro Lopez-Agudo, Oscar David Marcenaro-Gutierrez\",\"doi\":\"10.1080/00071005.2023.2258184\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACTInternational large-scale assessments have gained much attention since the beginning of the twenty-first century, influencing education legislation in many countries. This includes Spain, where they have been used by successive governments to justify education policy change. Unfortunately, there was a problem with the PISA 2018 reading scores for this country, meaning the OECD refused to initially release the results. Therefore, in this paper we attempt to estimate the likely PISA 2018 reading scores for Spain, and for each region within. The figure finally published by the OECD for Spain – in terms of reading scores – was 476.5 points, which is between the lower and upper bound of the interval we find (475 to 483 test points in 2018). Additionally, we report some robustness checks for the OCED countries participating in PISA 2018, which show that the difference between the actual scores and the ones we found with the imputation methods are quite close.Keywords: PISAmultiple imputationinternational large-scale assessmentsreading2018 7. Disclosure StatementNo potential conflict of interest was reported by the author(s).8. 
Supplementary DataSupplemental data for this article can be accessed online at https://doi.org/10.1080/00071005.2023.2258184Notes1 There are many other examples of problems with PISA data in specific countries; for example, in PISA 2012 Albania presented some serious irregularity (OECD, Citation2014a; Annex A4), in PISA 2015 Albania, Argentina, Kazakhstan and Malaysia (OECD, Citation2016; Annex A4) and, in PISA 2018, Viet Nam and Spain (OECD, Citation2019c; Annex A4).2 The list of countries participating on paper-based assessment in PISA 2018 can be found in OECD (Citation2019c; Annex A5).3 Many more competences (such as financial literacy, problem-solving skills or the global competence) are assessed by PISA, together with other background questionnaires (parental, teacher, ICT, well-being, educational career questionnaires); nevertheless, their administration has been performed irregularly by PISA cycles and not all countries took them, so we focus on the competences and student information which remain fixed through PISA cycles.4 Official information on other previous PISA subjects such as sample design and weighting can be found at OECD (Citation2009, Citation2012, Citation2014b, Citation2017, Citation2020a). A summary of this topic can be found in Jerrim et al. (Citation2017).5 Due to the change from a paper- to a computer-based assessment since PISA 2015 some of these PISA procedures changed from one cycle to the following; hence, we focus here on the last cycle (2018), but more information on this subject for PISA 2009, 2012 and 2015 can be found at OECD (Citation2012, Citation2014b, Citation2017).6 This global competence was new in PISA 2018 and it ‘“examines students” ability to consider local, global and intercultural issues, understand and appreciate different perspectives and world views, interact respectfully with others, and take responsible action towards sustainability and collective well-being’ (OECD, Citation2019c, p. 29).7 More information on this procedure can be found in OECD (Citation2019a).8 The software employed by the OECD to perform these IRT models is mdltm (von Davier, Citation2005).9 This background information was incorporated by, first, coding variables so that refused responses could be included (i.e., contrast coding); then, a principal component analysis was performed, so that background information can be summarised and information from students with missing values can be kept, satisfying the linearity assumption for the model (OECD, Citation2020a).10 The OECD employed the software DGROUP (Rogers et al., Citation2006) to estimate the multivariate latent regression model and obtain the plausible values to estimate this model, fixing the parameters of the cognitive items obtained from the multi-group IRT models.11 PISA technical reports have widely shown these high correlations between the three domains (e.g., OECD, Citation2020a). Although these reports do not analyse much the underlying mechanism behind these high correlations (Ding and Homer, Citation2020), some authors such as Ashkenazi et al. (Citation2017) indicated that there might be shared cognitive processes (e.g., memory) or a general ability (e.g., intelligence) which may contribute to the three of them simultaneously. 
Additional explanations might be that reading ability may act as a proxy of some other constructs that influence mathematics and science performance (Grimm, Citation2008) or the relatively high reading demands in PISA’s cognitive tests for all domains (Wu, Citation2010).12 These correlations are similar once demographic characteristics and school composition have been controlled.13 The ESCS index was created by the OECD using the highest level of education of parents, highest parental occupation, and home possessions by the use of principal component analysis (OECD, Citation2020a).14 These variables have been consistently found in the literature to be very relevant in the definition of the education production function (Hanushek, Citation1979; Hyde et al., Citation1990; Karadag, Citation2017; Reilly et al., Citation2015; Sirin, Citation2005; Wößmann, Citation2005).15 These results make sense. For PISA 2018, in Ireland reading scores are high compared to mathematics and science scores (518, 500, 496, respectively; OECD, Citation2019c), meaning we get the largest imputed value for Spain when using this nation as the donor country. On the other hand, in Japan reading scores are lower than mathematics and science scores (504, 527, 529, respectively; OECD, Citation2019c), meaning that our predicted score for Spain is very low.16 In order to check the capacity of our model to predict gender differences in scores for PISA 2018, we have run a similar specification for Spain in mathematics, considering this domain as missing completely at random for PISA 2009 to 2018. This model has accurately estimated PISA 2009, 2012 and 2015 mathematics scores by gender and has also predicted a mathematics score for boys of 489 (being the real one 485) and 479 for girls (being the real one 478), so we can be quite confident on the results of our multiple imputation model also by gender. 
Results for mathematics scores in previous PISA cycles will be provided upon request to the authors.17 These differences in terms of standard deviations have been obtained by calculating the absolute difference between the previous and predicted reading scores for that region and dividing the result by 100 (which is plausible values’ standard deviation).Additional informationFundingThis work has been partly supported by FEDER funding [under Research Project PY20-00228]; Ministerio de Ciencia e Innovación [under Research Project PID2020-119471RB-I00]; by the Andalusian Regional Government (SEJ-645) and the Universidad de Málaga under Research Project B1-2022_23.\",\"PeriodicalId\":47509,\"journal\":{\"name\":\"British Journal of Educational Studies\",\"volume\":\"231 1\",\"pages\":\"0\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2023-09-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"British Journal of Educational Studies\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/00071005.2023.2258184\",\"RegionNum\":3,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"British Journal of Educational Studies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/00071005.2023.2258184","RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 1

Abstract

International large-scale assessments have gained much attention since the beginning of the twenty-first century, influencing education legislation in many countries. This includes Spain, where they have been used by successive governments to justify education policy change. Unfortunately, there was a problem with the PISA 2018 reading scores for this country, meaning the OECD initially refused to release the results. Therefore, in this paper we attempt to estimate the likely PISA 2018 reading scores for Spain and for each of its regions. The figure finally published by the OECD for Spain – in terms of reading scores – was 476.5 points, which lies between the lower and upper bounds of the interval we find (475 to 483 test points in 2018). Additionally, we report some robustness checks for the OECD countries participating in PISA 2018, which show that the actual scores and those obtained with our imputation methods are quite close.

Keywords: PISA; multiple imputation; international large-scale assessments; reading; 2018

Disclosure Statement
No potential conflict of interest was reported by the author(s).

Supplementary Data
Supplemental data for this article can be accessed online at https://doi.org/10.1080/00071005.2023.2258184

Notes
1. There are many other examples of problems with PISA data in specific countries; for example, in PISA 2012 Albania presented some serious irregularities (OECD, 2014a, Annex A4), in PISA 2015 Albania, Argentina, Kazakhstan and Malaysia (OECD, 2016, Annex A4) and, in PISA 2018, Viet Nam and Spain (OECD, 2019c, Annex A4).
2. The list of countries participating in the paper-based assessment in PISA 2018 can be found in OECD (2019c, Annex A5).
3. Many more competences (such as financial literacy, problem-solving skills or the global competence) are assessed by PISA, together with other background questionnaires (parental, teacher, ICT, well-being and educational career questionnaires); nevertheless, their administration has varied across PISA cycles and not all countries took them, so we focus on the competences and student information which remain fixed across PISA cycles.
4. Official information on other PISA topics such as sample design and weighting can be found in OECD (2009, 2012, 2014b, 2017, 2020a). A summary of this topic can be found in Jerrim et al. (2017).
5. Due to the change from a paper-based to a computer-based assessment since PISA 2015, some of these PISA procedures changed from one cycle to the next; hence, we focus here on the last cycle (2018), but more information on this subject for PISA 2009, 2012 and 2015 can be found in OECD (2012, 2014b, 2017).
6. The global competence was new in PISA 2018 and it ‘examines students’ ability to consider local, global and intercultural issues, understand and appreciate different perspectives and world views, interact respectfully with others, and take responsible action towards sustainability and collective well-being’ (OECD, 2019c, p. 29).
7. More information on this procedure can be found in OECD (2019a).
8. The software employed by the OECD to perform these IRT models is mdltm (von Davier, 2005).
9. This background information was incorporated by, first, coding variables so that refused responses could be included (i.e., contrast coding); then, a principal component analysis was performed, so that background information could be summarised and information from students with missing values could be kept, satisfying the linearity assumption of the model (OECD, 2020a).
10. The OECD employed the software DGROUP (Rogers et al., 2006) to estimate the multivariate latent regression model and obtain the plausible values, fixing the parameters of the cognitive items obtained from the multi-group IRT models.
11. PISA technical reports have widely shown these high correlations between the three domains (e.g., OECD, 2020a). Although these reports do not analyse in depth the mechanism underlying these high correlations (Ding and Homer, 2020), some authors such as Ashkenazi et al. (2017) indicated that there might be shared cognitive processes (e.g., memory) or a general ability (e.g., intelligence) which may contribute to all three simultaneously. Additional explanations might be that reading ability acts as a proxy for other constructs that influence mathematics and science performance (Grimm, 2008), or the relatively high reading demands in PISA’s cognitive tests across all domains (Wu, 2010).
12. These correlations are similar once demographic characteristics and school composition have been controlled for.
13. The ESCS index was created by the OECD from the highest level of parental education, the highest parental occupation and home possessions, using principal component analysis (OECD, 2020a).
14. These variables have consistently been found in the literature to be highly relevant to the definition of the education production function (Hanushek, 1979; Hyde et al., 1990; Karadag, 2017; Reilly et al., 2015; Sirin, 2005; Wößmann, 2005).
15. These results make sense. For PISA 2018, reading scores in Ireland are high compared to mathematics and science scores (518, 500 and 496, respectively; OECD, 2019c), meaning we get the largest imputed value for Spain when using this nation as the donor country. On the other hand, in Japan reading scores are lower than mathematics and science scores (504, 527 and 529, respectively; OECD, 2019c), meaning that our predicted score for Spain is very low.
16. In order to check the capacity of our model to predict gender differences in scores for PISA 2018, we ran a similar specification for Spain in mathematics, treating this domain as missing completely at random for PISA 2009 to 2018. This model accurately estimated PISA 2009, 2012 and 2015 mathematics scores by gender and also predicted a mathematics score of 489 for boys (the real one being 485) and 479 for girls (the real one being 478), so we can be quite confident in the results of our multiple imputation model also by gender. Results for mathematics scores in previous PISA cycles will be provided upon request to the authors.
17. These differences in terms of standard deviations have been obtained by calculating the absolute difference between the previous and predicted reading scores for that region and dividing the result by 100 (which is the plausible values’ standard deviation); an illustrative calculation appears in the sketch below.

Funding
This work has been partly supported by FEDER funding [under Research Project PY20-00228]; the Ministerio de Ciencia e Innovación [under Research Project PID2020-119471RB-I00]; the Andalusian Regional Government (SEJ-645); and the Universidad de Málaga under Research Project B1-2022_23.
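For readers who want a concrete sense of the approach, the abstract and notes 9, 10, 16 and 17 describe treating Spain’s 2018 reading performance as missing, predicting it from the mathematics and science scores plus background variables, and pooling across multiple imputations. The sketch below illustrates that general idea only; it is not the authors’ actual procedure (which works through the OECD’s latent-regression and plausible-value machinery via mdltm and DGROUP), and the file name and column names (pisa_2018_students.csv, read, math, science, escs, female) are hypothetical.

```python
# Minimal sketch of the multiple-imputation idea described in the abstract and
# in notes 9, 10, 16 and 17. NOT the authors' pipeline; the input file and
# column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical student-level data: PISA 2018 mathematics/science scores plus
# background variables (ESCS index, gender), with reading treated as missing
# for Spain, mirroring the paper's set-up.
df = pd.read_csv("pisa_2018_students.csv")            # assumed input file
df.loc[df["country"] == "ESP", "read"] = np.nan       # reading "missing" for Spain

features = ["read", "math", "science", "escs", "female"]
spain = (df["country"] == "ESP").to_numpy()

# Draw several imputations (the "multiple" in multiple imputation) by sampling
# from the posterior predictive distribution with different random seeds.
imputed_means = []
for seed in range(5):
    imputer = IterativeImputer(sample_posterior=True, max_iter=10, random_state=seed)
    completed = imputer.fit_transform(df[features])
    read_hat = completed[:, features.index("read")]
    imputed_means.append(read_hat[spain].mean())

# Pool the point estimates across imputations (Rubin's rules would also
# combine the within- and between-imputation variances).
print("Imputed mean reading score for Spain:", np.mean(imputed_means))

# Note 17: a gap between a previous and a predicted score expressed in SD
# units, taking 100 points as the plausible values' standard deviation.
previous_score, predicted_score = 490.0, 483.0        # illustrative values only
print("Difference in SD units:", abs(previous_score - predicted_score) / 100)
```

In the paper itself, the pooled estimate is what gets compared with the figure the OECD eventually published for Spain (476.5 points), which is how the 475 to 483 interval quoted in the abstract should be read.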
Source journal
British Journal of Educational Studies
CiteScore: 4.50
Self-citation rate: 5.30%
Articles per year: 36
Journal description: The British Journal of Educational Studies is one of the UK’s foremost international education journals. It publishes scholarly, research-based articles on education which draw particularly upon historical, philosophical and sociological analysis and sources.