University Rankings Are Hurting Academia in Developing Countries: An Urgent Call to Action

Mohamed L. Seghier, Habib Zaidi
International Journal of Imaging Systems and Technology · DOI: 10.1002/ima.23140 · Published 29 June 2024
{"title":"大学排名正在损害发展中国家的学术界:紧急行动呼吁","authors":"Mohamed L. Seghier,&nbsp;Habib Zaidi","doi":"10.1002/ima.23140","DOIUrl":null,"url":null,"abstract":"<p>Higher education institutions in developing countries are increasingly relying on university rankings in the decision-making process on how to improve reputation and impact [<span>1</span>]. Such ranking schemes, some being promoted by unaccountable for-profit agencies, have many well-documented limitations [<span>2</span>], such as the overly subjective and biased measurement of excellence and reputation for universities operating in diverse socio-economic and political contexts. Despite these limitations, these rankings are still being promoted as critical indicators of academic excellence [<span>3</span>], thereby influencing the higher education landscape at an unsustainable pace. Indeed, every year, in pursuing an elusive high rank, academics in emerging universities feel the pressure to make quick changes, sometimes by espousing short-sighted strategies that do not always align with long-term goals [<span>4</span>]. There are indeed stories from some universities in developing countries where research programmes and even whole departments were closed because they operated within domains with low citation dynamics. Such obsession with university rankings is hurting academia with dear consequences: talent deterred and income affected [<span>5</span>]. This race for top spots in the league table of universities has brought the worst of academia to emerging universities, for example, the publish-and-perish model and the conversion to numbers-centred instead of people-centred institutions.</p><p>As recently advocated by the United Nations University International Institute for Global Health [<span>6</span>], it is urgent to raise awareness about the damaging effects of university rankings in developing countries. An examination of current university rankings schemes shows that the whole process is affected by many fallacies at different degrees: the supremacy of quantitative measures (the McNamara fallacy), indicators taken as goals (Goodhart's law), indicators replacing the original dimensions they aim to measure (surrogation), and high susceptibility to corruption (Campbell's law) including process manipulation (gaming the system) and perverse incentives (the cobra effect). It is thus essential to take a more proactive instance by moving from ‘ranking takers’ to ‘ranking makers’ [<span>7</span>] and espouse a more responsible evaluation process [<span>8</span>]. By analogy to the San Francisco Declaration on Research Assessment (DORA), this call is a plea for a paradigm shift. Specifically, here we recommend the following measures:</p><p><i>Avoiding the McNamara fallacy</i>: Numbers are not everything. Quantitative indicators have inherent drawbacks and biases and cannot comprehensively measure complex constructs like academic reputation and excellence. These indicators also lack validity across different social and political contexts and tend to reverberate the same privileges top universities typically enjoy. It thus makes sense to adopt indicators meaningful to each university's particularities, including indicators tailored to national or regional contexts and interests. 
Importantly, not all constructs are quantifiable; hence, academic outcomes that cannot be measured should not be considered less important (i.e., the McNamara fallacy).</p><p><i>Be aware of Goodhart's law</i>: Universities should refrain from changing their mission and strategic plans to only conform to rankings while disregarding national needs and aspirations. This is a risky trap and one of the most damaging ramifications of an unhealthy obsession with university rankings. National needs, aspirations and missions should always guide university reforms. Indicators are performance measurements and should not be taken as goals or targets. It is well known (Goodhart's law) that every measure that becomes a goal becomes a flawed measure. This issue can develop into something more detrimental when indicators start replacing the construct of interest they aim to measure (the phenomenon of surrogation). For instance, people might falsely believe that an university reputation score actually is university reputation. Likewise, a sustainability score is not the whole story of sustainability with all its intricate interactions with diverse factors.</p><p><i>Campbell's law is in action</i>: Indicators are increasing in numbers and complexity. When they invade the decision-making processes, they might make the system vulnerable to corruption pressures (Campbell's law), thus leading to a distorted academic landscape. Specifically, two known phenomena are unfortunately becoming a concern in academia: (i) gaming the system, where strategies and practices are intentionally adopted to manipulate the system for a desired outcome such as high research productivity or student satisfaction, and (ii) perverse incentives (i.e., the cobra effect) where incentives for promoting excellence can unintentionally reward faculty for making bad or unethical choices and results. Again, safeguards must be put in place to minimise any corruption risks to academia, including raising awareness about the inherent limitations to indicators when they are used as key indicators in the decision-making process.</p><p><i>Lead by example</i>: We call ‘elite’ universities to withdraw from current rankings, particularly those sponsored by commercial entities that do not adhere to fair and transparent assessment methods. These rankings are promoted by significant means [<span>9</span>], and they usually brag about having renowned universities on their lists, hence bringing a kind of legitimacy to their practices (which young university does not want to be listed alongside Harvard or Cambridge University?). We believe this would send a strong message that university rankings do not tell the whole story and that reputation and excellence do not need to be sanctioned by unaccountable agencies. It would also empower universities in developing countries to value (and stick to) their mission beyond these reductionist rankings.</p><p><i>Not too fast</i>: We advocate reducing the frequency of rankings publication to every 4 years instead of the current yearly basis. The current pace of university rankings is unsustainable and unhelpful. Academic reforms typically require a long period to assess their actual effect. Indeed, the expectation that universities can reform and implement new strategies yearly to improve their performance is unrealistic. 
Furthermore, many indicators are unreliable because they are measured over a short period that does not appropriately reflect the (slow) dynamics of academic changes.</p><p>In conclusion, universities in developing countries should not succumb to the pressure of climbing the league table of universities at any price, sometimes through inflated metrics and unethical practices [<span>7, 10</span>]. Universities should free themselves from this detrimental reputational anxiety trap and develop a healthier model for academia that better fits their local socio-economic and political context. They must commit themselves to responsible evaluation practices focusing on equity, diversity and inclusion [<span>8</span>]. Beyond these rankings, universities in developing countries should instead focus on their core mission to graduate skilled citizens, foster a healthy academic environment, and create useful and sustainable knowledge.</p><p>The authors declare no conflicts of interest.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23140","citationCount":"0","resultStr":"{\"title\":\"University Rankings Are Hurting Academia in Developing Countries: An Urgent Call to Action\",\"authors\":\"Mohamed L. Seghier,&nbsp;Habib Zaidi\",\"doi\":\"10.1002/ima.23140\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Higher education institutions in developing countries are increasingly relying on university rankings in the decision-making process on how to improve reputation and impact [<span>1</span>]. Such ranking schemes, some being promoted by unaccountable for-profit agencies, have many well-documented limitations [<span>2</span>], such as the overly subjective and biased measurement of excellence and reputation for universities operating in diverse socio-economic and political contexts. Despite these limitations, these rankings are still being promoted as critical indicators of academic excellence [<span>3</span>], thereby influencing the higher education landscape at an unsustainable pace. Indeed, every year, in pursuing an elusive high rank, academics in emerging universities feel the pressure to make quick changes, sometimes by espousing short-sighted strategies that do not always align with long-term goals [<span>4</span>]. There are indeed stories from some universities in developing countries where research programmes and even whole departments were closed because they operated within domains with low citation dynamics. Such obsession with university rankings is hurting academia with dear consequences: talent deterred and income affected [<span>5</span>]. This race for top spots in the league table of universities has brought the worst of academia to emerging universities, for example, the publish-and-perish model and the conversion to numbers-centred instead of people-centred institutions.</p><p>As recently advocated by the United Nations University International Institute for Global Health [<span>6</span>], it is urgent to raise awareness about the damaging effects of university rankings in developing countries. 
An examination of current university rankings schemes shows that the whole process is affected by many fallacies at different degrees: the supremacy of quantitative measures (the McNamara fallacy), indicators taken as goals (Goodhart's law), indicators replacing the original dimensions they aim to measure (surrogation), and high susceptibility to corruption (Campbell's law) including process manipulation (gaming the system) and perverse incentives (the cobra effect). It is thus essential to take a more proactive instance by moving from ‘ranking takers’ to ‘ranking makers’ [<span>7</span>] and espouse a more responsible evaluation process [<span>8</span>]. By analogy to the San Francisco Declaration on Research Assessment (DORA), this call is a plea for a paradigm shift. Specifically, here we recommend the following measures:</p><p><i>Avoiding the McNamara fallacy</i>: Numbers are not everything. Quantitative indicators have inherent drawbacks and biases and cannot comprehensively measure complex constructs like academic reputation and excellence. These indicators also lack validity across different social and political contexts and tend to reverberate the same privileges top universities typically enjoy. It thus makes sense to adopt indicators meaningful to each university's particularities, including indicators tailored to national or regional contexts and interests. Importantly, not all constructs are quantifiable; hence, academic outcomes that cannot be measured should not be considered less important (i.e., the McNamara fallacy).</p><p><i>Be aware of Goodhart's law</i>: Universities should refrain from changing their mission and strategic plans to only conform to rankings while disregarding national needs and aspirations. This is a risky trap and one of the most damaging ramifications of an unhealthy obsession with university rankings. National needs, aspirations and missions should always guide university reforms. Indicators are performance measurements and should not be taken as goals or targets. It is well known (Goodhart's law) that every measure that becomes a goal becomes a flawed measure. This issue can develop into something more detrimental when indicators start replacing the construct of interest they aim to measure (the phenomenon of surrogation). For instance, people might falsely believe that an university reputation score actually is university reputation. Likewise, a sustainability score is not the whole story of sustainability with all its intricate interactions with diverse factors.</p><p><i>Campbell's law is in action</i>: Indicators are increasing in numbers and complexity. When they invade the decision-making processes, they might make the system vulnerable to corruption pressures (Campbell's law), thus leading to a distorted academic landscape. Specifically, two known phenomena are unfortunately becoming a concern in academia: (i) gaming the system, where strategies and practices are intentionally adopted to manipulate the system for a desired outcome such as high research productivity or student satisfaction, and (ii) perverse incentives (i.e., the cobra effect) where incentives for promoting excellence can unintentionally reward faculty for making bad or unethical choices and results. 
Again, safeguards must be put in place to minimise any corruption risks to academia, including raising awareness about the inherent limitations to indicators when they are used as key indicators in the decision-making process.</p><p><i>Lead by example</i>: We call ‘elite’ universities to withdraw from current rankings, particularly those sponsored by commercial entities that do not adhere to fair and transparent assessment methods. These rankings are promoted by significant means [<span>9</span>], and they usually brag about having renowned universities on their lists, hence bringing a kind of legitimacy to their practices (which young university does not want to be listed alongside Harvard or Cambridge University?). We believe this would send a strong message that university rankings do not tell the whole story and that reputation and excellence do not need to be sanctioned by unaccountable agencies. It would also empower universities in developing countries to value (and stick to) their mission beyond these reductionist rankings.</p><p><i>Not too fast</i>: We advocate reducing the frequency of rankings publication to every 4 years instead of the current yearly basis. The current pace of university rankings is unsustainable and unhelpful. Academic reforms typically require a long period to assess their actual effect. Indeed, the expectation that universities can reform and implement new strategies yearly to improve their performance is unrealistic. Furthermore, many indicators are unreliable because they are measured over a short period that does not appropriately reflect the (slow) dynamics of academic changes.</p><p>In conclusion, universities in developing countries should not succumb to the pressure of climbing the league table of universities at any price, sometimes through inflated metrics and unethical practices [<span>7, 10</span>]. Universities should free themselves from this detrimental reputational anxiety trap and develop a healthier model for academia that better fits their local socio-economic and political context. They must commit themselves to responsible evaluation practices focusing on equity, diversity and inclusion [<span>8</span>]. 
Beyond these rankings, universities in developing countries should instead focus on their core mission to graduate skilled citizens, foster a healthy academic environment, and create useful and sustainable knowledge.</p><p>The authors declare no conflicts of interest.</p>\",\"PeriodicalId\":14027,\"journal\":{\"name\":\"International Journal of Imaging Systems and Technology\",\"volume\":\"34 4\",\"pages\":\"\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-06-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23140\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Imaging Systems and Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ima.23140\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Imaging Systems and Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ima.23140","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0

摘要

发展中国家的高等教育机构在如何提高声誉和影响力的决策过程中,越来越依赖于大学排名[1]。此类排名计划(有些由不负责任的营利机构推广)有许多有据可查的局限性[2],例如对在不同社会经济和政治背景下运作的大学的卓越性和声誉的衡量过于主观和有失偏颇。尽管存在这些局限性,这些排名仍被当作衡量学术卓越性的关键指标加以推广[3],从而以不可持续的速度影响着高等教育格局。事实上,为了追求难以企及的高排名,新兴大学的学者们每年都会感受到迅速做出改变的压力,有时甚至会采取并不总是与长期目标相一致的短视战略[4]。一些发展中国家的大学确实发生过这样的故事:一些研究项目甚至整个系都被关闭了,因为它们所处的领域引用率低。这种对大学排名的痴迷正在伤害学术界,造成严重后果:人才受阻,收入受影响 [5]。正如联合国大学国际全球健康研究所最近所倡导的那样[6],迫切需要提高人们对大学排名对发展中国家的破坏性影响的认识。对当前大学排名计划的研究表明,整个过程在不同程度上受到许多谬误的影响:定量衡量标准至上(麦克纳马拉谬误)、将指标作为目标(古德哈特定律)、指标取代其旨在衡量的原始维度(代用)、极易滋生腐败(坎贝尔定律),包括过程操纵(博弈系统)和不正当激励(眼镜蛇效应)。因此,必须采取更加积极主动的方式,从 "排名接受者 "转变为 "排名制定者"[7],并支持更加负责任的评估过程[8]。与《旧金山研究评估宣言》(DORA)类似,本呼吁也是一种模式转变的呼吁。具体而言,我们建议采取以下措施:避免麦克纳马拉谬误:数字不是万能的。定量指标有其固有的缺陷和偏差,无法全面衡量学术声誉和卓越性等复杂因素。这些指标在不同的社会和政治背景下也缺乏有效性,而且往往会使顶尖大学通常享有的特权得到反响。因此,采用对每所大学的特殊性有意义的指标,包括根据国家或地区背景和利益定制的指标,是有意义的。重要的是,并非所有的建设都可以量化;因此,不能将无法衡量的学术成果视为不那么重要(即麦克纳马拉谬误):注意古德哈特定律:大学应避免只为迎合排名而改变使命和战略计划,忽视国家的需求和愿望。这是一个危险的陷阱,也是对大学排名不健康的痴迷所造成的最有害的后果之一。国家需求、愿望和使命应始终指导大学改革。指标是衡量绩效的尺度,不应被视为目标或指标。众所周知(古德哈特定律),每一个成为目标的衡量标准都是有缺陷的。当指标开始取代其所要衡量的利益结构时(代用现象),这个问题就会发展成更为有害的问题。例如,人们可能会错误地认为大学声誉得分实际上就是大学声誉。同样,可持续发展得分也不是可持续发展的全部,因为可持续发展与各种因素之间存在着错综复杂的相互作用:坎贝尔定律正在发挥作用:指标的数量和复杂性都在增加。坎贝尔定律正在发挥作用:指标的数量和复杂性都在不断增加,当它们侵入决策过程时,可能会使系统容易受到腐败压力的影响(坎贝尔定律),从而导致学术景观的扭曲。具体而言,有两种已知现象正不幸成为学术界的一个问题:(i) 系统博弈,即故意采取策略和做法来操纵系统,以获得理想的结果,如高研究生产率或学生满意度;(ii) 反常激励(即眼镜蛇效应),即促进卓越的激励措施会无意中奖励做出错误或不道德选择和结果的教师。
本文章由计算机程序翻译,如有差异,请以英文原文为准。

Higher education institutions in developing countries increasingly rely on university rankings when deciding how to improve their reputation and impact [1]. Such ranking schemes, some promoted by unaccountable for-profit agencies, have many well-documented limitations [2], such as overly subjective and biased measurements of excellence and reputation for universities operating in diverse socio-economic and political contexts. Despite these limitations, rankings are still promoted as critical indicators of academic excellence [3], thereby influencing the higher education landscape at an unsustainable pace. Every year, in pursuit of an elusive high rank, academics in emerging universities feel pressured to make quick changes, sometimes by espousing short-sighted strategies that do not align with long-term goals [4]. There are stories from universities in developing countries where research programmes, and even whole departments, were closed because they operated in domains with low citation dynamics. Such obsession with university rankings is hurting academia, with dire consequences: talent is deterred and income is affected [5]. This race for top spots in the university league tables has brought the worst of academia to emerging universities, for example, the publish-or-perish model and the shift towards numbers-centred rather than people-centred institutions.

As recently advocated by the United Nations University International Institute for Global Health [6], it is urgent to raise awareness of the damaging effects of university rankings in developing countries. An examination of current university ranking schemes shows that the whole process is affected, to varying degrees, by many fallacies: the supremacy of quantitative measures (the McNamara fallacy), indicators taken as goals (Goodhart's law), indicators replacing the original dimensions they aim to measure (surrogation), and high susceptibility to corruption (Campbell's law), including process manipulation (gaming the system) and perverse incentives (the cobra effect). It is thus essential to take a more proactive stance by moving from ‘ranking takers’ to ‘ranking makers’ [7] and to espouse a more responsible evaluation process [8]. By analogy with the San Francisco Declaration on Research Assessment (DORA), this call is a plea for a paradigm shift. Specifically, we recommend the following measures:

Avoiding the McNamara fallacy: Numbers are not everything. Quantitative indicators have inherent drawbacks and biases and cannot comprehensively measure complex constructs like academic reputation and excellence. These indicators also lack validity across different social and political contexts and tend to reinforce the privileges that top universities typically enjoy. It thus makes sense to adopt indicators meaningful to each university's particular circumstances, including indicators tailored to national or regional contexts and interests. Importantly, not all constructs are quantifiable; assuming that academic outcomes that cannot be measured are therefore less important is precisely the McNamara fallacy.

Be aware of Goodhart's law: Universities should refrain from changing their missions and strategic plans merely to conform to rankings while disregarding national needs and aspirations. This is a risky trap and one of the most damaging ramifications of an unhealthy obsession with university rankings. National needs, aspirations and missions should always guide university reforms. Indicators are performance measurements and should not be taken as goals or targets: as Goodhart's law states, when a measure becomes a target, it ceases to be a good measure. The issue becomes even more detrimental when indicators start replacing the construct of interest they aim to measure (the phenomenon of surrogation). For instance, people might mistakenly believe that a university's reputation score actually is its reputation. Likewise, a sustainability score does not tell the whole story of sustainability, with all its intricate interactions with diverse factors.

Campbell's law is in action: Indicators are increasing in number and complexity. When they pervade decision-making processes, they can make the system vulnerable to corruption pressures (Campbell's law), leading to a distorted academic landscape. Specifically, two known phenomena are unfortunately becoming a concern in academia: (i) gaming the system, where strategies and practices are intentionally adopted to manipulate the system towards a desired outcome, such as high research productivity or student satisfaction, and (ii) perverse incentives (i.e., the cobra effect), where incentives meant to promote excellence can unintentionally reward faculty for bad or unethical choices and results. Again, safeguards must be put in place to minimise any corruption risks to academia, including raising awareness of the inherent limitations of indicators when they are used as key inputs in the decision-making process.

Lead by example: We call on ‘elite’ universities to withdraw from current rankings, particularly those sponsored by commercial entities that do not adhere to fair and transparent assessment methods. These rankings are promoted with significant means [9], and their publishers often brag about having renowned universities on their lists, thereby lending a kind of legitimacy to their practices (what young university does not want to be listed alongside Harvard or Cambridge?). We believe such withdrawal would send a strong message that university rankings do not tell the whole story and that reputation and excellence do not need to be sanctioned by unaccountable agencies. It would also empower universities in developing countries to value (and stick to) their missions beyond these reductionist rankings.

Not too fast: We advocate reducing the frequency of ranking publications from the current yearly cycle to once every four years. The current pace of university rankings is unsustainable and unhelpful. Academic reforms typically require a long period before their actual effects can be assessed; the expectation that universities can reform and implement new strategies every year to improve their performance is unrealistic. Furthermore, many indicators are unreliable because they are measured over short periods that do not appropriately reflect the (slow) dynamics of academic change.

In conclusion, universities in developing countries should not succumb to the pressure of climbing the university league tables at any price, sometimes through inflated metrics and unethical practices [7, 10]. Universities should free themselves from this detrimental reputational anxiety trap and develop a healthier model for academia that better fits their local socio-economic and political contexts. They must commit themselves to responsible evaluation practices focused on equity, diversity and inclusion [8]. Beyond these rankings, universities in developing countries should instead focus on their core mission: graduating skilled citizens, fostering a healthy academic environment, and creating useful and sustainable knowledge.

The authors declare no conflicts of interest.
