{"title":"大学排名正在损害发展中国家的学术界:紧急行动呼吁","authors":"Mohamed L. Seghier, Habib Zaidi","doi":"10.1002/ima.23140","DOIUrl":null,"url":null,"abstract":"<p>Higher education institutions in developing countries are increasingly relying on university rankings in the decision-making process on how to improve reputation and impact [<span>1</span>]. Such ranking schemes, some being promoted by unaccountable for-profit agencies, have many well-documented limitations [<span>2</span>], such as the overly subjective and biased measurement of excellence and reputation for universities operating in diverse socio-economic and political contexts. Despite these limitations, these rankings are still being promoted as critical indicators of academic excellence [<span>3</span>], thereby influencing the higher education landscape at an unsustainable pace. Indeed, every year, in pursuing an elusive high rank, academics in emerging universities feel the pressure to make quick changes, sometimes by espousing short-sighted strategies that do not always align with long-term goals [<span>4</span>]. There are indeed stories from some universities in developing countries where research programmes and even whole departments were closed because they operated within domains with low citation dynamics. Such obsession with university rankings is hurting academia with dear consequences: talent deterred and income affected [<span>5</span>]. This race for top spots in the league table of universities has brought the worst of academia to emerging universities, for example, the publish-and-perish model and the conversion to numbers-centred instead of people-centred institutions.</p><p>As recently advocated by the United Nations University International Institute for Global Health [<span>6</span>], it is urgent to raise awareness about the damaging effects of university rankings in developing countries. An examination of current university rankings schemes shows that the whole process is affected by many fallacies at different degrees: the supremacy of quantitative measures (the McNamara fallacy), indicators taken as goals (Goodhart's law), indicators replacing the original dimensions they aim to measure (surrogation), and high susceptibility to corruption (Campbell's law) including process manipulation (gaming the system) and perverse incentives (the cobra effect). It is thus essential to take a more proactive instance by moving from ‘ranking takers’ to ‘ranking makers’ [<span>7</span>] and espouse a more responsible evaluation process [<span>8</span>]. By analogy to the San Francisco Declaration on Research Assessment (DORA), this call is a plea for a paradigm shift. Specifically, here we recommend the following measures:</p><p><i>Avoiding the McNamara fallacy</i>: Numbers are not everything. Quantitative indicators have inherent drawbacks and biases and cannot comprehensively measure complex constructs like academic reputation and excellence. These indicators also lack validity across different social and political contexts and tend to reverberate the same privileges top universities typically enjoy. It thus makes sense to adopt indicators meaningful to each university's particularities, including indicators tailored to national or regional contexts and interests. 
Importantly, not all constructs are quantifiable; hence, academic outcomes that cannot be measured should not be considered less important (i.e., the McNamara fallacy).</p><p><i>Be aware of Goodhart's law</i>: Universities should refrain from changing their mission and strategic plans to only conform to rankings while disregarding national needs and aspirations. This is a risky trap and one of the most damaging ramifications of an unhealthy obsession with university rankings. National needs, aspirations and missions should always guide university reforms. Indicators are performance measurements and should not be taken as goals or targets. It is well known (Goodhart's law) that every measure that becomes a goal becomes a flawed measure. This issue can develop into something more detrimental when indicators start replacing the construct of interest they aim to measure (the phenomenon of surrogation). For instance, people might falsely believe that an university reputation score actually is university reputation. Likewise, a sustainability score is not the whole story of sustainability with all its intricate interactions with diverse factors.</p><p><i>Campbell's law is in action</i>: Indicators are increasing in numbers and complexity. When they invade the decision-making processes, they might make the system vulnerable to corruption pressures (Campbell's law), thus leading to a distorted academic landscape. Specifically, two known phenomena are unfortunately becoming a concern in academia: (i) gaming the system, where strategies and practices are intentionally adopted to manipulate the system for a desired outcome such as high research productivity or student satisfaction, and (ii) perverse incentives (i.e., the cobra effect) where incentives for promoting excellence can unintentionally reward faculty for making bad or unethical choices and results. Again, safeguards must be put in place to minimise any corruption risks to academia, including raising awareness about the inherent limitations to indicators when they are used as key indicators in the decision-making process.</p><p><i>Lead by example</i>: We call ‘elite’ universities to withdraw from current rankings, particularly those sponsored by commercial entities that do not adhere to fair and transparent assessment methods. These rankings are promoted by significant means [<span>9</span>], and they usually brag about having renowned universities on their lists, hence bringing a kind of legitimacy to their practices (which young university does not want to be listed alongside Harvard or Cambridge University?). We believe this would send a strong message that university rankings do not tell the whole story and that reputation and excellence do not need to be sanctioned by unaccountable agencies. It would also empower universities in developing countries to value (and stick to) their mission beyond these reductionist rankings.</p><p><i>Not too fast</i>: We advocate reducing the frequency of rankings publication to every 4 years instead of the current yearly basis. The current pace of university rankings is unsustainable and unhelpful. Academic reforms typically require a long period to assess their actual effect. Indeed, the expectation that universities can reform and implement new strategies yearly to improve their performance is unrealistic. 
Furthermore, many indicators are unreliable because they are measured over a short period that does not appropriately reflect the (slow) dynamics of academic changes.</p><p>In conclusion, universities in developing countries should not succumb to the pressure of climbing the league table of universities at any price, sometimes through inflated metrics and unethical practices [<span>7, 10</span>]. Universities should free themselves from this detrimental reputational anxiety trap and develop a healthier model for academia that better fits their local socio-economic and political context. They must commit themselves to responsible evaluation practices focusing on equity, diversity and inclusion [<span>8</span>]. Beyond these rankings, universities in developing countries should instead focus on their core mission to graduate skilled citizens, foster a healthy academic environment, and create useful and sustainable knowledge.</p><p>The authors declare no conflicts of interest.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23140","citationCount":"0","resultStr":"{\"title\":\"University Rankings Are Hurting Academia in Developing Countries: An Urgent Call to Action\",\"authors\":\"Mohamed L. Seghier, Habib Zaidi\",\"doi\":\"10.1002/ima.23140\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Higher education institutions in developing countries are increasingly relying on university rankings in the decision-making process on how to improve reputation and impact [<span>1</span>]. Such ranking schemes, some being promoted by unaccountable for-profit agencies, have many well-documented limitations [<span>2</span>], such as the overly subjective and biased measurement of excellence and reputation for universities operating in diverse socio-economic and political contexts. Despite these limitations, these rankings are still being promoted as critical indicators of academic excellence [<span>3</span>], thereby influencing the higher education landscape at an unsustainable pace. Indeed, every year, in pursuing an elusive high rank, academics in emerging universities feel the pressure to make quick changes, sometimes by espousing short-sighted strategies that do not always align with long-term goals [<span>4</span>]. There are indeed stories from some universities in developing countries where research programmes and even whole departments were closed because they operated within domains with low citation dynamics. Such obsession with university rankings is hurting academia with dear consequences: talent deterred and income affected [<span>5</span>]. This race for top spots in the league table of universities has brought the worst of academia to emerging universities, for example, the publish-and-perish model and the conversion to numbers-centred instead of people-centred institutions.</p><p>As recently advocated by the United Nations University International Institute for Global Health [<span>6</span>], it is urgent to raise awareness about the damaging effects of university rankings in developing countries. 
An examination of current university rankings schemes shows that the whole process is affected by many fallacies at different degrees: the supremacy of quantitative measures (the McNamara fallacy), indicators taken as goals (Goodhart's law), indicators replacing the original dimensions they aim to measure (surrogation), and high susceptibility to corruption (Campbell's law) including process manipulation (gaming the system) and perverse incentives (the cobra effect). It is thus essential to take a more proactive instance by moving from ‘ranking takers’ to ‘ranking makers’ [<span>7</span>] and espouse a more responsible evaluation process [<span>8</span>]. By analogy to the San Francisco Declaration on Research Assessment (DORA), this call is a plea for a paradigm shift. Specifically, here we recommend the following measures:</p><p><i>Avoiding the McNamara fallacy</i>: Numbers are not everything. Quantitative indicators have inherent drawbacks and biases and cannot comprehensively measure complex constructs like academic reputation and excellence. These indicators also lack validity across different social and political contexts and tend to reverberate the same privileges top universities typically enjoy. It thus makes sense to adopt indicators meaningful to each university's particularities, including indicators tailored to national or regional contexts and interests. Importantly, not all constructs are quantifiable; hence, academic outcomes that cannot be measured should not be considered less important (i.e., the McNamara fallacy).</p><p><i>Be aware of Goodhart's law</i>: Universities should refrain from changing their mission and strategic plans to only conform to rankings while disregarding national needs and aspirations. This is a risky trap and one of the most damaging ramifications of an unhealthy obsession with university rankings. National needs, aspirations and missions should always guide university reforms. Indicators are performance measurements and should not be taken as goals or targets. It is well known (Goodhart's law) that every measure that becomes a goal becomes a flawed measure. This issue can develop into something more detrimental when indicators start replacing the construct of interest they aim to measure (the phenomenon of surrogation). For instance, people might falsely believe that an university reputation score actually is university reputation. Likewise, a sustainability score is not the whole story of sustainability with all its intricate interactions with diverse factors.</p><p><i>Campbell's law is in action</i>: Indicators are increasing in numbers and complexity. When they invade the decision-making processes, they might make the system vulnerable to corruption pressures (Campbell's law), thus leading to a distorted academic landscape. Specifically, two known phenomena are unfortunately becoming a concern in academia: (i) gaming the system, where strategies and practices are intentionally adopted to manipulate the system for a desired outcome such as high research productivity or student satisfaction, and (ii) perverse incentives (i.e., the cobra effect) where incentives for promoting excellence can unintentionally reward faculty for making bad or unethical choices and results. 
Again, safeguards must be put in place to minimise any corruption risks to academia, including raising awareness about the inherent limitations to indicators when they are used as key indicators in the decision-making process.</p><p><i>Lead by example</i>: We call ‘elite’ universities to withdraw from current rankings, particularly those sponsored by commercial entities that do not adhere to fair and transparent assessment methods. These rankings are promoted by significant means [<span>9</span>], and they usually brag about having renowned universities on their lists, hence bringing a kind of legitimacy to their practices (which young university does not want to be listed alongside Harvard or Cambridge University?). We believe this would send a strong message that university rankings do not tell the whole story and that reputation and excellence do not need to be sanctioned by unaccountable agencies. It would also empower universities in developing countries to value (and stick to) their mission beyond these reductionist rankings.</p><p><i>Not too fast</i>: We advocate reducing the frequency of rankings publication to every 4 years instead of the current yearly basis. The current pace of university rankings is unsustainable and unhelpful. Academic reforms typically require a long period to assess their actual effect. Indeed, the expectation that universities can reform and implement new strategies yearly to improve their performance is unrealistic. Furthermore, many indicators are unreliable because they are measured over a short period that does not appropriately reflect the (slow) dynamics of academic changes.</p><p>In conclusion, universities in developing countries should not succumb to the pressure of climbing the league table of universities at any price, sometimes through inflated metrics and unethical practices [<span>7, 10</span>]. Universities should free themselves from this detrimental reputational anxiety trap and develop a healthier model for academia that better fits their local socio-economic and political context. They must commit themselves to responsible evaluation practices focusing on equity, diversity and inclusion [<span>8</span>]. 
Beyond these rankings, universities in developing countries should instead focus on their core mission to graduate skilled citizens, foster a healthy academic environment, and create useful and sustainable knowledge.</p><p>The authors declare no conflicts of interest.</p>\",\"PeriodicalId\":14027,\"journal\":{\"name\":\"International Journal of Imaging Systems and Technology\",\"volume\":\"34 4\",\"pages\":\"\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-06-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23140\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Imaging Systems and Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ima.23140\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Imaging Systems and Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ima.23140","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
University Rankings Are Hurting Academia in Developing Countries: An Urgent Call to Action
Higher education institutions in developing countries increasingly rely on university rankings when deciding how to improve their reputation and impact [1]. Such ranking schemes, some of them promoted by unaccountable for-profit agencies, have many well-documented limitations [2], such as overly subjective and biased measurements of excellence and reputation for universities operating in diverse socio-economic and political contexts. Despite these limitations, the rankings are still promoted as critical indicators of academic excellence [3], thereby influencing the higher education landscape at an unsustainable pace. Indeed, every year, in pursuit of an elusive high rank, academics at emerging universities feel pressured to make quick changes, sometimes by adopting short-sighted strategies that do not align with long-term goals [4]. There are indeed reports from some universities in developing countries where research programmes, and even whole departments, were closed because they operated in domains with low citation dynamics. This obsession with university rankings is hurting academia, with costly consequences: talent is deterred and income is affected [5]. The race for top spots in the university league tables has brought the worst of academia to emerging universities, for example, the publish-or-perish model and the conversion to numbers-centred rather than people-centred institutions.
As recently advocated by the United Nations University International Institute for Global Health [6], it is urgent to raise awareness about the damaging effects of university rankings in developing countries. An examination of current university ranking schemes shows that the whole process is affected, to different degrees, by many fallacies: the supremacy of quantitative measures (the McNamara fallacy), indicators taken as goals (Goodhart's law), indicators replacing the original dimensions they aim to measure (surrogation), and high susceptibility to corruption (Campbell's law), including process manipulation (gaming the system) and perverse incentives (the cobra effect). It is thus essential to take a more proactive stance by moving from ‘ranking takers’ to ‘ranking makers’ [7] and to espouse a more responsible evaluation process [8]. By analogy with the San Francisco Declaration on Research Assessment (DORA), this call is a plea for a paradigm shift. Specifically, we recommend the following measures:
Avoiding the McNamara fallacy: Numbers are not everything. Quantitative indicators have inherent drawbacks and biases and cannot comprehensively measure complex constructs like academic reputation and excellence. These indicators also lack validity across different social and political contexts and tend to reproduce the very privileges that top universities typically enjoy. It thus makes sense to adopt indicators meaningful to each university's particular circumstances, including indicators tailored to national or regional contexts and interests. Importantly, not all constructs are quantifiable; hence, academic outcomes that cannot be measured should not be considered less important (i.e., the McNamara fallacy).
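To make the subjectivity of composite scores concrete, here is a minimal, hypothetical sketch (the universities, indicator values and weight sets are all invented for illustration; it reproduces no real ranking methodology) showing how the same institutions can trade places under two equally defensible weighting schemes:

```python
# Each (fictional) university is scored on three normalised indicators (0-100).
universities = {
    "University A": {"citations": 90, "reputation": 55, "intl_outlook": 40},
    "University B": {"citations": 60, "reputation": 82, "intl_outlook": 70},
    "University C": {"citations": 75, "reputation": 65, "intl_outlook": 85},
}

# Two arbitrary but plausible weighting schemes: one citation-heavy,
# one reputation-heavy. Neither is more "correct" than the other.
weight_sets = {
    "citation-heavy": {"citations": 0.6, "reputation": 0.3, "intl_outlook": 0.1},
    "reputation-heavy": {"citations": 0.2, "reputation": 0.5, "intl_outlook": 0.3},
}

for name, weights in weight_sets.items():
    # Composite score = weighted sum of the indicators.
    scores = {
        uni: sum(weights[k] * vals[k] for k in weights)
        for uni, vals in universities.items()
    }
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(f"{name} weights -> ranking: {ranking}")
```

Under the citation-heavy weights the order is A, C, B; under the reputation-heavy weights it is B, C, A. Nothing about the institutions changed, only the arbitrary choice of weights, which is precisely why a single league-table position should not be read as an objective measure of excellence.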
Be aware of Goodhart's law: Universities should refrain from changing their missions and strategic plans merely to conform to rankings while disregarding national needs and aspirations. This is a risky trap and one of the most damaging ramifications of an unhealthy obsession with university rankings. National needs, aspirations and missions should always guide university reforms. Indicators are performance measurements and should not be taken as goals or targets. It is well known (Goodhart's law) that when a measure becomes a target, it ceases to be a good measure. This issue becomes even more detrimental when indicators start replacing the constructs they aim to measure (the phenomenon of surrogation). For instance, people might falsely believe that a university's reputation score actually is its reputation. Likewise, a sustainability score does not capture the whole story of sustainability, with all its intricate interactions with diverse factors.
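As a toy illustration of Goodhart's law (all distributions and parameters below are invented), the following simulation tracks the correlation between a latent ‘true quality’ and a citation-style metric before and after the metric becomes a target. Once institutions pour effort into inflating the metric directly, it decouples from the quality it was supposed to proxy:

```python
import random

random.seed(42)

def correlation(xs, ys):
    # Pearson correlation, computed from scratch for self-containment.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n_universities = 200
# Latent "true quality": unobservable, here drawn from a normal distribution.
quality = [random.gauss(50, 10) for _ in range(n_universities)]

# Before targeting: the metric is quality plus ordinary measurement noise.
metric_before = [q + random.gauss(0, 5) for q in quality]

# After the metric becomes the goal: institutions add "gaming effort"
# (e.g., citation cartels, selective reporting) unrelated to quality.
gaming = [abs(random.gauss(0, 15)) for _ in range(n_universities)]
metric_after = [m + g for m, g in zip(metric_before, gaming)]

print(f"corr(quality, metric) before targeting: {correlation(quality, metric_before):.2f}")
print(f"corr(quality, metric) after targeting:  {correlation(quality, metric_after):.2f}")
```

The correlation drops noticeably once gaming effort enters the metric, even though no university's underlying quality changed; a decision-maker who reads the inflated metric as the construct itself (surrogation) is then systematically misled.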
Campbell's law is in action: Indicators are increasing in number and complexity. When they pervade decision-making processes, they can make the system vulnerable to corruption pressures (Campbell's law), leading to a distorted academic landscape. Specifically, two known phenomena are unfortunately becoming a concern in academia: (i) gaming the system, where strategies and practices are intentionally adopted to manipulate the system for a desired outcome, such as high research productivity or student satisfaction, and (ii) perverse incentives (i.e., the cobra effect), where incentives intended to promote excellence can unintentionally reward faculty for bad or unethical choices. Again, safeguards must be put in place to minimise any corruption risks to academia, including raising awareness of the inherent limitations of indicators when they are used as key inputs in the decision-making process.
Lead by example: We call on ‘elite’ universities to withdraw from current rankings, particularly those sponsored by commercial entities that do not adhere to fair and transparent assessment methods. These rankings are promoted with significant resources [9], and they usually brag about having renowned universities on their lists, which lends a kind of legitimacy to their practices (what young university does not want to be listed alongside Harvard or Cambridge?). We believe this would send a strong message that university rankings do not tell the whole story and that reputation and excellence do not need to be sanctioned by unaccountable agencies. It would also empower universities in developing countries to value (and stick to) their missions beyond these reductionist rankings.
Not too fast: We advocate reducing the frequency of ranking publications from the current annual cycle to once every four years. The current pace of university rankings is unsustainable and unhelpful. Academic reforms typically require a long period before their actual effects can be assessed. Indeed, the expectation that universities can reform and implement new strategies every year to improve their performance is unrealistic. Furthermore, many indicators are unreliable because they are measured over a short period that does not appropriately reflect the (slow) dynamics of academic change.
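A small simulation (again with invented numbers) illustrates why short measurement windows make rankings noisy: if each university's underlying quality is held constant and yearly scores differ only by measurement noise, year-on-year rankings reshuffle substantially, while rankings based on four-year averages are far more stable:

```python
import random

random.seed(1)

n_unis, n_years = 50, 8
# Fixed "true" quality for each university: nothing changes over the period.
quality = sorted(random.gauss(50, 10) for _ in range(n_unis))

def rank_by(scores):
    # Convert scores into ranks (1 = best).
    order = sorted(range(n_unis), key=lambda i: scores[i], reverse=True)
    ranks = [0] * n_unis
    for pos, i in enumerate(order):
        ranks[i] = pos + 1
    return ranks

# Yearly observed score = constant quality + measurement noise.
yearly_scores = [[q + random.gauss(0, 8) for q in quality] for _ in range(n_years)]

# Average rank change between consecutive yearly rankings.
yearly_shift = 0.0
for y in range(1, n_years):
    r_prev, r_curr = rank_by(yearly_scores[y - 1]), rank_by(yearly_scores[y])
    yearly_shift += sum(abs(a - b) for a, b in zip(r_prev, r_curr)) / n_unis
yearly_shift /= n_years - 1

# Rank change between two consecutive four-year averages.
avg1 = [sum(yearly_scores[y][i] for y in range(4)) / 4 for i in range(n_unis)]
avg2 = [sum(yearly_scores[y][i] for y in range(4, 8)) / 4 for i in range(n_unis)]
fouryear_shift = sum(abs(a - b) for a, b in zip(rank_by(avg1), rank_by(avg2))) / n_unis

print(f"mean yearly rank shift:    {yearly_shift:.1f} places")
print(f"mean four-year rank shift: {fouryear_shift:.1f} places")
```

None of the simulated universities improved or declined, yet the yearly league table churns; averaging over a four-year window suppresses much of that noise, which is the statistical case for the slower publication cycle advocated here.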
In conclusion, universities in developing countries should not succumb to the pressure to climb the university league tables at any price, sometimes through inflated metrics and unethical practices [7, 10]. Universities should free themselves from this detrimental reputational-anxiety trap and develop a healthier model for academia that better fits their local socio-economic and political contexts. They must commit themselves to responsible evaluation practices focused on equity, diversity and inclusion [8]. Beyond these rankings, universities in developing countries should instead focus on their core mission: graduating skilled citizens, fostering a healthy academic environment, and creating useful and sustainable knowledge.

The authors declare no conflicts of interest.
Journal Introduction:
The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals.
IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications relevant to medical research. The journal provides a platform to publish original research in structural and functional imaging.
The journal is also open to imaging studies of the human body and of animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, and replication studies, as well as negative results, are also considered.
The scope of the journal includes, but is not limited to, the following in the context of biomedical research:
Imaging and neuro-imaging modalities: structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS etc.;
Neuromodulation and brain stimulation techniques such as TMS and tDCS;
Software and hardware for imaging, especially related to human and animal health;
Image segmentation in normal and clinical populations;
Pattern analysis and classification using machine learning techniques;
Computational modeling and analysis;
Brain connectivity and connectomics;
Systems-level characterization of brain function;
Neural networks and neurorobotics;
Computer vision based on human/animal physiology;
Brain-computer interface (BCI) technology;
Big data, databasing and data mining.