R. Kosar and D. W. Scott, “Examining the Carnegie Classification Methodology for Research Universities,” Statistics and Public Policy, vol. 5, no. 1, pp. 1–12, 2018. DOI: 10.1080/2330443X.2018.1442271
Examining the Carnegie Classification Methodology for Research Universities
ABSTRACT University ranking is a popular yet controversial endeavor. Most rankings are based on both public data, such as student test scores and retention rates, and proprietary data, such as school reputation as perceived by high school counselors and academic peers. The weights applied to these characteristics to compute the rankings are often determined in a subjective fashion. Of significant importance in the academic field, the Carnegie Classification was developed by the Carnegie Foundation for the Advancement of Teaching. It has been updated approximately every five years since 1973, most recently in February 2016. Based on bivariate scores, Carnegie assigns doctorate-granting universities to one of three classes (R1/R2/R3) according to their level of research activity. The Carnegie methodology uses only publicly available data and determines weights via principal component analysis. In this article, we review Carnegie’s stated goals and the extent to which its methodology achieves them. In particular, we examine Carnegie’s separation of aggregate and per capita (per tenured/tenure-track faculty member) variables and its use of two separate principal component analyses on each; the resulting bivariate scores are very highly correlated. We propose and evaluate two alternatives and provide a graphical tool for evaluating and comparing the three scenarios.
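The weighting scheme the abstract describes — standardizing a group of research-activity indicators and using the first principal component's loadings as the weights, run separately on the aggregate and per-capita variable groups — can be sketched as follows. This is a minimal illustration of the general technique, not Carnegie's actual variable list or data; the indicator names and toy numbers here are hypothetical.

```python
import numpy as np

def first_pc_score(X):
    """Score each row of X by projecting standardized indicators
    onto the first principal component (PCA-derived weights).

    X: (n_universities, n_variables) matrix of research indicators.
    Returns one scalar score per university.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable
    # Principal directions via SVD of the standardized data matrix
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    pc1 = Vt[0]                                 # loadings on the first PC
    if pc1.sum() < 0:                           # fix sign so more activity -> higher score
        pc1 = -pc1
    return Z @ pc1

# Hypothetical data: three aggregate indicators (e.g., R&D expenditure,
# doctorates awarded, research staff) for six universities, plus
# per-capita versions obtained by dividing by a faculty headcount.
rng = np.random.default_rng(0)
aggregate = rng.lognormal(mean=3.0, sigma=1.0, size=(6, 3))
per_capita = aggregate / rng.uniform(50, 500, size=(6, 1))

# Carnegie-style: two separate PCAs, one per variable group,
# yielding the bivariate (aggregate, per-capita) score pair.
agg_score = first_pc_score(aggregate)
cap_score = first_pc_score(per_capita)
print(np.round(agg_score, 2))
print(np.round(cap_score, 2))
```

Because each score is a linear combination of standardized (zero-mean) columns, the scores themselves are centered at zero; classification thresholds would then be drawn on the resulting two-dimensional score plot.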