{"title":"关于回归入门课程中最小二乘估计的推导","authors":"M. Inlow","doi":"10.37119/jpss2022.v20i1.506","DOIUrl":null,"url":null,"abstract":"Introductory regression books typically begin their derivation of the least squares matrix estimation formula by considering the simple linear regression model. We suggest beginning with the zero-intercept model which has advantages. We provide two examples of this approach, one of which is a new, non-calculus derivation using the Cauchy-Schwarz inequality. \n ","PeriodicalId":161562,"journal":{"name":"Journal of Probability and Statistical Science","volume":"115 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On Deriving the Least Squares Estimates in Introductory Regression Courses\",\"authors\":\"M. Inlow\",\"doi\":\"10.37119/jpss2022.v20i1.506\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Introductory regression books typically begin their derivation of the least squares matrix estimation formula by considering the simple linear regression model. We suggest beginning with the zero-intercept model which has advantages. We provide two examples of this approach, one of which is a new, non-calculus derivation using the Cauchy-Schwarz inequality. \\n \",\"PeriodicalId\":161562,\"journal\":{\"name\":\"Journal of Probability and Statistical Science\",\"volume\":\"115 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Probability and Statistical Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.37119/jpss2022.v20i1.506\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Probability and Statistical Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.37119/jpss2022.v20i1.506","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
On Deriving the Least Squares Estimates in Introductory Regression Courses
Introductory regression books typically begin their derivation of the least squares matrix estimation formula with the simple linear regression model. We suggest beginning instead with the zero-intercept model, which has pedagogical advantages. We provide two examples of this approach, one of which is a new, non-calculus derivation using the Cauchy-Schwarz inequality.
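To make the abstract's approach concrete, the following is a sketch of one possible Cauchy-Schwarz, non-calculus argument for the zero-intercept model. It is an illustrative reconstruction, not necessarily the derivation given in the paper itself.

For the model \( y = X\beta + \varepsilon \) with \( X \) of full column rank, set \( \hat{\beta} = (X'X)^{-1}X'y \) and \( \hat{y} = X\hat{\beta} \). A purely algebraic check shows the residual is orthogonal to the column space of \( X \):
\[
X'(y - \hat{y}) = X'y - X'X(X'X)^{-1}X'y = 0 .
\]
Hence \( y'Xb = \hat{y}'Xb \) for any candidate \( b \), and the Cauchy-Schwarz inequality gives
\[
y'Xb = \hat{y}'Xb \le \|\hat{y}\|\,\|Xb\| .
\]
Expanding the residual sum of squares and applying this bound,
\[
\|y - Xb\|^2 = \|y\|^2 - 2\,y'Xb + \|Xb\|^2
\ge \|y\|^2 - 2\|\hat{y}\|\,\|Xb\| + \|Xb\|^2
= \|y - \hat{y}\|^2 + \bigl(\|Xb\| - \|\hat{y}\|\bigr)^2
\ge \|y - \hat{y}\|^2 ,
\]
using \( \|y - \hat{y}\|^2 = \|y\|^2 - \|\hat{y}\|^2 \), which follows from \( y'\hat{y} = \|\hat{y}\|^2 \). Equality holds throughout exactly when \( Xb = \hat{y} \), i.e. \( b = \hat{\beta} \), so \( \hat{\beta} \) minimizes the residual sum of squares with no calculus required. In the simple zero-intercept case \( y_i = \beta x_i + \varepsilon_i \) this reduces to \( \hat{\beta} = \sum x_i y_i / \sum x_i^2 \). Starting from the zero-intercept model keeps the argument to a single orthogonality check, which is presumably the kind of advantage the abstract alludes to.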