{"title":"Principal Component Analysis","authors":"Xuan Chen","doi":"10.1142/9781786349378_0007","DOIUrl":"https://doi.org/10.1142/9781786349378_0007","url":null,"abstract":"Xuanye Chen Introduction Principal component analysis was first introduced by Karl Pearson for non-random variables, and then H. Hotelling extended this method to the case of random vectors. Principal component analysis (PCA) is a technique for reducing dimensionality, increasing interpretability, and at the same time minimizing information loss. Definition Principal Component Analysis (PCA) is a statistical method. Through orthogonal transformation, a group of variables that may be correlated is transformed into a group of linearly uncorrelated variables, which are called principal components. To be specific, it transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. The Calculation of PCA F1 is used to represent the first linear combination selected, that is, the first comprehensive indicator. The larger the Var ( F1 ) is, the more information F1 contains. Therefore, F1 selected among all linear combinations has the largest variance, so F1 is called the first principal component. If the first principal component is not enough to represent the information of the original P indicators, then F2 is selected, that is, the second linear combination. In order to effectively reflect the original information, the existing information of F1 does not need to appear in F2. In other words, Cov(F1, F2) = 0, and F2 is called the second principal component. And so on, we can construct 3rd, 4th, ... , Pth principal component. Fp = a1i*ZX1 + a2i*ZX2 + ...... + api*ZXp","PeriodicalId":402819,"journal":{"name":"Advanced Textbooks in Mathematics","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122097267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supervised Learning","authors":"Wilhelm Kirchgässner","doi":"10.1142/9781786349378_0002","DOIUrl":"https://doi.org/10.1142/9781786349378_0002","url":null,"abstract":"Urinary Incontinence affects over 200 million people worldwide, severely impacting the quality of life of individuals. Bladder state detection technology has the potential to improve the lives of people with urinary incontinence by alerting the user before voiding occurs. To this end, the objective of this study is to investigate the feasibility of using supervised machine learning classifiers to determine the bladder state of ‘full’ or ‘not full’ from electrical impedance measurements. Electrical impedance data was obtained from computational models and a realistic experimental pelvic phantom. Multiple datasets with increasing complexity were formed for varying noise levels in simulation. 10-Fold testing was performed on each dataset to classify ‘full’ and ‘not full’ bladder states, including phantom measurement data. Support vector machines and k-Nearest-Neighbours classifiers were compared in terms of accuracy, sensitivity, and specificity. The minimum and maximum accuracies across all datasets were 73.16% and 100%, respectively. Factors that contributed most to misclassification were the noise level and bladder volumes near the threshold of ‘full’ or ‘not full’. This paper represents the first study to use machine learning for bladder state detection with electrical impedance measurements . The results show promise for impedance-based bladder state detection to support those living with urinary incontinence.","PeriodicalId":402819,"journal":{"name":"Advanced Textbooks in Mathematics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129579644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Overview of Machine Learning and Financial Applications","authors":"","doi":"10.1142/9781786349378_0001","DOIUrl":"https://doi.org/10.1142/9781786349378_0001","url":null,"abstract":"","PeriodicalId":402819,"journal":{"name":"Advanced Textbooks in Mathematics","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115138654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Case Study in Finance: Home Credit Default Risk","authors":"","doi":"10.1142/9781786349378_0009","DOIUrl":"https://doi.org/10.1142/9781786349378_0009","url":null,"abstract":"","PeriodicalId":402819,"journal":{"name":"Advanced Textbooks in Mathematics","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115315082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tree-based Models","authors":"H. Schreuder","doi":"10.1142/9781786349378_0004","DOIUrl":"https://doi.org/10.1142/9781786349378_0004","url":null,"abstract":"Gadbury, G.L.; lyer, H.K.; Schreuder, H.T.; and Ueng, C.Y. A nonparametric analysis of plot basal area growth using tree based models. Res. Pap. RMRS-RP-2. Fort Collins, CO: U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station. 14 p.","PeriodicalId":402819,"journal":{"name":"Advanced Textbooks in Mathematics","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125034038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Linear Regression and Regularization","authors":"","doi":"10.1142/9781786349378_0003","DOIUrl":"https://doi.org/10.1142/9781786349378_0003","url":null,"abstract":"","PeriodicalId":402819,"journal":{"name":"Advanced Textbooks in Mathematics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129993675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}