Factor analysis of the information field of the neuroendocrine-immune complex and metabolism in female rats

Y. Zavidnyuk, O. Mel’nyk, O. Mysakovets
{"title":"Factor analysis of the information field of the neuroendocrine-immune complex and metabolism in female rats","authors":"Y. Zavidnyuk, O. Mel’nyk, O. Mysakovets","doi":"10.25040/ecpb2019.03.012","DOIUrl":null,"url":null,"abstract":"Introduction. Despite considerable informativeness, factor analysis in biomedical research is still rarely used. Therefore, we set out to introduce our colleagues to the theoretical foundations of factor analysis and to demonstrate its application in our own material. According to the theory of factor analysis [1], it is considered that the observed parameters (variables) are a linear combination of some latent (hypothetical, unobservable) factors. In other words, the factors are hypothetical, not directly measured, hidden variables, in terms of which the measured variables are described. Some of the factors are assumed to be common to two or more variables, while others are specific to each parameter. Characteristic (unique) factors are orthogonal to one another, that is, they do not contribute to the covariance between the variables. In other words, only common factors that are much smaller than the number of variables contribute to the covariance between them. The latent factor structure can be accurately identified by examining the resulting covariance matrix. In practice, it is impossible to obtain the exact structure of the factor model, only estimates of the parameters of the factor structure can be found. Therefore, on the principle of postulate of parsimony, adopt a model with a minimum number of common factors. One of the methods of factor analysis is the analysis of principal components. Principal components (PCs) are linear combinations of observed variables that have orthogonality properties, that is, natural orthogonal functions. Thus, PCs are opposite to common factors, since the latter are hypothetical and are not expressed through a combination of variables, whereas PCs are linear functions of the observed variables. The essence of the PCs method lies in the linear transformation and condensation of the original information. On the basis of correlation matrices, a system of orthogonal, linearly independent functions, nominated by eigenvectors, corresponding to a system of independent random variables nominated by eigenvalues of the correlation matrix (λ) is determined. The first few eigenvalues of the correlation matrix exhaust the bulk of the total field variance, so special attention is given to the first eigenvalues and their corresponding components when analyzing the decomposition results. And since large-scale processes, which are functional systems of the body, are characterized","PeriodicalId":12101,"journal":{"name":"Experimental and Clinical Physiology and Biochemistry","volume":"37 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2019-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Experimental and Clinical Physiology and Biochemistry","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.25040/ecpb2019.03.012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Introduction. Despite its considerable informativeness, factor analysis is still rarely used in biomedical research. We therefore set out to introduce our colleagues to the theoretical foundations of factor analysis and to demonstrate its application on our own material. According to the theory of factor analysis [1], the observed parameters (variables) are regarded as linear combinations of a number of latent (hypothetical, unobservable) factors. In other words, the factors are hypothetical, hidden variables that are not measured directly but in terms of which the measured variables are described. Some of the factors are assumed to be common to two or more variables, while others are specific to each parameter. The characteristic (unique) factors are orthogonal to one another, that is, they do not contribute to the covariance between the variables. In other words, only the common factors, whose number is much smaller than the number of variables, contribute to the covariance between them. The latent factor structure can be accurately identified by examining the resulting covariance matrix. In practice, the exact structure of the factor model cannot be obtained; only estimates of the parameters of the factor structure can be found. Therefore, following the postulate of parsimony, a model with the minimum number of common factors is adopted.

One of the methods of factor analysis is principal component analysis. Principal components (PCs) are linear combinations of the observed variables that have the orthogonality property, i.e. they are natural orthogonal functions. In this sense PCs are the opposite of common factors: the latter are hypothetical and are not expressed as combinations of the variables, whereas PCs are linear functions of the observed variables. The essence of the PC method lies in a linear transformation and condensation of the original information. On the basis of the correlation matrix, a system of orthogonal, linearly independent functions defined by the eigenvectors is determined, corresponding to a system of independent random variables defined by the eigenvalues of the correlation matrix (λ). The first few eigenvalues of the correlation matrix exhaust the bulk of the total field variance, so when analyzing the decomposition results special attention is given to the first eigenvalues and their corresponding components. And since large-scale processes, which are functional systems of the body, are characterized …
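As a concrete illustration of the principal-component computation described above, the following minimal sketch (Python/NumPy; the simulated data and variable names are hypothetical and not taken from the study) extracts the eigenvalues and eigenvectors of a correlation matrix and reports how much of the total variance the first components exhaust:

```python
import numpy as np

# Simulated stand-in data: n observations of p measured variables
# (hypothetical; not the authors' neuroendocrine-immune or metabolic parameters).
rng = np.random.default_rng(0)
n, p = 50, 6
X = rng.standard_normal((n, p))
X[:, 1] += 0.8 * X[:, 0]   # induce correlation so that common variance exists
X[:, 2] += 0.5 * X[:, 0]

# 1. Standardize the variables and form the correlation matrix R.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
R = np.corrcoef(X, rowvar=False)

# 2. Eigendecomposition of R: eigenvalues (lambda) and orthogonal eigenvectors.
eigvals, eigvecs = np.linalg.eigh(R)      # eigh is appropriate: R is symmetric
order = np.argsort(eigvals)[::-1]         # sort in descending order of variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 3. Fraction of the total variance "exhausted" by each principal component.
explained = eigvals / eigvals.sum()
print("eigenvalues:", np.round(eigvals, 3))
print("explained variance ratio:", np.round(explained, 3))

# 4. Principal-component scores: linear combinations of the observed variables.
scores = Z @ eigvecs

# 5. Loadings (correlations of variables with components), used to interpret
#    the first few components as common latent factors.
loadings = eigvecs * np.sqrt(eigvals)
print("PC1 loadings:", np.round(loadings[:, 0], 3))
```

In practice, only the first few components, whose eigenvalues account for the bulk of the total variance, would be retained and interpreted, in line with the parsimony principle mentioned above.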