Semi-Supervised Learning: Latest Publications

Prediction of Protein Function from Networks
Semi-Supervised Learning Pub Date : 2006-11-01 DOI: 10.7551/MITPRESS/9780262033589.003.0020
Hyunjung Shin, K. Tsuda
Abstract: This chapter contains sections titled: Introduction, Graph-Based Semi-Supervised Learning, Combining Multiple Graphs, Experiments on Function Prediction of Proteins, Conclusion and Outlook.
Citations: 20
Graph Kernels by Spectral Transforms
Semi-Supervised Learning Pub Date : 2006-09-16 DOI: 10.7551/mitpress/9780262033589.003.0015
Xiaojin Zhu, J. Kandola, J. Lafferty, Zoubin Ghahramani
Abstract: Many graph-based semi-supervised learning methods can be viewed as imposing smoothness conditions on the target function with respect to a graph representing the data points to be labeled. The smoothness properties of the functions are encoded in terms of Mercer kernels over the graph. The central quantity in such regularization is the spectral decomposition of the graph Laplacian, a matrix derived from the graph's edge weights. The eigenvectors with small eigenvalues are smooth, and ideally represent large cluster structures within the data. The eigenvectors having large eigenvalues are rugged, and considered noise. Different weightings of the eigenvectors of the graph Laplacian lead to different measures of smoothness. Such weightings can be viewed as spectral transforms, that is, as transformations of the standard eigenspectrum that lead to different regularizers over the graph. Familiar kernels, such as the diffusion kernel obtained by solving a discrete heat equation on the graph, can be seen as simple parametric spectral transforms. The question naturally arises whether one can obtain effective spectral transforms automatically. This chapter develops an approach to searching over a nonparametric family of spectral transforms by using convex optimization to maximize kernel alignment to the labeled data. Order constraints are imposed to encode a preference for smoothness with respect to the graph structure. This results in a flexible family of kernels that is more data-driven than the standard parametric spectral transforms. The approach relies on a quadratically constrained quadratic program (QCQP), and is computationally practical for large datasets.
Citations: 51
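The abstract above describes the diffusion kernel as the simplest parametric spectral transform: replace each eigenvalue lambda of the graph Laplacian with r(lambda) = exp(-beta * lambda). A minimal sketch of that construction (numpy only; the 4-node chain graph is an invented toy example, not data from the chapter):

```python
import numpy as np

def diffusion_kernel(W, beta=1.0):
    """Build the diffusion kernel K = U exp(-beta * Lambda) U^T from the
    graph Laplacian L = D - W, i.e. apply the spectral transform
    r(lam) = exp(-beta * lam) to each Laplacian eigenvalue."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    lam, U = np.linalg.eigh(L)      # eigendecomposition of the Laplacian
    return U @ np.diag(np.exp(-beta * lam)) @ U.T

# Toy 4-node chain graph (invented for illustration).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
K = diffusion_kernel(W, beta=0.5)
```

The chapter's nonparametric alternative would instead treat the weights r(lambda_i) as free variables and fit them by maximizing kernel alignment to the labeled data under order constraints, via a QCQP.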
Probabilistic Semi-Supervised Clustering with Constraints
Semi-Supervised Learning Pub Date : 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0005
Sugato Basu, M. Bilenko, A. Banerjee, R. Mooney
Abstract: In certain clustering tasks it is possible to obtain limited supervision in the form of pairwise constraints, i.e., pairs of instances labeled as belonging to the same or different clusters. The resulting problem is known as semi-supervised clustering, an instance of semi-supervised learning stemming from a traditional unsupervised learning setting. Several algorithms exist for enhancing clustering quality by using supervision in the form of constraints. These algorithms typically utilize the pairwise constraints to either modify the clustering objective function or to learn the clustering distortion measure. This chapter describes an approach that employs Hidden Markov Random Fields (HMRFs) as a probabilistic generative model for semi-supervised clustering, thereby providing a principled framework for incorporating constraint-based supervision into prototype-based clustering. The HMRF-based model allows the use of a broad range of clustering distortion measures, including Bregman divergences (e.g., squared Euclidean distance, KL divergence) and directional distance measures (e.g., cosine distance), making it applicable to a number of domains. The model leads to the HMRF-KMeans algorithm, which minimizes an objective function derived from the joint probability of the model, and allows unification of constraint-based and distance-based semi-supervised clustering methods. Additionally, a two-phase active learning algorithm for selecting informative pairwise constraints in a query-driven framework is derived from the HMRF model, facilitating improved clustering performance with relatively small amounts of supervision from the user.
Citations: 70
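HMRF-KMeans itself minimizes a joint-probability objective; the sketch below is a deliberately simplified, PCKMeans-style variant of the same idea: squared-distance distortion plus a fixed penalty w for each violated must-link or cannot-link constraint. All data and parameter values are invented for illustration.

```python
import numpy as np

def constrained_kmeans(X, k, must_link, cannot_link, w=10.0, iters=20, seed=0):
    """Greedy assignment: each point joins the cluster minimizing squared
    distance to the centroid plus penalty w per violated pairwise
    constraint (a simplified stand-in for the HMRF-KMeans objective)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        for i in range(len(X)):
            cost = ((X[i] - centers) ** 2).sum(axis=1)
            for (a, b) in must_link:        # penalize splitting a must-link pair
                j = b if a == i else a if b == i else None
                if j is not None:
                    cost += w * (np.arange(k) != labels[j])
            for (a, b) in cannot_link:      # penalize merging a cannot-link pair
                j = b if a == i else a if b == i else None
                if j is not None:
                    cost += w * (np.arange(k) == labels[j])
            labels[i] = int(cost.argmin())
        for c in range(k):                  # standard centroid update
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

# Two tight blobs; constraints tie points 0,1 together and keep 1,2 apart.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
labels = constrained_kmeans(X, 2, must_link=[(0, 1)], cannot_link=[(1, 2)])
```

With the penalty weight w set large relative to typical distances, the constraints dominate ambiguous assignments, mirroring how the HMRF potentials trade distortion against constraint violations.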
Metric-Based Approaches for Semi-Supervised Regression and Classification
Semi-Supervised Learning Pub Date : 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0023
D. Schuurmans, F. Southey, Dana F. Wilkinson, Yuhong Guo
Abstract: Semi-supervised learning methods typically require an explicit relationship to be asserted between the labeled and unlabeled data, as illustrated, for example, by the neighbourhoods used in graph-based methods. Semi-supervised model selection and regularization methods are presented here that instead require only that the labeled and unlabeled data are drawn from the same distribution. From this assumption, a metric can be constructed over hypotheses based on their predictions for unlabeled data. This metric can then be used to detect untrustworthy training error estimates, leading to model selection strategies that select the richest hypothesis class while providing theoretical guarantees against over-fitting. This general approach is then adapted to regularization for supervised regression and supervised classification with probabilistic classifiers. The regularization adapts not only to the hypothesis class but also to the specific data sample provided, allowing for better performance than regularizers that account only for class complexity.
Citations: 14
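One concrete instance of the abstract's idea of detecting untrustworthy training error estimates: since labeled and unlabeled data come from the same distribution, the distance between two hypotheses can be estimated on unlabeled data, and a triangle-inequality check flags a hypothesis whose training error looks too optimistic. The sketch below applies this to polynomial model selection; the data, degree range, and stopping rule are assumptions for illustration, not the chapter's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
x_l = rng.uniform(-1, 1, 12)                        # small labeled sample
y_l = np.sin(3 * x_l) + 0.1 * rng.normal(size=12)
x_u = rng.uniform(-1, 1, 500)                       # unlabeled, same distribution

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

train_errs, preds_u = {}, {}
for d in range(1, 10):
    coef = np.polyfit(x_l, y_l, d)
    train_errs[d] = rmse(np.polyval(coef, x_l), y_l)
    preds_u[d] = np.polyval(coef, x_u)              # hypothesis evaluated on unlabeled data

# Accept richer models while the unlabeled-data distance between
# consecutive fits stays consistent (via the triangle inequality) with
# their claimed training errors; stop at the first violation.
chosen = 1
for d in range(2, 10):
    if rmse(preds_u[d], preds_u[d - 1]) > train_errs[d] + train_errs[d - 1]:
        break
    chosen = d
```

The distance computed on x_u costs no labels, which is what lets the check expose a degree whose shrinking training error no longer reflects its true error.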
Label Propagation and Quadratic Criterion
Semi-Supervised Learning Pub Date : 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0011
Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux
Citations: 51
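No abstract is reproduced on this page, but the label propagation algorithm the chapter analyzes has a well-known basic form: repeatedly replace each node's label distribution with the weighted average of its neighbours', clamping the labeled nodes to their known labels. A minimal sketch (the 4-node chain graph is an invented toy example):

```python
import numpy as np

def label_propagation(W, y, n_iter=200):
    """Propagate labels over graph W. Entries of y >= 0 are known class
    labels; y == -1 marks unlabeled nodes. Each iteration averages
    neighbours' label distributions and re-clamps the labeled nodes,
    converging to the harmonic solution of the quadratic criterion."""
    n = len(y)
    classes = int(y.max()) + 1
    labeled = y >= 0
    F = np.zeros((n, classes))
    F[labeled, y[labeled]] = 1.0
    P = W / W.sum(axis=1, keepdims=True)   # row-normalized transition matrix
    for _ in range(n_iter):
        F = P @ F
        F[labeled] = 0.0
        F[labeled, y[labeled]] = 1.0       # clamp the labeled points
    return F.argmax(axis=1)

# Chain 0-1-2-3 with the two endpoints labeled.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([0, -1, -1, 1])
pred = label_propagation(W, y)
```

Each interior node ends up with the label of the nearer clamped endpoint, the discrete analogue of the smoothness the quadratic criterion enforces.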
The Geometric Basis of Semi-Supervised Learning
Semi-Supervised Learning Pub Date : 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0012
Vikas Sindhwani, M. Belkin, P. Niyogi
Citations: 36
Entropy Regularization
Semi-Supervised Learning Pub Date : 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0009
Yves Grandvalet, Yoshua Bengio
Citations: 81
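No abstract is reproduced here, but entropy regularization, as the chapter's title indicates, augments the labeled-data likelihood with a penalty on the Shannon entropy of the classifier's predictions for unlabeled points, favouring decision boundaries that make confident predictions in low-density regions. A minimal sketch of the loss (all values invented for illustration):

```python
import numpy as np

def entropy_regularized_loss(p_labeled, y, p_unlabeled, lam=0.5, eps=1e-12):
    """Cross-entropy on labeled predictions plus lam times the average
    Shannon entropy of the unlabeled predictions; the second term is
    small only when the unlabeled predictions are confident."""
    ce = -np.mean(np.log(p_labeled[np.arange(len(y)), y] + eps))
    ent = -np.mean(np.sum(p_unlabeled * np.log(p_unlabeled + eps), axis=1))
    return ce + lam * ent

# Same labeled fit, two hypothetical behaviours on an unlabeled point.
p_labeled = np.array([[0.9, 0.1]])
y = np.array([0])
loss_confident = entropy_regularized_loss(p_labeled, y, np.array([[0.99, 0.01]]))
loss_uncertain = entropy_regularized_loss(p_labeled, y, np.array([[0.5, 0.5]]))
```

Minimizing this loss therefore pushes the classifier away from parameter settings that leave unlabeled points near the decision boundary.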
Gaussian Processes and the Null-Category Noise Model
Semi-Supervised Learning Pub Date : 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0008
Neil D. Lawrence, Michael I. Jordan
Citations: 7
Transductive Inference and Semi-Supervised Learning
Semi-Supervised Learning Pub Date : 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0024
V. Vapnik
Abstract: This chapter discusses the difference between transductive inference and semi-supervised learning. It argues that transductive inference captures the intrinsic properties of the mechanism for extracting additional information from the unlabeled data. It also shows an important role of transduction in creating noninductive models of inference. The chapter then gives the formal problem setting for transductive inference and semi-supervised learning. (These remarks were inspired by the discussion "What is the Difference between Transductive Inference and Semi-Supervised Learning?" that took place during a workshop near Tübingen, Germany, on May 24, 2005.)
Citations: 43
Large-Scale Algorithms
Semi-Supervised Learning Pub Date : 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0018
Olivier Delalleau, Yoshua Bengio, Nicolas Le Roux
Citations: 1