Large-scale Machine Learning over Graphs
Yiming Yang
DOI: 10.1145/3234944.3240462
Proceedings of the 2018 ACM SIGIR International Conference on Theory of Information Retrieval
Published: 2018-09-10
Citations: 0
Abstract
Graphs provide powerful representations for statistical modeling of interrelated variables (observed or latent) in a broad range of machine learning applications. Examples include learning and inference based on the dependency structures among words, documents, topics, users, items, websites, and more. How best to leverage such dependency structures from multiple graphs with massive and heterogeneous types of nodes and relations has posed grand challenges to machine learning theory and algorithms. This talk presents our recent work in this direction, focusing on three significant tasks: 1) a novel framework for fusing multiple heterogeneous graphs into a unified product graph to enable semi-supervised multi-relational learning, 2) the first algorithmic solution for imposing analogical structures in graph-based entity/relation embedding, and 3) a new formulation of neural architecture search as a graph topology optimization problem, with simple yet powerful algorithms that automatically discover high-performing convolutional neural architectures on image recognition benchmarks and reduce the computational cost over state-of-the-art non-differentiable techniques by several orders of magnitude.
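To give a concrete sense of the product-graph idea in task 1), the sketch below builds a product graph from two adjacency matrices via the Kronecker product, a standard construction in which each node of the product graph is a pair of nodes from the input graphs. This is a minimal illustrative sketch, not the paper's actual algorithm; the toy "user" and "item" graphs are assumptions for demonstration.

```python
import numpy as np

# Adjacency matrices of two small graphs (hypothetical toy examples).
A = np.array([[0, 1],
              [1, 0]])          # e.g. a 2-node user graph
B = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])       # e.g. a 3-node item graph

# Kronecker product graph: entry P[i*3 + k, j*3 + l] = A[i, j] * B[k, l],
# so pair-node (i, k) connects to (j, l) iff i~j in A and k~l in B.
P = np.kron(A, B)

print(P.shape)  # (6, 6): one node per (user, item) pair
```

Semi-supervised learning on the fused graph then amounts to propagating a few labeled (user, item) pairs over `P`, so that relational structure in both input graphs jointly informs the predictions.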