Adaptive complexity of log-concave sampling
Huanjian Zhou, Baoxiang Wang, Masashi Sugiyama
arXiv:2408.13045 · arXiv - CS - Data Structures and Algorithms · 2024-08-23
In large-data applications, such as the inference process of diffusion
models, it is desirable to design sampling algorithms with a high degree of
parallelization. In this work, we study the adaptive complexity of sampling:
the minimal number of sequential rounds required to produce a sample when
polynomially many queries can be executed in parallel in each round. For
unconstrained sampling, we examine target distributions whose log-densities
are smooth or Lipschitz and either strongly or non-strongly concave. We show
that no algorithm using an almost linear number of iterations can return a
sample within a certain exponentially small accuracy under total variation
distance. For box-constrained sampling of log-concave distributions, we show
that no algorithm using an almost linear number of iterations can return a
sample with superpolynomially small accuracy under total variation distance.
Our proofs rest on a novel analysis characterizing the output of any
algorithm on hardness potentials built from a chain-like structure with
random partitions, combined with classical smoothing techniques.
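The round-based query model behind adaptive complexity can be made concrete with a minimal sketch. The oracle, target density, batch size, and round count below are all illustrative choices, not taken from the paper: we run parallel Langevin updates against a gradient oracle for a standard Gaussian target and count only sequential rounds, regardless of how many queries each round contains.

```python
import numpy as np

class GradientOracle:
    """Answers batched gradient queries for f(x) = ||x||^2 / 2,
    i.e. a standard-Gaussian target pi(x) proportional to exp(-f(x)).
    Only the number of sequential rounds is counted, not the batch size."""
    def __init__(self):
        self.rounds = 0

    def query(self, points):
        # points: (batch, dim) array; one call = one parallel round.
        self.rounds += 1
        return points  # grad f(x) = x for this toy target

def parallel_langevin_round(oracle, xs, step, rng):
    """One round of Langevin updates applied to a whole batch of chains."""
    grads = oracle.query(xs)
    noise = rng.standard_normal(xs.shape)
    return xs - step * grads + np.sqrt(2 * step) * noise

rng = np.random.default_rng(0)
oracle = GradientOracle()
xs = rng.standard_normal((1024, 2))   # polynomially many parallel chains
for _ in range(50):                   # 50 sequential rounds
    xs = parallel_langevin_round(oracle, xs, step=0.1, rng=rng)

print(oracle.rounds)                  # rounds consumed: 50
```

The point of the model is visible in `GradientOracle.query`: issuing 1024 queries costs exactly one round, so the lower bounds in the abstract constrain how few such rounds any sampler can get away with, however large each batch is.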