Transactions on Machine Learning Research: Latest Articles

Conformal Bounds on Full-Reference Image Quality for Imaging Inverse Problems
Jeffrey Wen, Rizwan Ahmad, Philip Schniter
Transactions on Machine Learning Research, 2025. Published 2025-05-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12956293/pdf/
Abstract: In imaging inverse problems, we would like to know how close the recovered image is to the true image in terms of full-reference image quality (FRIQ) metrics like PSNR, SSIM, or LPIPS. This is especially important in safety-critical applications like medical imaging, where knowing that, say, the SSIM was poor could potentially avoid a costly misdiagnosis. But since we don't know the true image, computing FRIQ is non-trivial. In this work, we combine conformal prediction with approximate posterior sampling to construct bounds on FRIQ that are guaranteed to hold up to a user-specified error probability. We demonstrate our approach on image denoising and accelerated magnetic resonance imaging (MRI) problems. Code is available at https://github.com/jwen307/quality_uq.
Citations: 0
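The split-conformal construction that underlies bounds of this kind can be sketched in a few lines. The scores and threshold rule below are a generic illustration of split conformal prediction, not the paper's exact procedure; the residual definition is a stand-in assumption.

```python
import numpy as np

def conformal_upper_bound(cal_scores, alpha=0.1):
    """Finite-sample-valid threshold from calibration scores.

    cal_scores: nonconformity scores on a held-out calibration set,
    e.g. the gap between the true quality metric of each calibration
    reconstruction and an estimate of it (illustrative choice).
    """
    n = len(cal_scores)
    # Quantile level with the (n + 1) finite-sample correction used
    # in split conformal prediction.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(cal_scores, level, method="higher")

# Toy example: residuals between estimated and true quality scores.
rng = np.random.default_rng(0)
cal = rng.normal(0.0, 0.05, size=500)
q = conformal_upper_bound(cal, alpha=0.1)
# For a new image, (estimated metric + q) upper-bounds the true metric
# with probability >= 1 - alpha over calibration draws.
```

By construction, at least a 1 - alpha fraction of calibration scores fall at or below the returned threshold.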
Downstream Task Guided Masking Learning in Masked Autoencoders Using Multi-Level Optimization
Han Guo, Ramtin Hosseini, Ruiyi Zhang, Sai Ashish Somayajula, Ranak Roy Chowdhury, Rajesh K Gupta, Pengtao Xie
Transactions on Machine Learning Research, 2025. Published 2025-04-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12356090/pdf/
Abstract: Masked Autoencoder (MAE) is a notable method for self-supervised pretraining in visual representation learning. It operates by randomly masking image patches and reconstructing these masked patches using the unmasked ones. A key limitation of MAE lies in its disregard for the varying informativeness of different patches, as it uniformly selects patches to mask. To overcome this, some approaches propose masking based on patch informativeness. However, these methods often do not consider the specific requirements of downstream tasks, potentially leading to suboptimal representations for these tasks. In response, we introduce the Multi-level Optimized Mask Autoencoder (MLO-MAE), a novel framework that leverages end-to-end feedback from downstream tasks to learn an optimal masking strategy during pretraining. Our experimental findings highlight MLO-MAE's significant advancements in visual representation learning. Compared to existing methods, it demonstrates remarkable improvements across diverse datasets and tasks, showcasing its adaptability and efficiency. Our code is available at https://github.com/Alexiland/MLO-MAE.
Citations: 0
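For contrast with the learned masking that MLO-MAE proposes, the uniform random masking of the original MAE baseline (the limitation the abstract describes) looks roughly like this sketch:

```python
import numpy as np

def random_patch_mask(num_patches, mask_ratio=0.75, rng=None):
    """Uniform random masking as in the original MAE baseline: every
    patch is equally likely to be hidden, regardless of its content."""
    rng = rng or np.random.default_rng()
    num_masked = int(num_patches * mask_ratio)
    perm = rng.permutation(num_patches)
    mask = np.zeros(num_patches, dtype=bool)
    mask[perm[:num_masked]] = True  # True = masked, to be reconstructed
    return mask

# 14x14 = 196 ViT patches at the usual 75% mask ratio.
mask = random_patch_mask(196, 0.75, np.random.default_rng(0))
```

MLO-MAE's contribution is to replace this content-agnostic choice with a masking distribution optimized against downstream-task feedback.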
Accelerating Learned Image Compression Through Modeling Neural Training Dynamics
Yichi Zhang, Zhihao Duan, Yuning Huang, Fengqing Zhu
Transactions on Machine Learning Research, 2025. Published 2025-04-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12129407/pdf/
Abstract: As learned image compression (LIC) methods become increasingly computationally demanding, enhancing their training efficiency is crucial. This paper takes a step forward in accelerating the training of LIC methods by modeling the neural training dynamics. We first propose a Sensitivity-aware True and Dummy Embedding Training mechanism (STDET) that clusters LIC model parameters into a few separate modes where parameters are expressed as affine transformations of reference parameters within the same mode. By further utilizing the stable intra-mode correlations throughout training and parameter sensitivities, we gradually embed non-reference parameters, reducing the number of trainable parameters. Additionally, we incorporate a Sampling-then-Moving Average (SMA) technique, interpolating sampled weights from stochastic gradient descent (SGD) training to obtain the moving average weights, ensuring smooth temporal behavior and minimizing training state variances. Overall, our method significantly reduces training space dimensions and the number of trainable parameters without sacrificing model performance, thus accelerating model convergence. We also provide a theoretical analysis on the noisy quadratic model, showing that the proposed method achieves a lower training variance than standard SGD. Our approach offers valuable insights for further developing efficient training methods for LICs.
Citations: 0
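The moving-average idea in SMA can be illustrated with a plain running mean over periodically sampled weights. This is a simplified stand-in for intuition, not the paper's exact interpolation scheme:

```python
import numpy as np

class SampledMovingAverage:
    """Running average of weight vectors sampled at intervals during
    SGD training; averaging smooths out per-step training noise."""
    def __init__(self):
        self.avg = None
        self.count = 0

    def update(self, weights):
        self.count += 1
        if self.avg is None:
            self.avg = weights.astype(float).copy()
        else:
            # Incremental mean: avg += (w - avg) / count
            self.avg += (weights - self.avg) / self.count
        return self.avg

sma = SampledMovingAverage()
for w in [np.array([1.0]), np.array([3.0]), np.array([5.0])]:
    sma.update(w)
# Averaged weight is the mean of the sampled checkpoints.
```

Weight averaging of this flavor (as in stochastic weight averaging) is a standard way to reduce the variance of the final training state.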
A Theoretical Study of Neural Network Expressive Power via Manifold Topology
Jiachen Yao, Lingjie Yi, Mayank Goswami, Chao Chen
Transactions on Machine Learning Research, 2025. Published 2025-04-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12974905/pdf/
Abstract: A prevalent assumption regarding real-world data is that it lies on or close to a low-dimensional manifold. When deploying a neural network on data manifolds, the required size, i.e., the number of neurons of the network, heavily depends on the intricacy of the underlying latent manifold. While significant advancements have been made in understanding the geometric attributes of manifolds, it's essential to recognize that topology, too, is a fundamental characteristic of manifolds. In this study, we investigate network expressive power in terms of the latent data manifold. Integrating both topological and geometric facets of the data manifold, we present a size upper bound of ReLU neural networks.
Citations: 0
Lower Ricci Curvature for Efficient Community Detection
Yun Jin Park, Didong Li
Transactions on Machine Learning Research, 2025. Published 2025-03-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13021251/pdf/
Abstract: This study introduces the Lower Ricci Curvature (LRC), a novel, scalable, and scale-free discrete curvature designed to enhance community detection in networks. Addressing the computational challenges posed by existing curvature-based methods, LRC offers a streamlined approach with linear computational complexity, which makes it well suited for large-scale network analysis. We further develop an LRC-based preprocessing method that effectively augments popular community detection algorithms. Through applications on multiple real-world datasets, including the NCAA football league network, the DBLP collaboration network, the Amazon product co-purchasing network, and the YouTube social network, we demonstrate the efficacy of our method in significantly improving the performance of various community detection algorithms.
Citations: 0
Calibrated Probabilistic Forecasts for Arbitrary Sequences
Charles Marx, Volodymyr Kuleshov, Stefano Ermon
Transactions on Machine Learning Research, 2025. Published 2025-03-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12975122/pdf/
Abstract: Real-world data streams can change unpredictably due to distribution shifts, feedback loops, and adversarial actors, which challenges the validity of forecasts. We present a forecasting framework ensuring valid uncertainty estimates regardless of how data evolves. Leveraging the concept of Blackwell approachability from game theory, we introduce a forecasting framework that guarantees calibrated uncertainties for outcomes in any compact space (e.g., classification or bounded regression). We extend this framework to recalibrate existing forecasters, guaranteeing calibration without sacrificing predictive performance. We implement both general-purpose gradient-based algorithms and algorithms optimized for popular special cases of our framework. Empirically, our algorithms improve calibration and downstream decision-making for energy systems.
Citations: 0
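As a concrete reminder of what "calibrated" means for binary forecasts, a standard binned calibration error (not the paper's Blackwell-approachability construction) compares predicted probabilities to empirical frequencies per bin:

```python
import numpy as np

def binned_calibration_error(probs, outcomes, n_bins=10):
    """Expected calibration error for binary forecasts: the weighted
    average gap between mean predicted probability and empirical
    outcome frequency within each probability bin."""
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    err, n = 0.0, len(probs)
    for b in range(n_bins):
        idx = bins == b
        if idx.any():
            weight = idx.sum() / n
            err += weight * abs(probs[idx].mean() - outcomes[idx].mean())
    return err

# A forecaster that says 0.5 and is right half the time is calibrated.
probs = np.full(4, 0.5)
outcomes = np.array([1, 0, 1, 0])
err = binned_calibration_error(probs, outcomes)
```

A calibrated forecaster drives this quantity toward zero; the paper's contribution is guaranteeing that even on adversarial sequences.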
On the Stability of Gradient Descent with Second-Order Dynamics for Time-Varying Cost Functions
Travis E Gibson, Sawal Acharya, Anjali Parashar, Joseph E Gaudio, Anuradha M Annaswamy
Transactions on Machine Learning Research, 2025. Published 2025-02-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12284918/pdf/
Abstract: Gradient-based optimization algorithms deployed in machine learning (ML) applications are often analyzed and compared by their convergence rates or regret bounds. While these rates and bounds convey valuable information, they don't always directly translate to stability guarantees. Stability and similar concepts, like robustness, will become ever more important as we move towards deploying models in real-time and safety-critical systems. In this work we build upon the results in Gaudio et al. 2021 and Moreu & Annaswamy 2022 for gradient descent with second-order dynamics applied to explicitly time-varying cost functions, and provide more general stability guarantees. These more general results can aid in the design and certification of these optimization schemes so as to help ensure safe and reliable deployment for real-time learning applications. We also hope that the techniques provided here will stimulate and cross-fertilize the analysis of the same algorithms in the online learning and stochastic optimization communities.
Citations: 0
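The setting of the abstract, gradient descent with second-order (momentum) dynamics tracking a time-varying cost, can be illustrated on a toy quadratic whose minimizer drifts over time. The cost, step size, and momentum coefficient here are illustrative choices, not the paper's analysis:

```python
import math

def track(steps=200, lr=0.05, beta=0.9):
    """Heavy-ball (momentum) gradient descent on the time-varying cost
    f_t(x) = (x - c_t)^2, where the optimum c_t drifts slowly."""
    x, v = 0.0, 0.0
    for t in range(steps):
        c = math.sin(0.01 * t)     # slowly drifting optimum
        grad = 2.0 * (x - c)       # gradient of f_t at the current x
        v = beta * v - lr * grad   # second-order (momentum) dynamics
        x += v
    return x, c

x, c = track()
# After the transient decays, x follows the moving optimum c with a
# small, bounded tracking error rather than converging to one point.
```

Stability here means exactly this bounded tracking error; a convergence rate for a fixed cost says nothing about it, which is the gap the paper addresses.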
Balanced Mixed-Type Tabular Data Synthesis with Diffusion Models
Zeyu Yang, Han Yu, Peikun Guo, Khadija Zanna, Xiaoxue Yang, Akane Sano
Transactions on Machine Learning Research, 2025. Published 2025-02-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12975070/pdf/
Abstract: Diffusion models have emerged as a robust framework for various generative tasks, including tabular data synthesis. However, current tabular diffusion models tend to inherit bias from the training dataset and generate biased synthetic data, which may lead to discriminatory outcomes. In this research, we introduce a novel tabular diffusion model that incorporates sensitive guidance to generate fair synthetic data with balanced joint distributions of the target label and sensitive attributes, such as sex and race. The empirical results demonstrate that our method effectively mitigates bias in training data while maintaining the quality of the generated samples. Furthermore, we provide evidence that our approach outperforms existing methods for synthesizing tabular data on fairness metrics such as demographic parity ratio and equalized odds ratio, achieving improvements of over 10%. Our implementation is available at https://github.com/comp-well-org/fair-tab-diffusion.
Citations: 0
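One of the fairness metrics cited, the demographic parity ratio, has a standard definition: the smallest positive-prediction rate across sensitive groups divided by the largest. A minimal sketch:

```python
import numpy as np

def demographic_parity_ratio(y_pred, sensitive):
    """Ratio of the smallest to the largest positive-prediction rate
    across sensitive groups; 1.0 means perfectly balanced."""
    groups = np.unique(sensitive)
    rates = np.array([y_pred[sensitive == g].mean() for g in groups])
    return rates.min() / rates.max()

# Toy predictions for two groups of four samples each.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
sex    = np.array([0, 0, 0, 0, 1, 1, 1, 1])
# Group 0 positive rate = 3/4, group 1 = 1/4, so the ratio is 1/3.
ratio = demographic_parity_ratio(y_pred, sex)
```

A synthesizer that balances the joint distribution of label and sensitive attribute pushes this ratio toward 1 for models trained on the synthetic data.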
Transformer Architecture Search for Improving Out-of-Domain Generalization in Machine Translation
Yiheng He, Ruiyi Zhang, Sai Ashish Somayajula, Pengtao Xie
Transactions on Machine Learning Research, 2024. Published 2024-12-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12356094/pdf/
Abstract: Interest in automatically searching for Transformer neural architectures for machine translation (MT) has been increasing. Current methods show promising results in in-domain settings, where training and test data share the same distribution. However, in real-world MT applications, it is common that the test data has a different distribution than the training data. In these out-of-domain (OOD) situations, Transformer architectures optimized for the linguistic characteristics of the training sentences struggle to produce accurate translations for OOD sentences during testing. To tackle this issue, we propose a multi-level optimization based method to automatically search for neural architectures that possess robust OOD generalization capabilities. During the architecture search process, our method automatically synthesizes approximated OOD MT data, which is used to evaluate and improve the architectures' ability of generalizing to OOD scenarios. The generation of approximated OOD data and the search for optimal architectures are executed in an integrated, end-to-end manner. Evaluated across multiple datasets, our method demonstrates strong OOD generalization performance, surpassing state-of-the-art approaches. Our code is publicly available at https://github.com/yihenghe/transformer_nas.
Citations: 0
AdaWaveNet: Adaptive Wavelet Network for Time Series Analysis
Han Yu, Peikun Guo, Akane Sano
Transactions on Machine Learning Research, 2024. Published 2024-11-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12974719/pdf/
Abstract: Time series data analysis is a critical component in various domains such as finance, healthcare, and meteorology. Despite the progress in deep learning for time series analysis, there remains a challenge in addressing the non-stationary nature of time series data. Most of the existing models, which are built on the assumption of constant statistical properties over time, often struggle to capture the temporal dynamics in realistic time series, resulting in bias and error in time series analysis. This paper introduces the Adaptive Wavelet Network (AdaWaveNet), a novel approach that employs adaptive wavelet transformation for multiscale analysis of non-stationary time series data. AdaWaveNet designs a lifting scheme-based wavelet decomposition and reconstruction mechanism for adaptive and learnable wavelet transforms, which offers enhanced flexibility and robustness in analysis. We conduct extensive experiments on 10 datasets across 3 different tasks, including forecasting, imputation, and a newly established super-resolution task. The evaluations demonstrate the effectiveness of AdaWaveNet over existing methods in all three tasks, which illustrates its potential in various real-world applications. The code for AdaWaveNet is available at https://github.com/comp-well-org/AdaWaveNet.
Citations: 0
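The lifting scheme the abstract builds on, in its simplest fixed (Haar) form, splits a signal into even and odd samples, predicts the odd samples from the even ones (detail), and updates the even samples (approximation). AdaWaveNet's idea is to make these predict/update steps learnable; the fixed version looks like:

```python
import numpy as np

def haar_lift(x):
    """One level of the Haar wavelet transform via lifting.

    Split -> predict -> update: the detail channel captures local
    differences, the approximation channel the pairwise means.
    """
    even, odd = x[0::2], x[1::2]
    detail = odd - even            # predict step: odd minus its neighbor
    approx = even + detail / 2.0   # update step: yields pairwise means
    return approx, detail

approx, detail = haar_lift(np.array([2.0, 4.0, 6.0, 8.0]))
# approx = [3., 7.] (pairwise means), detail = [2., 2.] (differences)
```

Because each lifting step is trivially invertible (undo the update, then the predict), replacing the fixed predict/update operators with small networks keeps the transform invertible while letting it adapt to the signal.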