arXiv - CS - Machine Learning: Latest Papers

DEMAU: Decompose, Explore, Model and Analyse Uncertainties
arXiv - CS - Machine Learning Pub Date: 2024-09-12 DOI: 10.48550/arXiv.2409.08105
Arthur Hoarau, Vincent Lemaire
{"title":"DEMAU: Decompose, Explore, Model and Analyse Uncertainties","authors":"Arthur Hoarau, Vincent Lemaire","doi":"arxiv-2409.08105","DOIUrl":"https://doi.org/arxiv-2409.08105","url":null,"abstract":"Recent research in machine learning has given rise to a flourishing\u0000literature on the quantification and decomposition of model uncertainty. This\u0000information can be very useful during interactions with the learner, such as in\u0000active learning or adaptive learning, and especially in uncertainty sampling.\u0000To allow a simple representation of these total, epistemic (reducible) and\u0000aleatoric (irreducible) uncertainties, we offer DEMAU, an open-source\u0000educational, exploratory and analytical tool allowing to visualize and explore\u0000several types of uncertainty for classification models in machine learning.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"29 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142180605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
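The total/epistemic/aleatoric split mentioned in the abstract is commonly computed from an ensemble's predictions: total uncertainty is the entropy of the averaged prediction, aleatoric uncertainty is the average of the per-member entropies, and epistemic uncertainty is their difference (a mutual information). A minimal NumPy sketch of that standard decomposition follows; this is not DEMAU's own code, and the array shapes and toy ensemble are illustrative assumptions:

```python
import numpy as np

def decompose_uncertainty(probs):
    """Split predictive uncertainty into total, epistemic and aleatoric parts.

    probs: array of shape (n_members, n_classes) -- class probabilities
    predicted by each ensemble member for a single input.
    """
    eps = 1e-12  # guard against log(0)
    mean_p = probs.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))  # entropy of the mean prediction
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))  # mean member entropy
    epistemic = total - aleatoric  # disagreement between members (mutual information)
    return total, epistemic, aleatoric

# Example: three ensemble members disagreeing on a 3-class problem.
probs = np.array([[0.9, 0.05, 0.05],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
print(decompose_uncertainty(probs))
```

High epistemic values flag inputs where more training data would help, which is exactly the signal uncertainty sampling exploits in active learning.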
XMOL: Explainable Multi-property Optimization of Molecules
arXiv - CS - Machine Learning Pub Date: 2024-09-12 DOI: 10.48550/arXiv.2409.07786
Aye Phyu Phyu Aung, Jay Chaudhary, Ji Wei Yoon, Senthilnath Jayavelu
{"title":"XMOL: Explainable Multi-property Optimization of Molecules","authors":"Aye Phyu Phyu Aung, Jay Chaudhary, Ji Wei Yoon, Senthilnath Jayavelu","doi":"arxiv-2409.07786","DOIUrl":"https://doi.org/arxiv-2409.07786","url":null,"abstract":"Molecular optimization is a key challenge in drug discovery and material\u0000science domain, involving the design of molecules with desired properties.\u0000Existing methods focus predominantly on single-property optimization,\u0000necessitating repetitive runs to target multiple properties, which is\u0000inefficient and computationally expensive. Moreover, these methods often lack\u0000transparency, making it difficult for researchers to understand and control the\u0000optimization process. To address these issues, we propose a novel framework,\u0000Explainable Multi-property Optimization of Molecules (XMOL), to optimize\u0000multiple molecular properties simultaneously while incorporating\u0000explainability. Our approach builds on state-of-the-art geometric diffusion\u0000models, extending them to multi-property optimization through the introduction\u0000of spectral normalization and enhanced molecular constraints for stabilized\u0000training. Additionally, we integrate interpretive and explainable techniques\u0000throughout the optimization process. We evaluated XMOL on the real-world\u0000molecular datasets i.e., QM9, demonstrating its effectiveness in both single\u0000property and multiple properties optimization while offering interpretable\u0000results, paving the way for more efficient and reliable molecular design.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"77 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
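Spectral normalization, which the abstract credits with stabilizing training, rescales a layer's weight matrix by its largest singular value so the layer stays roughly 1-Lipschitz. PyTorch ships this as a utility; a minimal sketch of the generic technique, with layer sizes chosen arbitrarily and no claim that this mirrors XMOL's actual architecture:

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

# Wrap a linear layer so its weight is divided by its top singular value
# (estimated by power iteration) on every forward pass.
layer = spectral_norm(nn.Linear(128, 128))

x = torch.randn(4, 128)
y = layer(x)

# The effective weight now has spectral norm close to 1, bounding how much
# the layer can amplify its input -- the stabilizing property used in training.
print(torch.linalg.matrix_norm(layer.weight, ord=2))  # ~1.0
```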
BLens: Contrastive Captioning of Binary Functions using Ensemble Embedding
arXiv - CS - Machine Learning Pub Date: 2024-09-12 DOI: 10.48550/arXiv.2409.07889
Tristan Benoit, Yunru Wang, Moritz Dannehl, Johannes Kinder
{"title":"BLens: Contrastive Captioning of Binary Functions using Ensemble Embedding","authors":"Tristan Benoit, Yunru Wang, Moritz Dannehl, Johannes Kinder","doi":"arxiv-2409.07889","DOIUrl":"https://doi.org/arxiv-2409.07889","url":null,"abstract":"Function names can greatly aid human reverse engineers, which has spurred\u0000development of machine learning-based approaches to predicting function names\u0000in stripped binaries. Much current work in this area now uses transformers,\u0000applying a metaphor of machine translation from code to function names. Still,\u0000function naming models face challenges in generalizing to projects completely\u0000unrelated to the training set. In this paper, we take a completely new approach\u0000by transferring advances in automated image captioning to the domain of binary\u0000reverse engineering, such that different parts of a binary function can be\u0000associated with parts of its name. We propose BLens, which combines multiple\u0000binary function embeddings into a new ensemble representation, aligns it with\u0000the name representation latent space via a contrastive learning approach, and\u0000generates function names with a transformer architecture tailored for function\u0000names. In our experiments, we demonstrate that BLens significantly outperforms\u0000the state of the art. In the usual setting of splitting per binary, we achieve\u0000an $F_1$ score of 0.77 compared to 0.67. Moreover, in the cross-project\u0000setting, which emphasizes generalizability, we achieve an $F_1$ score of 0.46\u0000compared to 0.29.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142227503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
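The contrastive alignment step described above is typically an InfoNCE/CLIP-style objective: matching (function embedding, name embedding) pairs are pulled together while mismatched pairs in the batch are pushed apart. A minimal PyTorch sketch of that generic loss, not BLens's implementation; the dimensions and temperature are assumptions:

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(func_emb, name_emb, temperature=0.07):
    """CLIP-style loss: the i-th function embedding should match the i-th name embedding.

    func_emb, name_emb: (batch, dim) tensors from the two encoders.
    """
    f = F.normalize(func_emb, dim=-1)
    n = F.normalize(name_emb, dim=-1)
    logits = f @ n.t() / temperature   # (batch, batch) cosine-similarity matrix
    targets = torch.arange(f.size(0))  # diagonal entries are the positive pairs
    # Symmetric cross-entropy over rows (function -> name) and columns (name -> function).
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_alignment_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```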
Alignment with Preference Optimization Is All You Need for LLM Safety
arXiv - CS - Machine Learning Pub Date: 2024-09-12 DOI: 10.48550/arXiv.2409.07772
Reda Alami, Ali Khalifa Almansoori, Ahmed Alzubaidi, Mohamed El Amine Seddik, Mugariya Farooq, Hakim Hacid
{"title":"Alignment with Preference Optimization Is All You Need for LLM Safety","authors":"Reda Alami, Ali Khalifa Almansoori, Ahmed Alzubaidi, Mohamed El Amine Seddik, Mugariya Farooq, Hakim Hacid","doi":"arxiv-2409.07772","DOIUrl":"https://doi.org/arxiv-2409.07772","url":null,"abstract":"We demonstrate that preference optimization methods can effectively enhance\u0000LLM safety. Applying various alignment techniques to the Falcon 11B model using\u0000safety datasets, we achieve a significant boost in global safety score (from\u0000$57.64%$ to $99.90%$) as measured by LlamaGuard 3 8B, competing with\u0000state-of-the-art models. On toxicity benchmarks, average scores in adversarial\u0000settings dropped from over $0.6$ to less than $0.07$. However, this safety\u0000improvement comes at the cost of reduced general capabilities, particularly in\u0000math, suggesting a trade-off. We identify noise contrastive alignment\u0000(Safe-NCA) as an optimal method for balancing safety and performance. Our study\u0000ultimately shows that alignment techniques can be sufficient for building safe\u0000and robust models.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142180632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
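Preference-optimization methods of the kind surveyed here train on (prompt, chosen, rejected) triples; Direct Preference Optimization (DPO) is a representative loss in this family. A minimal sketch of the DPO objective, illustrative of the family rather than the exact variant or hyperparameters the paper evaluates:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss.

    Each argument is a (batch,) tensor of summed token log-probabilities of
    the chosen/rejected response under the policy or the frozen reference model.
    """
    # Implicit rewards: how much more the policy prefers each response
    # than the reference model does.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Maximize the margin between chosen and rejected rewards.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Toy example with made-up log-probabilities for a batch of 4 preference pairs.
loss = dpo_loss(torch.tensor([-10.0, -12.0, -9.0, -11.0]),
                torch.tensor([-13.0, -12.5, -14.0, -11.5]),
                torch.tensor([-11.0, -12.0, -10.0, -11.0]),
                torch.tensor([-12.0, -12.0, -13.0, -11.0]))
print(loss.item())
```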
Improve Machine Learning carbon footprint using Nvidia GPU and Mixed Precision training for classification algorithms
arXiv - CS - Machine Learning Pub Date: 2024-09-12 DOI: 10.48550/arXiv.2409.07853
Andrew Antonopoulos
{"title":"Improve Machine Learning carbon footprint using Nvidia GPU and Mixed Precision training for classification algorithms","authors":"Andrew Antonopoulos","doi":"arxiv-2409.07853","DOIUrl":"https://doi.org/arxiv-2409.07853","url":null,"abstract":"This study was part of my dissertation for my master degree and compares the\u0000power consumption using the default floating point (32bit) and Nvidia mixed\u0000precision (16bit and 32bit) while training a classification ML model. A custom\u0000PC with specific hardware was built to perform the experiments, and different\u0000ML hyper-parameters, such as batch size, neurons, and epochs, were chosen to\u0000build Deep Neural Networks (DNN). Additionally, various software was used\u0000during the experiments to collect the power consumption data in Watts from the\u0000Graphics Processing Unit (GPU), Central Processing Unit (CPU), Random Access\u0000Memory (RAM) and manually from a wattmeter connected to the wall. A\u0000benchmarking test with default hyper parameter values for the DNN was used as a\u0000reference, while the experiments used a combination of different settings. The\u0000results were recorded in Excel, and descriptive statistics were chosen to\u0000calculate the mean between the groups and compare them using graphs and tables.\u0000The outcome was positive when using mixed precision combined with specific\u0000hyper-parameters. Compared to the benchmarking, the optimisation for the\u0000classification reduced the power consumption between 7 and 11 Watts. Similarly,\u0000the carbon footprint is reduced because the calculation uses the same power\u0000consumption data. Still, a consideration is required when configuring\u0000hyper-parameters because it can negatively affect hardware performance.\u0000However, this research required inferential statistics, specifically ANOVA and\u0000T-test, to compare the relationship between the means. Furthermore, tests\u0000indicated no statistical significance of the relationship between the\u0000benchmarking and experiments. However, a more extensive implementation with a\u0000cluster of GPUs can increase the sample size significantly, as it is an\u0000essential factor and can change the outcome of the statistical analysis.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142180615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
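Nvidia mixed precision training of the kind benchmarked here is exposed in PyTorch through automatic mixed precision (AMP): selected ops run in float16 while a gradient scaler protects against underflow. A minimal training-step sketch showing the generic API, with a placeholder model and data rather than the study's actual setup:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 32, device=device)
y = torch.randint(0, 10, (16,), device=device)

optimizer.zero_grad()
# Run the forward pass in float16 where safe; precision-sensitive ops stay float32.
with torch.autocast(device_type=device, enabled=(device == "cuda")):
    loss = loss_fn(model(x), y)
# Scale the loss so small float16 gradients don't underflow, then unscale and step.
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```

The float16 path roughly halves memory traffic on tensor-core GPUs, which is the mechanism behind the power savings measured in the study.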
Large Language Models are Pattern Matchers: Editing Semi-Structured and Structured Documents with ChatGPT
arXiv - CS - Machine Learning Pub Date: 2024-09-12 DOI: 10.48550/arXiv.2409.07732
Irene Weber
{"title":"Large Language Models are Pattern Matchers: Editing Semi-Structured and Structured Documents with ChatGPT","authors":"Irene Weber","doi":"arxiv-2409.07732","DOIUrl":"https://doi.org/arxiv-2409.07732","url":null,"abstract":"Large Language Models (LLMs) offer numerous applications, the full extent of\u0000which is not yet understood. This paper investigates if LLMs can be applied for\u0000editing structured and semi-structured documents with minimal effort. Using a\u0000qualitative research approach, we conduct two case studies with ChatGPT and\u0000thoroughly analyze the results. Our experiments indicate that LLMs can\u0000effectively edit structured and semi-structured documents when provided with\u0000basic, straightforward prompts. ChatGPT demonstrates a strong ability to\u0000recognize and process the structure of annotated documents. This suggests that\u0000explicitly structuring tasks and data in prompts might enhance an LLM's ability\u0000to understand and solve tasks. Furthermore, the experiments also reveal\u0000impressive pattern matching skills in ChatGPT. This observation deserves\u0000further investigation, as it may contribute to understanding the processes\u0000leading to hallucinations in LLMs.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142180618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
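Editing a structured document with a chat model, as studied here, amounts to sending the document plus a plain-language instruction and reading back the edited text. A minimal sketch using the openai Python package; the model name, prompt wording, and toy document are illustrative assumptions, not the paper's actual case-study prompts:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

document = """\
[[Person]]
name = "Ada Lovelace"
role = "Mathematician"
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice; any chat model works
    messages=[
        {"role": "system",
         "content": "You edit documents. Preserve the markup exactly; "
                    "change only what the instruction asks for."},
        {"role": "user",
         "content": "Rename the 'role' field to 'profession' in this "
                    "document and return only the result:\n\n" + document},
    ],
)
print(response.choices[0].message.content)
```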
FedHide: Federated Learning by Hiding in the Neighbors
arXiv - CS - Machine Learning Pub Date: 2024-09-12 DOI: 10.48550/arXiv.2409.07808
Hyunsin Park, Sungrack Yun
{"title":"FedHide: Federated Learning by Hiding in the Neighbors","authors":"Hyunsin Park, Sungrack Yun","doi":"arxiv-2409.07808","DOIUrl":"https://doi.org/arxiv-2409.07808","url":null,"abstract":"We propose a prototype-based federated learning method designed for embedding\u0000networks in classification or verification tasks. Our focus is on scenarios\u0000where each client has data from a single class. The main challenge is to\u0000develop an embedding network that can distinguish between different classes\u0000while adhering to privacy constraints. Sharing true class prototypes with the\u0000server or other clients could potentially compromise sensitive information. To\u0000tackle this issue, we propose a proxy class prototype that will be shared among\u0000clients instead of the true class prototype. Our approach generates proxy class\u0000prototypes by linearly combining them with their nearest neighbors. This\u0000technique conceals the true class prototype while enabling clients to learn\u0000discriminative embedding networks. We compare our method to alternative\u0000techniques, such as adding random Gaussian noise and using random selection\u0000with cosine similarity constraints. Furthermore, we evaluate the robustness of\u0000our approach against gradient inversion attacks and introduce a measure for\u0000prototype leakage. This measure quantifies the extent of private information\u0000revealed when sharing the proposed proxy class prototype. Moreover, we provide\u0000a theoretical analysis of the convergence properties of our approach. Our\u0000proposed method for federated learning from scratch demonstrates its\u0000effectiveness through empirical results on three benchmark datasets: CIFAR-100,\u0000VoxCeleb1, and VGGFace2.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142180619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
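The core trick, generating a proxy by linearly combining a client's true prototype with its nearest neighbors, fits in a few lines. A minimal NumPy illustration; the mixing weight and neighbor count are assumptions, and the paper's exact combination rule may differ:

```python
import numpy as np

def proxy_prototype(true_proto, candidate_protos, k=3, alpha=0.5):
    """Hide a true class prototype by mixing it with its k nearest neighbors.

    true_proto: (dim,) unit vector for this client's class.
    candidate_protos: (n, dim) pool of other unit-norm prototypes to blend with.
    alpha: weight kept on the true prototype (illustrative choice).
    """
    # Cosine similarity to every candidate (all vectors assumed normalized).
    sims = candidate_protos @ true_proto
    neighbors = candidate_protos[np.argsort(-sims)[:k]]
    proxy = alpha * true_proto + (1 - alpha) * neighbors.mean(axis=0)
    return proxy / np.linalg.norm(proxy)  # renormalize to the unit sphere

rng = np.random.default_rng(0)
pool = rng.normal(size=(10, 16))
pool /= np.linalg.norm(pool, axis=1, keepdims=True)
print(proxy_prototype(pool[0], pool[1:]))
```

Because the shared vector sits between the true prototype and its neighbors, an observer cannot recover the true prototype exactly, yet it remains close enough to keep embeddings discriminative.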
Attack End-to-End Autonomous Driving through Module-Wise Noise
arXiv - CS - Machine Learning Pub Date: 2024-09-12 DOI: 10.48550/arXiv.2409.07706
Lu Wang, Tianyuan Zhang, Yikai Han, Muyang Fang, Ting Jin, Jiaqi Kang
{"title":"Attack End-to-End Autonomous Driving through Module-Wise Noise","authors":"Lu Wang, Tianyuan Zhang, Yikai Han, Muyang Fang, Ting Jin, Jiaqi Kang","doi":"arxiv-2409.07706","DOIUrl":"https://doi.org/arxiv-2409.07706","url":null,"abstract":"With recent breakthroughs in deep neural networks, numerous tasks within\u0000autonomous driving have exhibited remarkable performance. However, deep\u0000learning models are susceptible to adversarial attacks, presenting significant\u0000security risks to autonomous driving systems. Presently, end-to-end\u0000architectures have emerged as the predominant solution for autonomous driving,\u0000owing to their collaborative nature across different tasks. Yet, the\u0000implications of adversarial attacks on such models remain relatively\u0000unexplored. In this paper, we conduct comprehensive adversarial security\u0000research on the modular end-to-end autonomous driving model for the first time.\u0000We thoroughly consider the potential vulnerabilities in the model inference\u0000process and design a universal attack scheme through module-wise noise\u0000injection. We conduct large-scale experiments on the full-stack autonomous\u0000driving model and demonstrate that our attack method outperforms previous\u0000attack methods. We trust that our research will offer fresh insights into\u0000ensuring the safety and reliability of autonomous driving systems.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142180630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
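Module-wise noise injection can be prototyped with forward hooks that perturb each intermediate module's output. A minimal PyTorch sketch where random noise stands in for the optimized adversarial perturbations the paper designs; the toy pipeline, noise scale, and choice of attacked modules are all illustrative assumptions:

```python
import torch
import torch.nn as nn

# Stand-in for a modular end-to-end driving pipeline (perception -> planning).
pipeline = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # "perception" module
    nn.Linear(128, 32), nn.ReLU(),   # "prediction" module
    nn.Linear(32, 2),                # "planning" head, e.g. steering and throttle
)

def make_noise_hook(scale):
    def hook(module, inputs, output):
        # Replace this module's output with a perturbed version; an optimized
        # adversarial delta would go here instead of random noise.
        return output + scale * torch.randn_like(output)
    return hook

# Attach a perturbation after each intermediate module.
handles = [m.register_forward_hook(make_noise_hook(0.1))
           for m in pipeline if isinstance(m, nn.Linear)]

x = torch.randn(1, 64)
print("attacked output:", pipeline(x))

for h in handles:  # remove hooks to restore the clean model
    h.remove()
print("clean output:   ", pipeline(x))
```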
Click2Mask: Local Editing with Dynamic Mask Generation
arXiv - CS - Machine Learning Pub Date: 2024-09-12 DOI: 10.48550/arXiv.2409.08272
Omer Regev, Omri Avrahami, Dani Lischinski
{"title":"Click2Mask: Local Editing with Dynamic Mask Generation","authors":"Omer Regev, Omri Avrahami, Dani Lischinski","doi":"arxiv-2409.08272","DOIUrl":"https://doi.org/arxiv-2409.08272","url":null,"abstract":"Recent advancements in generative models have revolutionized image generation\u0000and editing, making these tasks accessible to non-experts. This paper focuses\u0000on local image editing, particularly the task of adding new content to a\u0000loosely specified area. Existing methods often require a precise mask or a\u0000detailed description of the location, which can be cumbersome and prone to\u0000errors. We propose Click2Mask, a novel approach that simplifies the local\u0000editing process by requiring only a single point of reference (in addition to\u0000the content description). A mask is dynamically grown around this point during\u0000a Blended Latent Diffusion (BLD) process, guided by a masked CLIP-based\u0000semantic loss. Click2Mask surpasses the limitations of segmentation-based and\u0000fine-tuning dependent methods, offering a more user-friendly and contextually\u0000accurate solution. Our experiments demonstrate that Click2Mask not only\u0000minimizes user effort but also delivers competitive or superior local image\u0000manipulation results compared to SoTA methods, according to both human\u0000judgement and automatic metrics. Key contributions include the simplification\u0000of user input, the ability to freely add objects unconstrained by existing\u0000segments, and the integration potential of our dynamic mask approach within\u0000other editing methods.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142180642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
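The dynamic mask growth around a click point can be pictured as starting from the clicked pixel and repeatedly dilating, keeping growth only where a semantic score supports it. A deliberately simplified NumPy sketch of that idea; the scoring map below is a random placeholder, whereas the actual method grows the mask inside a latent diffusion loop guided by a masked CLIP-based loss:

```python
import numpy as np

def grow_mask(score_map, click, steps=5):
    """Grow a binary mask outward from a clicked pixel.

    score_map: (H, W) array standing in for a semantic relevance score
               (in Click2Mask this role is played by a masked CLIP-based loss).
    click: (row, col) seed point supplied by the user.
    """
    h, w = score_map.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[click] = True
    for _ in range(steps):
        # 4-neighborhood dilation: the candidate ring around the current mask.
        grown = mask.copy()
        grown[1:, :] |= mask[:-1, :]
        grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]
        grown[:, :-1] |= mask[:, 1:]
        ring = grown & ~mask
        # Keep only ring pixels whose score beats the current mask's mean.
        mask |= ring & (score_map > score_map[mask].mean())
    return mask

rng = np.random.default_rng(1)
scores = rng.random((16, 16))
print(grow_mask(scores, click=(8, 8)).sum(), "pixels selected")
```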
Graph Laplacian-based Bayesian Multi-fidelity Modeling
arXiv - CS - Machine Learning Pub Date: 2024-09-12 DOI: 10.48550/arXiv.2409.08211
Orazio Pinti, Jeremy M. Budd, Franca Hoffmann, Assad A. Oberai
{"title":"Graph Laplacian-based Bayesian Multi-fidelity Modeling","authors":"Orazio Pinti, Jeremy M. Budd, Franca Hoffmann, Assad A. Oberai","doi":"arxiv-2409.08211","DOIUrl":"https://doi.org/arxiv-2409.08211","url":null,"abstract":"We present a novel probabilistic approach for generating multi-fidelity data\u0000while accounting for errors inherent in both low- and high-fidelity data. In\u0000this approach a graph Laplacian constructed from the low-fidelity data is used\u0000to define a multivariate Gaussian prior density for the coordinates of the true\u0000data points. In addition, few high-fidelity data points are used to construct a\u0000conjugate likelihood term. Thereafter, Bayes rule is applied to derive an\u0000explicit expression for the posterior density which is also multivariate\u0000Gaussian. The maximum textit{a posteriori} (MAP) estimate of this density is\u0000selected to be the optimal multi-fidelity estimate. It is shown that the MAP\u0000estimate and the covariance of the posterior density can be determined through\u0000the solution of linear systems of equations. Thereafter, two methods, one based\u0000on spectral truncation and another based on a low-rank approximation, are\u0000developed to solve these equations efficiently. The multi-fidelity approach is\u0000tested on a variety of problems in solid and fluid mechanics with data that\u0000represents vectors of quantities of interest and discretized spatial fields in\u0000one and two dimensions. The results demonstrate that by utilizing a small\u0000fraction of high-fidelity data, the multi-fidelity approach can significantly\u0000improve the accuracy of a large collection of low-fidelity data points.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142180604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
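With a Gaussian prior whose precision P is built from the graph Laplacian and a Gaussian likelihood on the few high-fidelity points, the MAP estimate is the solution of a single linear system, (P + HᵀH/σ²) x = Pμ + Hᵀy/σ², where μ is the prior mean, H selects the observed points, and y holds the high-fidelity values. A minimal NumPy sketch under those standard conjugate-Gaussian assumptions; the path-graph Laplacian, prior mean, and noise levels are illustrative choices, and the paper's exact prior construction may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, eps = 50, 1e-2, 1e-3

# Low-fidelity samples of a smooth field, corrupted by a systematic bias.
t = np.linspace(0, 1, n)
truth = np.sin(2 * np.pi * t)
low_fi = truth + 0.3 * t                      # biased low-fidelity data

# Graph Laplacian of a path graph over the sample points (a k-NN graph in general).
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1
P = L + eps * np.eye(n)                       # prior precision, made invertible

# A handful of noisy high-fidelity observations at random locations.
obs = rng.choice(n, size=5, replace=False)
H = np.eye(n)[obs]                            # selection matrix
y = truth[obs] + np.sqrt(sigma2) * rng.normal(size=obs.size)

# MAP estimate: solve (P + H^T H / sigma^2) x = P mu + H^T y / sigma^2,
# with the low-fidelity data serving as the prior mean mu.
A = P + H.T @ H / sigma2
b = P @ low_fi + H.T @ y / sigma2
x_map = np.linalg.solve(A, b)

print("low-fidelity error :", np.linalg.norm(low_fi - truth))
print("multi-fidelity err :", np.linalg.norm(x_map - truth))
```

The Laplacian term makes the correction field smooth across the graph, which is how a handful of high-fidelity points can repair the entire low-fidelity collection.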