Unlocking potential of open source model training in decentralized federated learning environment

IF 6.9, Tier 3 Computer Science, Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Ekaterina Pavlova, Grigorii Melnikov, Yury Yanovich, Alexey Frolov
{"title":"在分散的联邦学习环境中释放开源模型训练的潜力","authors":"Ekaterina Pavlova ,&nbsp;Grigorii Melnikov ,&nbsp;Yury Yanovich ,&nbsp;Alexey Frolov","doi":"10.1016/j.bcra.2024.100264","DOIUrl":null,"url":null,"abstract":"<div><div>The field of Artificial Intelligence (AI) is rapidly evolving, creating a demand for sophisticated models that rely on substantial data and computational resources for training. However, the high costs associated with training these models have limited accessibility, leading to concerns about transparency, biases, and hidden agendas within AI systems. As AI becomes more integrated into governmental services and the pursuit of Artificial General Intelligence (AGI) advances, the necessity for transparent and reliable AI models becomes increasingly critical. Decentralized Federated Learning (DFL) offers decentralized approaches to model training while safeguarding data privacy and ensuring resilience against adversarial participants. Nonetheless, the guarantees provided are not absolute, and even open-weight AI models do not qualify as truly open source. This paper suggests using blockchain technology, smart contracts, and publicly verifiable secret sharing in DFL environments to bolster trust, cooperation, and transparency in model training processes. Our numerical experiments illustrate that the overhead required to offer robust assurances to all peers regarding the correctness of the training process is relatively small. By incorporating these tools, participants can trust that trained models adhere to specified procedures, addressing accountability issues within AI systems and promoting the development of more ethical and dependable applications of AI.</div></div>","PeriodicalId":53141,"journal":{"name":"Blockchain-Research and Applications","volume":"6 2","pages":"Article 100264"},"PeriodicalIF":6.9000,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unlocking potential of open source model training in decentralized federated learning environment\",\"authors\":\"Ekaterina Pavlova ,&nbsp;Grigorii Melnikov ,&nbsp;Yury Yanovich ,&nbsp;Alexey Frolov\",\"doi\":\"10.1016/j.bcra.2024.100264\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The field of Artificial Intelligence (AI) is rapidly evolving, creating a demand for sophisticated models that rely on substantial data and computational resources for training. However, the high costs associated with training these models have limited accessibility, leading to concerns about transparency, biases, and hidden agendas within AI systems. As AI becomes more integrated into governmental services and the pursuit of Artificial General Intelligence (AGI) advances, the necessity for transparent and reliable AI models becomes increasingly critical. Decentralized Federated Learning (DFL) offers decentralized approaches to model training while safeguarding data privacy and ensuring resilience against adversarial participants. Nonetheless, the guarantees provided are not absolute, and even open-weight AI models do not qualify as truly open source. This paper suggests using blockchain technology, smart contracts, and publicly verifiable secret sharing in DFL environments to bolster trust, cooperation, and transparency in model training processes. Our numerical experiments illustrate that the overhead required to offer robust assurances to all peers regarding the correctness of the training process is relatively small. 
By incorporating these tools, participants can trust that trained models adhere to specified procedures, addressing accountability issues within AI systems and promoting the development of more ethical and dependable applications of AI.</div></div>\",\"PeriodicalId\":53141,\"journal\":{\"name\":\"Blockchain-Research and Applications\",\"volume\":\"6 2\",\"pages\":\"Article 100264\"},\"PeriodicalIF\":6.9000,\"publicationDate\":\"2025-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Blockchain-Research and Applications\",\"FirstCategoryId\":\"1093\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2096720924000770\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Blockchain-Research and Applications","FirstCategoryId":"1093","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2096720924000770","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

The field of Artificial Intelligence (AI) is rapidly evolving, creating a demand for sophisticated models that rely on substantial data and computational resources for training. However, the high costs associated with training these models have limited accessibility, leading to concerns about transparency, biases, and hidden agendas within AI systems. As AI becomes more integrated into governmental services and the pursuit of Artificial General Intelligence (AGI) advances, the necessity for transparent and reliable AI models becomes increasingly critical. Decentralized Federated Learning (DFL) offers decentralized approaches to model training while safeguarding data privacy and ensuring resilience against adversarial participants. Nonetheless, the guarantees provided are not absolute, and even open-weight AI models do not qualify as truly open source. This paper suggests using blockchain technology, smart contracts, and publicly verifiable secret sharing in DFL environments to bolster trust, cooperation, and transparency in model training processes. Our numerical experiments illustrate that the overhead required to offer robust assurances to all peers regarding the correctness of the training process is relatively small. By incorporating these tools, participants can trust that trained models adhere to specified procedures, addressing accountability issues within AI systems and promoting the development of more ethical and dependable applications of AI.
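The abstract describes combining blockchain, smart contracts, and publicly verifiable secret sharing in DFL so that peers can check the training process. As a rough, hedged illustration only (not the authors' scheme), the sketch below shows Feldman-style verifiable secret sharing of a quantized model update: a peer publishes commitments (for example, via a smart contract) and distributes shares, and any recipient can verify its share against the commitments before a threshold of peers reconstructs the value. The field parameters, the threshold, and the `share`/`verify`/`reconstruct` helpers are all assumptions made for this example; a full PVSS scheme (e.g., Schoenmakers') additionally lets third parties verify encrypted shares, which is omitted here.

```python
# Minimal sketch (not the authors' implementation) of Feldman-style verifiable
# secret sharing over a small safe-prime group. All parameters are toy choices.

import random

Q = 1019               # prime order of the exponent group (toy size)
P = 2 * Q + 1          # safe prime, so Z_P* has a subgroup of order Q
G = 4                  # generator of the order-Q subgroup (4 = 2^2 mod P)
T, N = 3, 5            # reconstruction threshold and number of peers


def share(secret: int, t: int = T, n: int = N):
    """Split `secret` (an element of Z_Q) into n Shamir shares plus public commitments."""
    coeffs = [secret % Q] + [random.randrange(Q) for _ in range(t - 1)]
    commitments = [pow(G, c, P) for c in coeffs]     # published for everyone, e.g. on-chain
    shares = [(x, sum(c * pow(x, k, Q) for k, c in enumerate(coeffs)) % Q)
              for x in range(1, n + 1)]              # share (x, f(x)) goes to peer x
    return shares, commitments


def verify(x: int, y: int, commitments) -> bool:
    """Check G^y == prod_k C_k^(x^k) mod P, i.e. the share lies on the committed polynomial."""
    rhs = 1
    for k, c in enumerate(commitments):
        rhs = rhs * pow(c, pow(x, k, Q), P) % P
    return pow(G, y, P) == rhs


def reconstruct(shares) -> int:
    """Lagrange interpolation at 0 over Z_Q using any t valid shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % Q
                den = den * (xi - xj) % Q
        secret = (secret + yi * num * pow(den, Q - 2, Q)) % Q
    return secret


if __name__ == "__main__":
    update = 777                                   # a quantized model-update value in Z_Q
    shares, comms = share(update)
    assert all(verify(x, y, comms) for x, y in shares)
    assert reconstruct(shares[:T]) == update
    print("all shares verified; secret reconstructed from", T, "shares")
```

The safe-prime setup (P = 2Q + 1 with G of order Q) keeps exponent arithmetic consistent modulo Q, which is what makes the commitment check sound; a production system would use a much larger group and vector-valued updates.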
Source journal: Blockchain: Research and Applications
CiteScore: 11.30
Self-citation rate: 3.60%
Journal description: Blockchain: Research and Applications is an international, peer-reviewed journal for researchers, engineers, and practitioners to present the latest advances and innovations in blockchain research. The journal publishes theoretical and applied papers in established and emerging areas of blockchain research to shape the future of blockchain technology.