{"title":"在分散的联邦学习环境中释放开源模型训练的潜力","authors":"Ekaterina Pavlova , Grigorii Melnikov , Yury Yanovich , Alexey Frolov","doi":"10.1016/j.bcra.2024.100264","DOIUrl":null,"url":null,"abstract":"<div><div>The field of Artificial Intelligence (AI) is rapidly evolving, creating a demand for sophisticated models that rely on substantial data and computational resources for training. However, the high costs associated with training these models have limited accessibility, leading to concerns about transparency, biases, and hidden agendas within AI systems. As AI becomes more integrated into governmental services and the pursuit of Artificial General Intelligence (AGI) advances, the necessity for transparent and reliable AI models becomes increasingly critical. Decentralized Federated Learning (DFL) offers decentralized approaches to model training while safeguarding data privacy and ensuring resilience against adversarial participants. Nonetheless, the guarantees provided are not absolute, and even open-weight AI models do not qualify as truly open source. This paper suggests using blockchain technology, smart contracts, and publicly verifiable secret sharing in DFL environments to bolster trust, cooperation, and transparency in model training processes. Our numerical experiments illustrate that the overhead required to offer robust assurances to all peers regarding the correctness of the training process is relatively small. 
By incorporating these tools, participants can trust that trained models adhere to specified procedures, addressing accountability issues within AI systems and promoting the development of more ethical and dependable applications of AI.</div></div>","PeriodicalId":53141,"journal":{"name":"Blockchain-Research and Applications","volume":"6 2","pages":"Article 100264"},"PeriodicalIF":6.9000,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unlocking potential of open source model training in decentralized federated learning environment\",\"authors\":\"Ekaterina Pavlova , Grigorii Melnikov , Yury Yanovich , Alexey Frolov\",\"doi\":\"10.1016/j.bcra.2024.100264\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The field of Artificial Intelligence (AI) is rapidly evolving, creating a demand for sophisticated models that rely on substantial data and computational resources for training. However, the high costs associated with training these models have limited accessibility, leading to concerns about transparency, biases, and hidden agendas within AI systems. As AI becomes more integrated into governmental services and the pursuit of Artificial General Intelligence (AGI) advances, the necessity for transparent and reliable AI models becomes increasingly critical. Decentralized Federated Learning (DFL) offers decentralized approaches to model training while safeguarding data privacy and ensuring resilience against adversarial participants. Nonetheless, the guarantees provided are not absolute, and even open-weight AI models do not qualify as truly open source. This paper suggests using blockchain technology, smart contracts, and publicly verifiable secret sharing in DFL environments to bolster trust, cooperation, and transparency in model training processes. 
Our numerical experiments illustrate that the overhead required to offer robust assurances to all peers regarding the correctness of the training process is relatively small. By incorporating these tools, participants can trust that trained models adhere to specified procedures, addressing accountability issues within AI systems and promoting the development of more ethical and dependable applications of AI.</div></div>\",\"PeriodicalId\":53141,\"journal\":{\"name\":\"Blockchain-Research and Applications\",\"volume\":\"6 2\",\"pages\":\"Article 100264\"},\"PeriodicalIF\":6.9000,\"publicationDate\":\"2025-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Blockchain-Research and Applications\",\"FirstCategoryId\":\"1093\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2096720924000770\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Blockchain-Research and Applications","FirstCategoryId":"1093","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2096720924000770","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Unlocking potential of open source model training in decentralized federated learning environment
The field of Artificial Intelligence (AI) is rapidly evolving, creating a demand for sophisticated models that rely on substantial data and computational resources for training. However, the high costs associated with training these models have limited accessibility, leading to concerns about transparency, biases, and hidden agendas within AI systems. As AI becomes more integrated into governmental services and the pursuit of Artificial General Intelligence (AGI) advances, the necessity for transparent and reliable AI models becomes increasingly critical. Decentralized Federated Learning (DFL) offers decentralized approaches to model training while safeguarding data privacy and ensuring resilience against adversarial participants. Nonetheless, the guarantees provided are not absolute, and even open-weight AI models do not qualify as truly open source. This paper suggests using blockchain technology, smart contracts, and publicly verifiable secret sharing in DFL environments to bolster trust, cooperation, and transparency in model training processes. Our numerical experiments illustrate that the overhead required to offer robust assurances to all peers regarding the correctness of the training process is relatively small. By incorporating these tools, participants can trust that trained models adhere to specified procedures, addressing accountability issues within AI systems and promoting the development of more ethical and dependable applications of AI.
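To make the secret-sharing idea in the abstract concrete, the sketch below shows plain additive secret sharing of model updates, a simplified building block rather than the publicly verifiable scheme the paper proposes: each client splits its (integer-scaled) update into random shares so that no single peer sees a raw update, yet the peers can jointly recover the aggregate. All names and the modulus are illustrative assumptions, not taken from the paper.

```python
import random

P = 2**61 - 1  # large prime modulus; an illustrative choice

def share(value, n):
    """Split an integer into n additive shares modulo P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares by summing them modulo P."""
    return sum(shares) % P

# Three clients each hold an integer-scaled model update.
updates = [5, 11, 7]
n_peers = 4

# Each client sends one share to each peer; no peer sees a raw update.
per_peer = [[] for _ in range(n_peers)]
for u in updates:
    for peer, s in zip(per_peer, share(u, n_peers)):
        peer.append(s)

# Each peer sums the shares it received; combining the partial sums
# reveals only the aggregate update, never an individual contribution.
partial_sums = [sum(p) % P for p in per_peer]
aggregate = reconstruct(partial_sums)
assert aggregate == sum(updates) % P
```

A publicly verifiable variant would additionally publish commitments (e.g., on a blockchain via smart contracts) so that any peer can check that shares and partial sums are well-formed without learning the underlying values.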
Journal introduction:
Blockchain: Research and Applications is an international, peer-reviewed journal for researchers, engineers, and practitioners to present the latest advances and innovations in blockchain research. The journal publishes theoretical and applied papers in established and emerging areas of blockchain research to shape the future of blockchain technology.