Rui Xing, Zhenzhe Zheng, Qinya Li, Fan Wu, Guihai Chen
{"title":"MI-VFL: Feature discrepancy-aware distributed model interpretation for vertical federated learning","authors":"Rui Xing, Zhenzhe Zheng, Qinya Li, Fan Wu, Guihai Chen","doi":"10.1016/j.comnet.2025.111220","DOIUrl":null,"url":null,"abstract":"<div><div>Vertical federated learning (VFL) allows multiple distributed clients with misaligned feature spaces to collaboratively accomplish global model training. Applying VFL to high-stakes decision services greatly requires model interpretation for decision reliability and diagnosis. However, the feature discrepancy in VFL raises new issues for model interpretation in distributed setting: one is from the local–global perspective, where the local importance of features is not equal to the global importance; and the other is from the local–local perspective, where information asymmetry among clients causes difficulty in identifying overlapped features. In this work, we propose a new distributed <u>M</u>odel <u>I</u>nterpretation method for <u>V</u>ertical <u>F</u>ederated <u>L</u>earning with feature discrepancy, namely MI-VFL. In particular, to deal with the local–global discrepancy, MI-VFL leverages the tools from probability theory and adversarial game theory to adjust the local importance of features and ensure the completeness of the selected features. To handle the local–local discrepancy, MI-VFL builds a federated adversarial learning model to efficiently identify the overlapped features at one time, rather than performing client-to-client intersections multiple times. We extensively evaluate MI-VFL on six synthetic datasets and five real-world datasets. The evaluation results reveal that MI-VFL can accurately identify the important features, suppress the overlapped features, and thus improve the model performance.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"263 ","pages":"Article 111220"},"PeriodicalIF":4.4000,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389128625001884","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
MI-VFL: Feature discrepancy-aware distributed model interpretation for vertical federated learning
Vertical federated learning (VFL) allows multiple distributed clients with misaligned feature spaces to collaboratively accomplish global model training. Applying VFL to high-stakes decision services requires model interpretation for decision reliability and diagnosis. However, the feature discrepancy in VFL raises new issues for model interpretation in a distributed setting: one is from the local–global perspective, where the local importance of a feature is not equal to its global importance; the other is from the local–local perspective, where information asymmetry among clients makes it difficult to identify overlapped features. In this work, we propose a new distributed Model Interpretation method for Vertical Federated Learning with feature discrepancy, namely MI-VFL. In particular, to deal with the local–global discrepancy, MI-VFL leverages tools from probability theory and adversarial game theory to adjust the local importance of features and ensure the completeness of the selected features. To handle the local–local discrepancy, MI-VFL builds a federated adversarial learning model to efficiently identify the overlapped features in a single pass, rather than performing client-to-client intersections multiple times. We extensively evaluate MI-VFL on six synthetic datasets and five real-world datasets. The evaluation results reveal that MI-VFL can accurately identify the important features, suppress the overlapped features, and thereby improve model performance.
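To make the local–global discrepancy concrete, the toy sketch below (not the authors' MI-VFL algorithm; the two-client linear setup and all variable names are illustrative assumptions) shows how a feature that looks important when a client evaluates it in isolation can carry almost no importance in the joint model, because another client's feature already explains the same signal.

```python
# Illustrative sketch of local vs. global feature importance in a toy VFL setting.
# Not the MI-VFL method: just a two-client linear example with correlated features.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Client A holds feature x1; client B holds feature x2, which is highly
# correlated with x1 (overlapped information across clients).
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)
label = 2.0 * x1 + 0.0 * x2 + rng.normal(scale=0.1, size=n)

X = np.column_stack([x1, x2])

# "Local" importance: each client regresses the label on its own feature alone.
local_imp = [abs(np.polyfit(X[:, j], label, 1)[0]) for j in range(2)]

# "Global" importance: coefficient magnitudes of the joint least-squares model.
global_coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(n)]), label, rcond=None)
global_imp = np.abs(global_coef[:2])

print("local importance :", np.round(local_imp, 3))   # both features look important
print("global importance:", np.round(global_imp, 3))  # only x1 matters jointly
```

In this hypothetical example, client B would locally rank x2 as highly predictive, while the joint model assigns it a near-zero weight; correcting exactly this kind of mismatch, and suppressing such overlapped features across clients, is the problem the abstract describes.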
Journal introduction:
Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.