Incentivized Federated Learning with Local Differential Privacy Using Permissioned Blockchains

Year : 2024

Publisher : Springer Science and Business Media Deutschland GmbH

Source Title : Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract

Federated Learning (FL) is a collaborative machine learning approach that enables data-owning nodes to retain their data locally, preventing its transfer to a central server. Each node shares only its local model parameters with the server, which aggregates them into a global model and disseminates it back to the local nodes. Although FL converges iteratively, it has several limitations, such as the risk of single-point failure, inadequate incentives for participating nodes, and potential privacy breaches. While Local Differential Privacy (LDP) is often used to mitigate privacy concerns, the other challenges of FL have not yet been addressed comprehensively, even for Locally Differentially Private Federated Learning (LDP-FL). We propose an integrated approach that uses permissioned blockchains to guard against a single point of failure and a token-based incentivization (TBI) mechanism to encourage participation in LDP-FL. In our scheme, participating nodes receive tokens upon sharing their model parameters, which can subsequently be spent to access updated global models. The number of tokens awarded for parameter sharing is determined by ϵ – the privacy factor of LDP – ensuring that nodes do not overly obfuscate the data they share. We demonstrate the feasibility of our approach by developing the Blockchain-based TBI-LDP-FL framework (hereinafter referred to as BTLF) on Hyperledger Fabric. Extensive experimental results establish the efficacy of BTLF.
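The mechanism described in the abstract can be illustrated with a minimal sketch. The function names, the Laplace perturbation, and the linear reward schedule below are illustrative assumptions, not the paper's actual design: the abstract only states that LDP noise is applied to shared parameters and that the token reward grows with ϵ (so a node that obfuscates heavily earns fewer tokens).

```python
import numpy as np

def ldp_perturb(params, epsilon, sensitivity=1.0):
    """Apply epsilon-LDP to local model parameters via the Laplace
    mechanism (one common LDP construction; the paper's exact
    perturbation scheme may differ)."""
    scale = sensitivity / epsilon  # smaller epsilon -> more noise
    return params + np.random.laplace(0.0, scale, size=params.shape)

def token_reward(epsilon, base_tokens=10, eps_max=10.0):
    """Hypothetical reward schedule: tokens increase monotonically
    with epsilon, discouraging excessive obfuscation. `base_tokens`
    and `eps_max` are assumed parameters, not from the paper."""
    return round(base_tokens * min(epsilon, eps_max) / eps_max)

# A node choosing a larger epsilon (weaker privacy, less noise)
# earns more tokens than one choosing a small epsilon.
noisy = ldp_perturb(np.zeros(4), epsilon=2.0)
low_eps_tokens = token_reward(2.0)
high_eps_tokens = token_reward(8.0)
```

In a full system, the reward and redemption logic would live in Hyperledger Fabric chaincode rather than client-side Python, so that token balances are tamper-evident on the permissioned ledger.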