Privacy-Preserving Federated Learning with Homomorphic Encryption: Alzheimer’s Detection Use-Case

Year : 2025

Publisher : Springer Science and Business Media Deutschland GmbH

Source Title : Studies in Computational Intelligence

Abstract

Machine learning has shown significant potential in medical diagnosis, particularly for Alzheimer's disease, which accounts for 60–70% of dementia cases. However, traditional machine learning models rely on centralized data collection, raising privacy and security concerns, especially for sensitive medical information. Federated Learning (FL) has emerged as a promising solution: it enables collaborative model training across multiple devices or institutions without sharing raw data, thereby enhancing data privacy. Even so, breaches can still occur when model parameters are transmitted to the central server, as attackers may infer sensitive information from the shared parameters through reconstruction attacks. To mitigate this vulnerability, Homomorphic Encryption (HE) is employed as a safeguard, allowing computations to be performed on encrypted data without revealing the underlying information. This paper presents a Federated Learning technique enhanced with Homomorphic Encryption for the detection of Alzheimer's disease, providing both accuracy and robust privacy protection. The approach uses a publicly available Alzheimer's dataset sourced from GitHub. The study evaluates the technique with three machine learning algorithms: Decision Tree, Random Forest, and Logistic Regression. Among these, Logistic Regression achieved the highest accuracy, 87.44%.
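The core mechanism the abstract describes — clients encrypting model parameters so the server can aggregate them without ever seeing the plaintext — can be sketched with an additively homomorphic scheme such as Paillier. The following is a minimal illustration, not the paper's implementation: the key size, the fixed-point scale, and the client weight values are all illustrative assumptions, and a toy pure-Python Paillier is used in place of a production HE library.

```python
import math
import secrets

# Toy Paillier cryptosystem (additively homomorphic).
# Small primes for illustration only -- real deployments need >=2048-bit keys.
P, Q = 1000003, 1000033
N = P * Q
N2 = N * N
LAM = math.lcm(P - 1, Q - 1)   # Carmichael's lambda(N)
MU = pow(LAM, -1, N)           # valid because we fix the generator g = N + 1

def encrypt(m: int) -> int:
    """Encrypt integer m: c = g^m * r^N mod N^2, with g = N + 1."""
    r = secrets.randbelow(N - 1) + 1
    return (pow(N + 1, m, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    """Decrypt: m = L(c^lambda mod N^2) * mu mod N, where L(x) = (x-1)/N."""
    return ((pow(c, LAM, N2) - 1) // N) * MU % N

SCALE = 10_000  # fixed-point scale so float model weights become integers

# Hypothetical local parameters from three clients (e.g. one logistic
# regression coefficient each after a round of local training).
client_weights = [0.82, 0.79, 0.91]

# Each client encrypts its scaled weight before sending it to the server.
ciphertexts = [encrypt(int(w * SCALE)) for w in client_weights]

# The server multiplies ciphertexts, which ADDS the plaintexts underneath --
# it aggregates the model updates without ever decrypting a single one.
agg = 1
for c in ciphertexts:
    agg = (agg * c) % N2

# Only the key holder decrypts the sum; dividing yields the federated average.
avg = decrypt(agg) / SCALE / len(client_weights)
```

The key point is that the server's entire job is modular multiplication of opaque ciphertexts, so a reconstruction attack on the values it handles yields nothing; the averaged plaintext is recovered only after decryption by the key holder.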