Privacy-Preserving Deep Learning Using Secure Multi-Party Computation

Authors

  • Maloy Jyoti Goswami

Keywords

Secure Multi-Party Computation, Privacy-Preserving Deep Learning, Cryptographic Protocols, Decentralized Data Training, Confidential Gradient Computation

Abstract

Privacy concerns in deep learning have become increasingly prominent with the proliferation of sensitive data used for training models. Secure Multi-Party Computation (MPC) offers a promising solution by enabling multiple parties to jointly compute a function over their private inputs while keeping those inputs confidential. This paper explores the application of MPC techniques to deep learning tasks, focusing on preserving the privacy of both model parameters and training data. We present a framework where participants can collaborate on training deep neural networks without exposing their individual datasets. Our approach leverages cryptographic protocols to compute gradient updates securely, ensuring that no party learns anything beyond the final model parameters. We demonstrate the feasibility and performance of our method through experiments on standard datasets, showing competitive results compared to traditional centralized training methods. By integrating MPC with deep learning, we provide a pathway towards scalable and privacy-preserving AI applications in sensitive domains.
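The core primitive behind confidential gradient computation can be illustrated with additive secret sharing, a standard MPC building block: each party splits its gradient into random shares that sum to the true value modulo a prime, so no individual gradient is ever revealed, yet the parties can jointly recover the aggregate. The sketch below is illustrative only and is not the paper's actual protocol; the field modulus, fixed-point scale, and function names are all assumptions chosen for the example.

```python
import secrets

PRIME = 2**61 - 1   # field modulus (assumption: a large Mersenne prime)
SCALE = 10**6       # fixed-point scaling factor (assumption)

def encode(x):
    """Fixed-point encode a float into the field."""
    return round(x * SCALE) % PRIME

def decode(v):
    """Decode a field element back to a float (maps large values to negatives)."""
    if v > PRIME // 2:
        v -= PRIME
    return v / SCALE

def share(value, n_parties):
    """Split a field element into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(gradients, n_parties):
    """Aggregate one scalar gradient per party without revealing any single one."""
    # Each party splits its encoded gradient into one share per party.
    all_shares = [share(encode(g), n_parties) for g in gradients]
    # Party j locally sums the j-th share it received from every party.
    partials = [sum(all_shares[i][j] for i in range(n_parties)) % PRIME
                for j in range(n_parties)]
    # Combining the partial sums reveals only the aggregate gradient.
    return decode(sum(partials) % PRIME)

grads = [0.25, -0.1, 0.4]          # each party's private gradient
print(secure_sum(grads, len(grads)))  # ≈ 0.55, the true sum
```

Because each share is uniformly random on its own, an adversary observing any single share learns nothing about the underlying gradient; only the reconstructed sum is disclosed. Real protocols extend this idea to vector-valued gradients and add protections against dropped or malicious parties.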

Published

2024-06-04

How to Cite

Privacy-Preserving Deep Learning Using Secure Multi-Party Computation. (2024). International IT Journal of Research, ISSN: 3007-6706, 2(2), 50-55. https://itjournal.org/index.php/itjournal/article/view/19
