Privacy-Preserving Machine Learning

By: Srinivasa Rao Aravilli
Overview of this book

In an era of evolving privacy regulations, compliance is mandatory for every enterprise. Machine learning engineers face the dual challenge of analyzing vast amounts of data for insights while protecting sensitive information. This book addresses the complexities arising from large data volumes and the scarcity of in-depth privacy-preserving machine learning expertise, and covers a comprehensive range of topics, from data privacy and machine learning privacy threats to real-world privacy-preserving cases. As you progress, you'll be guided through developing anti-money laundering solutions using federated learning and differential privacy. Dedicated sections explore data in-memory attacks and strategies for safeguarding data and ML models. You'll also explore the imperative nature of confidential computation and privacy-preserving machine learning benchmarks, as well as frontier research in the field. Upon completion, you'll possess a thorough understanding of privacy-preserving machine learning, equipping you to effectively shield data from real-world threats and attacks.
Table of Contents (17 chapters)

Part 1: Introduction to Data Privacy and Machine Learning
Part 2: Use Cases of Privacy-Preserving Machine Learning and a Deep Dive into Differential Privacy
Part 3: Hands-On Federated Learning
Part 4: Homomorphic Encryption, SMC, Confidential Computing, and LLMs

FL algorithms

FL algorithms, such as FedSGD, FedAvg, and Adaptive Federated Optimization, play a crucial role in the distributed training of ML models while ensuring privacy and security. In this section, we will explore these algorithms and their key characteristics.

FedSGD

Federated stochastic gradient descent (FedSGD) is a fundamental algorithm used in FL. It extends the traditional SGD optimization method to the federated setting. In FedSGD, each client (entity) computes the gradients on its local data and sends them to the central server. The server aggregates the gradients and updates the global model parameters accordingly. FedSGD is efficient for large-scale distributed training but may suffer from issues related to non-IID data and communication efficiency.
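
To make this exchange concrete before we turn to the figure and the algorithm listing, here is a minimal sketch of a few FedSGD rounds in NumPy. The toy linear model, the synthetic client data, and all names below are illustrative assumptions, not the chapter's code.

import numpy as np

rng = np.random.default_rng(0)

def client_gradient(weights, X, y):
    # Each client computes the gradient of a squared-error loss on its own data;
    # only this gradient (never the raw data) is sent to the server.
    residual = X @ weights - y
    return X.T @ residual / len(y)

# Three clients, each holding a private local dataset (toy data for illustration).
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]

weights = np.zeros(3)  # global model parameters held by the server
lr = 0.1               # server-side learning rate

for _ in range(50):    # one communication round per iteration
    grads = [client_gradient(weights, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    # The server aggregates the gradients (weighted by local dataset size)
    # and applies a single SGD step to the global model.
    avg_grad = np.average(grads, axis=0, weights=sizes)
    weights -= lr * avg_grad

print("Global weights after 50 rounds:", weights)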

Figure 6.7 – The FedSGD model weights exchange with the server

Let’s look at the FedSGD algorithm:

Server-side algorithm...
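
The chapter's server-side listing is truncated in this excerpt, so the snippet below should not be read as the book's code. As a rough sketch of what a FedSGD server-side update typically does (aggregate the received client gradients, weighted by each client's local dataset size, then apply one gradient step to the global parameters), with the function name and signature being my own assumptions:

from typing import Sequence
import numpy as np

def server_update(global_weights: np.ndarray,
                  client_gradients: Sequence[np.ndarray],
                  client_sizes: Sequence[int],
                  lr: float = 0.1) -> np.ndarray:
    # Weight each client's gradient by its share of the total training examples,
    # sum them into one aggregate gradient, and take a single gradient step.
    total = float(sum(client_sizes))
    aggregate = sum((n / total) * g for n, g in zip(client_sizes, client_gradients))
    return global_weights - lr * aggregate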
