Privacy-Preserving Machine Learning

By: Srinivasa Rao Aravilli

Overview of this book

In an era of evolving privacy regulations, compliance is mandatory for every enterprise. Machine learning engineers face the dual challenge of analyzing vast amounts of data for insights while protecting sensitive information. This book addresses the complexities arising from large data volumes and the scarcity of in-depth privacy-preserving machine learning expertise, covering a comprehensive range of topics from data privacy and machine learning privacy threats to real-world privacy-preserving cases. As you progress, you'll be guided through developing anti-money laundering solutions using federated learning and differential privacy. Dedicated sections explore data in-memory attacks and strategies for safeguarding data and ML models. You'll also explore the imperative nature of confidential computation and privacy-preserving machine learning benchmarks, as well as frontier research in the field. Upon completion, you'll possess a thorough understanding of privacy-preserving machine learning, equipping you to effectively shield data from real-world threats and attacks.
Table of Contents (17 chapters)
  • Part 1: Introduction to Data Privacy and Machine Learning
  • Part 2: Use Cases of Privacy-Preserving Machine Learning and a Deep Dive into Differential Privacy
  • Part 3: Hands-On Federated Learning
  • Part 4: Homomorphic Encryption, SMC, Confidential Computing, and LLMs

Key concepts/terms used in LLMs

Large language models (LLMs) are a complex area of NLP, and several terms are associated with them.

Some key terms and concepts used in the context of LLMs are the following (a short illustrative sketch of self-attention appears after the list):

  • Transformer architecture: The foundational architecture for most LLMs, known for its self-attention mechanism, which allows the model to weigh the importance of different words in a sentence.
  • Pre-training: The initial phase in which the LLM is trained on a massive corpus of text data from the internet to learn language patterns and context. This pre-trained model is often referred to as the “base model.”
  • Fine-tuning: The subsequent phase where the pre-trained model is adapted to perform specific NLP tasks, such as text classification, translation, summarization, or question answering. Fine-tuning helps the model specialize in these tasks.
  • Parameters: These are the trainable components of the LLM, represented by numerical values. The number of parameters is a...
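
The self-attention mechanism behind the transformer architecture can be made concrete with a short sketch. The example below is a minimal, hypothetical illustration in NumPy and is not taken from the book; the function name self_attention, the single attention head, and the toy shapes are assumptions chosen for clarity, and production LLMs add multiple heads, masking, and deep stacks of learned layers.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention over token embeddings X."""
    Q = X @ W_q                      # queries: what each token is looking for
    K = X @ W_k                      # keys: what each token offers to others
    V = X @ W_v                      # values: the information to be mixed
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise relevance of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: attention weights per token
    return weights @ V               # each output is a weighted sum of value vectors

# Toy usage: a sequence of 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)  # (4, 8)
```

The softmax over the score matrix is what lets the model weigh how strongly each word attends to every other word in the sentence, which is the property the transformer bullet above describes.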
