Mastering NLP from Foundations to LLMs

By Lior Gazit, Meysam Ghaffari
4.9 (24)
Overview of this book

Do you want to master natural language processing (NLP) but don’t know where to begin? This book will give you the right head start. Written by leaders in machine learning and NLP, Mastering NLP from Foundations to LLMs provides an in-depth introduction to NLP techniques. Starting with the mathematical foundations of machine learning (ML), you’ll gradually progress to advanced NLP applications such as large language models (LLMs). You’ll get to grips with linear algebra, optimization, probability, and statistics, which are essential for understanding and implementing ML and NLP algorithms. You’ll also explore general machine learning techniques and find out how they relate to NLP. Next, you’ll learn how to preprocess text data, explore methods for cleaning and preparing text for analysis, and perform text classification, all with complete Python code samples. The book closes with advanced topics in LLM theory, design, and applications, along with expert opinions on future trends in NLP. You’ll also strengthen your practical skills by working on sample real-world NLP business problems and solutions.
Understanding transformers

Transformers are a neural network architecture introduced in the paper Attention Is All You Need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin (Advances in Neural Information Processing Systems 30, 2017). They have been very influential in the field of NLP and form the basis for state-of-the-art models such as BERT and GPT.

The key innovation in transformers is the self-attention mechanism, which allows the model to weigh the relevance of each word in the input when producing an output, thereby taking the context of each word into account. This is unlike earlier models such as RNNs and LSTMs, which process the input sequentially and therefore have a harder time capturing long-range dependencies between words.
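To make this concrete, here is a minimal NumPy sketch of single-head scaled dot-product self-attention, the core operation described above. The function name, dimensions, and random weights are illustrative assumptions, not code from the book:

import numpy as np

def self_attention(X, W_q, W_k, W_v):
    # Project the input embeddings into queries, keys, and values
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = Q.shape[-1]
    # Each token scores its relevance against every other token
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns the scores into attention weights per token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a context-aware mixture of the value vectors
    return weights @ V

# Toy example: a "sentence" of 4 tokens with embedding size 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # (4, 8)

Because every token attends to every other token in a single step, the distance between two words no longer matters, which is exactly why long-range dependencies are easier to capture than with sequential models.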

Architecture of transformers

A transformer is composed of an encoder and a decoder, both of which are made up of several...
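The excerpt is cut off here, but as a rough orientation, the following outline sketches the standard encoder-decoder composition from Attention Is All You Need: each side is a stack of identical layers, with residual connections, layer normalization, and masking omitted for brevity. This is an assumption-laden sketch, not the book's own code:

# Illustrative outline only (per Vaswani et al., 2017), not the book's implementation
def encode(x, encoder_layers):
    for layer in encoder_layers:   # several identical layers, stacked
        x = layer(x)               # self-attention followed by a feed-forward sublayer
    return x

def decode(y, memory, decoder_layers):
    for layer in decoder_layers:
        y = layer(y, memory)       # masked self-attention, then cross-attention to the encoder output
    return y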
