Python Deep Learning

By: Ivan Vasilev
Overview of this book

The field of deep learning has developed rapidly in recent years and today covers a broad range of applications. This makes it challenging to navigate and hard to understand without solid foundations. This book will guide you from the basics of neural networks to the state-of-the-art large language models in use today.

The first part of the book introduces the main machine learning concepts and paradigms. It covers the mathematical foundations, the structure, and the training algorithms of neural networks, and dives into the essence of deep learning. The second part introduces convolutional networks for computer vision. We’ll learn how to solve image classification, object detection, instance segmentation, and image generation tasks. The third part focuses on the attention mechanism and transformers – the core network architecture of large language models. We’ll discuss new types of advanced tasks they can solve, such as chatbots and text-to-image generation.

By the end of this book, you’ll have a thorough understanding of the inner workings of deep neural networks. You’ll be able to develop new models and adapt existing ones to solve your own tasks. You’ll also have sufficient understanding to continue your research and stay up to date with the latest advancements in the field.
Table of Contents (17 chapters)

  • Part 1: Introduction to Neural Networks
  • Part 2: Deep Neural Networks for Computer Vision
  • Part 3: Natural Language Processing and Transformers
  • Part 4: Developing and Deploying Deep Neural Networks

Understanding the attention mechanism

In this section, we’ll discuss several iterations of the attention mechanism in the order that they were introduced.

Bahdanau attention

The first attention iteration (Neural Machine Translation by Jointly Learning to Align and Translate, https://arxiv.org/abs/1409.0473), known as Bahdanau attention, extends the seq2seq model so that the decoder can work with all encoder hidden states, not just the last one. It is an addition to the existing seq2seq model rather than an independent entity. The following diagram shows how Bahdanau attention works:

Figure 7.2 – The attention mechanism

Don’t worry – it looks scarier than it is. We’ll go through this diagram from top to bottom. The attention mechanism works by plugging an additional context vector, $\mathbf{c}_t$, between the encoder and the decoder. The hidden decoder state $\mathbf{s}_t$ at time $t$ is now a function not only of the hidden state and decoder output at step $t-1$, but also of the context vector $\mathbf{c}_t$.
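To make the dataflow concrete, here is a minimal PyTorch sketch of Bahdanau-style additive attention. It implements the formulas from the paper linked above – alignment scores $e_{t,i} = \mathbf{v}^\top \tanh(\mathbf{W}_s \mathbf{s}_{t-1} + \mathbf{W}_h \mathbf{h}_i)$, attention weights $\alpha_{t,i}$ obtained by applying softmax over the scores, and the context $\mathbf{c}_t = \sum_i \alpha_{t,i} \mathbf{h}_i$ – but the class and parameter names are illustrative rather than taken from the book’s code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BahdanauAttention(nn.Module):
    """Additive (Bahdanau) attention: scores every encoder hidden state
    against the previous decoder state, then returns their weighted sum
    as the context vector c_t."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.W_s = nn.Linear(hidden_size, hidden_size, bias=False)  # projects s_{t-1}
        self.W_h = nn.Linear(hidden_size, hidden_size, bias=False)  # projects each h_i
        self.v = nn.Linear(hidden_size, 1, bias=False)  # reduces each score to a scalar

    def forward(self, s_prev: torch.Tensor, encoder_states: torch.Tensor):
        # s_prev: (batch, hidden) -- previous decoder state s_{t-1}
        # encoder_states: (batch, seq_len, hidden) -- all encoder states h_1..h_n
        scores = self.v(torch.tanh(
            self.W_s(s_prev).unsqueeze(1) + self.W_h(encoder_states)
        )).squeeze(-1)  # (batch, seq_len) alignment scores e_{t,i}
        alphas = F.softmax(scores, dim=-1)  # weights sum to 1 over the source steps
        # Weighted sum of encoder states -> context vector c_t: (batch, hidden)
        c_t = torch.bmm(alphas.unsqueeze(1), encoder_states).squeeze(1)
        return c_t, alphas


# Toy usage: a batch of 2 sequences, 7 source steps, hidden size 16
attn = BahdanauAttention(hidden_size=16)
c_t, alphas = attn(torch.randn(2, 16), torch.randn(2, 7, 16))
print(c_t.shape, alphas.shape)  # torch.Size([2, 16]) torch.Size([2, 7])
```

The returned $\mathbf{c}_t$ is what the diagram plugs between the encoder and the decoder: at each step, the decoder combines it with its previous state and output to compute the new state $\mathbf{s}_t$.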
