Hands-On Computer Vision with TensorFlow 2
By: Benjamin Planche, Eliot Andres
Overview of this book

Computer vision solutions are becoming increasingly common, making their way into fields such as health, automobile, social media, and robotics. This book will help you explore TensorFlow 2, the brand new version of Google's open source framework for machine learning. You will understand how to benefit from using convolutional neural networks (CNNs) for visual tasks.

Hands-On Computer Vision with TensorFlow 2 starts with the fundamentals of computer vision and deep learning, teaching you how to build a neural network from scratch. You will discover the features that have made TensorFlow the most widely used AI library, along with its intuitive Keras interface. You'll then move on to building, training, and deploying CNNs efficiently.

Complete with concrete code examples, the book demonstrates how to classify images with modern solutions, such as Inception and ResNet, and extract specific content using You Only Look Once (YOLO), Mask R-CNN, and U-Net. You will also build generative adversarial networks (GANs) and variational autoencoders (VAEs) to create and edit images, and long short-term memory networks (LSTMs) to analyze videos. In the process, you will acquire advanced insights into transfer learning, data augmentation, domain adaptation, and mobile and web deployment, among other key concepts.

By the end of the book, you will have both the theoretical understanding and practical skills to solve advanced computer vision problems with TensorFlow 2.0.
Table of Contents (16 chapters)
1. Section 1: TensorFlow 2 and Deep Learning Applied to Computer Vision
2. Computer Vision and Neural Networks
5. Section 2: State-of-the-Art Solutions for Classic Recognition Problems
6. Influential Classification Tools
9. Section 3: Advanced Concepts and New Frontiers of Computer Vision
10. Training on Complex and Scarce Datasets

LSTM inner workings

First, let's detail how the gates are computed:
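The gate equations can be written in their standard LSTM form as follows; the per-gate weight matrices W_f, W_i, W_o, the biases b_f, b_i, b_o, and the bracket notation for the concatenation of h_{t-1} and x_t are the usual conventions and are assumed here rather than quoted from the book:

\[
f_t = \sigma\big(W_f \cdot [h_{t-1}, x_t] + b_f\big) \qquad
i_t = \sigma\big(W_i \cdot [h_{t-1}, x_t] + b_i\big) \qquad
o_t = \sigma\big(W_o \cdot [h_{t-1}, x_t] + b_o\big)
\]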

As detailed in the previous equations, the three gates are computed using the same principle: a weight matrix (W) is multiplied by the previous output (h_{t-1}) and the current input (x_t). Notice that the activation function is the sigmoid (σ). As a consequence, the gate values are always between 0 and 1.

The candidate cell state is computed in a similar fashion. However, the activation function used is a hyperbolic tangent instead of the sigmoid:
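In the standard formulation, the candidate cell state (written here as \tilde{c}_t; the symbol is an assumption) is:

\[
\tilde{c}_t = \tanh\big(W_c \cdot [h_{t-1}, x_t] + b_c\big)
\]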

Notice that this formula is exactly the same as the one used to compute h_t in the basic RNN architecture. However, h_t was the hidden state, whereas here we are computing the candidate cell state. To compute the new cell state, we combine the previous cell state with the candidate cell state. Both are gated by the forget and input gates, respectively:
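Written out in the standard way, with ⊙ denoting element-wise multiplication, this update is:

\[
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
\]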

Finally, the LSTM hidden state (output) will be computed from the cell...
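To make the flow of these computations concrete, here is a minimal NumPy sketch of a single LSTM step. It follows the standard equations above, including the usual output equation h_t = o_t ⊙ tanh(c_t); the function and parameter names are illustrative and are not taken from the book's code.

import numpy as np

def sigmoid(x):
    """Logistic sigmoid, so gate values stay in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W_f, W_i, W_o, W_c, b_f, b_i, b_o, b_c):
    """One LSTM time step following the standard equations.

    x_t:    input vector at time t,        shape (input_dim,)
    h_prev: previous hidden state h_{t-1}, shape (hidden_dim,)
    c_prev: previous cell state c_{t-1},   shape (hidden_dim,)
    W_*:    weight matrices, shape (hidden_dim, hidden_dim + input_dim)
    b_*:    bias vectors,    shape (hidden_dim,)
    """
    z = np.concatenate([h_prev, x_t])      # [h_{t-1}, x_t]

    f_t = sigmoid(W_f @ z + b_f)           # forget gate
    i_t = sigmoid(W_i @ z + b_i)           # input gate
    o_t = sigmoid(W_o @ z + b_o)           # output gate
    c_cand = np.tanh(W_c @ z + b_c)        # candidate cell state

    c_t = f_t * c_prev + i_t * c_cand      # new cell state
    h_t = o_t * np.tanh(c_t)               # new hidden state (output)
    return h_t, c_t

# Tiny usage example with random parameters.
rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 3
x_t = rng.normal(size=input_dim)
h_prev = np.zeros(hidden_dim)
c_prev = np.zeros(hidden_dim)
weights = {name: rng.normal(scale=0.1, size=(hidden_dim, hidden_dim + input_dim))
           for name in ("W_f", "W_i", "W_o", "W_c")}
biases = {name: np.zeros(hidden_dim) for name in ("b_f", "b_i", "b_o", "b_c")}
h_t, c_t = lstm_step(x_t, h_prev, c_prev, **weights, **biases)
print(h_t.shape, c_t.shape)  # (3,) (3,)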
