
Deep Learning for Natural Language Processing
The two types of gradient problems that have been identified are exploding gradients and vanishing gradients.

Exploding gradients: as the name indicates, this happens when gradients grow to very large values. This is one of the problems that RNN architectures can encounter with larger timesteps. It can happen when each of the partial derivatives in the chain is larger than 1, so that multiplying them together produces an even larger value. These large gradient values cause a dramatic shift in the weights each time they are adjusted through backpropagation, leading to a network that doesn't learn well.
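The multiplication effect can be made concrete with a minimal sketch (not from the book): if each local partial derivative along the backpropagation chain is a constant slightly above 1, the accumulated gradient grows exponentially with the number of timesteps. The function name and the value 1.5 are illustrative assumptions.

```python
# Illustrative sketch: multiplying partial derivatives larger than 1
# across many timesteps makes the overall gradient explode.
def product_of_partials(partial, timesteps):
    grad = 1.0
    for _ in range(timesteps):
        grad *= partial  # chain rule: one factor per timestep
    return grad

# Each local derivative is only 1.5, but over 50 timesteps
# the product exceeds 600 million.
print(product_of_partials(1.5, 50))
```

The same loop with a factor below 1 shrinks toward zero instead, which is exactly the vanishing-gradient case discussed below.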
There are techniques to mitigate this issue, such as gradient clipping, wherein the gradient is rescaled whenever its norm exceeds a set threshold.
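Norm-based gradient clipping can be sketched in a few lines of NumPy (an assumed minimal implementation, not the book's code): when the gradient's L2 norm exceeds the threshold, the gradient is scaled down so its norm equals the threshold while its direction is preserved.

```python
import numpy as np

def clip_gradient(grad, threshold):
    """Rescale grad so that its L2 norm never exceeds threshold."""
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)  # shrink, keep direction
    return grad

g = np.array([3.0, 4.0])           # norm is 5.0
clipped = clip_gradient(g, 1.0)    # norm becomes 1.0
```

Deep learning frameworks ship this as a utility (for example, `torch.nn.utils.clip_grad_norm_` in PyTorch), so in practice you rarely write it by hand.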
Vanishing gradients: whether in RNNs or CNNs, vanishing gradients can be a problem whenever the calculated loss has to travel back through many layers or timesteps. In CNNs, this can occur when there are many layers with saturating activations such as sigmoid or tanh, whose small derivatives multiply together and shrink the gradient toward zero.
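The shrinking effect is easy to demonstrate with the sigmoid, whose derivative is at most 0.25. A minimal sketch (an illustrative assumption, not the book's code): multiplying that derivative across 30 layers drives the gradient to a vanishingly small value.

```python
import numpy as np

def sigmoid_derivative(x):
    """Derivative of the sigmoid: s(x) * (1 - s(x)), at most 0.25."""
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

# Backpropagating through 30 sigmoid layers at x = 0,
# where the derivative takes its maximum value of 0.25.
grad = 1.0
for _ in range(30):
    grad *= sigmoid_derivative(0.0)

print(grad)  # 0.25 ** 30, on the order of 1e-18
```

This is one reason non-saturating activations such as ReLU, and architectures such as LSTMs with additive gradient paths, are preferred in deep networks.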