The first formulation of a recurrent-like neural network was created by John Hopfield in 1982. He had two motivations for doing so:
Essentially, an RNN processes input data one time step at a time, storing information in its memory for use at the next step. The input is first transformed into a sequence of vectors that the network can process. The RNN then processes this vector sequence one element at a time, passing along the hidden state from the previous step as it goes. The hidden state retains information from earlier steps, acting as a form of memory. At each step, the RNN combines the current input with the previous hidden state using a tanh function, which compresses the values between -1 and 1.
This is how a basic RNN functions: it requires relatively little computation and works well with short sequences.
Figure 9.16: RNN data flow
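To make the update rule concrete, here is a minimal sketch of a single recurrent step written directly in TensorFlow. The dimensions and the weight names (W_x, W_h, b) are illustrative assumptions, not taken from the book; in practice you would use a built-in layer such as tf.keras.layers.SimpleRNN, which implements the same computation with learned weights.

import tensorflow as tf

# Illustrative sizes (assumptions): input vectors of size 4, hidden state of size 3
input_dim, hidden_dim = 4, 3

# Hypothetical weights; in a real model these would be learned during training
W_x = tf.random.normal([input_dim, hidden_dim])   # input-to-hidden weights
W_h = tf.random.normal([hidden_dim, hidden_dim])  # hidden-to-hidden weights
b = tf.zeros([hidden_dim])                        # bias

def rnn_step(x_t, h_prev):
    # Combine the current input with the previous hidden state, then
    # compress the result between -1 and 1 with tanh
    return tf.tanh(tf.matmul(x_t, W_x) + tf.matmul(h_prev, W_h) + b)

# Process a 5-step sequence one vector at a time, carrying the hidden
# state (the network's memory) forward into each step
sequence = tf.random.normal([5, 1, input_dim])  # (time steps, batch, features)
h = tf.zeros([1, hidden_dim])                   # initial hidden state
for x_t in sequence:
    h = rnn_step(x_t, h)

print(h)  # final hidden state summarizing the whole sequence

Note that the loop makes the sequential nature of the computation explicit: each step's output depends on every step before it, which is exactly what the tanh-based update in the preceding paragraph describes.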
Now turn your attention...