In such networks, information flows from the input layer to the output layer, with one or more hidden layers in between. In Figure 1.13, the three neurons A, B, and C belong to the input layer; the neuron H belongs to the output layer; and the neurons D, E, F, and G belong to the hidden layer. The first layer takes an input, x, of size 2; the second (hidden) layer takes the three activation values of the previous layer as input; and so on. Layers in which each neuron is connected to all the values of the previous layer are called fully connected or dense layers.
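As a minimal sketch, the network of Figure 1.13 can be written out in NumPy. The sigmoid activation, the random weights, and the sample input below are illustrative assumptions, not values from the text; only the layer sizes follow the figure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    """Element-wise sigmoid activation (an assumed choice of activation)."""
    return 1 / (1 + np.exp(-z))

# Weights and biases for the three layers of Figure 1.13:
# the first layer (neurons A, B, C) maps the size-2 input to 3 activations,
# the hidden layer (D, E, F, G) maps those 3 activations to 4,
# and the output layer (H) maps 4 activations to a single value.
W1, b1 = rng.standard_normal((3, 2)), np.zeros(3)
W2, b2 = rng.standard_normal((4, 3)), np.zeros(4)
W3, b3 = rng.standard_normal((1, 4)), np.zeros(1)

x  = np.array([0.5, -1.0])     # input vector of size 2 (arbitrary sample)
a1 = sigmoid(W1 @ x + b1)      # activations of A, B, C
a2 = sigmoid(W2 @ a1 + b2)     # activations of D, E, F, G
y  = sigmoid(W3 @ a2 + b3)     # activation of H, the network output
print(y.shape)                 # (1,)
```

Because every neuron uses all of the previous layer's values, each layer is a single matrix-vector product followed by the activation function.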
Once again, we can compact the calculations by representing these elements with vectors and matrices. The following operations are performed by the first layer:
This can be expressed as follows:
In order to obtain the previous equation, we must define the variables as follows:
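Under the standard convention (assumed here: W stacks the per-neuron weight vectors as its rows, and b gathers the per-neuron biases into one vector), the compact matrix form of a layer can be verified numerically against the neuron-by-neuron computation:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(1)
x = rng.standard_normal(2)     # input of size 2, as in Figure 1.13

# Per-neuron view: each of the 3 neurons has its own weight vector w_i
# and bias b_i, and computes a_i = f(w_i . x + b_i).
w = [rng.standard_normal(2) for _ in range(3)]
b = rng.standard_normal(3)
per_neuron = np.array([sigmoid(w[i] @ x + b[i]) for i in range(3)])

# Compact view: stacking the w_i as rows of W turns the whole layer
# into a single expression, a = f(W x + b).
W = np.stack(w)                # shape (3, 2)
compact = sigmoid(W @ x + b)

assert np.allclose(per_neuron, compact)
```

The two views produce identical activations; the matrix form simply batches the dot products into one operation, which is how dense layers are implemented in practice.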
The activation...