Back to backpropagation
We covered forward propagation in detail in Chapter 1, Neural Network and Artificial Intelligence Concepts, and touched briefly on backpropagation using gradient descent. Backpropagation is one of the key concepts for understanding neural networks: it relies on calculus to update the weights and biases in each layer. Backpropagation of errors is similar to learning from mistakes: we correct our mistakes (errors) in every iteration until we reach a point called convergence. The goal of backpropagation is to correct the weights in each layer and thereby minimize the overall error at the output layer.
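To make this weight-correction idea concrete, the following is a minimal NumPy sketch of one forward pass and one backpropagation update for a tiny network with a single hidden layer. The sigmoid activation, squared-error loss, learning rate of 0.5, and variable names are illustrative assumptions, not details taken from this chapter.

```
# A minimal sketch of one forward and backward pass for a single hidden layer,
# assuming a sigmoid activation, squared-error loss, and an illustrative
# learning rate of 0.5; names and values here are hypothetical.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# One training example: 2 inputs, 1 target output.
x = np.array([0.05, 0.10])          # inputs
y = np.array([0.99])                # desired output
lr = 0.5                            # learning rate (assumed)

# Random initial weights and biases (hidden layer: 2 neurons, output: 1).
W1 = rng.normal(size=(2, 2)); b1 = rng.normal(size=2)
W2 = rng.normal(size=(1, 2)); b2 = rng.normal(size=1)

# Forward propagation: weighted sum plus bias, then activation, layer by layer.
h = sigmoid(W1 @ x + b1)            # hidden layer output
o = sigmoid(W2 @ h + b2)            # network output
error = 0.5 * np.sum((y - o) ** 2)  # overall error at the output layer

# Backpropagation: push the error backwards with the chain rule and
# correct each layer's weights and biases by gradient descent.
delta_o = (o - y) * o * (1 - o)             # error signal at the output layer
delta_h = (W2.T @ delta_o) * h * (1 - h)    # error signal at the hidden layer

W2 -= lr * np.outer(delta_o, h); b2 -= lr * delta_o
W1 -= lr * np.outer(delta_h, x); b1 -= lr * delta_h
```

Repeating this pair of passes over many iterations is what drives the output error down toward convergence.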
Learning in feed-forward neural networks relies heavily on backpropagation. The usual steps of forward propagation and error correction are as follows:
- Start forward propagation through the network by assigning random weights and biases to each of the neurons in the hidden layer.
- Compute the weighted sum, sum(weight * input) + bias, at each neuron.
- Apply...