Implementing a feed-forward backpropagation Neural Network
In this recipe, we will implement a feed-forward neural network with backpropagation. The input to the neural network is the output of the third (and last) RBM; in other words, the reconstructed raw data (trainX) is used to train the neural network as a supervised classifier of the 10 digits. The backpropagation technique is then used to further fine-tune the classification performance.
Getting ready
This section covers the prerequisites for this recipe; a minimal setup sketch follows the list:
- The dataset is loaded and set up
- The TensorFlow package is set up and loaded
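The following is a minimal sketch of that setup. It assumes the MNIST digit data from the earlier RBM recipes is available as a CSV file whose first column is the label; the file name and column layout are assumptions for illustration, not part of this recipe.

```r
# TensorFlow package: assumes it is already installed and bound to a
# working Python TensorFlow backend, as in the earlier recipes.
library(tensorflow)

# Dataset: assumes the MNIST digits are stored in a CSV whose first
# column is the label and whose remaining columns are pixel values
# (the file name "train.csv" is illustrative).
trainData <- read.csv("train.csv")
trainX <- as.matrix(trainData[, -1]) / 255                 # features scaled to [0, 1]
trainY <- model.matrix(~ factor(trainData[, 1], 0:9) - 1)  # one-hot labels for the 10 digits
```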
How to do it...
This section covers the steps for setting up a feed-forward backpropagation Neural Network:
- Let's define the input parameters of the neural network as function parameters. The following table describes each parameter:

The neural network function will have the structure shown in the following script:
NN_train <- function(Xdata,Ydata,Xtestdata,Ytestdata,input_size, learning_rate=0.1,momentum = 0...
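The full body of this function is developed over the remaining steps of the recipe. For orientation only, here is a minimal, self-contained sketch of the same idea, a feed-forward classifier fine-tuned with backpropagation, written against the TensorFlow 1.x-style API of the R tensorflow package. The hidden-layer size, the random weight initialisation, and the momentum value are assumptions made for illustration; a DBN-style setup would initialise the weights from the pretrained RBM parameters instead.

```r
library(tensorflow)

# Placeholders for a batch of reconstructed inputs and one-hot labels
# (784 pixels and 10 digit classes, as for MNIST).
X <- tf$placeholder(tf$float32, shape(NULL, 784L))
Y <- tf$placeholder(tf$float32, shape(NULL, 10L))

# One hidden layer with randomly initialised weights; the hidden size
# (900) is an assumption for illustration.
W1 <- tf$Variable(tf$random_normal(shape(784L, 900L), stddev = 0.01))
b1 <- tf$Variable(tf$zeros(shape(900L)))
W2 <- tf$Variable(tf$random_normal(shape(900L, 10L), stddev = 0.01))
b2 <- tf$Variable(tf$zeros(shape(10L)))

hidden <- tf$nn$sigmoid(tf$matmul(X, W1) + b1)
logits <- tf$matmul(hidden, W2) + b2

# Cross-entropy cost; minimising it by gradient descent with momentum
# is the backpropagation fine-tuning step (learning rate 0.1 as in the
# signature above; the momentum value 0.1 is an assumption).
cost <- tf$reduce_mean(
  tf$nn$softmax_cross_entropy_with_logits(logits = logits, labels = Y))
train_op <- tf$train$MomentumOptimizer(0.1, 0.1)$minimize(cost)

accuracy <- tf$reduce_mean(tf$cast(
  tf$equal(tf$argmax(logits, 1L), tf$argmax(Y, 1L)), tf$float32))

# Train for a few epochs on the reconstructed data from the last RBM
# (trainX / trainY as prepared in the Getting ready sketch).
sess <- tf$Session()
sess$run(tf$global_variables_initializer())
for (epoch in 1:10) {
  sess$run(train_op, feed_dict = dict(X = trainX, Y = trainY))
  cat("epoch", epoch, "accuracy",
      sess$run(accuracy, feed_dict = dict(X = trainX, Y = trainY)), "\n")
}
```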