Using a random forest
Random forest algorithms work by aggregating decision trees, each trained on randomly selected observations and/or randomly selected features. We will not cover how individual decision trees are trained, but we will show that there are tree-ensemble variants that can be trained with gradient boosting, whose gradients TensorFlow can calculate for us.
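As a concrete illustration, here is a minimal sketch of training a gradient-boosted trees model. It assumes the tensorflow_decision_forests (TF-DF) package is installed; the synthetic dataset and its column names are hypothetical, invented for the example:

```python
# A minimal sketch, assuming the tensorflow_decision_forests (TF-DF)
# package is installed; the synthetic data below is hypothetical.
import numpy as np
import pandas as pd
import tensorflow_decision_forests as tfdf

# Build a small synthetic binary-classification DataFrame.
rng = np.random.default_rng(seed=0)
features = rng.normal(size=(200, 2))
labels = (features[:, 0] + features[:, 1] > 0).astype(int)
df = pd.DataFrame({"x1": features[:, 0],
                   "x2": features[:, 1],
                   "label": labels})
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(df, label="label")

# Gradient-boosted trees: each new tree is fit to the gradient of the
# loss, which is the gradient computation TensorFlow performs for us.
model = tfdf.keras.GradientBoostedTreesModel()
model.fit(train_ds)
print(model.predict(train_ds)[:5])  # predicted class-1 probabilities
```

A plain random forest could be trained the same way by swapping in tfdf.keras.RandomForestModel(); the gradient-boosted variant is shown because it is the one that exercises TensorFlow's gradient machinery.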
Getting ready
Tree-based algorithms are traditionally non-smooth, because they partition the data to minimize the variance in the target outputs. Non-smooth methods do not lend themselves well to gradient-based optimization. TensorFlow relies on the functions in a model being smooth so that it can automatically calculate how to change the model parameters to minimize the loss. TensorFlow gets around this obstacle by making a smooth approximation to the decision boundary: a hard split can be approximated with a softmax or a similarly shaped function, such as a sigmoid.
Decision trees will provide a hard split on a dataset, placing each observation entirely on one side of the split, whereas a smooth approximation assigns each observation a probability of falling on either side.
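To make the contrast concrete, the following sketch compares a hard split with a sigmoid approximation. The single feature x, the threshold t, and the steepness parameter gamma are all assumptions made for this illustration, not values from the recipe:

```python
import tensorflow as tf

# Hard split: an indicator of x >= t. The step has zero gradient
# everywhere except at t, where it is undefined, so it cannot be
# optimized by gradient descent.
def hard_split(x, t):
    return tf.cast(x >= t, tf.float32)

# Soft split: a sigmoid centered at t. The gamma parameter (our choice)
# controls how sharply it approximates the step; gradients with respect
# to t now exist everywhere.
def soft_split(x, t, gamma=10.0):
    return tf.sigmoid(gamma * (x - t))

x = tf.linspace(-1.0, 1.0, 5)      # [-1.0, -0.5, 0.0, 0.5, 1.0]
print(hard_split(x, 0.0).numpy())  # [0. 0. 1. 1. 1.]
print(soft_split(x, 0.0).numpy())  # smooth values between 0 and 1
```

With the sigmoid in place, the split threshold behaves like any other differentiable model parameter, which is what allows gradient-based training to adjust it.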