Using TensorFlow Serving
In this section, we will show you how to set up your RNN model to predict spam or ham text messages with TensorFlow Serving. We will first illustrate how to save the model in the protobuf format, and will then load that model into a local server that listens on port 9000 for input.
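To make the serving step concrete, the sketch below shows the expected directory layout and the `tensorflow_model_server` invocation. The base path and model name here (`/tmp/spam_ham_model`, `spam_ham`) are placeholders for illustration, not the recipe's actual values, and the server command is echoed rather than executed so the sketch runs even where TensorFlow Serving is not installed:

```shell
# Hypothetical base path and model name -- substitute your own.
MODEL_BASE=/tmp/spam_ham_model
PORT=9000

# TensorFlow Serving expects numeric version subdirectories under the base path.
mkdir -p "$MODEL_BASE/1"

# Launch command (echoed here; drop the echo to actually start the server):
echo tensorflow_model_server --port=$PORT --model_name=spam_ham --model_base_path=$MODEL_BASE
```

Once running, the server watches the base path and automatically serves the highest-numbered version directory it finds.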
Getting ready
We start this section by encouraging the reader to read through the official documentation and the short tutorials on the TensorFlow Serving site, available at https://www.tensorflow.org/serving/serving_basic.
For this example, we will reuse most of the RNN code from the Predicting Spam with RNNs recipe in Chapter 9, Recurrent Neural Networks. We will alter our model-saving code to save the protobuf model in the folder structure that TensorFlow Serving requires.
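The folder structure in question can be sketched as follows. The base directory name and version number below are illustrative assumptions, not values from the recipe; the commented `SavedModelBuilder` calls outline the TF 1.x export API that produces the protobuf file, without executing TensorFlow here:

```python
import os

# Hypothetical export location: TensorFlow Serving requires the protobuf
# model to live in a numeric version subdirectory of a model base path.
base_dir = "spam_ham_model"
version = 1
export_dir = os.path.join(base_dir, str(version))

# With TF 1.x, a SavedModelBuilder pointed at export_dir writes:
#   spam_ham_model/1/saved_model.pb    <- protobuf graph definition
#   spam_ham_model/1/variables/        <- checkpointed weights
# Roughly:
#   builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
#   builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING], ...)
#   builder.save()

print(export_dir)  # spam_ham_model/1
```

Incrementing the version number and re-exporting gives the server a new subdirectory to pick up without restarting.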
Note
Note that all scripts in this chapter should be executed from a bash command-line prompt.
For the updated installation instructions, visit the official installation site at: https://www.tensorflow.org/serving...