Using the checkpoint callback in Keras
In Chapter 2, Using Deep Learning to Solve Regression Problems, we saw the .save()
method, which allowed us to save our Keras model after we were done training. Wouldn't it be nice, though, if we could write our weights to disk periodically, so that in the preceding example we could go back in time and recover a version of the model from before it started to overfit? We could then stop right there and use the lowest-variance version of the network.
That's exactly what the ModelCheckpoint callback does for us. Let's take a look:
from keras.callbacks import ModelCheckpoint

checkpoint_callback = ModelCheckpoint(filepath="./model-weights.{epoch:02d}-{val_acc:.6f}.hdf5", monitor='val_acc', verbose=1, save_best_only=True)
What ModelCheckpoint will do for us is save our model at scheduled intervals. Here, we are telling ModelCheckpoint to save a copy of the model every time we hit a new best validation accuracy (val_acc). We could have also monitored validation loss or any other metric we had specified.
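To make the mechanics concrete, the following is a minimal sketch of how the callback is wired into training. The model architecture and the randomly generated data are purely hypothetical stand-ins for your own; also note that in older Keras 2.x versions the logged metric is named val_acc, while newer tf.keras versions log it as val_accuracy, so the monitor argument and filename pattern must match your version.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint

# Hypothetical toy data and model, used only to illustrate the callback.
X_train = np.random.rand(1000, 20)
y_train = (np.random.rand(1000) > 0.5).astype(int)
X_val = np.random.rand(200, 20)
y_val = (np.random.rand(200) > 0.5).astype(int)

model = Sequential([
    Dense(32, activation='relu', input_shape=(20,)),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

checkpoint_callback = ModelCheckpoint(
    filepath="./model-weights.{epoch:02d}-{val_acc:.6f}.hdf5",
    monitor='val_acc', verbose=1, save_best_only=True)

# The callback is passed to fit() in the callbacks list; Keras invokes it at the
# end of every epoch and, because save_best_only=True, writes a new file only
# when val_acc improves on the best value seen so far.
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=20,
          callbacks=[checkpoint_callback])

# Later, the weights from the best epoch can be restored from disk, for example:
# model.load_weights("./model-weights.07-0.812500.hdf5")  # filename is illustrative

Because the epoch number and monitored metric are interpolated into the filename, each improvement produces a separately named file, which is what lets us go back and pick the checkpoint from just before the model began to overfit.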