Evaluating our image captioning deep learning model
Training a model without evaluating its performance tells us nothing about how well it actually works. We will therefore evaluate our deep learning model on the test dataset, which contains 1,000 distinct images from the Flickr8K
dataset. We start by loading the usual dependencies, in case they are not already in memory:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

pd.options.display.max_colwidth = 500

%matplotlib inline
Loading up data and models
The next steps involve loading the necessary data, models, and other assets from disk into memory. We first load our test dataset and our trained deep learning models:
# load test dataset
test_df = pd.read_csv('image_test_dataset.tsv', delimiter='\t')

# load the models
from keras.models import load_model

model1 = load_model('ic_model_rmsprop_b256ep30.h5')
model2 = load_model('ic_model_rmsprop_b256ep50.h5')
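As a quick sanity check on the TSV-loading step, the following sketch shows how pd.read_csv with delimiter='\t' parses tab-separated data. Note that the column names image_id and caption here are illustrative assumptions, not necessarily the actual schema of image_test_dataset.tsv:

```python
import io

import pandas as pd

# Hypothetical TSV snippet mimicking a captioning test dataset;
# the real image_test_dataset.tsv columns may differ.
tsv_data = (
    "image_id\tcaption\n"
    "img1.jpg\ta dog runs through the grass\n"
    "img2.jpg\ta child plays on a swing\n"
)

# delimiter='\t' tells pandas to split fields on tabs instead of commas
sample_df = pd.read_csv(io.StringIO(tsv_data), delimiter='\t')
print(sample_df.shape)
print(list(sample_df.columns))
```

Inspecting the shape and columns of the loaded DataFrame right after reading is a cheap way to catch delimiter or header mistakes before running any evaluation.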
We now need to load up necessary metadata...