Style transfer in action
The final piece of the puzzle is to put all the building blocks together and perform style transfer in action! The art/style and content images are available in the data directory for reference. The following snippet outlines how the loss and gradients are evaluated. We also write back outputs at regular intervals (every 5, 10, and so on, iterations) to understand how the process of neural style transfer transforms the images under consideration after a certain number of iterations, as depicted in the following snippet:
from scipy.optimize import fmin_l_bfgs_b
# imageio.imwrite replaces the deprecated scipy.misc.imsave
from imageio import imwrite
import time

result_prefix = 'st_res_' + TARGET_IMG.split('.')[0]
iterations = 20

# Run scipy-based optimization (L-BFGS) over the pixels of the
# generated image so as to minimize the neural style loss.
# This is our initial state: the target image.
# Note that `scipy.optimize.fmin_l_bfgs_b` can only process flat
# vectors.
x = preprocess_image(TARGET_IMG, height=img_height...
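For completeness, here is a minimal sketch of how the rest of the optimization loop might look. It assumes the evaluator object (exposing loss and grads methods for fmin_l_bfgs_b), the deprocess_image helper, and the img_height and img_width values from the earlier building blocks; treat it as an illustration under those assumptions rather than a definitive listing.

# L-BFGS works on flat vectors, so flatten the preprocessed image
x = x.flatten()

for i in range(iterations):
    print('Start of iteration', i + 1)
    start_time = time.time()
    # One L-BFGS step: evaluator.loss and evaluator.grads are assumed
    # to compute the style-transfer loss and its gradients w.r.t. x
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x,
                                     fprime=evaluator.grads, maxfun=20)
    print('Current loss value:', min_val)
    # Write back the generated image every 5 iterations (5, 10, ...)
    if (i + 1) % 5 == 0:
        img = deprocess_image(x.copy().reshape((img_height, img_width, 3)))
        fname = result_prefix + '_iter{}.png'.format(i + 1)
        imwrite(fname, img)
        print('Image saved as', fname)
    end_time = time.time()
    print('Iteration {} completed in {:.2f}s'.format(i + 1, end_time - start_time))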