Using the image captioning model in iOS
Because the CNN part of the model is based on Inception v3, the same model we used in Chapter 2, Classifying Images with Transfer Learning, we can use the simpler TensorFlow pod to create our Objective-C iOS app. Follow these steps to use both the image2text_frozen_transformed.pb and image2text_frozen_transformed_memmapped.pb model files in a new iOS app:
- As in the first four steps of the Adding TensorFlow to your Objective-C iOS app section in Chapter 2, Classifying Images with Transfer Learning, create a new iOS project named Image2Text, then add a new file named Podfile with the following content:

      target 'Image2Text'
             pod 'TensorFlow-experimental'
  Then run pod install in a Terminal and open the generated Image2Text.xcworkspace file. Drag and drop the ios_image_load.h, ios_image_load.mm, tensorflow_utils.h, and tensorflow_utils.mm files from the TensorFlow iOS example Camera app, located at tensorflow/examples/ios/camera, to the Image2Text...
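
  With the utility files added, either model file can be loaded using the helpers declared in tensorflow_utils.h, which read a bundled file by name and extension. The following is a minimal sketch, not the book's exact listing; the view controller name, the instance-variable names, and the choice to load the model in a single -loadModel method are assumptions:

      // ViewController.mm (Objective-C++), assuming both .pb files are
      // bundled with the app and the four utility files are in the project
      #import "tensorflow_utils.h"
      #include <memory>

      @implementation ViewController {
        std::unique_ptr<tensorflow::Session> _session;
        std::unique_ptr<tensorflow::MemmappedEnv> _memmappedEnv;
      }

      - (void)loadModel {
        // Load the plain frozen graph...
        tensorflow::Status status =
            LoadModel(@"image2text_frozen_transformed", @"pb", &_session);
        // ...or, alternatively, the memory-mapped variant, which keeps the
        // weights out of the app's memory footprint:
        // status = LoadMemoryMappedModel(
        //     @"image2text_frozen_transformed_memmapped", @"pb",
        //     &_session, &_memmappedEnv);
        if (!status.ok()) {
          NSLog(@"Failed to load model: %s", status.ToString().c_str());
        }
      }
      @end

  Note that only one of the two calls should be used at a time; the memory-mapped model requires keeping the MemmappedEnv alive for as long as the session runs.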