Using a simple speech recognition model in iOS with Swift
In Chapter 2, Classifying Images with Transfer Learning, we created a Swift-based iOS app that uses the TensorFlow pod. Let's now create a new Swift app that uses the TensorFlow iOS libraries we manually built in the last section, and load the speech commands model in it:
1. Create a new Single View iOS project in Xcode, and set up the project in the same way as steps 1 and 2 in the previous section, except set the Language to Swift.
2. Select Xcode File | New | File..., and select Objective-C File. Enter the name RunInference. You'll see a message box asking, "Would you like to configure an Objective-C bridging header?" Click Create Bridging Header. Rename the file RunInference.m to RunInference.mm, as we'll mix C, C++, and Objective-C code to do the post-recording audio processing and recognition. We're still using Objective-C in the Swift app because to call the TensorFlow C++ code from Swift, we need an Objective-C class as a wrapper around the C++ code; a sketch of how these pieces fit together follows this list.