Summary
In this chapter, we first gave an overview of the different neural style transfer methods developed since 2015. Then we showed how to train a second-generation style transfer model that's fast enough to stylize an image on a mobile device in a few seconds. After that, we covered how to use the model in both an iOS app and an Android app, each built from scratch with a minimalist approach in fewer than 100 lines of code. Finally, we talked about how to use the TensorFlow Magenta multi-style neural transfer model, which packs 26 amazing art styles into a single small model, in both iOS and Android apps.
In the next chapter, we'll explore another task that's deemed intelligent when demonstrated by us humans or our best friends: recognizing voice commands. Who wouldn't want our dogs to understand commands such as "sit," "come," and "no," or our babies to respond to "yes," "stop," or "go"? Let's see how we can develop mobile apps that recognize voice commands just like they do.