Chapter 11. Using Custom Models in iOS
In Chapter 9, you looked at various scenarios for creating custom models using TensorFlow Lite Model Maker, Cloud AutoML Vision Edge, and TensorFlow with transfer learning. In this chapter, you'll look at how to integrate those models into an iOS app, focusing on two scenarios: image recognition and text classification. If you've landed here after reading Chapter 10, our discussion will be very similar, because it's not always as easy as dropping a model into your app and having it just work. On Android, models created with TensorFlow Lite Model Maker ship with metadata, and a task library makes integration much easier. On iOS, you don't have the same level of support, so passing data into a model and parsing the results back involves getting very low level, converting your internal datatypes into the underlying tensors the model understands. After you're done with this chapter, you'll understand the basics of how to do that, though your scenarios may differ greatly depending on your data! One exception is when you use a custom model type that is supported by ML Kit; we'll explore how to use the ML Kit APIs in iOS to handle a custom model.
Bridging Models to iOS
When you train a model and convert it into TensorFlow Lite’s TFLite format, you’ll have a binary blob that you add to your app as an asset. Your app will load this into the TensorFlow Lite interpreter, and you’ll have to code for ...
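As a rough sketch of what that looks like, here is one way to load a bundled TFLite model and run inference using the `Interpreter` class from the TensorFlowLiteSwift pod. The model filename, the Float32 input values, and the single input/output tensor layout are assumptions for illustration; a real model's shapes and datatypes will dictate the conversion code you write.

```swift
import TensorFlowLite  // from the TensorFlowLiteSwift pod

// Hypothetical example: assumes "model.tflite" is bundled with the app
// and that the model has a single Float32 input and output tensor.
guard let modelPath = Bundle.main.path(forResource: "model", ofType: "tflite") else {
    fatalError("Model not found in app bundle")
}

do {
    // Load the binary blob into the TensorFlow Lite interpreter.
    let interpreter = try Interpreter(modelPath: modelPath)
    try interpreter.allocateTensors()

    // Convert the app's data (here, a [Float]) into the raw bytes
    // the underlying input tensor expects.
    let input: [Float] = [1.0, 2.0, 3.0, 4.0]
    let inputData = input.withUnsafeBufferPointer { Data(buffer: $0) }
    try interpreter.copy(inputData, toInputAt: 0)

    // Run inference, then read the output tensor's bytes back as Float32s.
    try interpreter.invoke()
    let outputTensor = try interpreter.output(at: 0)
    let results = outputTensor.data.withUnsafeBytes {
        Array($0.bindMemory(to: Float.self))
    }
    print(results)
} catch {
    print("Inference failed: \(error)")
}
```

Note that all of the datatype conversion happens in your code: the interpreter only sees untyped buffers, which is the low-level work this chapter walks through.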