Last week, you learned how to take a TensorFlow model and convert it to a TensorFlow Lite model. This week, you will learn how to take that model and get it to run in an Android app. There are a lot of technical challenges to getting these things to run on a mobile device. There are so many, and I think my favorite, the one learners will really come to understand, is the difference between the data format that something like MobileNet was trained on and the data format that the device is producing. MobileNet expects a 224 by 224 by 3 RGB image, because it was trained on ImageNet, whereas the camera on your device may produce something very different. So how do we bridge the gap between those two?

Yeah, one of the problems that bedevils a lot of practical machine learning projects is when you train on one distribution of data, but you have to test and evaluate on a somewhat different distribution. This would be a microcosm of that: you have a model trained on ImageNet's 224 by 224 by 3 RGB normalized images, your mobile device gives you different images, and you have to make it work.

Yeah, so that's a great programming skill to learn, and it's one of the things we'll be going over. Another challenge, of course, is that when you're running an interpreter on Android, the interpreter is taking input tensors and giving output tensors. How do you parse those output tensors to see what the model is predicting, and then update the user interface around that? So those are some of the skills we'll be learning this week.

If the learner has an Android device, then with the materials from this week, they will be able to get an APK file that they can install on their own Android device and run. That's right. And what if they don't have an Android device?

So if you don't have an Android device, one of the things that's really neat is that the Android Studio tool comes with an emulator, and that emulator can use a number of different Android images, emulating different Android phones. So you'll be able to run all three apps that you'll be learning about this week on the emulator. In addition, some of the apps use a camera for a live camera feed. So if your development machine has a camera, like a laptop with a webcam, you can use that to emulate the camera. Even if you don't have a camera, the emulator gives you a 3D scene, like a living room with furniture and things like that in it, and you can do your classification using that. So you'll be good to go with everything.

That's pretty cool. Yeah, cool. So there's a lot going on this week. For those of you who are primarily machine learning engineers, I hope that this week will also help you take a small step into mobile development. Let's get started with the next video.
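To make the two challenges above concrete, here is a minimal Kotlin sketch of the idea: reshaping a camera Bitmap into the 224 by 224 by 3 normalized input a MobileNet-style model expects, then feeding it through a TensorFlow Lite Interpreter and reading the output tensor to find the top prediction. The class count, normalization constants, and label list are assumptions for illustration; the apps in this week's materials handle these details in their own way.

```kotlin
import android.graphics.Bitmap
import org.tensorflow.lite.Interpreter
import java.io.File
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Assumed constants: a float MobileNet model taking 224x224x3 inputs
// normalized to roughly [-1, 1]. Adjust to match your actual model.
const val INPUT_SIZE = 224
const val NUM_CLASSES = 1001
const val IMAGE_MEAN = 127.5f
const val IMAGE_STD = 127.5f

// Bridge the data-format gap: take whatever Bitmap the camera produced,
// resize it to 224x224, and pack the R, G, B channels into a float
// ByteBuffer in the layout the model was trained on.
fun bitmapToInputBuffer(bitmap: Bitmap): ByteBuffer {
    val scaled = Bitmap.createScaledBitmap(bitmap, INPUT_SIZE, INPUT_SIZE, true)
    val buffer = ByteBuffer.allocateDirect(4 * INPUT_SIZE * INPUT_SIZE * 3)
        .order(ByteOrder.nativeOrder())
    val pixels = IntArray(INPUT_SIZE * INPUT_SIZE)
    scaled.getPixels(pixels, 0, INPUT_SIZE, 0, 0, INPUT_SIZE, INPUT_SIZE)
    for (pixel in pixels) {
        // Each pixel is a packed ARGB int; extract R, G, B and normalize.
        buffer.putFloat((((pixel shr 16) and 0xFF) - IMAGE_MEAN) / IMAGE_STD)
        buffer.putFloat((((pixel shr 8) and 0xFF) - IMAGE_MEAN) / IMAGE_STD)
        buffer.putFloat(((pixel and 0xFF) - IMAGE_MEAN) / IMAGE_STD)
    }
    return buffer
}

// Run the interpreter and parse its output tensor into a label + score,
// which the app can then use to update the user interface.
fun classify(modelFile: File, labels: List<String>, bitmap: Bitmap): Pair<String, Float> {
    val interpreter = Interpreter(modelFile)
    val input = bitmapToInputBuffer(bitmap)
    // Output tensor shape is [1, NUM_CLASSES]: one score per label.
    val output = Array(1) { FloatArray(NUM_CLASSES) }
    interpreter.run(input, output)
    interpreter.close()
    val scores = output[0]
    val best = scores.indices.maxByOrNull { scores[it] } ?: 0
    return labels[best] to scores[best]
}
```

In a real app you would keep the Interpreter open across frames rather than creating and closing it per image, but the shape of the work is the same: convert the device's image format to the model's, run inference, and read the output tensor.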