Welcome to week four of this course on TensorFlow Lite. In the first few weeks, you saw what TensorFlow Lite is and how it works to bring models from your developer workstation to mobile devices like Android or iOS. But perhaps even more exciting than this is the ability to run models on small systems like the Raspberry Pi, or even on microcontrollers. The scenarios this opens up are amazing, including being able to run inference at the edge and to have intelligence built into devices like these. This week, you'll take a look at how you can code for these types of devices.

It's not just for fun projects like the ones I've been showing here. When you look at the usage of smart IoT devices and their current expected rate of growth, you can see that they're outpacing PCs and smartphones, and are expected to overtake them in numbers by about 2022, and to continue growing from there. There are many reasons why that might be the case, but some worth calling out are ongoing ML research, shrinking models, and technologies such as TensorFlow Lite making smaller, faster models possible. Then there are on-device accelerators, like the GPUs and the neural network APIs we saw on mobile devices. These used to be in the realm of big, powerful desktop computers, but now they've shrunk to mobile, and they continue to shrink to the edge. Of course, there are also ongoing scenarios around ML and intelligent apps that require inference to be done locally instead of on a remote cloud server. As devices get smaller, they may not always be connected.

So there are many advantages to on-device ML, where inference runs locally on the device. You could have access to a high-performance GPU or TPU co-processor to run your models very quickly. You're also, of course, avoiding the latency of a round trip to a cloud-based model. This can also ensure that you have better privacy: you're not passing sensitive data up to a cloud service for inference. This can be very useful in business applications that involve flows of transactions and sensitive information, where confidentiality and trust need to be maintained. These devices can also gather additional information from their sensors, with the assurance that the data never leaves the device and is accessible whenever you need it. It's not only that these edge devices give you better privacy; they can also be made to work end to end without the need to connect to the web. Another hidden benefit is that if you aren't using the Wi-Fi or cellular antenna on a device to communicate with a cloud service, you're likely making a huge saving in power consumption and battery life.

In addition to devices being able to run ML, there's also the ability to extend them with some more ML power. The Coral product line includes a USB accelerator that allows you to deploy models and execute them on devices that may not have enough power to run them on their own. These products come with an Edge TPU built in, a processor that's specifically designed to run TensorFlow-based models. The accelerator on the right here is a standalone USB-powered device, and the Dev Board on the left is a single-board computer containing an Edge TPU processor. A couple of short sketches below give a feel for what this looks like in code.
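To make the Raspberry Pi scenario concrete, here is a minimal sketch, not code from this course, of running inference with the lightweight tflite_runtime package instead of the full TensorFlow. The model file name and the all-zeros dummy input are hypothetical placeholders; you would substitute your own converted model and real input data:

```python
# A minimal sketch of TensorFlow Lite inference on a small device such as
# a Raspberry Pi, using the lightweight tflite_runtime package.
import numpy as np
from tflite_runtime.interpreter import Interpreter

# 'model.tflite' is a hypothetical placeholder for a converted model file.
interpreter = Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input matching the model's expected shape and dtype.
input_data = np.zeros(input_details[0]['shape'],
                      dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], input_data)

# Run inference entirely on the device and read back the result.
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]['index'])
print(output)
```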
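And here is a similar sketch of how the Coral USB Accelerator can be used from Python. It assumes the Edge TPU runtime library (libedgetpu) is installed on the host and that the model has been compiled for the Edge TPU; the model file name is again a placeholder:

```python
# A minimal sketch of attaching the Edge TPU delegate so that supported
# operations run on a Coral USB Accelerator instead of the host CPU.
# Assumes libedgetpu is installed and the model was compiled for the Edge TPU.
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path='model_edgetpu.tflite',  # hypothetical Edge TPU-compiled model
    experimental_delegates=[load_delegate('libedgetpu.so.1')])  # Linux name
interpreter.allocate_tensors()

# From here, inference proceeds exactly as in the previous sketch; the
# delegate transparently offloads supported ops to the Edge TPU.
```

The point of the delegate mechanism is that your inference code stays the same whether or not an accelerator is present; only the interpreter construction changes.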