- [Alana] The idea of IoT, in general, is that we're continuously building devices to make our lives easier. And, as we make these devices smarter, we want them to detect things that happen in our environment, such as a room in our factory getting warmer. But it's no good if they detect that the room is getting warmer and then do nothing about it. If we detect the room is getting warmer, we may want to gather data about why, and with all that data collected, we can begin to train a model. When the training is done, we can start making inferences based on our data. Maybe the room is getting warmer because of outside temperatures, or because a fan was shut off. We then want the edge device to act quickly on local events based on these inferences.

This is where the conversation starts to shift: we have to start thinking about how to make these inferences and take that action, and this is where machine learning comes into play. You probably already know that training machine learning models requires a lot of computational power in the cloud, but making inferences against those models takes far less. And since we're dealing with edge devices in the Industrial IoT world, these inferences need to happen with little connectivity to the internet and the cloud.

So for most Industrial IoT workloads, there are clear advantages to running machine learning on your edge device. Some of the most common reasons are latency and bandwidth, as machines often generate a lot of information, and transmitting that data to the cloud can be expensive and slow. Another is availability, since the edge device needs to operate even when the internet goes down, so the devices require a degree of self-sufficiency. And the big one is privacy. Data regulations and data sovereignty, along with customers' own requirements around their data, can make it difficult to transfer that data into the cloud. You can imagine that the ability to run machine learning at the edge becomes pretty important in some of these facilities.

If you want to run machine learning at the edge with AWS, you have a few options, but the one we'll mostly talk about here is AWS IoT Greengrass. Performing inference locally on connected devices running AWS IoT Greengrass reduces latency and cost. With AWS IoT Greengrass, instead of sending all device data to the cloud to perform machine learning inference and make predictions, you can run inference directly on the device, in your local environment. As predictions are made on these edge devices, you can capture the results and analyze them to detect outliers. From there, the analyzed data can be sent back to Amazon SageMaker in the cloud, a fully managed service for building, training, and deploying ML models. You can use SageMaker to tag and reclassify data to improve the machine learning model.

Let's use an example of how a device can use ML models that are built and trained in the cloud, and then run inference locally. Remember the room example I used earlier in the video, where I'm detecting if a room in my factory is getting warmer? Well, using SageMaker, I could build a model to detect if an unexpected object, such as a human, enters this very warm room in my factory. I can optimize this model to run on my cameras, and then deploy it to predict suspicious activity and send me an alert saying it's too hot for a human to enter.
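To make that train-in-the-cloud, infer-at-the-edge pattern concrete, here is a minimal sketch of the device-side loop in Python. It is illustrative only, not actual Greengrass component code: `read_temperature_c`, the threshold "model" in `predict_overheating`, and the `upload_queue` are hypothetical stand-ins for a real sensor read, a model artifact trained in SageMaker and deployed to the device, and the messaging layer that eventually ships results back to the cloud.

```python
"""Sketch of the edge-inference loop described above.

Everything here is a stand-in: a real device would read actual
hardware, load a trained model artifact, and hand results to
AWS IoT Greengrass to forward to the cloud when connected.
"""
import json
import queue
import random
import time

def read_temperature_c() -> float:
    # Hypothetical sensor read; randomized here so the sketch runs anywhere.
    return 20.0 + random.random() * 15.0

def predict_overheating(temperature_c: float) -> bool:
    # Stand-in for running inference against a model trained in SageMaker.
    # A simple threshold plays the role of the deployed model.
    return temperature_c > 30.0

# Results are queued locally so the device keeps working offline; a separate
# uploader would drain this queue back to the cloud (e.g. to SageMaker for
# retraining) whenever connectivity returns.
upload_queue: "queue.Queue[str]" = queue.Queue()

def act_locally(temperature_c: float) -> None:
    # The local action happens immediately -- no round trip to the cloud.
    print(f"ALERT: room at {temperature_c:.1f} C -- raising local alarm")

def main() -> None:
    for _ in range(10):  # a real device would loop indefinitely
        temp = read_temperature_c()
        overheating = predict_overheating(temp)
        if overheating:
            act_locally(temp)
        # Record every inference result so the cloud model can be improved later.
        upload_queue.put(json.dumps({
            "temperature_c": temp,
            "overheating": overheating,
            "ts": time.time(),
        }))
        time.sleep(1.0)
    print(f"{upload_queue.qsize()} results queued for upload to the cloud")

if __name__ == "__main__":
    main()
```

The key design point the sketch shows is the split the video describes: the decision and the alert happen on the device with no network dependency, while the inference results are only queued for the cloud, where the heavier work of retraining and improving the model takes place.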
The inference running on AWS IoT Greengrass will gather data and send it back to SageMaker, so that it can improve the quality of the ML model. It's pretty quick to get started with machine learning, as SageMaker integrates directly with AWS IoT services like Greengrass, and running inference on your local devices saves you a lot of headaches. You no longer have to move all your data to the cloud before you get started with ML, which saves you on both the financial and security fronts, and you now get real-time inference. Alright, that's it for this one, and I'll see you next time.