GCP provides Cloud Storage object lifecycle rules, which you can use to automatically move objects to different storage classes. These rules can be based on a set of attributes, such as an object's creation date or its live state (I'll sketch what one of those rules looks like at the end of this intro). However, they can't take into account whether or not the objects have been accessed. One way you might want to control your costs is to move objects to Nearline storage if they haven't been accessed for a certain period of time.

So in this lab you'll create two storage buckets and generate loads of traffic against just one of them, and then you'll create a Stackdriver monitoring dashboard to visualize each bucket's utilization. After that, like what we did before, you'll create a Cloud Function to migrate the idle bucket to a less expensive storage class, and then you'll test it by using a mock Stackdriver notification to trigger that function. Let's take a look.

Now, one of the last optimization strategies that you're going to see here says: all right, I've got objects that I'm storing inside of a Google Cloud Storage bucket, or GCS bucket. What if there's a more efficient way to store those assets, depending on their usage, than the storage class they're in now, like Regional or Nearline? And how can I migrate them, move them between those storage classes, automatically?

One of the first things I want to show you is just what all the different storage classes are, and you'll experiment with these inside of your lab. This is the URL for the Google Cloud Storage documentation, which shows the storage classes that are available. Generally, if you just create a Google Cloud Storage bucket, you don't need to specify any particular class; it'll default to Standard storage. But say you don't use your data that often; it's not a public bucket that gets a lot of traffic, and you want to enable some cost savings, maybe for archival data. Then you can say: if we're not using it, let's put it on something that costs a little bit less and is meant to be accessed a little more infrequently. That's when you can take data that's stored in a GCS bucket as Standard storage and reclassify it into something like Nearline storage, or even Coldline storage if it's accessed maybe once a quarter or once a year, instead of once a day like Standard storage.

So now that you're familiar with the fact that different buckets can have different storage classes, let's get back to the lab. The lab walks you through the different types of storage, and then you're going to create the storage buckets. I've already created these buckets a little bit beforehand, but you're going to be running through the same repository as before, where you'll be migrating the storage. You'll create a public bucket and upload a text file that just says "this is a test," and then you'll create a second bucket that doesn't have any data in it. Spoiler alert: we're going to call that the idle bucket, because that bucket's not going to do anything. So you've got those two buckets, and one of the really cool things you can do is set up a Stackdriver workspace and monitoring dashboard that shows the usage of each of those different buckets.
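By the way, here's roughly what one of those age-based lifecycle rules looks like, as a minimal sketch using gsutil. The bucket name and the 30-day threshold are just placeholder assumptions, and notice the condition is based on age, not access, which is exactly why this lab builds its own migration function instead:

```
# lifecycle.json: reclassify objects to Nearline once they're 30 days old
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30}
    }
  ]
}
EOF

# Apply the rule to a bucket (bucket name is a placeholder)
gsutil lifecycle set lifecycle.json gs://my-example-bucket
```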
Similar to how we monitored CPU usage in a previous lab, in this lab you'll be monitoring the usage of the buckets. Again, Stackdriver is very flexible in terms of finding a resource on Google Cloud Platform and monitoring how heavily it's used. After that, one of my favorite things to do is abusing an Apache tool, Apache Bench, to send traffic to that particular text file. So let's do that right now; it's kind of fun. Hopefully this will work. Let's see, I don't want to be in the unattached persistent disk directory; I actually want to be in migrate-storage. Let's ls into the migrate-storage directory. This is where the Python code that actually handles the storage migration lives, which is cool. So let's see if we can't just generate the traffic. Now, the ab command is not found, so one of the things you have to do first is install the Apache Bench tool. We'll go ahead and install that, and once it's available we're going to send 10,000 requests to one of our public buckets.

As you can see here, I'm on the Google Cloud Storage page. The way you get here is through the navigation menu, under Storage this time, not Compute; just go into the browser. I have a couple of buckets. The two that you'll be creating as part of this lab are the serving bucket, which has the text file and, as you can see, is marked as public, meaning anyone on the Internet can access it, and the idle bucket, which is doing nothing. The idle bucket has already been reclassified to Nearline storage, as opposed to something like Standard or Regional, but that's only because I ran the Cloud Function to make sure this demo worked before recording it.

Okay, now let's serve a ton of traffic. We run that command and boom: "Benchmarking... be patient." A thousand requests, like 1,000 different people hitting that text file, then 4,000, 5,000, and you can see on your Stackdriver dashboard it's spiking up through the roof. So what you're going to be doing later is saying: all right, this one particular file, or really this bucket, is getting a ton of traffic, so Regional storage is perfectly fine for it. But this other one has got nothing; nothing's being accessed, and there's nothing in there to be accessed. Let's move it from, say, Regional to Nearline. That's exactly what the Python function does; you'll be wrapping it in a Cloud Function and then invoking it on a schedule with Cloud Scheduler as well.

So back inside the lab, after you've generated that artificial traffic, which is really fun (it's like DDoSing yourself, right?), you see the actual code that's going to be doing the migration. It basically says: all right, if this bucket isn't used that much, let's update it to Nearline storage instead. Same as in your previous labs, you deploy that function and give it an HTTP endpoint that Cloud Scheduler can then invoke, and then you'll trigger it with that mock Stackdriver notification, a JSON file, and check the logs to see that it actually ran. Then, let's see, for us, I've already invoked the function, so let's just confirm that the idle bucket is in Nearline storage: boom. It was moved from a more frequently accessed storage class, likely Regional or Standard, and it has been reclassified into something that's a little bit cheaper, because the thought is you're going to be accessing that data more infrequently, as evidenced by the fact that it wasn't the bucket getting 10,000 requests of traffic, and the function reclassified it to Nearline automatically. I'll drop rough sketches of these steps below in case you want to follow along.
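First, the traffic generation step. This is roughly what it looks like; the bucket and object names are hypothetical placeholders, and I'm assuming a Debian-style environment like Cloud Shell for the install:

```
# Install Apache Bench (it ships in the apache2-utils package)
sudo apt-get install -y apache2-utils

# Fire 10,000 requests at the public text file
# (bucket and object names are placeholders)
ab -n 10000 http://storage.googleapis.com/my-serving-bucket/testfile.txt
```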
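Next, the migration code itself. The transcript doesn't show the lab's exact Python, so here's a minimal sketch of what such an HTTP-triggered Cloud Function could look like, using the google-cloud-storage client library; the function name and the shape of the mock notification payload are assumptions, not the lab's exact code:

```python
from google.cloud import storage

def migrate_storage(request):
    """Reclassify an idle bucket's default storage class to Nearline.

    Expects a JSON body (e.g. a mock Stackdriver notification) naming
    the bucket to migrate; the payload field below is hypothetical.
    """
    payload = request.get_json()
    bucket_name = payload['resource']['name']  # hypothetical field

    client = storage.Client()
    bucket = client.get_bucket(bucket_name)

    # Patch the bucket's default storage class to NEARLINE. Note this
    # affects newly written objects; existing objects would need a
    # rewrite (e.g. blob.update_storage_class) to move as well.
    bucket.storage_class = 'NEARLINE'
    bucket.patch()

    return f'{bucket_name} migrated to NEARLINE'
```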
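Deploying and smoke-testing the function would then look roughly like this; the runtime, the URL's region and project placeholders, and the mock payload filename are example values, not the lab's exact ones:

```
# Deploy as an HTTP-triggered Cloud Function (runtime is an example value)
gcloud functions deploy migrate_storage \
    --runtime python39 --trigger-http --allow-unauthenticated

# Trigger it with a mock Stackdriver notification payload
curl -X POST "https://REGION-PROJECT_ID.cloudfunctions.net/migrate_storage" \
    -H "Content-Type: application/json" \
    --data @mock_stackdriver_notification.json
```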
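Finally, to confirm the reclassification the way I did at the end there, you can inspect the bucket's metadata; again, the bucket name is a placeholder:

```
# The "Storage class" line should now read NEARLINE
gsutil ls -L -b gs://my-idle-bucket | grep -i "storage class"
```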
All right, that's it for this lab. Good luck with your attempt at it, and keep in mind that with Qwiklabs you can execute a lab more than one time. So don't worry if the timer runs out before you've completed all of your activity-tracking objectives; you can always click End Lab and start it again for another fresh hour at the lab. Good luck.