Hi again, Evan here. In this video, I'm going to walk you through three labs that provide solutions to help your engineering teams drive cost optimization. One quick way to optimize cost is to get rid of resources that you're not using. In our first lab, you'll learn how to identify and then delete unused IP addresses automatically. In the next lab, we'll walk you through identifying unused and orphaned persistent disks, another easy way to reduce your cost. The third lab will show you how to use Stackdriver, Cloud Functions, and Cloud Scheduler to identify opportunities to migrate storage buckets to less expensive storage classes.

In this lab, we'll use Cloud Functions and Cloud Scheduler to identify and clean up wasted cloud resources. In Google Cloud Platform, when a static IP address is reserved but not used, it accumulates a higher hourly charge than if it's actually attached to a machine. In apps that heavily depend on static IP addresses and large-scale dynamic provisioning, this waste can become very significant over time. So what are you going to do? You'll create a Compute Engine VM with a static external IP address and a separate, unused static external IP address. Then you'll deploy a Cloud Function to sniff out and identify any unused IP addresses, and then you'll create a Cloud Scheduler job that runs every night at 2:00 AM to call that function and delete those IP addresses. Once again, just remember that GCP's user interface can change, so your environment might look slightly different from what I'm going to show you in this walkthrough. Let's take a look.

We find ourselves back in another Qwiklab. This one is all about using Cloud Functions to do magical things. In this set of labs, we're going to be creating Cloud Functions to clean up resources that aren't being used. In this particular case, we're creating a function that cleans up unused IP addresses. The actual function code, as you're going to see way down there, is just a little bit of JavaScript. The great news is you don't have to write the function yourself; Google Cloud engineers provide a lot of these functions in their public GitHub repository, which is cool. You can take literally the things you're using inside of this lab right now and copy and paste them into your own project at work as well. Highlighting the things you're going to be doing: you first need to create a virtual machine, and like we said in the introduction, you're creating a couple of external IP addresses, one that you're going to use and one that's going to be unused. Then you'll deploy the code that's going to go through, sniff out any addresses that aren't in use, and then bring them down. Now, that's only if you manually trigger it. The second part of this lab is to schedule that Cloud Function to run, in this particular case, nightly at 2:00 AM. That'll automatically invoke the function and do the cleanup for you. Once you set it up, it'll just run in perpetuity, which is great. A couple of different things that I want to highlight. The first thing to know is that inside of the lab you'll be working off of code that already exists in a GitHub repository. In fact, all three of these cleanup labs, anything that has to do with cleaning things up, are based on this repository here, which I'll show you very briefly.
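Before that, here's a minimal sketch of the setup we just described, assuming illustrative resource names, region, and zone rather than the lab's exact values:

```bash
# Reserve a static external IP address that the VM will actually use.
gcloud compute addresses create used-ip --region=us-central1

# Reserve a second static address and never attach it -- the waste the function will find.
gcloud compute addresses create unused-ip --region=us-central1

# Create a VM that uses only the first address.
gcloud compute instances create static-ip-instance \
    --zone=us-central1-a \
    --address=used-ip
```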
The first lab is going to be on the unused IP addresses. The second cleanup lab is the unattached persistent disk, or PD, and the third lab is going to be migrating storage to a cheaper storage class if that bucket isn't being actively used, which sounds cool. The code for the unused IP addresses is just this JavaScript code here. Again, you don't have to write it, you just have to deploy and invoke it as your own Cloud Function. But you can see what it's doing. It says, all right, list however many IP addresses are out there, and for each of them, if it's not in use, if it's just reserved, then delete it. If it can't, it'll log that it could not delete the address, and then boom. It's just 60 or so lines of JavaScript code that basically says: there are statuses associated with these IP addresses, so iterate through all the IP addresses that people on my team have created throughout my project and remove the ones that aren't used. The actual nuts and bolts of the lab is this public JavaScript code here in the GitHub repository. Let's take a look at how you actually deploy and invoke it. After you've cloned that project code, what you need to do is simulate a production environment. You're going to create the unused IP address and the used IP address, associate them with a particular project, and then confirm that they were actually created. I'll show you just this one command right here. The way these commands are structured, by the way, is: gcloud, then which service or product you want to use, in this case Compute Engine; then the resource, which for IP addresses is just called addresses; and then the verb, list. I only want to filter those for my particular region, and there's a --filter flag for that. I've actually already run through this lab, so you can see there's no IP address marked as not in use, because I already ran the function and it deleted it. But as you work your way through your lab, you'll have a long list of unused IP addresses that it will trim down to just the ones that are in use, which is pretty cool. Most of the magic, again, since this is using the command line to deploy the Cloud Function, is going to happen here. But once you actually validate that it works and it cleaned up those IP addresses that weren't in use, what you can then do at the end of the lab is say, "Hey, I don't want to come into the command line every time I invoke this function." I'll just show you what the function invocation looks like right now: deploy it, trigger it, here we go. After you deploy it and get it ready to work, the last part of the lab is to schedule it. It uses Cloud Scheduler. Cloud Scheduler is a relatively new product; it's essentially a glorified cron job where Google manages all the maintenance and the hardware behind the scenes for you. I used the command-line terminal to create the job, but then I also like to go into the console and see where it actually lives. I think it's under the admin tools here. Tools, we want Cloud Scheduler, the little clock here. This one was for the unused IP addresses; in the next lab you'll be creating one for the unattached persistent disks. Instead of invoking it via the terminal, you can click Run now here as well, which is cool.
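To make that command structure concrete, here's a rough sketch of the list, deploy, and schedule steps; the function name, runtime, scheduler job name, and URL are illustrative placeholders rather than the lab's exact values:

```bash
# List reserved addresses in one region; a status of RESERVED (rather than IN_USE)
# means nothing is attached to it.
gcloud compute addresses list --filter="region:us-central1"

# Deploy the cleanup code from the cloned repo as an HTTP-triggered Cloud Function.
gcloud functions deploy unused-ip-cleanup \
    --trigger-http \
    --runtime=nodejs18 \
    --region=us-central1

# Schedule it for 2:00 AM nightly ("0 2 * * *" reads: minute hour day-of-month month day-of-week).
gcloud scheduler jobs create http unused-ip-job \
    --schedule="0 2 * * *" \
    --uri="https://us-central1-PROJECT_ID.cloudfunctions.net/unused-ip-cleanup" \
    --http-method=POST

# The command-line equivalent of the Run now button in the Cloud Scheduler UI.
gcloud scheduler jobs run unused-ip-job
```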
It goes lightning fast, because you're just running that JavaScript code, and it clears out all the IP addresses that are unused, which is great. I'm a big fan of this: after you've done all your work inside of the terminal, you can view all of your jobs either via the terminal or within the UI as well, and then boom, it automatically runs at a set frequency, again, much like a cron job. This schedule denotes 2:00 AM every night. There are website utilities out there that help you convert a time into cron job syntax, so don't worry about that too much. That's the first deep dive you've had into using a Cloud Function to do something a little more sophisticated than hello-world. In our first cleanup use case, we're removing those unused IP addresses. Go ahead and try that lab, and all the knowledge you gain there will make the next two labs very easy, because you'll be doing the same kinds of things operating off of the same repository. Good luck.

In this lab, you'll use Cloud Functions and Cloud Scheduler to identify and clean up wasted cloud resources. In this case, you'll schedule your Cloud Function to identify and clean up unattached and orphaned persistent disks. You'll start by creating two persistent disks and then create a VM that only uses one of those disks. Then you'll deploy and test a Cloud Function, like we did before, that can identify those orphaned disks and clean them up so you're not paying for them anymore. Let's take a look.

Here we are in the Qwiklab for cleaning up those unused and orphaned persistent disks. Again, one of my favorite things about these Qwiklabs is that as you work your way through the lab, you get points as you complete the lab objectives automatically. Qwiklabs is smart: it knows whether or not you did the work, and it's also really fun to get that perfect score at the end. As you scroll down and look at this lab, you're already starting to get familiar with Cloud Functions. Again, these are those magical serverless triggers that can wait for something to happen, be triggered, and then do other things. The lab you worked on just before this was cleaning up those unused IP addresses, and you set that up to run as a cron job with Cloud Scheduler at 2:00 AM. It's the same general concept for this lab, except you don't care about IP addresses; here you care about persistent disks. Those are the hard drives attached to your virtual machines, because again, inside of Google Cloud you have the separation of compute and storage. Just because you have a virtual machine doesn't mean you need to keep it running 24/7 just to keep that data alive. If you need compute power for an hour but you need persistent storage in perpetuity, you can actually separate those, which is cool. But say you don't want that data hanging around when there's no virtual machine associated with it; you can identify those orphaned persistent disks. As we mentioned in the introduction, you'll be creating two of those persistent disks. The VM is only going to use one of them, and then we'll detach that disk. Then we're going to copy some code from the repository that can look through and find any disks that were never attached or never used and basically say, "Hey, why are you paying for stuff that you're not using?" Then you deploy the Cloud Function that will remove all those persistent disks.
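Before we jump into the console, here's a minimal sketch of that setup, assuming placeholder disk, instance, and zone names rather than the lab's exact values:

```bash
# Create one disk that will be attached and then detached (orphaned),
# and one that will never be attached at all (unused).
gcloud compute disks create orphaned-disk --zone=us-central1-a
gcloud compute disks create unused-disk --zone=us-central1-a

# Create a VM that attaches only the first disk as a secondary (non-boot) disk.
gcloud compute instances create disk-instance \
    --zone=us-central1-a \
    --disk=name=orphaned-disk,auto-delete=no
```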
Then lastly, you don't have to wake up every morning and press a button that says "remove persistent disks"; that would be a really boring job. Instead, you're going to create a cron job here with Cloud Scheduler to automatically do that for you. Again, if you already did the last lab, or you've seen the demo video for the last lab, you'll be working off of the code that's in the same public Google repository, gcf-automated-resource-cleanup; GCF is just Google Cloud Functions. Here we have the unattached persistent disks code, and instead of JavaScript, this time it's actually written in Python, which is pretty cool. It's a little bit more involved. It basically says, "All right, I want to find, look at, and delete the unattached persistent disks." Much like you iterated through the IP addresses in the prior lab, here you're getting the list of all the disks and iterating through them. Then, if the disk was never attached, and there's metadata associated with each disk for that, a timestamp, in fact the check is simply that the last-attached timestamp is not present, you basically say, "All right, this disk was never attached to anything and was never used, so we're going to go ahead and delete it." This code will run and handle all of that automatically for you. You're not going to be writing Python, so don't worry about it; this is just code that you can lift and shift and use in your own applications. The main thing to consider here is deploying this code as a repeatable Cloud Function and then having it invoked at a regular nightly interval, say every night at 2:00 AM, which is what Cloud Scheduler will help you with. Back inside of the lab on orphaned persistent disks, let's take a look at some of the things we can do; we'll run some of this too. We just looked through the repository, and after that you're going to actually create those persistent disks. Here is where you name them: orphaned disk and unused disk. You're actually going to create those two disks, so I'll go ahead and just run these now. Inside of Cloud Shell, let's see what directory I'm in. I'm in the root directory, so I need to go into wherever the code for the unattached persistent disk lives. Now I'm in there. As you saw, we were just looking at that Python function before, main.py. By the way, if you're not familiar with Unix commands, a couple of useful ones: ls just lists the contents of a given working directory; cd means change directory, which is kind of like double-clicking into a particular directory, in this case double-clicking on unattached-pd; and cat shows the contents of a file, it doesn't do anything with it, but it shows the contents. So that same Python code you saw before is now visible on the screen here. So what do we want to do? We want to create some persistent disks, have some that aren't going to be used, and then delete those. We're in that directory, and we're going to create some names. This is literally what you're going to be doing inside of the lab: working your way through, copying and pasting, hovering over these boxes and clicking the clipboard icon to copy, and creating all of them. I need to make sure that my project ID is set, so let's see. It's probably not set because I skipped an earlier step inside of the lab. But the great news is that if your project ID is not set, there's a command for that as well. We'll set the project ID. It's updated properly. Now we'll try it again.
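If you want to see roughly what those two housekeeping steps look like, and peek at the disk metadata the function relies on, here's a hedged sketch with placeholder project, disk, and zone names:

```bash
# Point gcloud at your project and export the ID as a variable (PROJECT_ID is a placeholder).
gcloud config set project PROJECT_ID
export PROJECT_ID=$(gcloud config get-value project)

# A disk that has never been attached has no lastAttachTimestamp -- the same
# metadata the lab's Python function checks before deleting a disk.
gcloud compute disks describe unused-disk \
    --zone=us-central1-a \
    --format="value(lastAttachTimestamp)"
```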
Export basically says: define this as a variable. Now create those two disks. No, it failed because I didn't run the export of the project ID up here. Boom, done. This is why it's super helpful to go through the lab in order. Then let's create the disks, which should now work; let me make this window a little bit larger. It's creating all of these disks automatically, which is exactly what you could be doing through the UI. Here we go, we've got some disks that are ready. Let's validate that these disks were created before we blow away the ones that are unattached. What disks do we have? We've got a lot, great. We've got an orphaned disk and an unused disk, and I have other stuff that I really don't want deleted, so hopefully this code works as intended. Orphaned disk and unused disk, keep your eyes on those. Of course, as you work your way through the lab, you'll click Check my progress in your real lab instance as well. I've created the VM already, so I'll give it a slightly different name this time. Let's see. Here we're going to create that virtual machine instance, and look, we're giving it the disk named orphaned disk, which I bet you can tell exactly what we're going to do to it. Right now we have a virtual machine that's using this disk. The next thing, in order to get it orphaned, is to detach it. Let's inspect the VM to make sure the disk was actually attached. Boom, there's the last-attached timestamp and everything in there. Now let's orphan it. Detach the disk marked orphaned; it's just a command to detach it, and then it's off in the world on its own. Let's see: detach-disk, the disk, the instance, and since my name for this demo collided, I just added a ".1". Boom, it's going to detach it. Now it's detached, and I'll view the detached disk. It is orphaned, it is detached. Great. The last part of this lab is actually deploying that Cloud Function that will sniff through all the disks that are out there and delete the unattached ones. It has you inspect that Python code just to get familiar with it; again, you don't have to write any of that Python code yourself, but getting familiar with it can't hurt. Okay, so now, I've already deployed the Cloud Function before recording this video, and I've scheduled it. What I want to do now, and this will be the magic that you're going to be doing inside of your labs, is list all the disks that are there. You should see an orphaned disk and an unused disk. Now, if I've got everything set up correctly, I'm going to go into my Cloud Scheduler. I'm going to show you using the UI here; you can use the command line if you wish. Unattached persistent disk job, boom, Run now to kick off a run, and let's see if they're still there. Are they gone? All right. So as you see here, we've just run that cleanup of the unattached persistent disks. We had an orphaned disk and one that was just never used. Let's see if that code ran: gcloud compute disks list. We've already run the function; it takes up to a minute for it to actually complete. It'll submit the function, but sometimes the code takes a little bit longer. I've gone ahead and run that, and gcloud compute disks list shows the disks that are out there, and if you notice, there are two disks that are no longer in here: the one that was unused and the one that was orphaned.
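Pulling those steps together, the core commands look roughly like this; the instance, disk, and scheduler job names are placeholders, not the lab's exact values:

```bash
# Orphan the disk by detaching it from the VM.
gcloud compute instances detach-disk disk-instance \
    --disk=orphaned-disk \
    --zone=us-central1-a

# Trigger the cleanup job immediately instead of waiting for 2:00 AM.
gcloud scheduler jobs run unattached-pd-job

# A minute or so later, both the orphaned and the never-used disk should be gone.
gcloud compute disks list
```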
I can say with certainty that the code works, at least when I recorded this video, so go ahead inside of your labs, experiment with that, maybe create three unused disks with a couple of different names, and just get familiar with how to create and deploy those Cloud Functions and then invoke them manually via Cloud Scheduler's Run now or automatically on a cron-job frequency. Give it a try.

GCP provides object lifecycle rules that you can use to automatically move objects to different storage classes. These rules can be based on a set of attributes, such as their creation date or their live state. However, they can't take into account whether or not the objects have been accessed. One way you might want to control your costs is to move objects to Nearline storage if they haven't been accessed for a certain period of time. In this lab, you'll create two storage buckets and generate loads of traffic against just one of them, and then you'll create a Stackdriver monitoring dashboard to visualize that bucket's utilization, or usage. After that, like we did before, you'll create a Cloud Function to migrate the idle bucket to a less expensive storage class, and then we can test this by using a mock Stackdriver notification to trigger that function. Let's take a look.

Now, one of the last optimization strategies that you're going to see here is saying, all right, I've got objects that I'm storing inside of a Google Cloud Storage bucket, or GCS bucket. What happens if I have them in a storage class like Regional, and there's a more efficient way to store those assets, like Nearline, depending upon their usage? How can I migrate them between storage classes automatically? One of the first things that I want to show you is what all the different storage classes are, and you can experiment with these inside of your lab. This is just the URL for Google Cloud Storage and the different storage classes that are out there. It shows all the storage classes that are available. Generally, if you just create a Google Cloud Storage bucket, it'll be Standard storage; you don't need to specify any particular class when you're first creating it, it'll default to Standard. But if you don't use your data that often, say it's not a public bucket that gets a lot of traffic and you want to enable some cost savings, like, for me, archival data, or you want to automatically say, well, if you're not using it, let's put it on something that costs a little bit less and is accessed a little more infrequently, that's when you can actually shift data that's stored in a GCS bucket on Standard storage and reclass it into something like Nearline storage, or even Coldline storage if it's accessed maybe once a year or once a quarter instead of once a day like Standard storage. Now that you're familiar with the fact that different buckets can have different storage classes, let's get back to the lab. The lab here is going to walk you through the different types of storage, and then you're going to be creating different storage buckets. I've already created these buckets a little bit before, but you're going to be running through the same repository as before, where you'll be migrating the storage. You're going to be creating a public bucket, and you'll be uploading a text file that just says "this is a test."
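Roughly, creating that public serving bucket from Cloud Shell could look like this; the bucket name and region are placeholders, and bucket names have to be globally unique:

```bash
# Create the serving bucket and upload the test file.
gsutil mb -l us-central1 gs://my-serving-bucket
echo "this is a test" > file.txt
gsutil cp file.txt gs://my-serving-bucket

# Make the object publicly readable so anyone on the Internet can fetch it.
gsutil acl ch -u AllUsers:R gs://my-serving-bucket/file.txt
```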
Then you'll be creating a second bucket that just doesn't have any data in it. Spoiler alert, we're going to call that the idle bucket, the bucket that's not going to do anything. So you've got those two buckets. One of the really cool things you'll do is set up a Stackdriver workspace and monitoring dashboard that watches the usage of each of those buckets. Similar to how in a previous lab you monitored CPU usage, inside of this lab you're just monitoring the usage of the bucket. Again, Stackdriver is very flexible in terms of finding a resource on Google Cloud Platform and monitoring how heavily it's used. After that, one of my favorite things to do: you'll be using an Apache tool, Apache Bench, to serve fake traffic to that particular text file. Let's do that right now, this is fun. Hopefully this will work, let's see. I don't want to be in the unattached persistent disk directory; I actually want to be in migrate-storage. Let's see, ls into migrate-storage. This is where the Python code that actually handles the storage migration lives, which is cool. Let's see if we can just generate the traffic. Now, the ab command is not found, so one of the things you'll have to do is install the Apache Bench utility. We'll go ahead and install that. Then once that's available, we're going to serve 10,000 requests to the public bucket. As you can see here, I'm in the Google Cloud Storage page. The way you get here is just in the Navigation menu, under not Compute but Storage this time, and then Browser. I have a couple of buckets. The two that you'll be creating as part of this lab are the serving bucket, which has the text file, and you can see it's marked as public, which means anyone on the Internet can access it, and the idle bucket, which is doing nothing. It's already been reclassified to nearline storage, as opposed to something like standard or regional, but that's only because I ran the Cloud Function to make sure this demo worked before recording it. Now let's serve a ton of traffic. We've run that command, and then boom. Benchmarking, be patient. A thousand requests, look, 1,000 different people went and hit that text file, 4,000, 5,000. If you're on your Stackdriver dashboard, you can see it spiking up through the roof. What you're going to be doing later is saying, well, this one particular file, or this bucket, is getting a ton of traffic, so regional storage is perfectly fine for it. But this other one has got nothing; nothing is being accessed, and there's nothing in there to be accessed. Let's move it from, say, regional to nearline. That's exactly what that Python function is going to do, and you'll be creating a Cloud Function for it and then wrapping that inside of Cloud Scheduler as well. Back inside the lab, after you've generated that artificial traffic, which is really fun, it's like DDoSing yourself, you'll see the actual code that's going to be doing the migration. It basically says, well, let's update this bucket to nearline storage instead if it's not used that much. Same as in your previous labs, you deploy that function, you give it an HTTP endpoint that Cloud Scheduler can then invoke, and then you'll make sure logging is set up so you can see it actually being triggered via the JSON payload. Then let's see. For us, I've already invoked the function, so let's just confirm that the bucket is in nearline storage. Boom.
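If you want to reproduce the traffic and the confirmation yourself, a minimal sketch looks like this, assuming the object was made publicly readable and using placeholder bucket and file names:

```bash
# Install Apache Bench and send 10,000 requests to the public text file.
sudo apt-get install -y apache2-utils
ab -n 10000 -c 100 "http://storage.googleapis.com/my-serving-bucket/file.txt"

# Confirm the idle bucket's storage class after the Cloud Function has run.
gsutil ls -L -b gs://my-idle-bucket | grep -i "storage class"
```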
It was moved from a more frequently accessed storage class, likely regional or standard, and reclassed into something a little bit cheaper, because the thought is you're going to be accessing that data more infrequently, as evidenced by the fact that it wasn't the bucket getting 10,000 requests of traffic, so it was reclassified automatically to nearline. That's it for this lab. Good luck with your attempt at it. Keep in mind that with Qwiklabs you can run a lab more than once, so don't worry if the timer runs out before you complete all of your activity-tracking objectives; you can always click End Lab and start it again for another fresh hour in the lab. Good luck.