Okay, let's talk about production. Maybe the way to think about the topics we're introducing here is that we're really going for breadth in these videos and slides, trying to set the landscape for things you'll bump into as you're building software and architecting systems, especially as it relates to big data. So production is important, and I would say, much like testing, we probably had it wrong back in the day: we thought about production last. That got us into hot water, so these days we think about production a little differently, and we really treat it as one of the first things we do. A lot of times we might stand up a small Hello World application that doesn't do much in terms of the functionality or analytics we're ultimately going after, but we get that application into production and then we iterate on it there. The way to do that right is to ensure we have a handful of things in place, mainly that we're able to monitor and get metrics out of our production system. There's a slide here, a tweet from me, about what a minimum viable product looks like, especially for cloud-native applications. Cloud native, again, means applications that will potentially end up on a public cloud: think Amazon Web Services, Google Cloud, or Microsoft Azure. So we want a hint of scalability, basic monitoring (we'll get into that in a second), and at least an idea of what a service level agreement might look like: how available should our application be?
And then, when we deploy, we want to be able to do it without any downtime; those are blue-green deploys. Again, this slide is just introducing a handful of topics for awareness, so when you bump into some of these things you can say, I got it, okay, I saw that in the lecture slides or the lecture video. Okay, so let's keep going. So what is performance monitoring? We're really looking at where time is spent, whether that's in a web transaction or a database transaction, so we can also tackle where things might be going wrong. Long database transactions, for example, are a usual suspect here: we forgot to introduce an index, and without that index we have slow queries; or we're using a web framework that lets us bake in one-to-many relationships, but we forgot to put in that foreign key; or we just get into a situation where a web transaction is super slow because we're bouncing around collaborating services or REST APIs on the back end. So we want to monitor performance and we want to get that baked in early, much like testing. We want tests going early so that we know our system is testable, and we want monitoring in early so we know our system is going to be something we can monitor. Seems straightforward, but again, we kind of got it wrong in the early days of building software. It's really this book, Site Reliability Engineering, that I think turned things upside down. It also created a handful of different roles for individuals who spend time on the platform side, or in that area between apps and platforms, making sure things are running. So that's a great reference; the early chapters in particular are worth a look, and it's a free read online.
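Since the missing-index case comes up so often, here's a minimal sketch of what that looks like, using SQLite from Python's standard library. The table and column names are made up for illustration; the point is that `EXPLAIN QUERY PLAN` shows the full table scan disappear once the index exists:

```python
import sqlite3

# In-memory database with a table that has no index on the column we filter by.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(10_000)],
)

# Without an index, filtering by customer_id forces a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()
print(plan_before)  # the detail column reports a SCAN of the orders table

# Adding the missing index turns the scan into an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()
print(plan_after)  # now a SEARCH using idx_orders_customer
```

The same pattern applies to the ORM case in the lecture: a one-to-many relationship without an index (or foreign key) on the child table's join column degrades into exactly this kind of scan.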
So I do encourage folks to take a look. A couple of things that will come out of that reading are some common terms: service level agreements, service level indicators and objectives, and maybe just thinking about uptime differently. The shift is mainly that you get an error budget, and you get to have some downtime. We used to talk about uptime as something where we'd never be down; we always wanted to be up, whether that's four nines or five nines. And when you hear those numbers: four nines is 99.99% uptime, and five nines is 99.999% uptime. Five nines is hard; one thing to call out is that even Amazon only talks about four nines of availability for their web services platform. The shift, though, getting back to it, is that error budgets are something we should use. Whether it's three nines or four nines, we have some budget of downtime, and we could use that for deployments, we could use it for outages, or we could use it to simply let us go faster and innovate faster. That was one shift for me: you actually have some downtime based on your error budget, with your service level objectives and indicators in place. So let's build a system early that captures some of these common terms. Okay, so that's a bit on site reliability and error budgets; any site reliability engineers out there going through this lecture, feel free to shoot us a note and we'll correct what that looks like in terms of some of those indicators. There are a handful of tools out there, like Prometheus and Grafana, two common open source tools: one collects data, the other visualizes it, and they go hand in hand. And then there are a handful of companies out there that specialize in just this.
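The error budget idea boils down to simple arithmetic. Here's a small sketch, assuming a 30-day month, of how much downtime each availability target actually buys you; three nines leaves you roughly 43 minutes a month, while five nines leaves well under a minute:

```python
# Downtime "error budget" implied by an availability target, per 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes


def error_budget_minutes(availability: float,
                         period_minutes: int = MINUTES_PER_MONTH) -> float:
    """Allowed downtime, in minutes, for an availability target like 0.999."""
    return (1 - availability) * period_minutes


for label, target in [("three nines", 0.999),
                      ("four nines", 0.9999),
                      ("five nines", 0.99999)]:
    print(f"{label} ({target:.5f}): {error_budget_minutes(target):.2f} min/month")
```

That budget is what you spend on deployments, outages, or just moving faster; when it's exhausted, you slow down and stabilize.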
Datadog, for example, is a public company whose main business (they do a lot, but this is their main thing) is pulling in log data and surfacing those metrics to the folks running a network operations center. So, service level indicators: diving in slightly, there are a few different types. Counters: how many things happened while we're up. Meters: how fast are we turning around web requests, or how fast are we turning around those database queries. There are also gauges and timers. Health checks are pretty common to bake in early, and a lot of folks are looking at container orchestration platforms like Kubernetes. In Kubernetes you actually need to expose an endpoint for a health check, basically making sure the system knows that your application, or your data store, or your key-value store, is healthy. Some of that is how it deals with recovery and resiliency, so you might bump into kind of the five R's; those are two of them that I talked about just then. So service level indicators are important, and I guess this is a small nod to where I bumped into them first, which is the Dropwizard crew. Dropwizard is a framework, and Metrics is actually its own library; I was using something similar, or even just Dropwizard Metrics, very early on, getting at things like meters and gauges and whatnot. But take a look depending on the stack you're running, whether it's a Ruby, Python, Java, or Go stack; there's a handful of things available out there. And the provenance exercise that we have looks at just that: those metrics, standing up Grafana as well as Prometheus to get at some of those metrics.
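To make those indicator types concrete, here's a toy registry sketched in Python. This is not the Dropwizard or Prometheus client API, just an illustration of the ideas behind counters, gauges, and timers; all the names here are made up:

```python
import time
from collections import defaultdict


class Metrics:
    """Toy metrics registry, in the spirit of Dropwizard Metrics."""

    def __init__(self):
        self.counters = defaultdict(int)  # monotonically increasing counts
        self.gauges = {}                  # point-in-time values
        self.timers = defaultdict(list)   # recorded durations, in seconds

    def inc(self, name, amount=1):
        self.counters[name] += amount

    def set_gauge(self, name, value):
        self.gauges[name] = value

    def time(self, name):
        # Context manager that records how long a block takes.
        registry = self

        class _Timer:
            def __enter__(self):
                self.start = time.perf_counter()

            def __exit__(self, *exc):
                registry.timers[name].append(time.perf_counter() - self.start)

        return _Timer()


metrics = Metrics()
metrics.inc("requests_total")           # counter: how many things happened
metrics.set_gauge("queue_depth", 7)     # gauge: current value right now
with metrics.time("db_query_seconds"):  # timer: where time is spent
    time.sleep(0.01)

print(metrics.counters["requests_total"])        # 1
print(metrics.gauges["queue_depth"])             # 7
print(len(metrics.timers["db_query_seconds"]))   # 1
```

A real library adds thread safety, rates (the "meter" part), percentiles for timers, and an exporter that serves these values over an HTTP endpoint for something like Prometheus to scrape.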
So that's a quick look at production and how we get ready for it. The main takeaway, much like testing: do it early. Get those production metrics and monitoring bits in early on, so that they're not a challenge to get in later. Okay, thanks.