So here's a really interesting topic. Behind the scenes, once you execute that query, BigQuery passes it to the query engine, which retrieves your files from Google Colossus, that gigantic hard drive in the cloud. The work your query does is processed massively in parallel, which is the foundation for BigQuery and a lot of these other cloud products, by many, many, many workers. An earlier version of this technology was MapReduce, which Google invented and which was later open sourced and became Hadoop. Google's Dremel technology, which underlies BigQuery, has this concept of many workers, which are synonymous with slots. If you're interested in that architecture and how to optimize worker performance and all that good conversation, that is the third course in this specialization, Achieving Advanced Insights. Here we're mainly going to be talking about slots, or workers, in a general pricing sense.

Right now, you are guaranteed up to almost 2,000 workers to actually process your queries. Behind the scenes, fully managed, workers spin up, get bits and pieces of your data, process your query on them massively in parallel, and then collect those results and return them to you. That is the resource you're consuming.

Now, the concept of this slide and this topic is reserving a certain amount of slots. If you want to guarantee that you will always have a certain amount of query resource, or throughput, available to you, you can actually contractually engage with Google and say, I need to have 1,000 or 1,500 slots available at this particular point in time, like on a Friday night when I'm running this query, because I have strict SLAs that I need to meet for my organization.
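One way to see this parallelism for yourself: BigQuery job statistics report the total slot time a query consumed (the `totalSlotMs` field), and dividing that by the job's wall-clock duration gives the average number of slots, or workers, the query used. A minimal sketch of that arithmetic, with made-up numbers for illustration:

```python
def avg_slots_used(total_slot_ms: float, elapsed_ms: float) -> float:
    """Average number of slots a query consumed: total slot-milliseconds
    of work spread over the job's wall-clock duration."""
    return total_slot_ms / elapsed_ms

# Hypothetical job: 120,000 slot-ms of work finished in 6,000 ms of wall
# time means roughly 20 workers were crunching the query in parallel.
print(avg_slots_used(120_000, 6_000))  # -> 20.0
```

The more workers BigQuery can throw at a query at once, the shorter the wall-clock time for the same total slot time.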
You can actually set up a reserved-slot pricing model, but the default is this shared pool model: if you're not using a worker for your queries, that worker goes off and works on somebody else's queries. Reserved slots have their own separate pricing, and again, the core concept is that you have guaranteed slots, or workers, regardless of the demand from everybody else. The trade-off is that you reduce the variability in your query performance, because you're guaranteed workers, at the expense of paying for those reserved slots.
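To make that trade-off concrete, here's a rough back-of-the-envelope sketch comparing the two models: on demand you pay per terabyte your queries scan, while a reservation is a flat fee for guaranteed capacity. All the dollar figures below are hypothetical placeholders, not Google's actual rates, so check the current BigQuery pricing page before doing this for real:

```python
def monthly_cost_on_demand(tb_scanned: float, price_per_tb: float) -> float:
    """On-demand model: pay per terabyte of data scanned by your queries."""
    return tb_scanned * price_per_tb

def monthly_cost_reserved(flat_fee: float) -> float:
    """Reserved-slot model: a flat monthly fee for guaranteed workers."""
    return flat_fee

def break_even_tb(flat_fee: float, price_per_tb: float) -> float:
    """TB scanned per month above which the flat reservation is cheaper."""
    return flat_fee / price_per_tb

# Illustrative numbers only: $5 per TB on demand vs. a $10,000/month
# reservation. Above this many TB/month, the reservation wins on cost.
print(break_even_tb(10_000, 5.0))  # -> 2000.0
```

Below the break-even point you're paying for guaranteed, predictable throughput rather than raw cost savings, which is exactly the SLA scenario described above.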