Since a data warehouse also serves other teams, it is crucial to learn how to partner effectively with them. Remember that once you've got data where it can be useful, and it's in a usable condition, you can add new value to that data through analytics and machine learning. What teams might rely on your data? Many data teams rely on your data warehouse, and on partnerships with data engineering, to build and maintain new data pipelines. The three most common clients are the machine learning engineer, the data or BI analyst, and other data engineers. Let's examine how each of these roles interacts with your new data warehouse, and how data engineers can best partner with them.

As you'll see in our course on machine learning, ML teams rely on having lots of high-quality input data to create, train, test, evaluate, and serve their models. They will often partner with data engineering teams to build pipelines and datasets for use in their models. Two common questions you may get asked are these. First, how long does it take for a transaction to make it from raw data all the way into the data warehouse? They're asking this because any data that they train their models on must also be available at prediction time. If there is a long delay in collecting and aggregating the raw data, it will impact the ML team's ability to create useful models. A second question you will definitely get asked is how difficult it would be to add more columns or rows of data to certain datasets. Again, the ML team relies on teasing out relationships between the columns of data, and on having a rich history to train models on. You will earn the trust of your partner ML teams by making your datasets easily discoverable, well documented, and available for ML teams to experiment on quickly. A unique feature of BigQuery is that you can create high-performing machine learning models directly in BigQuery, using just SQL, with BigQuery ML.
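As a sketch of that workflow, creating, evaluating, and predicting with a BigQuery ML model in SQL looks roughly like the following. The dataset, table, and column names here are hypothetical placeholders, not from the course:

```sql
-- Train a classification model (hypothetical dataset and columns).
-- BigQuery ML treats the column named "label" as the prediction target.
CREATE OR REPLACE MODEL `mydataset.purchase_model`
OPTIONS (model_type = 'logistic_reg') AS
SELECT
  country,
  pageviews,
  will_purchase AS label
FROM `mydataset.web_sessions`;

-- Evaluate the trained model (returns metrics such as precision and recall).
SELECT * FROM ML.EVALUATE(MODEL `mydataset.purchase_model`);

-- Make predictions on new rows.
SELECT *
FROM ML.PREDICT(
  MODEL `mydataset.purchase_model`,
  (SELECT country, pageviews FROM `mydataset.new_sessions`));
```

Note how the whole create–evaluate–predict loop stays inside the warehouse: no data export, and no separate ML serving infrastructure.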
Here is the actual model code to create a model, evaluate it, and then make predictions. You'll see this again in our lectures on machine learning later on.

Other critical stakeholders are your business intelligence and data analyst teams, who rely on good, clean data to query for insights and build dashboards. These teams need datasets that have clearly defined schema definitions, the ability to quickly preview rows, and the performance to scale to many concurrent dashboard users. One of the Google Cloud products that helps manage the performance of dashboards is BigQuery BI Engine. BI Engine is a fast, in-memory analysis service that is built directly into BigQuery and available to speed up your business intelligence applications. Historically, BI teams would have to build, manage, and optimize their own BI servers and OLAP cubes to support reporting applications. Now, with BI Engine, you can get sub-second query response times on your BigQuery datasets without having to create your own cubes. BI Engine is built on top of the same BigQuery storage and compute architecture, and serves as a fast, in-memory, intelligent caching service that maintains state.

One last group of stakeholders is other data engineers, who rely on the uptime and performance of your data warehouse and pipelines for their own downstream data lakes and data warehouses. They will often ask: how can you ensure that the data pipeline we depend on will always be available when we need it? Or: we are noticing high demand for certain really popular datasets — how can you monitor and scale the health of your entire data ecosystem? One popular way is to use the built-in Cloud Monitoring of all resources on Google Cloud. Since Cloud Storage and BigQuery are resources, you can set up alerts and notifications for metrics like query count or bytes of data processed, so you can better track usage and performance.
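As a sketch, a Cloud Monitoring metrics filter covering the two example metrics just mentioned might look like the following. The metric type names come from BigQuery's `bigquery.googleapis.com` metric namespace; verify the exact names against the metrics explorer in your own project before building alerts on them:

```
# Number of queries run, per project — track overall usage
resource.type = "bigquery_project"
metric.type   = "bigquery.googleapis.com/query/count"

# Bytes scanned by queries — track heavy datasets and cost
resource.type = "bigquery_project"
metric.type   = "bigquery.googleapis.com/query/scanned_bytes"
```

Either filter can back a dashboard chart or an alerting policy with a threshold of your choosing.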
Another reason Cloud Monitoring is used is for tracking spending across all the different resources in use, and for seeing what the billing trends are for your team or organization. And lastly, you can use Cloud Audit Logs to view actual query job information and see granular details about which queries were executed and by whom. This is useful if you have sensitive datasets that you need to monitor closely — a topic we will discuss more next.
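As a sketch, a Cloud Logging filter that pulls completed BigQuery query jobs out of the audit logs might look like the following. The field names follow BigQuery's audit log format, but treat this as an assumption to confirm against the audit log entries in your own project:

```
resource.type = "bigquery_resource"
protoPayload.methodName = "jobservice.jobcompleted"
```

Each matching log entry records, among other details, the identity that ran the job and the query text, which is what makes audit logs useful for watching access to sensitive datasets.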