Congratulations. You've made it to the end of the operations course. In this final installment of the Dataflow series, you're now ready to build a modern data platform. Before we do that, let's summarize the main concepts we covered in each module of the operations course. We started the course with a walk-through of the Dataflow monitoring experience. We learned how to use the jobs list page to filter for jobs we want to monitor or investigate. We looked at how the Job Graph, Job Info, and Job Metrics tabs collectively provide a comprehensive summary of a Dataflow job. Lastly, we learned how to use Dataflow's integration with Metrics Explorer to create alerting policies for Dataflow metrics. We then explored two important integrations in the Dataflow operational toolkit: Logging and Error Reporting. The logging panel helps you sift through job and worker logs, and its diagnostics tab surfaces errors. From there, you can click through to the Error Reporting interface to investigate how frequently each error occurs and examine its full stack trace. We brought these monitoring, logging, and error reporting capabilities together into our recommended troubleshooting workflow, which leverages Dataflow's integrated error reporting and the Job Metrics tab. We then reviewed four common modes of failure for Dataflow: failure to build the pipeline, failure to start the pipeline in Dataflow, failure during pipeline execution, and performance issues. Performance is a key consideration for any data engineer operating a data processing system, and we reviewed how pipeline design can impact it. The topology, coders, windows, and logging you implement can hurt pipeline performance if not considered carefully. The shape of your data matters as well: a skewed key space can cause worker imbalances and leave your pipeline underutilized.
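To see why a skewed key space leads to worker imbalance, here is a minimal plain-Python sketch (not Beam code, and the key names are invented for illustration): when records are hash-partitioned by key, as a shuffle does, every record sharing the hot key lands on the same worker.

```python
from collections import Counter

# Hypothetical skewed dataset: one "hot" key owns 90 of 100 records.
records = [("user_hot", i) for i in range(90)] + \
          [(f"user_{i}", i) for i in range(10)]

# Simulate hash-partitioning the records across 4 workers by key,
# the way a shuffle assigns keyed work.
NUM_WORKERS = 4
load = Counter(hash(key) % NUM_WORKERS for key, _ in records)

# Whichever worker owns "user_hot" receives at least 90 records,
# while the others sit mostly idle -- the underutilization described above.
print(dict(load))
```

Mitigations in a real pipeline include rebalancing the key space or combining values per key before the shuffle, so less data concentrates on one worker.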
Your Dataflow pipeline will interact with sources, sinks, and external systems, and a well-tuned pipeline takes the limitations and constraints of these pieces into account. Lastly, Shuffle and Streaming Engine can offload data storage from worker-attached disks onto a highly scalable backend, delivering performance benefits to your pipeline. As your data requirements evolve, so do your Dataflow pipelines. A robust Dataflow architecture implements testing at multiple abstraction layers, starting with DoFns at the lowest level, then PTransforms, then pipelines, and finally entire end-to-end systems. Dataflow's continuous integration and continuous deployment model uses the Direct Runner to validate your pipeline in a local environment, followed by testing on a production runner before the pipeline is pushed to production. Beam provides helpful utilities like PAssert, TestPipeline, and TestStream to implement this testing architecture. Dataflow offers features such as update, drain, snapshots, and cancel so that you can adjust the deployment of your streaming pipelines as needed. Next, we discussed how to implement reliability best practices for your Dataflow pipelines. Monitoring dashboards and alerts can notify you when your system hits a bottleneck. Dead-letter queues and error logging can keep pipelines from going down when corrupted data enters them. Protecting your pipelines from zonal and regional outages requires thoughtfulness about where you locate your sources, sinks, and Dataflow job, but data loss can be mitigated with Pub/Sub and Dataflow snapshots, and high availability can be achieved by running redundant pipelines in different zones or regions. Our last module covers Flex Templates, which make it easy to share and standardize Dataflow pipelines across your organization.
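The dead-letter pattern mentioned above can be sketched in a few lines of plain Python (a simplified stand-in for a Beam pipeline, with a hypothetical `parse` function): instead of letting one corrupt record crash the job, failures are routed to a side collection for later inspection.

```python
def process_with_dead_letter(records, parse):
    """Apply parse() to each record; divert failures instead of raising."""
    good, dead_letters = [], []
    for record in records:
        try:
            good.append(parse(record))
        except Exception as exc:
            # In a real pipeline this side output would be written to a
            # dead-letter sink (e.g. a Pub/Sub topic or a storage table)
            # together with the error, for replay once the data is fixed.
            dead_letters.append({"record": record, "error": str(exc)})
    return good, dead_letters

good, dead = process_with_dead_letter(["1", "2", "oops", "4"], int)
print(good)  # [1, 2, 4]
print(dead)  # one entry capturing "oops" and its error message
```

The key design point is that the pipeline keeps making progress on healthy data while the bad records remain available for debugging, rather than the whole job going down.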
Templates allow you to launch Dataflow pipelines with an API call, without the fuss of installing runtime dependencies in your development environment. Google offers a variety of templates directly in the Cloud Console, which lets you launch a Dataflow job without writing a single line of code. Flex Templates offer advantages over classic Dataflow templates and are encouraged for all templating needs. To conclude, Dataflow offers a whole suite of features that make it easy to manage your data processing system. This operational toolkit will help you focus your efforts on insights, not infrastructure, and ensure that you spend your time creating value for your customers, not keeping the lights on. We're excited to see what your organization builds on Dataflow.
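As a sketch of what "launching a pipeline with an API call" looks like, the snippet below builds a request body in the shape used by Dataflow's `flexTemplates.launch` REST method. The bucket, job name, and parameter values are hypothetical placeholders, not from the course; the snippet only constructs the body and does not send it.

```python
import json

# Hypothetical names throughout: replace the bucket paths and job name
# with your own. "containerSpecGcsPath" points at the template spec file
# produced when the Flex Template was built.
launch_request = {
    "launchParameter": {
        "jobName": "wordcount-from-template",
        "containerSpecGcsPath": "gs://my-bucket/templates/wordcount.json",
        # Pipeline options exposed by the template's metadata.
        "parameters": {
            "input": "gs://my-bucket/input/*.txt",
            "output": "gs://my-bucket/output/results",
        },
    }
}

body = json.dumps(launch_request)
```

POSTing this body (with appropriate credentials) to the `flexTemplates:launch` endpoint for your project and region starts the job, which is why no runtime dependencies are needed on the calling machine.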