Welcome to Module 4, Processing with Azure. First, we will discuss schedules and triggers. In this module, we will cover schedules, triggers, and events; authoring tools using an IDE; creating a trigger that runs a pipeline on a schedule; scheduling a trigger for Azure Data Factory; pipeline execution and triggers in Azure Data Factory; and a use case on Azure schedules, triggers, and events.

In a database system, we can have any number of transactions processing. Related transactions are processed one after the other, while some transactions process in parallel. A schedule is a process of grouping transactions into one unit and executing them in a predefined order. A serial schedule is a schedule in which transactions are aligned so that one transaction is executed first; when the first transaction completes its cycle, the next transaction is executed. Transactions are ordered one after the other.

An event trigger occurs when a blob is created or replaced. Specifically, this event is triggered when clients use the CreateFile and FlushWithClose operations that are available in the Azure Data Lake Storage Gen2 REST API. You should understand that event triggers are fired only by the creation or deletion of a blob in a blob container, but since they are built on Event Grid, other event types may be enabled in the future. Event triggers work when a blob or file is placed into blob storage or deleted from a certain container: when you place a file in a container, that can kick off an Azure Data Factory pipeline. These triggers use Microsoft Event Grid technology.

What if you could learn about upcoming events that may impact the availability of your VM and plan accordingly? With Azure Scheduled Events, you can. Scheduled Events is one of the sub-services under the Azure Instance Metadata Service that surfaces information about upcoming events. Scheduled Events gives your application sufficient time to perform preventive tasks that minimize the effect of such events. Scheduled events are surfaced through a REST endpoint from within the VM, and the information is made available via a non-routable IP so that it is not exposed outside the VM. While most updates have little to no impact on virtual machines, there are cases where Azure does need to reboot your virtual machine. With Scheduled Events, your application can detect such scenarios, with the event type set to Reboot or Redeploy. You may not reboot your production servers manually, but you can initiate a reboot or redeploy to test your VMs or to test your failover logic; in both cases, a scheduled event is surfaced with the event type set to Reboot or Redeploy.

A related setting controls the state of a database event scheduler; the three values described here match, for example, MySQL's event_scheduler system variable. ON is the default value and starts the event scheduler: the event scheduler thread runs and executes all scheduled events. OFF stops the event scheduler: the thread does not run, is not shown in the output of SHOW PROCESSLIST, and no scheduled events are executed. DISABLED renders the event scheduler non-operational: the thread does not run and is not shown in the output of SHOW PROCESSLIST.

The short sketches that follow illustrate each of these ideas in turn.
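First, the serial schedule described at the start of this section. Here is a minimal Python sketch, with hypothetical transaction bodies, in which each transaction runs to completion before the next one starts, so their operations never interleave.

    # Minimal sketch of a serial schedule: each transaction runs to
    # completion before the next one begins, so operations never interleave.
    def transaction_1(db):
        db["balance"] -= 100   # debit one account
        db["savings"] += 100   # credit another

    def transaction_2(db):
        db["balance"] *= 1.01  # apply interest

    def run_serial_schedule(db, transactions):
        # A serial schedule: transactions are ordered one after the other.
        for txn in transactions:
            txn(db)

    db = {"balance": 1000, "savings": 500}
    run_serial_schedule(db, [transaction_1, transaction_2])
    print(db)  # {'balance': 909.0, 'savings': 600}

If the two transactions ran in parallel instead, their reads and writes could interleave; a serial schedule rules that out by construction.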
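Next, the Azure Data Factory event trigger. This is a hedged sketch using the azure-mgmt-datafactory Python SDK; the subscription, resource group, factory, container, and pipeline names are placeholders, and the exact model names and method signatures can vary between SDK versions.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        BlobEventsTrigger, PipelineReference, TriggerPipelineReference,
        TriggerResource,
    )

    # All names below are placeholders for illustration.
    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "<resource-group>"
    FACTORY_NAME = "<data-factory-name>"
    STORAGE_ACCOUNT_ID = (
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
        "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
    )

    client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # Fire when a blob is created under the given container path; Event Grid
    # delivers the event and Data Factory starts the referenced pipeline.
    trigger = BlobEventsTrigger(
        events=["Microsoft.Storage.BlobCreated"],
        scope=STORAGE_ACCOUNT_ID,
        blob_path_begins_with="/input-container/blobs/",
        pipelines=[TriggerPipelineReference(
            pipeline_reference=PipelineReference(reference_name="CopyNewFiles"),
        )],
    )

    client.triggers.create_or_update(
        RESOURCE_GROUP, FACTORY_NAME, "NewBlobTrigger",
        TriggerResource(properties=trigger),
    )
    # The trigger must also be started before it will fire:
    client.triggers.begin_start(RESOURCE_GROUP, FACTORY_NAME, "NewBlobTrigger")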
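For Scheduled Events, the sketch below polls the service's REST endpoint from inside a VM. The non-routable metadata IP (169.254.169.254) and the Metadata: true header are part of the documented service; the api-version shown is one published version and may need updating.

    import json
    import urllib.request

    # Scheduled Events is served from the non-routable Instance Metadata
    # Service IP, so this request only works from inside the VM.
    URL = ("http://169.254.169.254/metadata/scheduledevents"
           "?api-version=2020-07-01")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req) as resp:
        doc = json.load(resp)

    # Each event carries an EventType such as Reboot or Redeploy, giving
    # the application time to run preventive tasks (drain, fail over).
    for event in doc.get("Events", []):
        if event["EventType"] in ("Reboot", "Redeploy"):
            print("Upcoming:", event["EventType"],
                  "resources:", event["Resources"],
                  "not before:", event.get("NotBefore"))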
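Finally, assuming the three scheduler states above refer to a MySQL-style event scheduler, here is a minimal sketch of inspecting and changing that state; the connection details are placeholders, and the snippet assumes the mysql-connector-python package.

    import mysql.connector  # assumes the mysql-connector-python package

    # Connection details are placeholders.
    conn = mysql.connector.connect(host="localhost", user="admin",
                                   password="<password>")
    cur = conn.cursor()

    # Inspect the current scheduler state: ON, OFF, or DISABLED.
    cur.execute("SHOW VARIABLES LIKE 'event_scheduler'")
    print(cur.fetchone())  # e.g. ('event_scheduler', 'ON')

    # ON starts the scheduler thread; OFF stops it. DISABLED can only be
    # set at server startup, not at runtime.
    cur.execute("SET GLOBAL event_scheduler = OFF")
    conn.close()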
Authoring tools are usually higher-level abstractions of lower-level programming tools like compilers. At the lowest level, we would classify these development tools as an editor, a compiler, and a debugger. These components allow you to write code directly by hand and then create an executable program from that code once it is debugged. This level gives you the most flexibility in what you can do with the program. An Integrated Development Environment, or IDE, is a software application that provides comprehensive facilities to computer programmers for software development. An IDE normally consists of at least a source code editor, build automation tools, and a debugger. Object-oriented programming is a programming language model organized around objects rather than actions, and data rather than logic. The first step in OOP is to identify all the objects the programmer wants to manipulate and how they relate to each other, an exercise often known as data modeling; a tiny sketch of this step follows at the end of the section. The authoring environment is a software tool for creating scenarios that try to mimic the navigation of real users. To this end, it provides facilities to record, edit, and debug database scripts, which are then used to define the scenarios of workload characterization.

This ends this section. Up next, we'll create a trigger that runs a pipeline on a schedule.
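As promised, a minimal data-modeling sketch in Python: the classes and their relationship below (Pipeline, Trigger) are hypothetical, chosen to echo this module's topic, and illustrate the "identify the objects and how they relate" step.

    # Hypothetical data model: the objects we want to manipulate and how
    # they relate. A Trigger refers to the Pipeline it starts.
    class Pipeline:
        def __init__(self, name):
            self.name = name

        def run(self):
            print(f"Running pipeline {self.name}")

    class Trigger:
        def __init__(self, name, pipeline):
            self.name = name
            self.pipeline = pipeline  # relationship: a trigger targets a pipeline

        def fire(self):
            print(f"Trigger {self.name} fired")
            self.pipeline.run()

    trigger = Trigger("NewBlobTrigger", Pipeline("CopyNewFiles"))
    trigger.fire()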