To get started with AutoGraph, you can begin by implementing a function that performs some operation in eager mode, like you always would. For example, here is a simple function that takes two parameters, a and b, and returns their sum. You then decorate your add function by adding @tf.function on the line above your function definition. You can think of this decorator as taking your custom code, a plus b, and wrapping it inside a pre-built tf.function, so your add function now has the features of tf.function combined with your custom code.

Your decorated add function now has graph code that you can take a look at. If you want to explore what the graph code looks like, you can use the tf.autograph.to_code method and pass in the add function, or whatever function you defined using tf.function. There's a lot of plumbing here that makes this Op work within graph mode, but because it's a very simple Op that just adds a and b, you can still see that in here.

You can compute gradient operations on graph-style code too. Here's an example of computing a gradient on the add function that you just defined. You define a variable v with a value of one. Inside a tf.GradientTape with block, you then call your custom add function, passing it the variable v and one, and store the result. To calculate the gradient, you then call tape.gradient, passing in the result and v. This calculates the gradient of the result with respect to v, and this gives us one.

If your code uses multiple functions, you don't need to annotate them all. Any function called from within an annotated function will also run in graph mode. Here, deep_net is decorated with tf.function, but linear_layer is not. However, since linear_layer is called from within deep_net, linear_layer will also be converted to graph mode by AutoGraph.
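The steps above can be sketched in a few lines of TensorFlow (the names add and v follow the narration; the exact values are illustrative):

```python
import tensorflow as tf

# Decorate an ordinary eager-mode function so AutoGraph converts it to graph mode.
@tf.function
def add(a, b):
    return a + b

# Inspect the generated graph code; to_code takes the plain Python function,
# which a tf.function exposes as .python_function.
print(tf.autograph.to_code(add.python_function))

# Gradients work on graph-style code too: d(add(v, 1))/dv = 1.
v = tf.Variable(1.0)
with tf.GradientTape() as tape:
    result = add(v, 1.0)
grad = tape.gradient(result, v)
print(grad)  # 1.0
```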
Python's functions are polymorphic, which means that a function that takes in a parameter can work whether the argument passed in is an integer, a float, or even a string. This polymorphism applies to graph-style code as well; all of the operations here work like they would when just coded in Python. Here, we define a decorated function named double, which takes a parameter and adds it to itself. It works if we give it the integer one, for which it returns the number 2; it works with a float of 1.1, for which it returns 2.2; and it also works with a string, for example the letter a, for which it returns the string aa. Note that the prefix b that appears before the string denotes that aa is stored as a byte literal. Don't worry too much about this; when data is stored as a byte literal, it just means that the string is stored as raw bytes rather than as Unicode characters.

When defining a subclass of Keras classes, as you've done earlier in this course, you can also use graphs. Here, I've defined a class called CustomModel, which inherits from the Keras Model class. Within the class, I've defined the call method decorated with tf.function, so it will be converted into graph mode for me too.

A quick note on performance: using AutoGraph will have its biggest performance gain for code that uses lots and lots of Ops. That code doesn't have to be inherently complex, and often a very simple piece of Pythonic code can use a lot more Ops than you might think. Consider the simple game of FizzBuzz. It's a simple algorithm that loops through numbers: if a number is a multiple of three, print Fizz; if it's a multiple of five, print Buzz; and if it's a multiple of both, print FizzBuzz. But look how many Ops it takes to do this, not to mention the conditionals and the control flow. This type of code can look very complex in graph mode, but it can execute much quicker, and it is the best type of scenario for using graph mode.
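A minimal sketch of the polymorphic double function and the Keras subclass described above (the class name CustomModel matches the narration, but its single Dense layer is an assumption for illustration):

```python
import tensorflow as tf

@tf.function
def double(a):
    return a + a  # works for ints, floats, and strings alike

print(double(tf.constant(1)))     # -> 2
print(double(tf.constant(1.1)))   # -> 2.2
print(double(tf.constant("a")))   # -> b'aa' (a byte literal)

# The same decorator works on a method of a Keras subclass:
class CustomModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    @tf.function  # call() is converted to graph mode by AutoGraph
    def call(self, inputs):
        return self.dense(inputs)

model = CustomModel()
output = model(tf.ones((1, 3)))  # runs the graph-mode call()
```

Note that tf.function retraces the graph once per input dtype, which is how a single decorated function handles integers, floats, and strings.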
Remember that code that uses lots of small Ops tends to see the biggest performance improvement. If you look at FizzBuzz in graph mode, it would be very hard to code by hand. Let's take a look at the code. There's a lot of code here, so let's go through it in detail, line by line. Now, don't worry, I'm just kidding. You don't need to worry if you can't read this code or if you don't feel like reading through it. The point is that by using tf.function, AutoGraph can generate all of this for you.
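As a sketch, a FizzBuzz like the one described might look like this in ordinary Python style, with tf.autograph.to_code revealing the generated graph-mode control flow (the loop bound of 15 is illustrative):

```python
import tensorflow as tf

@tf.function
def fizzbuzz(n):
    # A plain loop with conditionals -- lots of small Ops once in graph mode.
    for i in tf.range(1, n + 1):
        if i % 15 == 0:          # multiple of both 3 and 5
            tf.print("FizzBuzz")
        elif i % 3 == 0:
            tf.print("Fizz")
        elif i % 5 == 0:
            tf.print("Buzz")
        else:
            tf.print(i)

fizzbuzz(tf.constant(15))

# AutoGraph generates all of the graph-mode control flow for you:
graph_code = tf.autograph.to_code(fizzbuzz.python_function)
print(graph_code)
```

Because i is a tensor, AutoGraph rewrites the for loop and the if/elif chain into graph control-flow Ops, which is exactly the hard-to-read generated code the narration refers to.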