0:02

Hi, and welcome back to the introduction to deep learning.

You already know the mathematics and the basic principles behind deep learning.

Now, it's time to get your hands dirty.

Before we do that, I have a question for you.

What do you think are the requirements for a deep learning framework?

What capabilities or APIs do you expect from it?

Thank you for your answers.

For us, the answer looks like this: it should provide fast computation, fast matrix operations.

It should provide symbolic differentiation, it should provide optimization, and it should preferably run on GPUs.

There are several frameworks which are more or less equal.

We've chosen TensorFlow for this specialization because of its large user community, its ability to run on distributed systems, its ease of integration with production, and also its terrific visualization capabilities, namely TensorBoard.

Let's go.

If you're not running our prepared environment, you should probably install TensorFlow.

Now, to TensorBoard: let's launch it.

It's running locally, and it should be available under this link.

It's also empty; there is nothing to see here yet.

But we'll be filling it up as we go.

So, the first thing we do: we import TensorFlow and we create a session.

A session is our interface to the computing engine.

If you want to use GPU resources, you should pass options here.
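As a rough sketch of what passing those options might look like (written against the `tf.compat.v1` API so it also runs under TensorFlow 2; the particular option values are just examples, not the lecture's):

```python
import tensorflow.compat.v1 as tf  # the lecture uses the TF 1.x graph API
tf.disable_eager_execution()

# Example GPU option: grab GPU memory on demand instead of all at once
gpu_options = tf.GPUOptions(allow_growth=True)
config = tf.ConfigProto(gpu_options=gpu_options)

s = tf.Session(config=config)  # the session is our interface to the computing engine
```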

To get a feeling of what's going on, let's do a simple exercise: namely, use plain NumPy to compute the sum of squares of the numbers from zero to N-1.
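In plain NumPy, a sketch of that exercise might look like this (the function name is my choice, not from the notebook):

```python
import numpy as np

def sum_squares(n):
    """Sum of squares of the numbers 0 .. n-1."""
    return np.sum(np.arange(n) ** 2)

print(sum_squares(10))  # → 285
```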

And please don't tell me it has an analytical solution; that's not the point here.

The point here is to do the same thing with TensorFlow.

Now, we have N. Before, N was just a plain integer.

Now, N is a tf.placeholder. Let's strongly type it: we declare that it should be an integer, but its value is not specified here.

This is how TensorFlow thinks of computations: as a sequence of operations.

Then we call tf.reduce_sum(tf.range(N)**2).

So this hopefully looks just the same as in NumPy.
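Put together, a minimal sketch of that graph (again written against `tf.compat.v1` so it also runs under TensorFlow 2; the placeholder name is my choice):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

s = tf.InteractiveSession()                  # interface to the computing engine
N = tf.placeholder('int64', name='input_N')  # strongly typed; value not specified yet
result = tf.reduce_sum(tf.range(N) ** 2)     # the same expression as in NumPy

print(result.eval({N: 10}))  # → 285
```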

And we run it; it runs, and the graph shows up in TensorBoard.

Well, what happened here?

TensorFlow is different from, say, NumPy, in the sense that the definition of your computation and its execution are separate.

If you're into functional programming, you should like this a lot, because the basic building block of TensorFlow is a symbolic computation graph, in which you define inputs and the transformations to be applied to those inputs.

What we did here: we had N, so this is our N. Remember, we gave the input a name; that's why it appears like this.

Then we have Range; Range is just another function.

The vector from Range gets passed to Power, which squares it, and then gets passed to Sum, which reduces it to a single number.

So this is how we define those graphs: we define placeholders for the inputs, we combine operations into graphs, and when we need to run them, we just call the eval or run methods.

TensorFlow supports all the standard numerical data types, and it also has most of the functions you'll find in NumPy.

So, since you're in the Advanced Machine Learning specialization, you should not have trouble switching from one to the other.

One last point: this is obviously not a complete introduction to TensorFlow.

It has many, many features, some of them useful and high-level, and you should take a look at tf.contrib before reinventing any wheels.

Let's see some more TensorFlow stuff.

I'll begin with placeholders.

So this is a float input; it can have any size, any shape.

We can require our input to be a vector of any length: if a dimension in the shape is None, it means that dimension can take any value.

You can have a vector of fixed size, or a matrix with a fixed number of columns but any number of rows.

You can have a multidimensional tensor, and you can freely combine Nones and numerical dimensions.
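A sketch of those shape options in code (placeholder variable names are mine):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x_any    = tf.placeholder('float32')                      # any size, any shape
x_vector = tf.placeholder('float32', shape=(None,))       # vector of any length
x_fixed  = tf.placeholder('float32', shape=(10,))         # vector of exactly 10 elements
x_matrix = tf.placeholder('float32', shape=(None, 15))    # 15 columns, any number of rows
x_cube   = tf.placeholder('float32', shape=(2, None, 3))  # Nones freely combined with fixed dims
```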

Operations are defined in a very, very user-friendly fashion.

Here, you take each element of the input vector and double it, elementwise.

Here, we take each element of the input vector and compute its cosine.

And here, you take the input vector, square each component, then subtract the original vector and add one to each component.
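Sketched in code (variable names are mine), those three transformations might look like:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder('float32', shape=(None,))

doubled = x * 2           # double each element, elementwise
cosines = tf.cos(x)       # elementwise cosine
combo   = x ** 2 - x + 1  # square each component, subtract the original, add one

s = tf.Session()
print(s.run(combo, {x: [0., 1., 2.]}))  # → [1. 1. 3.]
```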

Now, this is an example of a more complex transformation.

We define two vectors, and then we do elementwise multiplication and elementwise division.

Here we see that the result is again a tensor.

And here we evaluate it: we take the transformation, we call eval, and we pass a dictionary of input values.
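As a sketch of that workflow (the particular arithmetic is just an example, not necessarily the exact expression from the notebook):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

tf.InteractiveSession()  # lets us call .eval() directly on tensors

a = tf.placeholder('float32', shape=(None,))
b = tf.placeholder('float32', shape=(None,))
transformation = a * b / (a + b)  # elementwise multiplication and division

# eval takes a dictionary mapping placeholders to input values
print(transformation.eval({a: [2., 4.], b: [2., 4.]}))  # → [1. 2.]
```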

So we write it to TensorBoard.

Here are our exercises in TensorBoard.

Well, if you ask me, it looks like an unholy mess.

You may recognize this part, but everything else is very cluttered; moreover, you can even expand it to see that you have many placeholders.

There are ten nodes here. Clutter, clutter, clutter.

How do we deal with it?

TensorFlow provides you with capabilities to group stuff and name stuff.

So all those exercises from above we can put into a name scope, and instead of an anonymous vector we can have a vector with a name, to tell it apart from the other placeholders.
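A sketch of that grouping and naming (the scope and tensor names here are my choices):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.name_scope('examples'):  # groups these nodes into one collapsible box in TensorBoard
    v = tf.placeholder('float32', shape=(None,), name='my_vector')
    doubled = v * 2

print(v.name)  # → examples/my_vector:0
```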

Now, let's begin from the beginning.

If we re-run the transformation and reload TensorBoard, we shall see that instead of a whole screen of clutter, we now have all the example placeholders neatly grouped into a collapsible box, and we have my transformation here, with a vector we can easily recognize by name, and the operations.

There is also a vector here which still retains its default name.

Now, to summarize: TensorFlow is the framework we'll be using throughout this specialization to implement deep learning, so you should probably get familiar with it.

The major building block of TensorFlow is the computation graph: a graph of transformations, which are applied to numerical data.

Now, to your assignment: I suggest you implement the mean squared error computation in TensorFlow.

Here are some tests which will help you make sure it's correct against a baseline implementation.
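One possible sketch of that assignment (not necessarily the reference solution):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

y_true = tf.placeholder('float32', shape=(None,))
y_pred = tf.placeholder('float32', shape=(None,))
mse = tf.reduce_mean((y_true - y_pred) ** 2)  # mean of squared differences

s = tf.Session()
print(s.run(mse, {y_true: [0., 2.], y_pred: [2., 0.]}))  # → 4.0
```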

So, thank you, and see you in the next video, where we'll learn how to implement actual machine learning models in TensorFlow.
