0:13

To help you understand the lectures, we provide several supplementary materials.

In this supplementary material, I am going to introduce you to tensors.

So we will follow the English proverb: keep the task small.

So to study tensors, we will consider two-dimensional tensors, which are much simpler, and most of the intermediate expressions can be written explicitly. That is, we will consider tensors in flat two-dimensional space.

To introduce what a tensor is, let me start with a vector. All of you know what a vector is. In two dimensions, a vector is something which carries two components. But there is a distinction between a mere pair of numbers and a vector: a vector is a quantity which transforms appropriately under coordinate transformations.

Namely, if we have a vector $(V_1, V_2)$, it transforms according to the following rule:

$$\begin{pmatrix} \bar V_1 \\ \bar V_2 \end{pmatrix} = \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix} \begin{pmatrix} V_1 \\ V_2 \end{pmatrix}.$$

Well, here $M$ is a coordinate transformation matrix. We consider only linear transformations in flat space. And most frequently we actually consider rotations, in which case this matrix is just the famous

$$\begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}.$$

Â 2:02

So this is just the matrix of rotation by an angle $\varphi$.
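As a quick numerical sanity check (the numbers below are illustrative, not from the lecture), one can verify with NumPy how this rotation matrix acts on a vector:

```python
import numpy as np

# Rotation by an angle phi in two dimensions; rotating the unit vector
# along the first axis by 90 degrees should give the second axis.
phi = np.pi / 2
M = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

V = np.array([1.0, 0.0])   # unit vector along the first axis
V_bar = M @ V              # transformed components

assert np.allclose(V_bar, [0.0, 1.0])
```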

So this is a two-dimensional vector. Well, what is the difference? In tensor notation, a vector is represented as follows: it is some quantity $V_a$ which carries an index $a$, and $a$ ranges from 1 to 2. Then in tensor notation the equality above is written as $\bar V_a = M_{ab} V_b$.

Â 2:36

So in tensor notation, if there is a repeated index, a summation over it is assumed. So $\bar V_a = M_{ab} V_b$ literally means that there is a sum over $b$ from 1 to 2: $\bar V_a = \sum_{b=1}^{2} M_{ab} V_b$. And let us see that this is actually the same as the matrix formula. Explicitly, these are just two equations, one for each value of $a$. We have $\bar V_1 = M_{1b} V_b$, which is just $M_{11} V_1 + M_{12} V_2$; and similarly $\bar V_2 = M_{2b} V_b = M_{21} V_1 + M_{22} V_2$. So this is actually the same as the matrix equation: the two notations have the same meaning.
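The summation convention can be checked numerically; here is a small sketch (the matrix entries are illustrative) comparing the index expression, the explicit sum, and ordinary matrix multiplication:

```python
import numpy as np

# The Einstein summation convention: V_bar_a = M_ab V_b, where the
# repeated index b is summed over.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
V = np.array([5.0, 6.0])

V_bar = np.einsum('ab,b->a', M, V)   # sum over the repeated index b

# The same thing as an explicit sum over b (0 and 1 in Python indexing):
V_explicit = np.array([sum(M[a, b] * V[b] for b in range(2))
                       for a in range(2)])

assert np.allclose(V_bar, V_explicit)
assert np.allclose(V_bar, M @ V)     # and as ordinary matrix multiplication
```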

Â 3:39

So what is a tensor? A tensor is a quantity which carries indices. For example, a rank-$n$ tensor is a quantity $T_{a_1 \dots a_n}$ which has $n$ indices $a_1, \dots, a_n$. For the moment I restrict myself to tensors which carry only lower indices; we will see the difference between lower- and upper-index tensors a bit later. So a tensor is some quantity which carries indices, similarly to a vector, but there can be many indices. A tensor is a collection of quantities which transforms under rotations, under coordinate transformations, as follows:

$$\bar T_{a_1 \dots a_n} = M_{a_1 b_1} \cdots M_{a_n b_n}\, T_{b_1 \dots b_n}.$$

So each of the $n$ indices is acted on by the same matrix.

Â 5:22

For example, the product of two vectors, $T_{ab} = V_a W_b$, has two indices, and under rotations it transforms according to this rule: each of the two vectors transforms with the matrix $M$, so the rule is the same.
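This rank-2 transformation rule can be sketched numerically (the vectors and angle are illustrative): acting with $M$ on each index of $T_{ab} = V_a W_b$ gives the same result as rotating the two vectors first.

```python
import numpy as np

# T_ab = V_a W_b transforms with one matrix M per index:
# T_bar_ab = M_ac M_bd T_cd.
phi = 0.3
M = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

V = np.array([1.0, 2.0])
W = np.array([3.0, -1.0])
T = np.einsum('a,b->ab', V, W)               # T_ab = V_a W_b

T_bar = np.einsum('ac,bd,cd->ab', M, M, T)   # act with M on each index
VW_bar = np.einsum('a,b->ab', M @ V, M @ W)  # rotate the vectors first

assert np.allclose(T_bar, VW_bar)            # the two ways agree
```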

Â 5:34

So in principle, if you have a tensor with two indices, it is convenient to place it in a matrix, for example

$$\begin{pmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{pmatrix}.$$

But if you have a tensor with more indices, its placement into a matrix is already cumbersome. In principle, a tensor with three indices can be placed in a cubic array: the first layer is $T_{111}, T_{112}, T_{121}, T_{122}$, and then there is a second layer, $T_{211}, T_{212}$, etc. But who needs this placement? In fact, if you have a tensor with many indices, you can place it in a hypercubic array, but there is no point in doing that. Even this placement is pointless, because one has to get rid of this way of writing things. One has to use tensor notation, because it is convenient for many reasons, and we will explain why during this discussion.

Â 7:06

To clarify why we need tensors, let me introduce the notion of a scalar product, or norm, in the space of vectors. So we all know that if we have two vectors, we can multiply them in the scalar product, which is just $V^1 W_1 + V^2 W_2$. Let me stress why I use upper and lower indices: in this case the distinction is, in a sense, meaningless, but it is just to stress that we have something like a row $(V_1, V_2)$ multiplying a column $(W^1, W^2)$. For a row we use a lower index; for a column, an upper index. So the index expression is equivalent to the row-times-column one. Now, in tensor notation, the scalar product can be written as $V_a W^a$.

Â 8:27

Well, for that we use the metric tensor, the bilinear form which specifies the norm. For example, the norm of a vector $V$, i.e. the scalar product of $V$ with itself, can be written as $V_a V^a$, or as $\delta_{ab} V^a V^b$, or as $\delta^{ab} V_a V_b$. And how do we obtain an upper index from a lower one? If we have a vector with a lower index, we can multiply it by the tensor with upper indices, $\delta^{ab}$ (what this is, I will explain in a moment), and we obtain an upper index: $V^a = \delta^{ab} V_b$. And if we have an upper index, we multiply by $\delta_{ba}$ and obtain $V_a = \delta_{ab} V^b$. So the one is the inverse of the other, which in our case is trivial: $\delta_{ab}\,\delta^{bc} = \delta_a{}^c$. So this is a tautological statement. What is $\delta_{ab}$? It is just the unit matrix.

Â 9:51

$\delta^{bc}$ is just the inverse of the metric $\delta_{ab}$, so its matrix is also the unit matrix. (We will encounter a slightly different situation a bit later.) And $\delta_a{}^c$ is the Kronecker symbol, so it is the unit matrix as well.

Â 10:19

So using these tensors, we can map lower and upper indices to each other; that is the reason we need them. Hence, the norm can be written in many different ways: $V_a V^a$, $\delta^{ab} V_a V_b$, $V^b V_b$, etc. These are different ways of writing the same thing, and they all literally mean $V^1 W^1$-style explicit sums of products of components. And notice that due to these relations, the map between upper and lower indices is trivial: if $V^a$ has components $V^1, V^2$, and $V_a$ has components $V_1, V_2$, then one can observe that $V_1$ is just equal to $V^1$ and $V_2$ is just equal to $V^2$. So the difference between upper and lower indices is, in this case, tautological, and we keep it only to stress that $V^a$ transforms as a column while $V_a$ transforms as a row. That means $V^a$ transforms according to the matrix $M$, while $V_a$ transforms according to the inverse matrix of $M$. And the inverse matrix of $M$, in the case of a rotation, is just the transposed matrix.
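The last statement, that for a rotation the inverse is the transpose, is easy to check numerically (the angle here is arbitrary):

```python
import numpy as np

# For a rotation matrix M, the inverse equals the transpose: M^{-1} = M^T.
# This is why raising and lowering indices is trivial in flat space.
phi = 0.7
M = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

assert np.allclose(M.T @ M, np.eye(2))      # M^T M is the identity
assert np.allclose(np.linalg.inv(M), M.T)   # inverse equals transpose
```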

Â 11:55

So this is what concerns tensors. And now one can see the reason why tensors are convenient. For example, suppose we have a tensor with many indices, some of them upper and some of them lower, say $T^{a}{}_{bc}{}^{de}{}_{fg}$. The order of the indices is important, because the tensor does not have to be symmetric. Then we can take the product of this tensor with a different tensor, say $B$, which carries, among others, the indices $g$, $a$ and $d$. In such a product, every index which is repeated once upstairs and once downstairs is contracted, i.e. there is a summation over it. The result of this is some tensor which carries only the remaining free indices, say three of them in this example. Because the transformation of each contracted upper index is compensated by the transformation of the matching lower index, the product transforms according to the rule by which tensors with three indices transform. So in tensor notation, all the transformation properties under rotations are obvious. That is one of the reasons why tensors are convenient.
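Contraction is exactly what `np.einsum` does with repeated index letters; here is a small sketch (the tensors are random placeholders, not the ones written on the board):

```python
import numpy as np

# Contracting a repeated index: S_abd = T_abc B_cd.  The index c is
# summed over, and the free indices a, b, d label the result.
rng = np.random.default_rng(0)
T = rng.normal(size=(2, 2, 2))        # a rank-3 tensor T_abc
B = rng.normal(size=(2, 2))           # a rank-2 tensor B_cd

S = np.einsum('abc,cd->abd', T, B)    # rank-3 result with indices a, b, d

assert S.shape == (2, 2, 2)
# one component checked against the explicit sum over c:
assert np.isclose(S[0, 0, 1], sum(T[0, 0, c] * B[c, 1] for c in range(2)))
```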

Â 13:28

Well, another option: if we have a tensor with two indices, $T_{ab}$, we can multiply it by $V^a W^b$, and the result $T_{ab} V^a W^b$ will be a scalar quantity, which does not carry any indices. So this is another example of a similar situation.
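That $T_{ab} V^a W^b$ is a scalar, i.e. unchanged by rotations, can be verified numerically (the components below are illustrative):

```python
import numpy as np

# T_ab V^a W^b has no free indices, so it is rotation invariant:
# rotating T on both indices and rotating V and W leaves it unchanged.
phi = 1.1
M = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

T = np.array([[1.0, 2.0], [0.5, -1.0]])
V = np.array([1.0, 3.0])
W = np.array([-2.0, 1.0])

s = np.einsum('ab,a,b->', T, V, W)

T_bar = np.einsum('ac,bd,cd->ab', M, M, T)   # rotate T on both indices
s_bar = np.einsum('ab,a,b->', T_bar, M @ V, M @ W)

assert np.isclose(s, s_bar)                  # the scalar does not change
```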

Â 13:45

And what else? Using the metric we can lower and raise indices. For example, if we have a tensor with an upper index, we can multiply it by the metric tensor, and the result will be a tensor with three lower indices. And similarly we can raise an index: for example, given $T_{abc}$ we can multiply it by $\delta^{bd}$, and this will be a tensor $T_{a}{}^{d}{}_{c}$ with indices $a$, $d$, $c$. So these are the ways we map lower and upper indices to each other.

Perhaps we need to clarify the notation further. Namely, a tensor with one upper and one lower index transforms according to the following rule: $\bar T^{a}{}_{b} = M^{a}{}_{c}\,(M^{-1})^{d}{}_{b}\, T^{c}{}_{d}$. The difference between the two matrices is that the one is the inverse of the other: $M^{a}{}_{b}\,(M^{-1})^{b}{}_{c} = \delta^{a}{}_{c}$. And what does it mean that we have an invariant expression? It means that if we have a quantity like $T^{a}{}_{b} V^{b}$, then we have the following transformation rule for it: $\overline{T^{a}{}_{b} V^{b}} = M^{a}{}_{c}\, T^{c}{}_{b} V^{b}$. Why do we have this relation? Because, according to the transformation rules above, the transformation of the contracted lower index of $T$ is compensated by the transformation of the upper index of $V$. Only one matrix $M$ remains, which states that this quantity transforms as a vector with one index. And all the rest follows.
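In flat space, where upper and lower indices coincide, this can be checked directly (the components are illustrative): the contracted quantity $U_a = T_{ab} V^b$ transforms as a vector.

```python
import numpy as np

# Check that U_a = T_ab V_b transforms as a vector, U_bar = M U,
# because the contracted index is compensated.
phi = 0.5
M = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

T = np.array([[2.0, 1.0],
              [0.0, -3.0]])
V = np.array([1.0, 2.0])

U = T @ V                                    # U_a = T_ab V_b
T_bar = np.einsum('ac,bd,cd->ab', M, M, T)   # rotate T on both indices
U_bar = T_bar @ (M @ V)                      # contract with the rotated vector

assert np.allclose(U_bar, M @ U)             # transforms as a vector
```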

Â 16:23

And several remarks are in order. So we have a metric. It means that if we have, in our space, two nearby points, $x$ and $x + dx$, or in index notation, $x^a$ and $x^a + dx^a$.

Â 16:47

Then we can define the distance between these two points according to the following formula:

$$dl^2 = dx^a\, dx_a = \delta_{ab}\, dx^a dx^b = \delta^{ab}\, dx_a dx_b.$$

So these are all the same thing, and as you know, this is just $(dx^1)^2 + (dx^2)^2$.

Finally, we should stress that, as we all know, under rotations this bilinear form does not change. It means that after a rotation we have $d\bar x^a\, d\bar x^a = dx^a\, dx^a$; namely, $(d\bar x^1)^2 + (d\bar x^2)^2 = (dx^1)^2 + (dx^2)^2$. So under rotations the bilinear form does not change. It means that $\delta_{ab}$, which is the metric tensor, is a quantity which transforms according to the rule a tensor should transform by, $\bar\delta_{ab} = M_{ac} M_{bd}\, \delta_{cd}$, but the components of the barred tensor coincide with the original components. So the matrix of the transformed metric is the same as the matrix of the original one.
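This invariance of the flat metric is immediate to check numerically (the angle is arbitrary):

```python
import numpy as np

# The flat metric delta_ab is an invariant tensor: transforming it with a
# rotation on each index gives back exactly the same components.
phi = 0.9
M = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

delta = np.eye(2)
delta_bar = np.einsum('ac,bd,cd->ab', M, M, delta)  # delta_bar = M delta M^T

assert np.allclose(delta_bar, delta)                # the same unit matrix
```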

Â 18:34

This is not the case for a generic tensor with two indices, or with many indices. So this statement just means that we have an invariant tensor. Another invariant tensor in two dimensions is the totally antisymmetric tensor: $\epsilon_{ab}$ is invariant. It has the property $\epsilon_{ab} = -\epsilon_{ba}$, so it is antisymmetric. And if we specify that $\epsilon_{12} = 1$, then one can obviously find from these properties that $\epsilon_{11} = 0$; and $\epsilon_{22}$ is also $0$, because under the exchange of its two indices it changes sign but is equal to itself, so it vanishes; and $\epsilon_{21}$ is just $-1$.

And why is it an invariant tensor? Because if we have two vectors $V^a$ and $W^b$, the corresponding quantity $\epsilon_{ab} V^a W^b$ is nothing but the oriented area of the parallelogram they span. And after a rotation, not only does the area of the parallelogram not change, but even the formula expressing it does not change.

Â 19:57

So it means that this tensor is invariant under rotations.
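Both statements, the invariance of the area and of the tensor components themselves, can be sketched numerically (the vectors are illustrative):

```python
import numpy as np

# epsilon_ab V^a W^b is the oriented area of the parallelogram spanned
# by V and W, and epsilon_ab itself is invariant under rotations.
eps = np.array([[0.0, 1.0],
                [-1.0, 0.0]])   # eps_12 = 1, eps_21 = -1

phi = 0.4
M = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

V = np.array([2.0, 1.0])
W = np.array([-1.0, 3.0])

area = np.einsum('ab,a,b->', eps, V, W)
area_rot = np.einsum('ab,a,b->', eps, M @ V, M @ W)
assert np.isclose(area, area_rot)              # same area after rotation

# eps transforms into itself: M_ac M_bd eps_cd = det(M) eps_ab = eps_ab
eps_bar = np.einsum('ac,bd,cd->ab', M, M, eps)
assert np.allclose(eps_bar, eps)
```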

Similarly, in three dimensions, we have the antisymmetric tensor $\epsilon_{ijk}$, where $i$, $j$ and $k$ range from 1 to 3. This tensor is antisymmetric under the exchange of any two of its indices, so for example $\epsilon_{jik} = -\epsilon_{ijk}$, and likewise for the other exchanges, etc. So it is uniquely fixed by its symmetry properties, up to normalization. And it is also invariant, because $\epsilon_{ijk} V^i W^j U^k$, for three non-collinear vectors $V$, $W$, $U$ in three dimensions, specifies the volume of the parallelepiped they span, the solid whose faces are parallelograms. And under rotations, not only does the volume of this parallelepiped not change, but the formula expressing it does not change either.
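The three-dimensional statement can be sketched as well (a minimal check with axis-aligned vectors, where the volume is obvious):

```python
import numpy as np

# epsilon_ijk V^i W^j U^k is the signed volume of the parallelepiped
# spanned by V, W, U, which equals det([V, W, U]).
def levi_civita_3d():
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k] = 1.0    # even permutations of (1, 2, 3)
    for i, j, k in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
        eps[i, j, k] = -1.0   # odd permutations
    return eps

eps = levi_civita_3d()
V = np.array([1.0, 0.0, 0.0])
W = np.array([0.0, 2.0, 0.0])
U = np.array([0.0, 0.0, 3.0])

vol = np.einsum('ijk,i,j,k->', eps, V, W, U)
assert np.isclose(vol, 6.0)   # a 1 x 2 x 3 box
assert np.isclose(vol, np.linalg.det(np.array([V, W, U])))
```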

The situation is very similar in four dimensions: in four-dimensional spacetime we also have $\epsilon_{\mu\nu\alpha\beta}$, which specifies the totally antisymmetric tensor. What else should I say here about tensors? There is the difference between space and spacetime, and that I will clarify in a moment.
