Hello.

In the last session we covered determinants,

and we saw the inverse of a matrix and systems of equations.

Now we will again work with a set of similar, related concepts.

The basic concepts here are eigenvalues and eigenvectors.

As applications of this basic concept, we will look at the diagonalization of matrices,

at functions of matrices,

and we will discuss differential equations.

Turning to the key topic: we take a square matrix and ask for its eigenvalues and eigenvectors.

First of all, what do eigenvalues and eigenvectors mean?

How are they determined, are they linearly independent of one another, and, in particular,

what special, useful properties emerge when the matrix is symmetric?

These are the questions we will address.

Now let's look at the concept of eigenvalues and eigenvectors.

We have seen that a linear transformation

can be represented by a matrix.

It went like this: we have an operator

going from one space to another, and in the domain of the operator we take basis vectors,

which we denote e_j.

More generally, we can take vectors that merely span the space.

When we call them a basis, they necessarily

have to be linearly independent.

But for a set that just spans the space, linearly dependent vectors are also acceptable.

Anyway, leaving such details aside: take any set of vectors

e_j that spans the space.

When we apply the transformation, we can express the result with coefficients

in terms of the basis vectors of the target space,

which we call f_i.

The coefficients a_ij of this representation form a two-dimensional array,

and arranging them gives us the matrix.

For a square matrix, the dimension of the domain,

that is, the number of e_j's,

is the same as the number of f_i's.

So by this definition, for a square matrix

the dimension of the target space and the dimension of the domain are the same.

Therefore, for the domain and the target space,

we can choose the very same vectors e_j as the basis.

So when we transform an e_j,

the result is again expressed as a combination of these same e's with certain coefficients.

Those coefficients constitute our square matrix.

In input-output terms:

when we feed the vector e_j into the transformation,

out comes a linear

combination of the e's.

But there can be special cases: you feed in e_j,

and only a multiple of e_j itself comes out.

We will see examples of this.

Vectors of this kind, whose direction the transformation does not change

but whose length it merely rescales,

are called eigenvectors, because they are vectors with this special property.

They are also called characteristic vectors.

The special numbers that rescale them

are called eigenvalues, or characteristic values.

Shown schematically: you feed in e_j, and e_j

comes out again, only with its size changed.

There is no need for a double index like λ_ij here.

Since only a single number appears, we can call it λ_j.

That defines the eigenvalue.
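In standard notation, the contrast just described can be sketched as follows (my own summary, not the lecture's slides): a generic basis vector transforms into a combination of all the e's, while an eigenvector transforms into a multiple of itself:

```latex
A e_j = \sum_i a_{ij}\, e_i \quad \text{(generic basis vector)},
\qquad
A e_j = \lambda_j\, e_j \quad \text{(eigenvector: a single term, no sum)}.
```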

So why is this important, we may ask. It matters for the following reason: each of these e vectors

represents a state, an event,

and the operator is like a law of nature that transforms these states.

So if we can select these vectors such that

each one, when transformed,

yields only a multiple of itself, then we have reduced

an m-dimensional problem to m one-dimensional problems.

One-dimensional problems are of course much easier to understand.

Complex problems with many components interacting with each other are difficult to understand.

But once you separate the problem into its components, the one-dimensional pieces

are each solved and understood much more easily.

That is why eigenvalues and eigenvectors are important.

For example: if the e's

consist of eigenvectors, the matrix becomes diagonal:

there are numbers on the diagonal, and zeros everywhere else.

Because normally, when we transform the e_j's of a generic basis,

these are not special vectors and we get a full matrix;

but if we can find these very special vectors, each column contains only one number,

and that single number sits on the diagonal of the transformation matrix.

This is obviously very important, because if you have a 1000-row,

1000-column matrix,

you are transforming a 1000-dimensional space into a 1000-dimensional space.

You need a million numbers.

However, if you find suitable basis vectors, you can solve the problem

with only the 1000 numbers on the diagonal.

The problem becomes both easier to understand and much simpler to compute;

it is reduced considerably.
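To make the count concrete, here is a small sketch (my own illustration in NumPy, not from the lecture): a generic transformation of a 1000-dimensional space is described by a million numbers, while a diagonal one needs only the 1000 diagonal entries, and applying it is just elementwise scaling.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# A generic transformation of an n-dimensional space: n*n numbers.
A = rng.standard_normal((n, n))

# In a basis of eigenvectors the matrix is diagonal: n numbers suffice.
d = rng.standard_normal(n)   # the diagonal entries
D = np.diag(d)               # the same transformation stored as a full matrix

x = rng.standard_normal(n)

assert A.size == n * n               # one million numbers
assert d.size == n                   # only a thousand
assert np.allclose(D @ x, d * x)     # diagonal action = elementwise scaling
```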

Let's start with an example.

Let there be a transformation from a two-dimensional space to a two-dimensional space.

When we transform a vector with components x1 and x2,

we get x1 plus two x2, and four x1 plus three x2.

Since we go from two dimensions to two dimensions,

we can use the same basis vectors on both sides.

Let's take the standard basis vectors.

Transforming (1, 0): putting 1 for x1

and 0 for x2, we get out 1 and 4.

Conversely, transforming (0, 1): putting 0 for x1

and 1 for x2,

we get out the numbers 2 and 3.

Writing these outputs as columns,

we obtain the matrix of numbers representing this transformation.
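The steps above can be checked numerically; a minimal sketch (my own NumPy illustration) that builds the matrix from the images of the standard basis vectors:

```python
import numpy as np

# The transformation of the example: (x1, x2) -> (x1 + 2*x2, 4*x1 + 3*x2)
def T(x):
    x1, x2 = x
    return np.array([x1 + 2 * x2, 4 * x1 + 3 * x2])

# The images of the standard basis vectors become the columns of the matrix.
col1 = T(np.array([1, 0]))   # -> (1, 4)
col2 = T(np.array([0, 1]))   # -> (2, 3)
A = np.column_stack([col1, col2])

assert np.array_equal(A, np.array([[1, 2], [4, 3]]))

# The matrix reproduces the transformation on any vector.
x = np.array([7, -3])
assert np.array_equal(A @ x, T(x))
```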

A very detailed presentation of this was given in the first part;

you can also look at the summary given at the beginning,

in the summary at the start of this section.

Now, instead of working

with the standard rectangular basis, let's work with the vectors (1, 2) and (1, -1),

using them as the basis of the target space as well.

When we transform the vector (1, 2),

we put 1 for x1 and 2 for x2.

See: here, putting 1 for x1, we get 1 plus 2 times 2, which is 5,

and there, with the same values,

you get the number 10.

Taking 5 out as a common factor, you see

that the transformation of the vector (1, 2) comes out as 5 times (1, 2) itself.

Coming to (1, -1): we put 1 for x1 and -1 for x2.

See: in the first component, 1 minus 2 gives -1;

in the second, 4 minus 3 gives 1.

So the transformation of this vector is (-1, 1),

which, see, is again the same vector multiplied by -1.

So transforming each of these two vectors gives back only a multiple of the vector itself.

That means these are eigenvectors.
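The two computations just done, in code (a sketch; `A` is the matrix of the example):

```python
import numpy as np

A = np.array([[1, 2],
              [4, 3]])   # the matrix of the example transformation

v1 = np.array([1, 2])
v2 = np.array([1, -1])

# Each vector is transformed into a multiple of itself.
assert np.array_equal(A @ v1, 5 * v1)    # (5, 10) = 5 * (1, 2)
assert np.array_equal(A @ v2, -1 * v2)   # (-1, 1) = -1 * (1, -1)
```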

Now, if we want to find the matrix representation

in this basis, the first column is (5, 0) and the second column is (0, -1);

that means the numbers in the output matrix sit on the diagonal.

So these vectors have a special property.
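As a check (my own sketch, not from the lecture), putting the two eigenvectors into the columns of a basis-change matrix P and representing the transformation in that basis does give the diagonal matrix:

```python
import numpy as np

A = np.array([[1, 2],
              [4, 3]])

# Eigenvectors (1, 2) and (1, -1) as the columns of the basis-change matrix.
P = np.array([[1, 1],
              [2, -1]])

# The same transformation, represented in the eigenvector basis.
D = np.linalg.inv(P) @ A @ P

assert np.allclose(D, np.diag([5, -1]))
```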

We seem to have found these vectors here at random, but of course

we knew beforehand that they would diagonalize the matrix, and chose them for this calculation.

In general, though, we can ask the question:

how do we know in advance which vectors will produce a diagonal structure?

We cannot guess them at the outset.

So we need a systematic way of computing them.

The computation goes as follows: we look for a vector e whose transformation

is again a multiple of that same vector e.

In the matrix representation, this transformation of e becomes

the matrix multiplied by e, and it must equal λ times e.

At this point we know neither λ nor e.

These e's and λ's are special vectors and numbers, not arbitrary ones.

Because if you take an arbitrary vector, in general the output

has components along all the vectors.

It forms a row, sorry, a column vector.

Here, instead, the output is simply a number

times that same column; everything else is zero.

So how do we find this number?

How do we find these vectors?

Here is the way to do it. We take the given matrix

and multiply it by the vector e, which we do not yet know;

the resulting vector must equal the number λ, which we also do not yet know, times e.

This is not the usual form of an equation,

because we generally prefer to collect the unknowns on the left.

So let's move the λ term to the left, leaving zero on the right.

The vector e is common to both terms.

Factoring e out of λ times e and A times e,

we are left with A minus λ.

But because this difference appears in a product with e,

we have to multiply λ by the identity matrix; subtracting a plain number from a matrix

would not be well defined.

And if you think about it, multiplying the identity by e just gives the vector e back.

So we are consistent.

Thus the equation here is just a rewriting of the equation we started from.

Let's write this out visually.

We write the matrix, subtracting λ along the diagonal.

Next to it we write the vector of unknown numbers we are seeking,

and we set the whole thing equal to zero.
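Looking ahead to how this comes out (an illustration with NumPy, not the lecture's hand computation): (A − λI)e = 0 has a nonzero solution e only for special values of λ, and `np.linalg.eig` finds exactly the λ's and e's of the example.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [4.0, 3.0]])

# For (A - lambda*I) e = 0 to have a nonzero solution e,
# lambda must take special values; numpy finds them for us.
eigenvalues, eigenvectors = np.linalg.eig(A)

# The two lambdas found earlier by direct computation.
assert np.allclose(sorted(eigenvalues), [-1.0, 5.0])

# Each column of `eigenvectors` satisfies A e = lambda e.
for lam, e in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ e, lam * e)
```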

Now, let's take a break.

Afterwards we will see how to carry out these calculations.