Sunday, 13 January 2013

Eigenmatrices, Decompositions and Conjugation (Matrices)

The definition of eigenvalues and eigenvectors states that for a square matrix M, we can find pairs of vectors (v) and scalars (λ) that satisfy the following rule:
Mv = vλ
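
To make this concrete, here's a minimal numpy sketch (the 2 x 2 matrix is just an arbitrary example chosen for illustration):

import numpy as np

# an arbitrary 2 x 2 example matrix
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(M)

# check Mv = vλ for the first eigenpair
v = eigenvectors[:, 0]               # eigenvectors are returned as columns
lam = eigenvalues[0]
print(np.allclose(M @ v, v * lam))   # True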

We can extend this concept to an eigenmatrix by combining all n eigenvectors, as columns, into an n x n matrix that we will call V, and replacing λ with a diagonal matrix D that holds the n eigenvalues along its diagonal.


 These two matrices, and the original matrix M, obey the eigenmatrix equation:
MV = VD
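
Checking this with numpy (same example matrix as before; np.linalg.eig already hands back the eigenvectors packed as the columns of V):

import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, V = np.linalg.eig(M)   # columns of V are the eigenvectors
D = np.diag(eigenvalues)            # eigenvalues along the diagonal of D

print(np.allclose(M @ V, V @ D))    # True: MV = VD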

It took me some time to understand why V appears on the right of M but on the left of D in this equation, but it becomes apparent if you matrix-multiply out the right hand side. Doing so recovers the eigenvalue/vector definition we started with, i.e. λ₀ multiplies only the first column of V, λ₁ the second, and so on.
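
Written out column by column, with vₖ as the k-th eigenvector:

MV = M[v₀ | v₁ | … | vₙ₋₁] = [Mv₀ | Mv₁ | … | Mvₙ₋₁]
VD = [v₀λ₀ | v₁λ₁ | … | vₙ₋₁λₙ₋₁]

so equating columns of MV = VD gives back Mvₖ = vₖλₖ for every k. Putting D on the left instead (DV) would scale the rows of V rather than its columns, which is why the order matters.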

If V is non-singular (i.e. det(V) ≠ 0) this leads us to a very useful pair of operations called diagonal decomposition and conjugation (in the matrix sense, not the complex number sense). We can rearrange the eigenmatrix equation two ways:
M = VDV⁻¹ and V⁻¹MV = D
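
Both rearrangements are easy to check numerically (again a sketch with the same example matrix as above):

import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, V = np.linalg.eig(M)
D = np.diag(eigenvalues)
V_inv = np.linalg.inv(V)                # only valid because det(V) != 0

print(np.allclose(V @ D @ V_inv, M))    # diagonal decomposition: M = VDV⁻¹
print(np.allclose(V_inv @ M @ V, D))    # conjugation: V⁻¹MV = D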

Edit: Turns out there's still a lot for me to learn. For matrices where V is singular, you can still decompose M into CDC⁻¹, where D is an upper (or lower) triangular matrix (possibly with complex entries) and C is different from the matrix of eigenvectors V. Using this decomposition, it's still possible to prove lots of things involving the eigenvalues, determinant and trace.
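
As far as I can tell, this triangular decomposition is what's usually called the Schur decomposition, which scipy provides directly. A sketch, using a defective matrix (a repeated eigenvalue with only one independent eigenvector, so V is singular) whose eigenvalues are real so the triangular form stays real:

import numpy as np
from scipy.linalg import schur

# defective matrix: eigenvalue 1 repeated, only one independent eigenvector
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])

T, C = schur(M)                      # M = CTC⁻¹ with T upper triangular
print(np.allclose(C @ T @ C.T, M))   # True (C is orthogonal, so C⁻¹ = Cᵀ)
print(np.diag(T))                    # eigenvalues on T's diagonal: [1. 1.]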

I.e. the first equation states that a square matrix M can be rewritten as a diagonal matrix D, pre-multiplied by V and post-multiplied by the inverse of V.
This is referred to as the diagonal decomposition of M. Usually we're more interested in the value of D alone, but the full equation will be of value in future posts.

The second equation states that such a matrix M (one whose V is non-singular) can be converted to diagonal form by pre-multiplying by the inverse of V and post-multiplying by V itself.
This operation is referred to as conjugation and is used to prove several important matrix properties.
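
For example, conjugation preserves both the trace and the determinant, which is why trace(M) equals the sum of the eigenvalues and det(M) their product. A quick numpy check with the example matrix from above:

import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, V = np.linalg.eig(M)
D = np.linalg.inv(V) @ M @ V        # conjugate M into diagonal form

print(np.isclose(np.trace(M), np.trace(D)))             # True: trace unchanged
print(np.isclose(np.trace(M), eigenvalues.sum()))       # True: trace = sum of λ
print(np.isclose(np.linalg.det(M), eigenvalues.prod())) # True: det = product of λ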
