A quick shorthand table for remembering which trig functions to use for given problems. Note that you really should learn to do this by rearranging the trigonometric definitions themselves, but the table serves as a quick reminder.
Sunday, 13 January 2013
Why Linear Differential Equations are First Order
First, a note on format. D^n(y)_x denotes the n-th derivative of y with respect to x.
I've used this format to keep it somewhat close to the format used by sagemath. However, I have broken down here and there and used y′ to represent the first derivative of y w.r.t. x.
It's a well-known fact that linear differential equations of the form:
D^n(y)_x + ... + a D(y)_x + by = f(x)
are particularly easy to solve.
Since:
D^n(y+z)_x = D^n(y)_x + D^n(z)_x
it is easy to split the problem into several subproblems and add them all at the end. But matrices allow us to rearrange linear differential equations of any order into a simple first order differential equation of the form:
D(y)_x = ky
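As a concrete sketch of the reduction (using numpy and scipy rather than sagemath; the coefficients and initial conditions below are arbitrary examples, not from a worked problem): the second order equation y′′ + 3y′ + 2y = 0 becomes D(Y)_x = AY for the vector Y = [y, y′] and a companion matrix A, and the first order system is then solved by the matrix exponential.

    # Sketch (numpy/scipy, not sagemath): rewrite y'' + 3y' + 2y = 0
    # as the first order system D(Y)_x = A*Y with Y = [y, y'].
    import numpy as np
    from scipy.linalg import expm

    a, b = 3.0, 2.0            # example coefficients (an assumption, not from the post)
    A = np.array([[0.0, 1.0],  # D(y)_x  = y'
                  [-b,  -a]])  # D(y')_x = -b*y - a*y'

    Y0 = np.array([1.0, 0.0])  # example initial conditions y(0) = 1, y'(0) = 0

    # The solution of D(Y)_x = A*Y is Y(x) = expm(A*x) @ Y0.
    x = 1.0
    y_at_1 = (expm(A * x) @ Y0)[0]
    print(y_at_1)  # ~0.6004, matching the exact 2*exp(-1) - exp(-2)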
Cayley-Hamilton Theorem
My final post (so far) on matrix conjugation covers one of the most useful matrix theorems, and one of my personal favourites.
"In linear algebra, the Cayley–Hamilton theorem (named after the mathematicians Arthur Cayley and William Hamilton) states that every square matrix over a commutative ring (such as the real or complex field) satisfies its own characteristic equation."
This means that once you have the characteristic polynomial of a matrix M (for a 3x3 matrix in this example):
"In linear algebra, the Cayley–Hamilton theorem (named after the mathematicians Arthur Cayley and William Hamilton) states that every square matrix over a commutative ring (such as the real or complex field) satisfies its own characteristic equation."
This means that once you have the characteristic polynomial of a matrix M (for a 3x3 matrix in this example):
P(x) = a_0 + a_1 x + a_2 x^2 + x^3 = 0
then replacing x with M:
P(M) = a_0 I + a_1 M + a_2 M^2 + M^3 = 0
will also be true.
Notice that we've replaced the constant term a_0 with a_0 I, so that every term in the sum is a square matrix and the equation makes sense (the 0 on the right-hand side is the zero matrix).
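As a quick numerical check (a numpy sketch rather than sagemath; the 3x3 matrix is an arbitrary example):

    # Sketch (numpy): verify Cayley-Hamilton for an example 3x3 matrix.
    import numpy as np

    M = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    # np.poly returns the characteristic polynomial coefficients,
    # highest power first: [1, a_2, a_1, a_0].
    coeffs = np.poly(M)

    # Evaluate P(M) = M^3 + a_2*M^2 + a_1*M + a_0*I
    # (matrix_power(M, 0) supplies the I multiplying a_0).
    n = M.shape[0]
    P_of_M = sum(c * np.linalg.matrix_power(M, n - i)
                 for i, c in enumerate(coeffs))
    print(np.allclose(P_of_M, np.zeros((n, n))))  # True, up to rounding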
Matrix Conjugation Continued.
As mentioned in an earlier post, nearly every square matrix is linked to a diagonal matrix consisting of that matrix's eigenvalues.
M = VDV^-1 and V^-1MV = D
This pre/post multiplication by a matrix and its inverse is referred to as conjugation, and interesting things happen if we use any non-singular square matrix U for this operation.
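For example, conjugating by any non-singular U leaves the eigenvalues unchanged (a numpy sketch; U and M are arbitrary example matrices):

    # Sketch (numpy): U*M*U^-1 has the same eigenvalues as M.
    import numpy as np

    M = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    U = np.array([[1.0, 2.0],
                  [0.0, 1.0]])  # any non-singular square matrix will do

    conjugated = U @ M @ np.linalg.inv(U)

    print(np.sort(np.linalg.eigvals(M)))           # [2. 5.]
    print(np.sort(np.linalg.eigvals(conjugated)))  # [2. 5.]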
Matrix Powers - Uses of Diagonal Decomposition and Conjugation
As mentioned in a previous post, nearly any square matrix M can be decomposed into a diagonal matrix D, using the eigenmatrix V:
M = VDV^-1
This process is referred to as diagonal decomposition and has some useful consequences. In this post, we'll focus on taking the matrix powers of M.
Say we want to calculate the 100th power of M:
M^100
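The point of the decomposition is that M^100 = VD^100V^-1, and powering D only requires powering its diagonal entries. A numpy sketch (the 2x2 matrix is an arbitrary example):

    # Sketch (numpy): M^100 = V * D^100 * V^-1, where D^100 just means
    # raising each eigenvalue on the diagonal to the 100th power.
    import numpy as np

    M = np.array([[0.5, 0.4],
                  [0.5, 0.6]])

    w, V = np.linalg.eig(M)   # eigenvalues w, eigenmatrix V
    D100 = np.diag(w ** 100)
    M100 = V @ D100 @ np.linalg.inv(V)

    print(np.allclose(M100, np.linalg.matrix_power(M, 100)))  # True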
Eigenmatrices, Decompositions and Conjugation (Matrices)
The definition of eigenvalues and eigenvectors states that for a matrix M, we can find pairs of vectors (v) and scalars (λ) that satisfy the following rule:
Mv = vλ
We can extend this concept to an eigenmatrix by combining all n eigenvectors into an n x n matrix that we will call V, and replacing λ with a diagonal matrix D of the corresponding eigenvalues, giving MV = VD.
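In numpy, np.linalg.eig returns exactly this pair, which makes the relation easy to check (a sketch; the matrix below is an arbitrary example):

    # Sketch (numpy): MV = VD, with the eigenvectors as the columns of V
    # and the matching eigenvalues on the diagonal of D.
    import numpy as np

    M = np.array([[2.0, 0.0, 0.0],
                  [1.0, 3.0, 0.0],
                  [0.0, 1.0, 4.0]])

    w, V = np.linalg.eig(M)
    D = np.diag(w)

    print(np.allclose(M @ V, V @ D))  # True: Mv = vλ, one column at a time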
Matrix Trace, Determinants and Eigenvalues
This is the start of a quick series of posts showing some powerful properties (and proofs) of matrix eigenvalues. It is assumed that the reader can calculate eigenvectors/values.
However, the series refers to some basic algebra and matrix properties that, while used everywhere, usually don't have accessible or comprehensive proofs available on the Internet (you can find them in good linear algebra textbooks). It is also a good place to show some neat tricks for sagemath.
There are better and more comprehensive proofs out there, but these are a good start.
We will start today with matrix traces and determinants, and their relationship to matrix eigenvalues. Basically, we will be proving that for any n x n matrix, the eigenvalues add up to the trace (the sum of the diagonal elements) and multiply to the determinant of the matrix.
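Before the proofs, the claim is easy to check numerically (a numpy sketch rather than sagemath; the matrix is an arbitrary example):

    # Sketch (numpy): eigenvalues sum to the trace and multiply to the determinant.
    import numpy as np

    M = np.array([[1.0, 2.0, 0.0],
                  [2.0, 1.0, 3.0],
                  [0.0, 3.0, 1.0]])

    w = np.linalg.eigvals(M)

    print(np.isclose(w.sum(), np.trace(M)))        # True: sum of eigenvalues = trace
    print(np.isclose(w.prod(), np.linalg.det(M)))  # True: product = determinant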