D^n(y)_x is the n-th differential of y with respect to x.

I've used this format to keep it somewhat close to the format used by sagemath. However, I have broken with it here and there and used y` to represent the first differential of y w.r.t. x.

It's a well-known fact that linear differential equations of the form:

D^n(y)_x + ... + a D(y)_x + b y = f(x)

are particularly easy to solve.
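As a quick sanity check (my own example, using sympy rather than the post's sagemath), e^(-2x) satisfies y`` + 3y` + 2y = 0:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.exp(-2 * x)  # candidate solution

# Substitute into y'' + 3y' + 2y; for a true solution this reduces to 0.
residual = sp.diff(y, x, 2) + 3 * sp.diff(y, x) + 2 * y
print(sp.simplify(residual))  # 0
```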


Since:

D^n(y+z)_x = D^n(y)_x + D^n(z)_x

it is easy to split the problem into several subproblems and add them all at the end.

But matrices allow us to rearrange linear differential equations of any order into a simple first order differential equation of the form:

D(u)_x = M u

We'll ignore the f(x) components for this post, as once the idea is clear, they're pretty easy to add back in.

The trick is to define a new solution **phase space** that consists of y, and differentials of y.

### Example:

So we start with a 2nd order linear differential equation f, and we define z as the differential of y with respect to x. The rest (rearr and zin) are just some sagemath fudging to get expressions we can substitute in later.

We define u as a vector function of x (y and z), and we find out that the derivative of u can be written as linear equations of y and z.

In fact, we can write it as a matrix equation thus:

The last line is a total fudge to get sagemath to do the pretty formatting for me.
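As a rough stand-in for the sagemath snippets, here's a sympy sketch of the same setup (the coefficient names a and b and variable names are mine, not the post's rearr/zin): a general 2nd order equation y`` + a y` + b y = 0, with u = (y, z) and z = y`, giving u` = M u.

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
y = sp.Function('y')

# The 2nd order linear ODE: y'' + a*y' + b*y = 0
f = sp.Eq(y(x).diff(x, 2) + a * y(x).diff(x) + b * y(x), 0)

# Phase space vector u = (y, z) with z = y'. Then:
#   D(y) = z
#   D(z) = y'' = -b*y - a*z   (rearranging f)
# which is D(u) = M*u with the matrix:
M = sp.Matrix([[0, 1],
               [-b, -a]])

u = sp.Matrix([y(x), y(x).diff(x)])
lhs = u.diff(x)
rhs = M * u
# The second components differ by exactly the ODE residual,
# so they agree whenever f holds.
print(sp.simplify(lhs[1] - rhs[1]))
```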

Now this is where it gets schmick. Differentiation (as defined as a limit) works fine on vectors (and matrices) as all it really requires is vector addition/subtraction, and division by scalars. So we can ignore the fact that we have a great big matrix in this equation, and use 1-dimensional calculus to get a general solution for u:

log(u) = Mx + C

u(x) = e^{Mx} C

Now all we need to do is:

- Multiply M by the scalar x.
- Compute the matrix exponential of Mx.
- Multiply the result by a vector C that represents initial conditions of y and y`.
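The three steps above can be sketched numerically with scipy (my example, not from the post: y`` + y = 0, i.e. a = 0 and b = 1, whose solutions should be cos and sin):

```python
import numpy as np
from scipy.linalg import expm

# Companion matrix M for y'' + y = 0 (a = 0, b = 1)
M = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Initial conditions: y(0) = 1, y'(0) = 0
C = np.array([1.0, 0.0])

x = 1.2
u = expm(M * x) @ C  # u(x) = e^(Mx) C

print(u[0], np.cos(x))   # y(x)  matches cos(x)
print(u[1], -np.sin(x))  # y`(x) matches -sin(x)
```

The first component of u is y, the second is y`, exactly as the phase space construction promises.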

And we end up with the equation for y (as well as y`).

It shouldn't be too hard to see that we can extend this to third-, fourth- etc. order linear equations, with a new phase space variable for each differential of y.
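That extension can be sketched with a small helper (mine, not from the post) that builds the n×n companion matrix for y^(n) + a_{n-1} y^(n-1) + ... + a_0 y = 0:

```python
import numpy as np

def companion_matrix(coeffs):
    """Companion matrix M for the phase space u = (y, y', ..., y^(n-1)).

    coeffs = [a_0, a_1, ..., a_{n-1}] from
    y^(n) + a_{n-1} y^(n-1) + ... + a_0 y = 0, so that u' = M u.
    """
    n = len(coeffs)
    M = np.zeros((n, n))
    M[:-1, 1:] = np.eye(n - 1)    # each phase space variable's derivative is the next one
    M[-1, :] = -np.array(coeffs)  # last row rearranges the ODE for y^(n)
    return M

# The 2nd order case recovers the 2x2 matrix from the example above.
print(companion_matrix([2.0, 3.0]))
```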

Thus all linear differential equations are essentially first order, which is part of why they're so easy to solve, and why they always have exponential (or trig.) factors in their solutions.

This also extends nicely to catastrophe and chaos theory, where the first order differential matrix can be examined at stationary points to identify their behaviour as parameters change. That's a future post though.
