Sunday, 2 June 2013

Alternative Conceptions

One of the basic facts of science and mathematics education is that you have to deal with Alternative Conceptions in your students.
Basically, humans are very good at finding patterns in the world, but we're very reluctant to give up on a discovered pattern, even when there's plenty of evidence that it doesn't work. We also have a weird tendency to seek only evidence that confirms our theory, and to discount or dismiss evidence that rules our theory out.
The following link lists a series of common Alternative Conceptions.

Van de Graaff Machine - Preparation and Safety

Van de Graaff machines are awesome - impressive as all hell, and they let you investigate and demonstrate a large variety of electrical physics concepts.

Typically I'm very much against the inductive-learning-good, deductive-learning-bad philosophy that seems to have invaded modern science teaching. It takes too much time if done every lesson, it does little to correct pervasive alternative conceptions, and it ignores half of how science works.

But a mixture of inductive and deductive learning works for Van de Graaff machines. At the bottom of this post are some investigations that can be done. With good timing and a dollop (5 minutes) of deductive-style learning, you can complete most of them in a single one-hour lesson.

Van de Graaff - Miniature Lightning Bolts

This earlier post outlines what you need to consider before doing any Van de Graaff experiments.

This post is the second in a series describing demonstrations that can be done with Van de Graaff machines, including theoretical explanations.

The first demonstration is the simplest: throwing miniature lightning bolts.

Hair Club for Van de Graaff Machines and Einstein Hair

This is the third post on Van de Graaff demonstrations, and it serves as an important theory point before you discuss charge distribution in hollow objects, air discharge, and lightning rods.

Van de Graaff Demonstrations - Induced Charge

It's demonstrations like this that really show the limits of Inductive-only learning in Science. Don't get me wrong, inductive teaching is a powerful tool under certain circumstances, but how the hell do you design an activity to inductively derive all the complexities of induced charge? Even if you can come up with one, it won't have as much impact or be as quick to do as the mixed-method demonstration discussed below.

Van de Graaff Machines - Air Discharge and Identified Flying Objects

This is the last post I have on Van de Graaff machines for the moment.
In this post, I describe demonstrations showing that the charge on an object is restricted to the surface of the object, and that objects with large radii can store more charge than objects with small radii.

This explains why the Van de Graaff machine has a large hollow sphere, and why dust (and the Hair Extension) reduces the charge on a Van de Graaff machine.

Pearson's Square - Or Why Didn't My Chemistry Lecturers Teach Me This?

A common problem in chemistry (and feedlots, and home distilling, etc.):

You have two solutions of different concentrations, let's call them H (high concentration) and L (low concentration), and you want to mix them to create a solution of concentration F (final concentration).

Using algebra, it takes a bit of time to get there:
$xH + (1-x)L = F$, etc., etc., solve for $x$.

Using Pearson's Square, it goes like this:

$H - F$ = parts of solution L
$F - L$ = parts of solution H

Gaaahhh.
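
For anyone who wants to automate it, here's a minimal Python sketch of the same arithmetic. The function name and the 40%/10%/20% figures are just made-up illustrations:

    def pearson_square(high, low, final):
        """Return (parts_of_high, parts_of_low) needed to mix solutions of
        concentration `high` and `low` into one of concentration `final`."""
        parts_high = final - low    # F - L
        parts_low = high - final    # H - F
        return parts_high, parts_low

    # e.g. mixing a 40% and a 10% stock to get 20%:
    print(pearson_square(40, 10, 20))   # (10, 20), i.e. a 1:2 ratio of H to L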

Friday, 24 May 2013

Videos for Chemistry Laboratory Techniques

The following link points to a series of videos showing chemistry laboratory techniques, such as recrystallization, melting point determination, thin-layer chromatography and titration.

http://ocw.mit.edu/resources/res-5-0001-digital-lab-techniques-manual-spring-2007/videos/

Wednesday, 22 May 2013

A Tally Font

Another short link. At the bottom of this page, there is a TrueType font for tally marks, and a font for calculator buttons.

http://www.subtangent.com/maths/resources.php

Very useful for example worksheets.

Saturday, 18 May 2013

RSA Two-Key Encryption

So everything in the last seven posts has led up to this.
The RSA Encryption Algorithm is a mathematical method of generating two encryption keys. You then encrypt a message with one key, and decrypt it with the other key.
This allows you to keep one key private, and publish the other key to the entire world.

Then there are two ways you can use these keys:
  1. You look up the public key of your friend, and encrypt a message using their public key. You have guaranteed that no-one can read the message except your friend.
  2. You encrypt a message using your private key. Everyone can read it, but no-one else but you could have sent it.
So the same technology can be used to keep secrets, and prove identity. Cool.
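
To make the two-key idea concrete, here's a toy sketch in Python using the classic textbook primes 61 and 53. Real keys use primes hundreds of digits long plus padding schemes, and pow(e, -1, phi) needs Python 3.8 or later, so treat this purely as an illustration:

    # Toy RSA key generation with tiny primes -- illustration only.
    p, q = 61, 53
    n = p * q                   # the modulus, part of both keys
    phi = (p - 1) * (q - 1)     # the totient of n
    e = 17                      # public exponent, chosen coprime to phi
    d = pow(e, -1, phi)         # private exponent: the modular inverse of e

    message = 42
    ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
    recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
    print(ciphertext, recovered)       # recovered == 42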

The Euclidean Algorithm and the Extended Euclidean Algorithm

This is the last trick needed to understand the RSA method. The first algorithm is a quick(-ish) method for finding the greatest common divisor of two numbers (a and b). The second algorithm also calculates values of x and y that satisfy the following equation:
$ax + by = \gcd(a,b)$
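
A minimal recursive sketch of the Extended Euclidean Algorithm in Python (the function name is mine, and the 240/46 example is just for illustration):

    def extended_gcd(a, b):
        """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        # gcd(a, b) == gcd(b, a % b); back-substitute to recover x and y
        return g, y, x - (a // b) * y

    print(extended_gcd(240, 46))   # (2, -9, 47), since 240*(-9) + 46*47 == 2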

The Euler-Fermat Theorem and Prime-testing

As mentioned in the last post, the Euler-Fermat Theorem states that for any element m of the multiplicative group of Integers modulo n (i.e. any m coprime to n):
$m^{\varphi(n)} \equiv 1 \pmod{n}$
where $\varphi(n)$ is the totient of n.
The totient of a number is simply the count of smaller positive Integers that are coprime to it (i.e. share no common factors with it).

This allows us to come up with a simple (though not foolproof) test for prime numbers.
If n is a prime number p, then every smaller positive Integer is coprime to it. Thus $\varphi(p) = p-1$ for a prime number, and $m^{p-1} \equiv 1 \pmod{p}$ should hold for every number m smaller than p.
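
Here's a rough sketch of that test in Python. The function name and the choice of bases are mine, and remember that some composites (the Carmichael numbers) can still slip through:

    def fermat_test(n, bases=(2, 3, 5, 7)):
        """Probable-prime test: n fails if any base m gives m**(n-1) % n != 1.
        Passing is only evidence of primality, not proof."""
        return all(pow(m, n - 1, n) == 1 for m in bases if m % n != 0)

    print(fermat_test(97))   # True  -> 97 is (probably) prime
    print(fermat_test(91))   # False -> 91 = 7 * 13 is composite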

Lagrange's Theorem for Finite Groups and the Euler-Fermat Theorem.

In my Group Theory post, I stated that once a Set of mathematical elements and a mathematical operation have been proven to satisfy some basic properties, all the theorems, proofs and facts from the mathematical discipline of Group Theory automatically apply.

Lagrange's Theorem for finite groups states that the size of any subgroup must divide the size of the group - equivalently, a subgroup's cosets slice the group up into pieces of equal size.

The Euler-Fermat Theorem is an application of this; it says that any number (coprime to n), raised to a special power (called the totient of n), will give 1 mod n.

The coprime and totient properties of the second theorem are a consequence of the structure of the Multiplicative Group of Integers Modulo n.

Euler's Totient Function

As mentioned in the last post, when multiplying Integers mod n, there are some Integers that can't have an inverse. My example was that it is impossible to multiply 2 by anything and, once you divide by 10, be left with a remainder of 1. Likewise for 5. And if an element doesn't have an inverse with respect to the mathematical operation, then that element can't be a member of a Group.
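
A brute-force sketch of the totient in Python, using the mod-10 example above (the function name is mine):

    from math import gcd

    def totient(n):
        """Count the Integers in 1..n-1 that are coprime to n (brute force)."""
        return sum(1 for m in range(1, n) if gcd(m, n) == 1)

    print(totient(10))                                    # 4 -> {1, 3, 7, 9}
    print([m for m in range(1, 10) if gcd(m, 10) == 1])
    # 2 and 5 are missing: each shares a factor with 10, so neither has an inverse mod 10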

The Power of an Example – Modular Mathematics

Recently I took a class of students on a Physics excursion. On the trip there, I observed some of them working on a Maths C assignment on modular multiplication. I knew a bit of mod mathematics from my programming experience and was able to help them with some of the tougher concepts, but something a student said instantly cleared up some aspects of Group Theory that I had been struggling with for years.

Group Theory and the RSA Encryption Algorithm

This is another large group of postings, focusing on the culmination of three years' private study – I finally understand the RSA Encryption Algorithm. I'm one of those people who find mathematical concepts baffling and confusing unless I understand every aspect of them. If there's any vagueness or ambiguity anywhere, it niggles at my mind and drives me up the wall. No, I don't have Asperger's, I'm just very pedantic. :-)

Anyway, I will be presenting my understanding of RSA in the following sections:

Group Theory

I'm currently reading S. Sternberg's “Group theory and physics”. I bought it about four years ago. I have notes written in it up to page 66, but only really understand about ¾ of pages 1-15 and 48-60. It's hard going, but the book does have the advantage that nearly everything important about Group Theory seems to be included.
But the basics are:
A mathematical Group consists of:
  • a Set of mathematical elements (numbers, matrices, rotations, etc.) that, in the abstract, we shall refer to from here on in by pronumerals (e, a, m ...) and
  • an operator (addition, multiplication, matrix multiplication, etc.) that we shall represent with the $\cdot$ symbol.
The Set of elements can be finite in size, or infinite. An example of this would be the (infinite) Set of Integers, together with the addition operator. I will be showing an example of each Group property using this group.
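
As a very informal illustration, here's a Python spot check of the Group properties on a small sample of the Integers under addition. It's a sketch, not a proof, since the Set is infinite and the sample and names below are simply my own choices:

    import itertools

    elements = range(-5, 6)     # a small sample of the Integers

    def op(a, b):
        return a + b            # the group operation: addition

    identity = 0

    closure = all(isinstance(op(a, b), int)
                  for a, b in itertools.product(elements, repeat=2))
    associativity = all(op(op(a, b), c) == op(a, op(b, c))
                        for a, b, c in itertools.product(elements, repeat=3))
    has_identity = all(op(a, identity) == a and op(identity, a) == a for a in elements)
    has_inverses = all(op(a, -a) == identity for a in elements)

    print(closure, associativity, has_identity, has_inverses)   # True True True True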

Wednesday, 15 May 2013

Manual: GeoGebra 4.2 in a Nutshell - GeoGebraTube

Whoop, just what a math teacher wants: a free manual for the open-source program GeoGebra (geogebra.org).
I've learnt a few new features just by having a skim through this.


Sunday, 12 May 2013

Cellcraft - A Cell Organelle Game

The following link is to a wonderful resource for teaching the functions of cellular organelles - I consider it appropriate for juniors and as an intro for seniors.
http://www.carolina.com/teacher-resources/Interactive/online-game-cell-structure-cellcraft-biology/tr11062.tr

Friday, 3 May 2013

Simulating Heat and Energy

Quite recently I've found a lovely little Java application on the Internet.
http://energy.concord.org/energy2d/

It's a heat simulator that allows you to investigate conduction, convection and radiation.

Sunday, 13 January 2013

Trigonometry Cheat Sheet

A quick shorthand table to remember which trig functions to use for given problems. Note that you really want to learn how to do this by reorganizing the trigonometric definitions themselves, but this serves as a quick reminder.

Why Linear Differential Equations are First Order

First, a note on format: $D^n(y)_x$ is the n-th differential of y with respect to x.
I've used this format to keep it somewhat close to the format used by sagemath. However, I have broken down here and there and used $y'$ to represent the first differential of y w.r.t. x.

It's a well-known fact that linear differential equations of the form:
$D^n(y)_x + \ldots + a D(y)_x + by = f(x)$
are particularly easy to solve.

Since:
$D^n(y+z)_x = D^n(y)_x + D^n(z)_x$
it is easy to split the problem into several subproblems and add them all at the end.

But matrices allow us to rearrange a linear differential equation of any order into a simple first-order differential equation of the form:
$D(\mathbf{y})_x = K\mathbf{y}$
where $\mathbf{y}$ is now a vector containing y and its derivatives, and $K$ is a matrix built from the coefficients.
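
Here's a rough numerical sketch of that rearrangement for a third-order equation, using numpy and made-up coefficients (the matrix A below is the so-called companion matrix of the equation):

    import numpy as np

    # Rewrite  y''' + a2*y'' + a1*y' + a0*y = 0  as  D(Y)_x = A Y,
    # where Y = [y, y', y''].  The coefficients are example values only.
    a0, a1, a2 = 6.0, 11.0, 6.0

    A = np.array([[0.0, 1.0, 0.0],    # (y)'   = y'
                  [0.0, 0.0, 1.0],    # (y')'  = y''
                  [-a0, -a1, -a2]])   # (y'')' = -a0*y - a1*y' - a2*y''

    # The eigenvalues of this companion matrix are the roots of the
    # characteristic equation r^3 + a2*r^2 + a1*r + a0 = 0.
    print(np.linalg.eigvals(A))       # approximately [-1, -2, -3]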

Cayley-Hamilton Theorem

My final post (so far) on matrix conjugation covers the most useful of matrix theorems, and one of my personal favourites.

"In linear algebra, the Cayley–Hamilton theorem (named after the mathematicians Arthur Cayley and William Hamilton) states that every square matrix over a commutative ring (such as the real or complex field) satisfies its own characteristic equation."

This means that once you have the characteristic polynomial of a matrix M (for a 3x3 matrix in this example):
$P(x) = a_0 + a_1 x + a_2 x^2 + x^3 = 0$
then replacing x with M:
$P(M) = a_0 I + a_1 M + a_2 M^2 + M^3 = 0$
will also be true.

Notice that we've replaced the constant term $a_0$ with $a_0 I$, guaranteeing that every term in the sum is a square matrix.
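
A quick numerical sanity check of this in Python/numpy. The 3x3 matrix is just an arbitrary example, and I lean on np.poly, which returns the characteristic polynomial coefficients with the highest power first:

    import numpy as np

    M = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])

    # Characteristic polynomial coefficients: [1, c2, c1, c0] for x^3 + c2*x^2 + c1*x + c0
    coeffs = np.poly(M)

    # Evaluate P(M); by Cayley-Hamilton it should be (numerically) the zero matrix
    degree = len(coeffs) - 1
    P_of_M = sum(c * np.linalg.matrix_power(M, degree - i) for i, c in enumerate(coeffs))
    print(np.allclose(P_of_M, np.zeros_like(M)))   # True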

Matrix Conjugation Continued.

As mentioned in an earlier post, nearly every square matrix is linked to a diagonal matrix consisting of that matrix's eigenvalues:
$M = VDV^{-1}$ and $V^{-1}MV = D$

This pre/post multiplication by a matrix and its inverse is referred to as conjugation, and interesting things happen if we use any non-singular square matrix U for this operation.

Matrix Powers - Uses of Diagonal Decomposition and Conjugation

As mentioned in a previous post, nearly any square matrix M can be decomposed into a diagonal matrix D, using the eigenmatrix V:
$M = VDV^{-1}$

This process is referred to as diagonal decomposition and has some useful consequences. In this post, we'll focus on taking the matrix powers of M.

Say we want to calculate the 100th power of M:
$M^{100}$
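
A quick sketch of this in Python/numpy, with an arbitrary 2x2 example matrix, comparing the decomposition route against numpy's own matrix_power:

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # Diagonal decomposition: columns of V are eigenvectors, D holds the eigenvalues
    eigenvalues, V = np.linalg.eig(M)

    # M^100 = V D^100 V^-1 -- only the diagonal entries need raising to the power
    M_100 = V @ np.diag(eigenvalues ** 100) @ np.linalg.inv(V)

    print(np.allclose(M_100, np.linalg.matrix_power(M, 100)))   # True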

Eigenmatrices, Decompositions and Conjugation(Matrices)

The definition of eigenvalues and eigenvectors states that for a matrix M, we can find pairs of vectors ($v$) and scalars ($\lambda$) that satisfy the following rule:
$Mv = v\lambda$

We can extend this concept to an eigenmatrix by combining all n eigenvectors into an n x n matrix that we will call V, and replacing $\lambda$ with a diagonal matrix D, giving $MV = VD$.

Matrix Trace, Determinants and Eigenvalues

This is the start of a quick series of posts showing some powerful properties (and proofs) of matrix eigenvalues. It is assumed that the reader can calculate eigenvectors/values.

However, the series refers to some basic algebra and matrix properties that, while used everywhere, usually don't have accessible or comprehensive proofs available on the Internet (you can find them in good linear algebra textbooks). It is also a good place to show some neat tricks for sagemath.
There are better and more comprehensive proofs out there, but these are a good start.

We will start today with matrix traces and determinants, and their relationship to matrix eigenvalues. Basically, we will be proving that for any n x n matrix, the eigenvalues add up to the trace (the sum of the diagonal elements), and multiply together to give the determinant of the matrix.
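
Before the proofs, here's a quick numerical sketch of both claims in Python/numpy, using a random 4x4 matrix. Note that the eigenvalues may come in complex-conjugate pairs, but their sum and product still come out (essentially) real:

    import numpy as np

    M = np.random.rand(4, 4)            # any square matrix will do
    eigenvalues = np.linalg.eigvals(M)  # may include complex-conjugate pairs

    print(np.isclose(eigenvalues.sum(), np.trace(M)))         # True: sum of eigenvalues == trace
    print(np.isclose(eigenvalues.prod(), np.linalg.det(M)))   # True: product of eigenvalues == determinant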