## Welcome to the chain gang

You’re in for it now.

Ya done wrong and ya gotta pay the price. It’s the chain gang for you. The Markov Chain gang.

You’re here until you’ve done your time, and we are gonna make ya work.

Hard labour, kid. Work ’til ya brain burns, ya fingers cramp and ya think ya can’t do no more.

We’re gonna turn ya into a… Mathematician.

OK, enough of the high drama; Markov chains are the last bit of the course content you have to master. Despite the theatrics above, they’re not as bad as I’ve made them out to be. If you remember tree diagrams from year 9, Markov chains are just one more step along that path. In short, Markov chains are about semi-independent repeated events. Independent events are probabilistic outcomes that are not affected by prior events; Markov-style events are affected only by the immediately prior event – i.e. what happened two stages before doesn’t affect the next probability directly. In basic terms, only the last event that occurred affects this event – this is not about “winning/losing streaks”.

What does that mean? Well, imagine going on a trip into the city to see a concert at the music bowl; you would probably have to take a train and a tram. Given that the two are both part of the public transport network, there is some relationship between the probabilities of them running on time. It may be that the entire public transport network is under stress (repairs needed, lots of commuters, bad weather – whatever); and thus if one part of your travel (e.g. the train to the city) is running late, the other may be running late as well. In other words, there is a probability that if the train is late, the tram will be late as well. We can express these conditional probabilities in a matrix – and working with these matrices is what the entire Markov Chain process is about.
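To make the train/tram idea concrete, here is a minimal sketch in Python. The matrix and all its numbers are invented for illustration – they are not real transport statistics:

```python
# Invented numbers: rows give the state of the train, columns the
# conditional probabilities for the tram.
# P[i][j] = P(tram in state j | train in state i); state 0 = on time, 1 = late.
P = [
    [0.9, 0.1],   # train on time -> tram on time 90% of the time, late 10%
    [0.4, 0.6],   # train late    -> tram on time 40% of the time, late 60%
]

# Suppose there's a 70% chance the train is on time:
train = [0.7, 0.3]

# Combining the vector with the matrix gives the tram probabilities.
tram = [sum(train[i] * P[i][j] for i in range(2)) for j in range(2)]
print(tram)   # roughly [0.75, 0.25]
```

Each entry of `tram` is exactly what you would get by multiplying along the branches of a two-level tree diagram and adding the results.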

We call these matrices of conditional probabilities transition matrices – and the reason they are more useful than tree diagrams is that we can apply them again and again, to as many stages deep as we want. There is a good example here – unlike a tree diagram, which becomes unwieldy after the third or fourth set of branches, a Markov chain transition matrix can be used as many times as you want.
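To see the “again and again” point in action, here is a short sketch (the two-state matrix and its numbers are invented for illustration). Each pass through the loop is one more level of the tree diagram:

```python
def step(state, P):
    """Apply the transition matrix once to a probability vector."""
    n = len(state)
    return [sum(state[i] * P[i][j] for i in range(n)) for j in range(n)]

# Invented two-state matrix: state 0 = on time, state 1 = late.
P = [[0.9, 0.1],
     [0.4, 0.6]]

state = [1.0, 0.0]          # start knowing the first service was on time
for level in range(1, 5):   # four applications = a four-level tree diagram
    state = step(state, P)
    print(level, [round(p, 4) for p in state])
```

Each extra level costs the same one multiplication step, where a tree diagram would double in size every time.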

Here is a good video that shows how a Markov chain works – this is the first of a set of three, please watch them all the way through:

The most interesting thing you should have noticed in those videos is that two-state Markov chains have a striking property – stability. After a certain number of iterations they stabilise, and further applications of the transition matrix result in no change to the probabilities of subsequent events. You can read more about this in your textbook.
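You can watch that stabilisation happen numerically. A minimal sketch, using an invented two-state matrix: keep applying the matrix until the probabilities stop changing.

```python
def step(state, P):
    """Apply the transition matrix once to a probability vector."""
    n = len(state)
    return [sum(state[i] * P[i][j] for i in range(n)) for j in range(n)]

# Invented two-state matrix; the numbers are for illustration only.
P = [[0.9, 0.1],
     [0.4, 0.6]]

state = [1.0, 0.0]
for i in range(200):
    new = step(state, P)
    # Stop once a further application changes nothing (to within rounding).
    if max(abs(a - b) for a, b in zip(new, state)) < 1e-12:
        break
    state = new

print([round(p, 6) for p in state])   # settles at [0.8, 0.2]
```

For this matrix the starting vector doesn’t matter – try `state = [0.0, 1.0]` and the loop settles at the same [0.8, 0.2].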

You can investigate some aspects of this phenomenon at Wolfram, here.

Markov chains have some interesting applications, and as such they have appeared in mainstream media. Some of you may have seen the TV program “Numb3rs” – it has used Markov chains as a device to analyse patterns and solve crimes, as in the episode “Harvest” (season 2, episode 14). Here is a snippet of what they showed:

While that doesn’t go into the maths in great detail, here is a site that explains exactly what he is talking about, and a simplified way you can do the same. Have some fun with it – the initial applications are remarkably simple.

See you in class – we’re nearly there now!


### 7 Comments on “Welcome to the chain gang”

1. Locky Says:

Hey Sir, awesome videos – and I definitely recommend everyone read the other textbook example provided above – really clear.

You mentioned that there were 3 videos in the series, however the playlist appears to only contain two – and his YouTube account has no more on Markov chains.

Anyone else have this problem?

2. Patrick Says:

The only thing about Markov chains I had a little problem with was the steady state solutions; everything else I found quite manageable.

• CyberChalky Says:

Remember that it is a trade-off – eventually the gains from one source are counterbalanced by the losses from the other. When gain − loss = 0, steady state is attained. The question can then be asked: what intervention would be required to alter that balance?
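That gain/loss balance can be checked directly with a small sketch – the matrix and its steady state below are invented for illustration:

```python
# Invented two-state matrix: P[i][j] = P(next = j | now = i).
P = [[0.9, 0.1],
     [0.4, 0.6]]
pi = [0.8, 0.2]            # the steady state for this particular matrix

gain = pi[1] * P[1][0]     # probability flowing into state 0 each step
loss = pi[0] * P[0][1]     # probability flowing out of state 0 each step
print(abs(gain - loss) < 1e-9)   # True: at steady state, gain - loss = 0
```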

3. Brendan Says:

Hey Sir,

I have been trying to update my calculator for a while now, and for some reason I cannot seem to find a download or file to update the version. Would you be able to send or post the link you found during class?
Otherwise I will wait until school is back and borrow someone’s CD.
Thanks 🙂