A couple of hours ago I enrolled in an online course on (scientific) writing on Coursera. I still have to absorb the first lectures, but a statement that struck me was that our education reinforces a particularly bad writing habit: it drives us to use meaningless filler words and other clutter to meet the length requirements of essays, rather than focusing solely on the content. I still have difficulty shaking that habit.
More often than not, I find myself tempted to learn a new programming language, especially one that follows a different paradigm than the languages I already know (which are, unfortunately, not that many). But learning a new language is a tedious task: you have to read through loads of tutorials, introductions and how-tos, most of which are pitched at a very basic level and are thus utterly boring. Without a reasonably clear goal in mind, I usually just give up and visit Reddit to look at pictures of cats.
Without knowing which problems you might want to apply your new knowledge to, it is hard to stick with learning. Finding a problem that needs solving, is interesting, and is suitable as a first exercise in a newly learned programming language is also tricky. After all, it mustn't be so advanced and complicated that a freshly learned tool can't handle it, but at the same time it shouldn't be so simple that it becomes boring and trivial.
I finally found my personal toy problem, which I will use to learn new programming languages: implementing three of the most important inference algorithms for Hidden Markov Models: the forward algorithm, the forward-backward algorithm and the Viterbi algorithm. For simplicity, I will only consider the case of discrete emission distributions. The code will not contain any kind of error checking whatsoever; it will be just an ugly hack, but I can live with that. Moreover, I will only use the standard library of each programming language. Let's see where this leads me.
As a starting point, I implemented the algorithms in my current language of choice, Python. I'll just show the basics and the forward algorithm here; the rest can be found on my GitHub. So, here it goes:
    class HMM:
        def __init__(self, pi, A, B):
            self.pi = pi  # initial state distribution
            self.A = A    # transition matrix: A[i][j] = P(next state j | state i)
            self.B = B    # emission matrix: B[i][k] = P(observation k | state i)

    def normalise(l):
        # Return the list scaled to sum to one, plus the normalising constant.
        # (A list comprehension instead of map(), which is lazy in Python 3.)
        norm_const = sum(l)
        return [x / norm_const for x in l], norm_const
So here we just have the data structure (which is basically just a struct) and a function to normalise values in a list. And now for the forward algorithm:
    from math import log

    def forward(model, observations):
        state_idxs = range(len(model.pi))
        log_prob = 0.
        # Initialisation with the first observation.
        alphas = [[model.pi[i] * model.B[i][observations[0]] for i in state_idxs]]
        alphas[0], nc = normalise(alphas[0])
        log_prob += log(nc)
        # Recursion over the remaining observations.
        for obs in observations[1:]:
            alphas += [[sum(alphas[-1][j] * model.A[j][i] for j in state_idxs)
                        * model.B[i][obs] for i in state_idxs]]
            alphas[-1], nc = normalise(alphas[-1])
            log_prob += log(nc)
        return alphas, log_prob
Simple as that! As I wrote before, check out the rest of the code at my GitHub.
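To see it in action, here is a quick sketch with a made-up two-state umbrella/weather model (the states and all the probabilities are invented for illustration; the definitions are repeated so the snippet runs on its own):

    from math import log, exp

    # Repeated from above so this snippet is self-contained.
    class HMM:
        def __init__(self, pi, A, B):
            self.pi = pi
            self.A = A
            self.B = B

    def normalise(l):
        norm_const = sum(l)
        return [x / norm_const for x in l], norm_const

    def forward(model, observations):
        state_idxs = range(len(model.pi))
        log_prob = 0.
        alphas = [[model.pi[i] * model.B[i][observations[0]] for i in state_idxs]]
        alphas[0], nc = normalise(alphas[0])
        log_prob += log(nc)
        for obs in observations[1:]:
            alphas += [[sum(alphas[-1][j] * model.A[j][i] for j in state_idxs)
                        * model.B[i][obs] for i in state_idxs]]
            alphas[-1], nc = normalise(alphas[-1])
            log_prob += log(nc)
        return alphas, log_prob

    # Hypothetical model: state 0 = "rainy", state 1 = "sunny";
    # observation 0 = "umbrella", observation 1 = "no umbrella".
    model = HMM(
        pi=[0.5, 0.5],
        A=[[0.7, 0.3],
           [0.3, 0.7]],
        B=[[0.9, 0.1],
           [0.2, 0.8]],
    )

    alphas, log_prob = forward(model, [0, 0, 1])
    print(exp(log_prob))  # likelihood of the whole observation sequence
    print(alphas[-1])     # filtered state distribution after the last observation

Because each row of alphas is normalised as we go, the last row is directly the filtered distribution over hidden states, and the accumulated log normalising constants give the log-likelihood of the sequence without underflow.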
The next language I want to implement the stuff in is Haskell. I started to read the fantastic “Learn You a Haskell For Great Good!” by Miran Lipovača. I’m really looking forward to using a functional language.