
February 09 2014

lamuerte
22:45
Because police dogs are too mainstream.
via Depressing Finland
Reposted from teemu via smoke11
lamuerte
22:43
Reposted from gruetze via fh
lamuerte
22:41
Reposted from Neutrum via fh
lamuerte
22:39

tastefullyoffensive:

Sochi Problems [yahoosports]

Reposted from minna via smoke11
22:38

downfalling:

these kids these days don't know our struggle

Reposted from dzony via Hypothermia
22:36

sherlockismyholmesboy:

Neolithic amber bear, dated between 1700 B.C. and 650 B.C.

thought this was a new kind of gummy bear for a second

Reposted from johnstaedler via smoke11
lamuerte
09:03
Reposted from saski via pl
lamuerte
09:02
Reposted from Neutrum via videogames
lamuerte
09:01
Sami people in late-1800s Sweden and Norway
08:53


eat THAT, you Christian moral apostles :-p
Reposted from hennaflower via athramp

December 13 2013

lamuerte
19:29

Best of Bavaria/Austria Memes

Reposted from naich via austriansoup

November 30 2013

19:40

How Schools Teach Us to Write Badly

A couple of hours ago I enrolled in an online course on (scientific) writing on Coursera. I still have to absorb the first lectures, but one statement already struck me: our education reinforces a particularly bad writing habit. It drives us to pad our essays with meaningless filler words and other clutter in order to meet length requirements, rather than focusing solely on the content. I still find it hard to shake that habit.


November 29 2013

15:53

I Found My Toy Problem

More often than not, I find myself tempted to learn a new programming language, especially one that follows a different paradigm than the languages I already know (which are, unfortunately, not that many). But learning a new language is tedious: you have to read through loads of tutorials, introductions and how-tos, most of which are pitched at a very basic level and are thus utterly boring. Without a reasonably clear goal in mind, I usually just give up and visit Reddit to look at pictures of cats.

Without knowing which problems you might want to apply your new knowledge to, it is hard to stick with learning. And finding a problem that needs solving, is interesting, and is suitable as a first exercise in a newly learned language is tricky in itself: it mustn't be so advanced and complicated that a freshly learned tool can't cope with it, but at the same time not so simple that it becomes boring and trivial.

I finally found my personal toy problem for learning new programming languages: implementing three of the most important inference algorithms for hidden Markov models, namely the forward algorithm, the forward-backward algorithm and the Viterbi algorithm. For simplicity, I will only consider the case of discrete emission distributions. The code will not contain any error checking whatsoever; it will be just an ugly hack, but I can live with that. Moreover, I will only use the standard library of each programming language. Let's see where this leads me.
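For reference, the forward algorithm is just the standard textbook recursion (with \pi the initial state distribution, A the transition matrix, B the discrete emission matrix, and o_1, ..., o_T the observations):

    \alpha_1(i) = \pi_i \, B_i(o_1), \qquad
    \alpha_t(i) = B_i(o_t) \sum_j \alpha_{t-1}(j) \, A_{ji}, \qquad
    P(o_{1:T}) = \sum_i \alpha_T(i)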

As a starting point, I implemented the algorithms in my current language of choice, Python. I'll just show the basics and the forward algorithm here; the rest can be found on my GitHub. So, here goes:

from math import log

class HMM:
    # Just a plain container (basically a struct) for the model parameters.
    def __init__(self, pi, A, B):
        self.pi = pi  # initial state distribution, pi[i] = P(state i at t=1)
        self.A = A    # transition matrix, A[j][i] = P(state i | state j)
        self.B = B    # discrete emissions, B[i][o] = P(observation o | state i)

def normalise(l):
    # Rescale the values in l so they sum to one; also return the constant.
    norm_const = float(sum(l))
    return [x / norm_const for x in l], norm_const

So here we just have the data structure (which is basically just a struct) and a function to normalise values in a list. And now for the forward algorithm:

def forward(model, observations):
    state_idxs = range(len(model.pi))
    log_prob = 0.

    # Initialisation: alpha_1(i) = pi_i * B_i(o_1), then normalise.
    alphas = [[model.pi[i] * model.B[i][observations[0]] for i in state_idxs]]
    alphas[0], nc = normalise(alphas[0])
    log_prob += log(nc)

    # Recursion: alpha_t(i) = B_i(o_t) * sum_j alpha_{t-1}(j) * A[j][i].
    # Normalising at every step avoids numerical underflow; the
    # normalisation constants accumulate into the log-likelihood.
    for obs in observations[1:]:
        alphas += [[sum(alphas[-1][j] * model.A[j][i] for j in state_idxs) * model.B[i][obs]
                    for i in state_idxs]]
        alphas[-1], nc = normalise(alphas[-1])
        log_prob += log(nc)

    return alphas, log_prob

Simple as that! As I wrote before, check out the rest of the code at my GitHub.
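Just to see it in action, here is a quick example with a made-up two-state model (the numbers are entirely hypothetical and only for illustration):

# Toy weather model: states 0 = rainy, 1 = sunny;
# observations 0 = walk, 1 = shop, 2 = clean.
model = HMM(pi=[0.6, 0.4],
            A=[[0.7, 0.3],
               [0.4, 0.6]],
            B=[[0.1, 0.4, 0.5],
               [0.6, 0.3, 0.1]])

alphas, log_prob = forward(model, [0, 2, 1])
print(log_prob)  # log-likelihood of observing walk, clean, shop

alphas then holds one normalised row of filtered state probabilities per observation.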

The next language I want to implement this in is Haskell. I've started reading the fantastic "Learn You a Haskell for Great Good!" by Miran Lipovača. I'm really looking forward to using a functional language.
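Before diving into Haskell, one more Python snippet for the curious: a minimal sketch of what the Viterbi decoder can look like in the same bare-bones style. It assumes strictly positive probabilities so that taking logs is safe; the version on my GitHub is the reference.

def viterbi(model, observations):
    # Most probable hidden state sequence. Works in log space to avoid
    # underflow, instead of the per-step normalisation used in forward().
    state_idxs = range(len(model.pi))
    deltas = [log(model.pi[i]) + log(model.B[i][observations[0]]) for i in state_idxs]
    backptrs = []

    for obs in observations[1:]:
        row, ptrs = [], []
        for i in state_idxs:
            scores = [deltas[j] + log(model.A[j][i]) for j in state_idxs]
            best = max(state_idxs, key=lambda j: scores[j])
            ptrs.append(best)
            row.append(scores[best] + log(model.B[i][obs]))
        deltas = row
        backptrs.append(ptrs)

    # Backtrack from the best final state.
    path = [max(state_idxs, key=lambda i: deltas[i])]
    for ptrs in reversed(backptrs):
        path.append(ptrs[path[-1]])
    return list(reversed(path))

On the toy model above, viterbi(model, [0, 2, 1]) returns the most likely weather sequence behind those three observations.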


October 07 2013

lamuerte
20:57


edwardspoonhands:

IT WORKS ON CATS!

Reposted from baumbaumbaum via bigbasti
lamuerte
20:54
yerawizardmary:

I cannot believe this got so many notes. But this is the continuation.

curiouskitty:

verycunninglinguist:

whatlikeitshard:

jukeboxgraduate:

THIS WOMAN IS MY NEW HERO.

HERO.

When the Internet gives you lemons, make lemonade.

This is such a righteous post that I am happy I stayed up late. I will probably still regret going to school on 5hrs of sleep, but then I’ll just think of this and not give a damn.

(via daisyloveletters)

Reposted from thatsridicarus via smoke11
lamuerte
20:46
Reposted from adremdico via highhopes
20:44

thefrogman:

[reddit]

That’s adorable.

Reposted from Jerry520 via cygenb0ck