I’ve been learning Haskell recently, and it’s reminded me of one of my pet peeves about academic computer science: the nomenclature. In several of the books that I’ve been going through, the authors describe parametric polymorphism in Haskell as just ‘polymorphism,’ and when they get around to type classes they are careful to distinguish between overloading and polymorphism. Only one author that I recall went so far as to acknowledge that overloading is also a form of polymorphism: ‘ad hoc’ polymorphism.
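The distinction those books are drawing is easy to see in a few lines of Haskell. Here’s a sketch (the `Describe` class and its instances are made up for illustration; the standard library’s `Show` plays the same role):

```haskell
-- Parametric polymorphism: one definition that works uniformly for
-- every type 'a'. 'pairUp' never inspects its argument, so it cannot
-- behave differently from one type to the next.
pairUp :: a -> (a, a)
pairUp x = (x, x)

-- 'Ad hoc' polymorphism via a type class: each type supplies its own
-- implementation, so one name gets per-type behavior (overloading, in effect).
class Describe a where
  describe :: a -> String

instance Describe Bool where
  describe True  = "yes"
  describe False = "no"

instance Describe Int where
  describe n = "the number " ++ show n

main :: IO ()
main = do
  print (pairUp 'x')             -- one body, any type
  putStrLn (describe True)       -- one name, a body per type
  putStrLn (describe (3 :: Int))
```

Calling `describe` on a type with no instance is a compile-time error, which is exactly the ‘ad hoc’ flavor: the set of supported types is whatever instances happen to have been written.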

Now, I’m not overly sensitive, but it’s hard not to feel that someone’s thumbing their nose at a feature when they label it ‘ad hoc.’ I know that this nomenclature is fairly old; I think I first ran into it in a paper by Peter Wegner back in the 1980s, but it still grates a little. I’m no great fan of method name overloading in statically typed OO languages, but I could easily imagine a less negative term.

One of the other places where computer science nomenclature is dismissive is in the area of dynamic typing. It seems that the academic literature is filled with references to languages that are ‘typed’ and languages that aren’t. The early dichotomy was between Lisp and nearly everything else. Lisp was seen as type-less, roughly in the same category as languages like C, where, if you aren’t careful, you can dance all over memory with a stray pointer and know nothing about it. On the other side, we had the ‘typed’ languages – languages where the compiler does some checking to make sure that your program will behave itself at runtime.

Some of the better literature attempts to differentiate between ‘strongly’ typed languages and ‘weakly’ typed languages. Smalltalk, Python, and Ruby are thrown into the ‘weakly’ typed bin because they check at runtime. But a type system that chooses to check at a later time should not be characterized as ‘weak’ or ‘un-typed.’

This distinction, between ‘safe’ languages which check at compile time and languages which don’t, never really had to become dominant in language research. After all, Lisp doesn’t dance over memory – it does whatever checks it needs to at runtime. However, the ‘typed’ versus ‘un-typed’ nomenclature still lingers, and you have to admit, ‘un-typed’ sounds a bit negative.

I think that any analysis of negative type nomenclature has to take into account the nature of programming language research. There’s been a great deal of type-related research over the last thirty years, and a lot of it has been interesting, but there’s a reason why many practitioners still work in “un-typed” languages and why interest in them is growing – the hardest problems that practitioners deal with day to day are not issues that language design can tackle; they are matters of economics, group dynamics, and motivation in the face of entropy. Languages can help, but really only when you admit that the problems we encounter are human problems, issues of ergonomics. Type theory? Yes, it’s beneficial, but I think that the reason why it’s received so much attention is because it’s tractable.

So, I’m learning. Haskell looks like it has a lot to offer. But, when I see parametric polymorphism introduced as ‘polymorphism,’ it reminds me of something I heard a man say in a jazz documentary. He said, “When a jazz player doesn’t like what you’re playing, he won’t say it’s bad. He’ll just say it’s not jazz.”

I wish I could get that out of my head.