When was the first time you ran into functional programming? For me it was a fairly confusing introduction sometime in my junior or senior year of college. When you pick up a new approach to solving problems--or when you pick up a new programming language--you usually isolate new concepts and play with them one at a time. Getting started with functional programming can be difficult because you almost always end up learning a new language (Haskell, Scheme, Erlang) and new techniques simultaneously.

In the past two weeks I've written work-related code in Java, Python, JavaScript, PHP, Objective-C and Perl. The take-home message here isn't that I know a lot of languages (I don't, really: half of those I can't use without half a dozen references open), but that I can still be productive in them despite only having a passing knowledge of the syntax and standard libraries.

If you understand the fundamental problems you're dealing with, then superimposing a new syntax on top isn't going to be prohibitively hard. On the other hand, when you don't understand the underlying problem--because you forfeited initiative to that friendly library that abstracted the details away, or because your language of choice shields you from the concept, like an ORM letting you bypass learning SQL or Erlang letting you file semaphores in the "I don't give a damn" corner of your mind--then you're going to lock yourself into languages or platforms.
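To make the ORM point concrete, here's a minimal sketch using Python's standard sqlite3 module (the table, data and query are invented for illustration): a typical ORM call ultimately compiles down to SQL like this, and knowing that layer is what keeps you portable across ORMs.

```python
import sqlite3

# An ORM call along the lines of User.filter(age > 30) quietly
# generates SQL like the SELECT below. Understanding the SQL is
# what transfers between libraries; the ORM syntax does not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("ann", 34), ("bob", 25), ("cat", 41)])

rows = conn.execute(
    "SELECT name FROM users WHERE age > ? ORDER BY name", (30,)
).fetchall()
print([name for (name,) in rows])  # ['ann', 'cat']
```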

Now you might be waiting for me to follow up this brilliant observation with a reassuring sound bite like "learn lots of languages to be a great programmer!" But that isn't the answer either. The problem with focusing on learning languages is that it encourages relearning the same material over and over in slightly different guises. Sure, it's important to be able to translate

    x = {"a": 1, "b": 2}
    x["c"] = 10
    del x["b"]
    for key in x:
        value = x[key]
        print "%s => %s" % (key, value)

into

    my %x = ('a' => 1, 'b' => 2);
    $x{'c'} = 10;
    delete $x{'b'};
    for my $key (keys %x) {
        my $value = $x{$key};
        print "$key => $value\n";
    }

but being able to write both versions doesn't mean you can tell the difference between a linked list and a hashmap. And you sure as hell didn't learn anything fundamentally new when you learned to write del x["b"] as delete $x{'b'}; .
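To make that distinction concrete, here's a hypothetical sketch in Python (the Node class and linked_lookup function are my own illustration): a handmade singly linked list has to walk its nodes one by one to find a key, while a dict hashes straight to it.

```python
# Minimal singly linked list, to contrast lookup cost with a dict.
class Node:
    def __init__(self, key, value, next=None):
        self.key, self.value, self.next = key, value, next

def linked_lookup(head, key):
    node = head
    while node is not None:   # O(n): visit nodes until the key turns up
        if node.key == key:
            return node.value
        node = node.next
    raise KeyError(key)

head = Node("a", 1, Node("b", 2, Node("c", 3)))
print(linked_lookup(head, "c"))   # 3, after walking three nodes

d = {"a": 1, "b": 2, "c": 3}
print(d["c"])                     # 3, via one hash -- O(1) on average
```

Same result, very different costs -- which is exactly the kind of knowledge that syntax translation never teaches you.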

It's easy to get caught up in the language learning cycle for the same reason that people rush to recreate web servers in each new programming language that is thrown together: these are fun, interesting and moderately challenging problems that give us the high of doing something new without the potential downer of running into a genuinely difficult problem. We're intoxicated with that brand new feeling, even when we're actually crawling backwards.

Instead of the language learning cycle or the library translation circuit, we need to follow a third path straight down the middle: use whatever language you want to, but dig into internals and really grok the libraries and language features that you rely upon. It's by understanding underlying concepts that we improve as programmers, instead of as Java programmers, .NET programmers or Pythonistas.

I think a great example of this has been the backporting of the functional programming style from Scheme and Haskell to multi-paradigm languages like Python and Ruby and--in some fairly awkward attempts--even imperative languages like Java and C. Instead of getting caught up with Haskell or Erlang as the solution to programming problems, programmers took the great ideas embodied in them and started applying them in other languages as well.
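For instance, a few of those borrowed ideas--first-class functions, closures and function composition--look like this in plain Python (the function names here are my own illustration):

```python
# Functional ideas from the Scheme/Haskell tradition, in Python.
def compose(f, g):
    """Build a new function by composition instead of mutating state."""
    return lambda x: f(g(x))

def adder(n):
    """A closure: the returned function captures n."""
    return lambda x: x + n

inc = adder(1)
double = lambda x: x * 2
inc_then_double = compose(double, inc)
print(inc_then_double(10))   # double(inc(10)) == 22

# List comprehensions are map/filter lifted straight from that tradition.
squares_of_evens = [x * x for x in range(10) if x % 2 == 0]
print(squares_of_evens)      # [0, 4, 16, 36, 64]
```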

How to act on this idea of stepping back to the basics is a bit ambiguous. I'm saying people shouldn't keep reimplementing the same pieces of code, but I'm also saying we should get back to the basics and work our way up to the top by starting at the very bottom. How do we reconcile these two ideas?

I think we should maintain two different genres of projects, with a different approach for each kind:

1. Small personal projects that help us learn by recreating and reimplementing the core ideas of programming and computer science. This is where you should be creating your own servers, writing your own compilers, implementing A* and writing a graphics package. Approach these projects like they are sketches, because that is exactly what they are. Don't worry so much about making things perfect; just enjoy experimenting and playing around with the ideas.

2. Contributions to exciting and non-redundant projects that are focused on creating new things or taking a new approach to an existing field. It's hard to pin a definition on non-redundant, but I think an appropriate way to think about it is something that addresses an unaddressed need. I definitely agree that we're better off having Nginx, Apache and Mongrel, but it's less clear that Nginx and Lighttpd fulfill different needs. Now that we have all these options, do we need even more C-based servers with an emphasis on simplicity? Maybe--I don't want to champion stagnation--but I think there needs to be a clear argument for what unaddressed need a new lightweight C server would meet before spending years of developer time making Lightginx 0.1.

I have a fairly romantic image of programmers as professionals: we have an obligation to keep getting better at what we're doing, and we also have an obligation to build things worth building. I think that maintaining a healthy stable of small conceptual projects and larger, complex projects that provide solutions to unsolved problems is the way to fulfill both obligations.

For me personally, that means doing a better job of contributing to existing projects, and writing fewer things from scratch. It's a somewhat bitter pill, but I think swallowing it is for the best for all of us.