Why the world needs Haskell

Posted by Peter J. Jones on July 9, 2013

This is a technical review of Haskell and why software developers should care about functional programming. For a non-technical review of why your company should be using Haskell you might want to watch this introduction to Haskell video by FP Complete.

TL;DR: Writing high-quality, bug-free software isn’t easy. Haskell (and similar languages) provides the programmer with the tools necessary to get closer than ever before. By removing entire categories of bugs (such as those caused by null pointers and nil objects, type coercions, accidental mutation, etc.) and introducing a powerful type system, programmers can write better code in the same amount of time as more traditional languages, and often less.

Null Pointers and Undefined Values

The code we write eventually executes in the real world and things don’t always go as we would like. Network connections drop while you’re reading from sockets, disk space goes to zero while writing to files, and people trip over power cables while balancing huge cups of double foam lattes.

These are exceptional situations that we don’t have much control over and which we have to deal with gracefully. But what about errors that we do have control over? You know, the exceptions we create as programmers every day when we dereference a null pointer or call a method on a nil object. These types of errors should be completely unacceptable yet they plague production systems all the time.

Sometimes undefined values propagate up from bad database constraints, null columns that leak right out of our data types as nil objects or null pointers. This is just one symptom of a larger problem: the widespread design idiom that overloads nil to stand in for failure, missing values, and the inability to compute a value. When compared to raising an exception this seems very reasonable indeed. But is it?

There are three alternatives when a function can return either a valid result or a nil object:

1. Test the result of every function to make sure it’s valid. Of course this leads to verbose code that sucks to write, and when a programmer is feeling a bit lazy it’s easy to leave out.

2. Pretend that the function always returns valid data and use it without checking. I’m surprised how often this happens (I’m certainly guilty of it) and that most languages can’t detect it and slap us right across the face like we deserve. This happens a lot in object-oriented scripting languages where we chain a bunch of method calls together, any one of which may return nil.

3. Use a complete hack such as the try method from Ruby on Rails, which obscures the source of the problem and leaves it for someone else to deal with. This seems to be a popular favorite these days.

Of course the correct approach is method 1, but it comes at a cost. In order to handle this situation properly we have to write a bunch of boilerplate code, and we’re forced to make decisions about what to do in the event a function actually returns nil. Often the only sensible thing to do is propagate the failure up the call stack, which is why method 3 is so popular.

It’s also why ruthless testing and 100% test coverage have become so important in mainstream languages. But even with 100% test coverage you can’t be sure that your code will work correctly if a function unexpectedly returns nil unless you’re also mocking things out or using fuzz testing. That’s a lot of extra work and most of us don’t go to such great lengths. Have no fear my friends, there’s a better way.

Haskell has a solution to this problem that is both simple and elegant. First and foremost, there’s no such thing as null or nil. If a function is declared as returning an integer then the compiler won’t let it return anything else. If you’re brave and try to return something that isn’t an integer, the code won’t compile and the day will be saved.

If a function needs to either return a valid value or the absence of one it’s easy to encode that using the type system which then allows the compiler to verify that the programmer is dealing with the missing value. If it’s not explicitly dealt with the code won’t compile. We’ll get more into using the Haskell type system to our advantage in the next section.
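Here’s a small sketch of what that encoding looks like in practice. The names (findPort, describe) are hypothetical, but the pattern is idiomatic: the Maybe type makes the possibility of a missing value part of the function’s signature, and the compiler makes callers handle both cases.

```haskell
-- A hypothetical lookup: absence of a result is encoded in the type.
findPort :: String -> Maybe Int
findPort "http"  = Just 80
findPort "https" = Just 443
findPort _       = Nothing

-- Callers are forced to deal with both constructors; leaving out the
-- Nothing case is something the compiler can flag.
describe :: String -> String
describe svc = case findPort svc of
  Just p  -> svc ++ " uses port " ++ show p
  Nothing -> "unknown service: " ++ svc
```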

This entire class of run-time errors is completely eliminated in Haskell for free, at compile time. After you’ve been using Haskell for a bit of time you’ll be paranoid and frustrated when using languages that have nil values or null pointers because the responsibility to deal with all those potentially missing values sits squarely on your shoulders. Those languages don’t help you out one bit in this regard.

What Has Your Compiler Done for You Lately?

As a programmer it’s likely you know at least a handful of programming languages and there’s one in particular you gravitate towards. It’s also likely that over the years languages have come into and gone out of your favor. Think of your current favorite language, what makes it different than your previous favorite language?

I’ve noticed a pattern in myself: I tend to toggle between lower level and higher level languages. I love the raw power and control of the lower level static languages such as C++, but after a bit of time the verbosity of the language and the amount of code needed to get something simple done starts to wear on me, and I switch to a high-level dynamic language like Ruby.

With higher level languages it feels like you can get more done with significantly less code, and the development cycle of writing code and running it becomes so much easier and faster. The experience is really enjoyable until things start to go wrong in production because of stupid typos and nil objects that somehow slipped through testing. That’s when I start to long for a static language with a compiler that can catch silly mistakes well before anything is deployed to production.

From the perspective of a software developer it’s a zero-sum game. All the gains I think I’m getting from a dynamic high-level language are eventually eroded by all the test writing and general care that’s necessary to have confidence that things aren’t going to blow up in a user’s face. And all the hand-holding, nudging, and extra code needed in a static low-level language just to get the same amount of work done can be infuriating.

The obvious solution is a hybrid language, one that is abstract enough that you can work faster and write less code and at the same time uses a static type system with a compiler that can catch stupid typos and enforce design choices at compile time. In the world of functional programming, languages like this are plentiful and Haskell happens to be one of them.

Haskell has a very strong static type system. Every expression has a type and the way expressions are used is fully checked at compile time to ensure type safety. If you’ve used a static type system before you might have just thrown up in your mouth and had flashbacks of spoon-feeding the compiler type annotations and extra code that it should have been able to figure out for itself. Have no fear, Haskell has an excellent type inference system which means you get type safety without having to explicitly tell it what you’re up to all the time.

But type inference isn’t all that special really. Heck, even C++11 has the auto keyword that allows the type of a variable to be inferred by the compiler. What you should take away from this, however, is that Haskell provides a lot of type safety at compile time without placing any additional burdens on the programmer. Type inference is just one example.
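To make inference concrete, here is an illustrative sketch (these definitions are made up, not from any library): neither function carries a type signature, yet both are fully type checked, and GHC works out the general types on its own.

```haskell
-- No signatures written, yet everything is statically typed.
-- GHC infers double :: Num a => a -> a
double x = x * 2

-- and greet :: [Char] -> [Char]
greet name = "Hi, " ++ name
```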

The type system in Haskell is quite a bit more powerful than in most languages. If you haven’t been exposed to a language like Haskell you probably haven’t thought about how to use the type system to make code safer or how to enforce design decisions at compile time but that’s exactly what Haskell programmers do every day.

Consider again the topic of null pointers and nil objects. What can you (and especially the compiler) gather from the following C++ function prototype?

User* get_user_by_name(const std::string &name);

I see a function that returns a pointer to a User object. Had it returned a reference I could be somewhat assured that it won’t fail at run time (though it could still throw an exception). Setting aside the ambiguity of who’s responsible for memory management in this case, it’s probably safe to assume that this function can potentially return a null pointer.

The C++ compiler doesn’t make a distinction between a valid User pointer and a null pointer; it doesn’t even spit out a warning if you try to dereference a known null pointer, and it certainly doesn’t force the programmer to test whether the pointer is NULL. This is a case of “the programmer knows best,” even when the compiler can see otherwise. I don’t know about you, but when I’m writing production software and the compiler can potentially detect a mistake I’ve made, I want to know about it.

There are two popular ways to write this function in Haskell. The first clearly tells the compiler and the programmer that this function might not return a valid User :

getUserByName :: String -> Maybe User

And the following means that the function can either return a string containing an error message or a valid User :

getUserByName :: String -> Either String User

And here’s the kicker: this is all done in the type system. Programmers who don’t know Haskell might be tempted to think that Maybe and Either are language keywords or some sort of type qualifier for the compiler, but they’re just data types that happen to be in the standard library.

The closest C++ comparison that comes to mind is the union type. Implemented in C++, Haskell’s Either would be a union with two fields, a string and a user. Since it’s a union, only one of the fields can be used at a time, and the programmer has to work out which one can be accessed. In Haskell it’s not so ambiguous; in fact, it’s not ambiguous at all.
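Unlike a C++ union, the constructor tag travels with the value, so a pattern match always knows which “field” it is holding. A small illustrative example (classify is a made-up name; Either itself is defined in the standard library roughly as `data Either a b = Left a | Right b`):

```haskell
-- Pattern matching on Either is never ambiguous: the Left/Right tag
-- tells you exactly which value you have.
classify :: Either String Int -> String
classify (Left err) = "error: " ++ err
classify (Right n)  = "got " ++ show n
```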

In both Haskell examples above the compiler won’t let you directly access the User object because it might not even be there. You have to write a little more code to handle the case where the function failed to return a user. This might seem like extra work for the programmer but in reality it doesn’t work out that way. Haskell even provides some syntactic sugar that allows you to chain function calls of this nature together so that the first failure stops the chain, sort of like the try method mentioned earlier, but with much more flexibility.
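That chaining sugar is do-notation. Here is a minimal sketch over Maybe, using hypothetical lookup tables in place of a real database: the first lookup that comes back Nothing aborts the whole chain, with no explicit nil checks anywhere.

```haskell
-- Hypothetical lookup tables standing in for a database.
ports :: [(String, Int)]
ports = [("http", 80), ("https", 443)]

labels :: [(Int, String)]
labels = [(80, "plain"), (443, "encrypted")]

-- do-notation in the Maybe monad: the first Nothing short-circuits.
describeService :: String -> Maybe String
describeService svc = do
  port  <- lookup svc ports    -- Maybe Int
  label <- lookup port labels  -- Maybe String
  return (svc ++ " is " ++ label)
```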

If the Haskell type system can be used to encode information like “this function might return a user but it also might return nothing” can you imagine what else you could use it for? What about “this function only accepts validated user input” or something we’ll go over in the next section “this function does I/O”.

What this boils down to is that you can clearly articulate invariants using the Haskell type system and then know that the compiler will confirm that no programmer has broken the rules. It also means that the compiler acts as an automated test system for type related parts of the code. Now that’s a level of confidence you’re probably not used to, at least not yet.
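The “validated user input” idea above can be sketched with a newtype. All the names here are hypothetical; the trick is to keep the ValidatedName constructor out of the module’s export list, so the only way to obtain one is through validate, and any function taking a ValidatedName is guaranteed by the compiler to receive validated input.

```haskell
-- In a real module you would export ValidatedName but NOT its
-- constructor, making validate the only way to build one.
newtype ValidatedName = ValidatedName String

validate :: String -> Maybe ValidatedName
validate s
  | not (null s) && all (/= ';') s = Just (ValidatedName s)
  | otherwise                      = Nothing

-- The type guarantees this function never sees unvalidated input.
greetUser :: ValidatedName -> String
greetUser (ValidatedName s) = "Welcome, " ++ s
```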

Side Effects and Spaghetti

I’ve spent a large portion of my career using object oriented programming languages to great effect. The ability to manage the complexity of large projects by segmenting code into classes and hiding implementation details through encapsulation and abstraction has proved to be very helpful. Certainly more so than the abstractions in procedural languages.

Over the last few years something started bothering me and other object-oriented practitioners: the fact that sending a single message to an object might change its state in several non-obvious ways. This led, in part, to conventions such as the single responsibility principle. In my opinion, however, this problem strikes at the heart of OOP (and, more generally, imperative programming).

You might not think about it very often but a method call can do any of the following:

- Alter instance variables of the current object, either directly or by invoking other instance methods that do so.
- Change the state of any of its parameters.
- Modify class variables, global variables, or any other variable that might be in scope.
- Perform I/O (open files, write to a socket, communicate with another application, launch missiles, etc.)

The point is, going by the laws of encapsulation an object should be a black box that provides an interface and when calling a method you only care that it performs its promised task. You don’t really know what other magic it’s doing behind the scenes and you really shouldn’t care. Except that if you don’t know then you can’t write robust code because you can’t be sure what a method is capable of doing.

When invoking a method you know what inputs it takes (parameters) and what outputs it produces (return values), but you don’t necessarily know what its side effects are going to be. In fact, invoking the same method over and over again may produce different results and may alter the outside world in different ways. None of this information, however, is conveyed by the signature of the method. Most languages leave it up to the programmer to document these side effects in one way or another.

This all makes for a big mess.

Consider global variables: for a very long time now they have been the black sheep of the family, and for good reason. But what really makes global variables different from instance variables? Sure, instance variables have a much smaller scope, but in the context of all the code that makes up a single class, don’t they cause essentially the same problems? Have you ever written a method that failed because some other method accidentally screwed up the value of an instance variable?

There’s a big difference between passing in an invalid parameter to a method and the object having an invalid internal state. The former is fairly easy to track down while the latter can be just like hunting for a bug involving a global variable. Instance variables in big classes also tend to create spaghetti code where side effects are hard to trace.

With the ability to indirectly (and often accidentally) change the state of any object and affect the outside world in any way, it’s like we’re living in the wild west of programming. While side effects are absolutely necessary for our programs to do anything useful, we need a way to manage them much better than we do, especially in imperative languages.

Let’s start by partitioning side effects into two buckets, mutable state and the ability to interact with the outside world (A.K.A. I/O). We’ll tackle how Haskell handles these starting with the latter.

Communicating with the outside world presents several problems, not the least of which is changing the environment in a way that can’t easily be undone (deleting a file, sending a network packet, etc.). By default Haskell doesn’t allow any of this; instead it provides a special I/O compartment, an escape hatch if you will.

The bridge to the outside, unpredictable world in Haskell is a data type called IO. And since IO is a data type, we can use the power of the type system to enforce this so-called compartment! In other words, a function can’t perform I/O unless it returns an IO type. Here’s a simple example:

getLine :: IO String

The getLine function reads a single line of characters from standard input. It doesn’t take any parameters and returns a String wrapped in the IO type. The function’s type contains enough information to convey to the programmer that it might produce side effects. As a bonus, it conveys the same information to the compiler, which can make better optimization choices for functions that don’t have side effects.

This is so important that I want to say it one more time: you can’t do any I/O in Haskell unless your function returns an IO type. That also means that any function that uses an IO function must itself be an IO function, which is why the main function has an IO type. Theoretically you could write all of your functions this way, but then you’d totally miss out on the benefits of compartmentalized I/O.
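To make the split concrete, here is a small sketch (shoutText and shout are made-up names): the pure function carries no IO in its type and therefore cannot read or print, while the function that touches stdin and stdout must advertise that fact, all the way up to main.

```haskell
import Data.Char (toUpper)

-- The pure part: no IO in the type, so no side effects are possible.
shoutText :: String -> String
shoutText = map toUpper

-- The impure part: because it calls getLine and putStrLn, its type
-- must be IO, and so must the type of anything that calls it.
shout :: IO ()
shout = do
  line <- getLine
  putStrLn (shoutText line)
```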

Getting back to the other major kind of side effect, mutation, Haskell has a simple solution for this one as well: it’s not allowed outside of the I/O compartment. In practice this isn’t an issue because mutation is very rare in Haskell. If you need to change one field of a record you just create a copy with the change applied. The Haskell optimizer and garbage collector remove any of the downsides you might be thinking of.
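For example, “changing” one field of a record is really building a new record that shares everything else with the original. The User type here is hypothetical, but the record update syntax is standard Haskell:

```haskell
data User = User { userName :: String, userAge :: Int }
  deriving (Eq, Show)

-- Record update syntax builds a brand new User; the original value
-- is never mutated, and sharing keeps the copy cheap.
birthday :: User -> User
birthday u = u { userAge = userAge u + 1 }
```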

Haskell does not have variables in the sense that you’re used to, which is a good thing. It means that everything you need to know about a function can be gleaned from its type signature. You don’t have to worry that it might change something you’re not aware of or affect the outside world behind your back.

More importantly, you can’t accidentally mutate something. Yet another class of problems completely eliminated and the only thing you have to do is slightly change the way you write code.

Haskell is Hard to Learn, Right?

There’s definitely some truth to Haskell’s reputation as a language that’s hard to learn. Fortunately it’s less about Haskell and more about us. As imperative programmers we bring a lot of baggage to Haskell. We expect to mutate variables, do I/O at will, and work with very simplistic type systems.

Then there’s the whole imperative vs. declarative issue to deal with. Haskell is a high-level language, very high-level in fact. In the imperative programming world even the highest level languages still require the programmer to structure code as a sequence of steps to execute on a CPU. In functional declarative languages like Haskell you write your algorithms much as they are structured in mathematics and let the compiler work out how to translate that into a series of steps. Often the Haskell compiler can generate binaries that are close to, if not as fast as, hand-written C.
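A classic illustration of that declarative style is the textbook quicksort: it reads almost exactly like the mathematical definition, saying what the sorted list is rather than how a loop builds it. (To be clear, this is the pedagogical version, not a performance-tuned sort.)

```haskell
-- "The sorted list is: the sorted smaller elements, then the pivot,
-- then the sorted larger-or-equal elements."
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (x:xs) = qsort [a | a <- xs, a < x] ++ [x] ++ qsort [a | a <- xs, a >= x]
```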

All of this adds up to Haskell being very different than we’re used to. At this point in my career I can usually learn a new programming language in a few hours and write production quality code in about a week. The way I’ve learned to assimilate programming languages is by comparison. Ruby and C++ are more similar than they are different so using what I know about C++ greatly reduced the cost of learning Ruby.

Haskell is a very different beast. It approaches programming from a completely different angle. It’s almost like learning to program all over again. Thankfully it’s not as bad as it sounds.

Hopefully I’ve demonstrated why Haskell is important and why you should take the time to learn it, even if it seems difficult at times.

It’s often said that learning Haskell will make you a better programmer no matter which language you choose to use. I’m a firm believer in that. I’d be surprised, however, if Haskell doesn’t become your new favorite language.

Learn Haskell the Right Way

If you like to learn in a classroom environment I teach a Haskell workshop where we focus on writing real world Haskell from day 1. It’s a very practical approach that specifically leverages your experience as an imperative programmer. I even follow up with students after the class to make sure they’re writing good code.

For those of you who like to go at it alone I’ve compiled a recommended reading and study list at the bottom of this page.

If you want to discuss this article, head over to reddit.

About the Author