
This post is part 2 in the series “Gain confidence with Haskell!”, where I go over some reasons why I really enjoy coding in this up-and-coming language. In this post, I’ll cover what makes declarative programming different from imperative programming, and how being declarative makes programs easier to understand and less prone to mistakes.

Stop hand-holding the computer

Computers are like really stupid babies: you have to tell them exactly what to do, and they will do exactly that, no questions asked. The problem is that developers are stupid, too; we make stupid mistakes because we’re tired, we’re hungry, or we’re trying to type without looking at the keyboard to make our manager think we’re paying attention to their presentation on “being fully present at meetings”.

Since I don’t trust my ability to avoid mistakes, I gain confidence in my code when I can outline what I want the computer to give me (declarative programming), instead of telling the computer, step by step, what to do (imperative programming).

For example, it’s a fairly common operation to take a list of items and change each item using a specific function. Say I have a list of people and want to turn that into a list of their first names. Most languages will have you write a loop that gives the computer step-by-step instructions:

let people = [{ name: "Alice" }, { name: "Bob" }, ...]
let names = []
let i = 0
while (i < people.length) {
  let person = people[i]
  names[i] = person.name
  i = i + 1
}
return names

Alternatively, some languages helpfully provide a map function, which lets you specify a function to run on each element in the list:

let people = [{ name: "Alice" }, { name: "Bob" }, ...]
let names = people.map(person => person.name)
return names

In the first example, we initialize a variable i that keeps track of our current position in the list, then take the person at position i in the first list, get its name, and store it in the second list at the same position i. Then we increment i and repeat until i reaches the end of the first list.

In the second example, we describe the names list as being the same thing as the people list, except converting each element with the given function.

[Figure. Caption: Yes, I know Red + Yellow ≠ Green. But the colors look nice, don’t they?]

Notice how the first example is described as a series of actions, while the second example is described as a single transformation of one form of data into another. In other words, the first example has 6 actions a developer can make a mistake on, compared to just 1 transformation in the second. But on a deeper level, as a developer I usually don’t care about how a computer does something; I do care about what data I’m dealing with. By writing code from a data-first perspective instead of an instructions-first perspective, I can focus on the bigger picture of what I’m trying to get out of a program instead of hand-holding the computer’s every action.

Many languages are starting to adopt this declarative style of programming, often in conjunction with imperative programming. Both examples above use JavaScript, which lets you write in either style. But this is just one example of data-first programming.
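
For comparison, here is a rough sketch of how the same transformation might look in Haskell; note that the Person record and its name field are hypothetical here, since we haven’t defined them:

-- A hypothetical record type, just for this sketch
data Person = Person { name :: String }

firstNames :: [Person] -> [String]
firstNames people = map name people

-- firstNames [Person "Alice", Person "Bob"] == ["Alice", "Bob"]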

One thing that most languages don’t provide an easy way to do is function composition. As you might remember from your algebra class, the composition of two functions is defined as:

(f ∘ g)(x) = f(g(x))

Here, x is passed as an argument to g first, then the result of g(x) is passed to f. As a concrete example, say you want to make up a superhero name for each person, but the only functions you have are getName (which gets the name of a person) and makeSuperhero (which takes a normal name and hero-ifies it). In TypeScript, you might write:

function makeSuperName(person: Person): string {
  let name = getName(person)
  return makeSuperhero(name)
}

This reads as: “Define a function makeSuperName with one argument, person. First, call getName on person and save the result as name. Then, call makeSuperhero on name and return the result.” Compare this to Haskell:

makeSuperName :: Person -> String
makeSuperName = makeSuperhero . getName

This reads as: “Define a function makeSuperName as the composition of makeSuperhero and getName.” In the first example, you have to explicitly pass the data through the functions, whereas here, you’re simply defining how the functions should work together and the computer will thread the data through for you. While it still boils down to the same thing (pass the argument to getName, then to makeSuperhero), it reads completely differently, in a way that makes it more obvious what the general goal of makeSuperName is.

Again, the more the computer is able to help out with the mundane tasks, the more I can focus on the more interesting stuff. Function composition and the map function are just two examples of operations that automate logic so that the computer can do more work instead of me, and Haskell, in my opinion, provides more of these tools than other languages.
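
To show how these two ideas combine, here is a small sketch that maps the composed function over an entire list of people (again assuming the hypothetical getName and makeSuperhero functions from above):

makeSuperName :: Person -> String
makeSuperName = makeSuperhero . getName

-- Transform the whole list in one declarative step
superNames :: [Person] -> [String]
superNames people = map makeSuperName people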

Making data manipulation more versatile

In Part 1 of this series, we briefly interacted with the Maybe a type, as well as the common [a] list type. These two types have one thing in common: they contain zero, one, or (in the case of [a]) multiple values, and neither cares about the type of the value being contained. This property is called polymorphism, but in the case of these two types, it also puts them into a special class called functors. Don’t spend too much time on this yet, but here is the technical definition of a functor:

class Functor f where
  fmap :: (a -> b) -> f a -> f b

I like to think of functors as “container” types: types that contain other types. Lists contain zero or more values of a certain type, and Maybes contain exactly zero or one value. In addition to whatever functions are defined for each specific type, types in the functor class can also use the fmap function, which converts the value(s) inside the container type into another type (or even the same type). This should sound familiar; for lists, this is, in fact, the map function!

timesTen :: Int -> Int
timesTen x = x * 10

fmap timesTen [] == []
fmap timesTen [1, 2, 3] == [10, 20, 30]

-- Turns a number into a string of that number
showInt :: Int -> String

fmap showInt [] == []
fmap showInt [1, 2, 3] == ["1", "2", "3"]

And indeed, lists define fmap as a simple alias for map:

instance Functor [] where
  -- rewriting the signature here for clarity
  fmap :: (a -> b) -> [a] -> [b]
  fmap = map

But while map is specific to lists, other types can take advantage of fmap to do a similar operation. For example, with Maybe:

timesTen :: Int -> Int
timesTen x = x * 10

fmap timesTen Nothing == Nothing
fmap timesTen (Just 1) == Just 10

-- Turns a number into a string of that number
showInt :: Int -> String

fmap showInt Nothing == Nothing
fmap showInt (Just 1) == Just "1"

Hopefully, this example makes it clear what Maybe a and [a] have in common, and why fmap makes sense for both of these types. Both types “contain” some arbitrary type a, so it makes sense that a function that converts type a to type b (i.e. a -> b) should be able to convert the type a within f to type b within f (i.e. f a -> f b). This concept is perhaps more noticeable in the Functor instance for Maybe:

instance Functor Maybe where
  fmap :: (a -> b) -> Maybe a -> Maybe b
  fmap fn (Just x) = Just (fn x)
  fmap _ Nothing = Nothing

This intuition of a “container” type has been extremely useful to me in my Haskell journey. It even translates really nicely into a visual representation, which is helpful for me as a visual learner.
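
To make that container intuition concrete, here is a minimal, made-up sketch of a Box type that always holds exactly one value, along with its Functor instance:

-- A hypothetical container that always holds exactly one value
data Box a = Box a

instance Functor Box where
  -- Apply the function to the value inside, keeping the Box around it
  fmap fn (Box x) = Box (fn x)

-- fmap timesTen (Box 4) == Box 40

Once the instance exists, fmap works on a Box exactly the same way it works on a list or a Maybe.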

Despite their seeming simplicity, functors are surprisingly useful, and once you start using them, it’s very frustrating to go back to a language without them. In Python, for example, you might often write code like:

if possibly_missing_person:
    name = possibly_missing_person.get_name()
else:
    name = None

or even shorthand it like

name = possibly_missing_person.get_name() if possibly_missing_person else None

But either way, using an if-statement frames the operation as “check this condition and run this operation if it’s true or that operation if not”. Getting back to the point of this blog post, we don’t want to care about which step should run and when to run it. What we’re really trying to do is “get some property from the Person, keeping the Person’s existence (or lack thereof) the same”. Using fmap on a Maybe value maintains that essence, as the very type signature of fmap :: (a -> b) -> Maybe a -> Maybe b shows the internal value being changed (a to b) whilst keeping its Maybe-ness the same.

possiblyMissingPerson :: Maybe Person
getName :: Person -> String

fmap getName possiblyMissingPerson :: Maybe String
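
Put together as a small, self-contained sketch (the Person type here is hypothetical, since this post never defines one):

-- Hypothetical definitions, just to make the sketch self-contained
data Person = Person { name :: String }

getName :: Person -> String
getName = name

-- fmap applies getName inside the Maybe, leaving its Maybe-ness alone
nameOf :: Maybe Person -> Maybe String
nameOf possiblyMissingPerson = fmap getName possiblyMissingPerson

-- nameOf (Just (Person "Alice")) == Just "Alice"
-- nameOf Nothing == Nothing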

In summary, one of the great things about Haskell is the way it trains you to think primarily about the data and how you want to manipulate it. It provides many useful concepts like functors and function composition, which enable higher-level data manipulation (e.g. fmap transforms any value inside of any container-like structure).

We humans are great at describing what data we want to give the computer and what data we want to get back, while we’re not so great at thinking like a computer and keeping track of what step we’re on. So as programmers, writing programs as a series of data manipulations gives us more confidence in our code, because it’s usually easier to notice if you transformed data incorrectly than if you gave the wrong instruction to the computer.