4th August 2009, 02:22 pm

There’s a belief about Haskell that keeps popping up in chat rooms and mailing lists — one that I’ve been puzzling over for a while. One expression of the belief is “everything is a function” in Haskell.

Of course, there are all of these non-functions that need to be accounted for, including integers, booleans, tuples, and lists. What about them? A recurring answer is that such things are “functions of no arguments” or functions of a one-element type or “constant functions”.

I wonder about how beliefs form, spread, and solidify, and so I asked around about how people came to this notion and how they managed to hold onto it. I had a few conjectures in mind, which I kept to myself to avoid biasing people’s responses. Of the responses I got, some were as I’d imagined, and some were quite surprising to me, revealing some of my blind spots about others’ thinking and about conversation dynamics.

My thanks to the many Haskellers, especially newbies, who took the time to help me understand their thought processes. If you’re interested and in a patient mood, you can see the unedited responses on a Haskell reddit thread and on a #haskell IRC log. There were also a few responses on Twitter.

Edits:

2009-08-04: Added “simplify”: “Would making everything a function really simplify the formal system that is Haskell programming?”. Thanks, SLi.

2009-08-04: Focus on “constant function” story for “It makes things simpler”. I realized that I hadn’t said what I intended there. Thanks, Jonathan Cast.

2011-03-04: Remarks on mutability & dynamic typing, under “Operational thinking”

Analogy with object-oriented programming

Hmm… I think it was because I was originally taught ‘everything is an object’ in OOP.

Many people come to functional programming (FP) after object-oriented programming (OOP). One may have heard that in a “pure OOP” language, everything is an object. By analogy, one could then wonder whether in a “pure FP” language, everything is a function, i.e., whether function is to functional as object is to object-oriented.

Perhaps such an analogy sounds plausible at first, and some people don’t give it critical thought to test whether this first guess fits with reality. Or perhaps they’ve heard someone speak with confidence, and they confused confidence with understanding.

To try to answer the question about the development of beliefs: I think part of it has to do with newcomers hearing the old-hands saying it, and so it becomes part of the dogma they learn.

It makes things simpler

It’s a unification (like electro-weak interaction). Having only one kind of things is better than having two.

Many of us like simple, unifying principles, and we dislike exceptions. Fewer building blocks and fewer exceptions often lead to a simpler, more compelling precise formulation, which supports more tractable reasoning (mechanized in compilers and non-mechanized in programmers).

This reason is perhaps in the realm of wishful thinking. As in “it would be nice if everything were a function in Haskell, since things would be simpler that way”.

Would making everything a function really simplify the formal system that is Haskell programming? For instance, what would be the impact on the type rules?

First, let’s separate the nullary and constant-function perspectives. The (formal) type system has no notion of functions other than unary, so I’ll focus on the constant-function story. For instance, that “7” is shorthand for “\ () -> 7”, and so “x = 7” means “x = \ () -> 7”, i.e., “x () = 7” (adopting the usual syntactic sugar). (Edit: added this paragraph.)

If we did decide that values of non-function type are constant functions, then of what order are these functions? 1st, 2nd, 3rd, …? In other words, if 7 is a nullary or constant function, what is the result type of that function? If the answer is, say, Integer, then we’d have to recall that Integer really means “nullary or constant function whose result type is Integer”, ad infinitum. In a first-order functional language, this troubling question would not arise, and for imperative programmers (including OOP-ers), functions are a mainly first-order concept.
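To make the constant-function story concrete, here is a small sketch. The desugaring shown in the comments is the hypothetical reading under discussion, not how Haskell actually works:

```haskell
-- Ordinary Haskell: x is simply an Integer.
x :: Integer
x = 7

-- Under the constant-function story, the same definition would instead mean:
x' :: () -> Integer
x' () = 7

-- But then the regress begins: if every Integer is itself a constant
-- function, the result type unfolds forever:
--   Integer  "means"  () -> Integer  "means"  () -> () -> Integer  ...

main :: IO ()
main = print (x, x' ())  -- both yield 7, but x' only after an extra application
```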

Why complicate your universe with non-functions when functions naturally generalize to cover them?

Similarly, why complicate your universe with functions when non-functions naturally generalize to cover them? For instance, every value (including functions) can be promoted to a list, tree, Maybe, pair, Id (identity newtype wrapper), etc.
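The “generalize everything” move works just as mechanically for these other type constructors as it does for constant functions. A small illustration (the helper names are mine, for exposition only):

```haskell
import Data.Functor.Identity (Identity (..))

-- Every value can be "promoted" into a list, a Maybe, a pair, a constant
-- function, or an Identity wrapper, with equal ease:
asList :: a -> [a]
asList x = [x]

asMaybe :: a -> Maybe a
asMaybe = Just

asPair :: a -> (a, ())
asPair x = (x, ())

asConst :: a -> (b -> a)
asConst = const

asId :: a -> Identity a
asId = Identity

main :: IO ()
main = do
  print (asList (3 :: Integer))      -- [3]
  print (asMaybe (3 :: Integer))     -- Just 3
  print (asConst (3 :: Integer) ())  -- 3
```

None of these promotions is more privileged than the others, which is the point of the question above.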

But then I wonder what type of thing is in those lists, trees, maybes, etc, and what type of thing is returned from those functions.

And I do like static typing, which is lost when one wraps up everything into a universal type. For instance, we lose knowing that the list or tree has exactly one element, that the Maybe is a Just, and that the function is a constant function.

Mixing up functions and definitions

Many times, I’ve seen Haskell programmers (perhaps mainly newbies) use the word “function” to mean “top-level definition”. Which surprises me, as the meaning I attach to “function” has nothing to do with “top-level” or with “definition”.

For instance,

‘y = 3’ can be considered as a function with no arguments that returns just 3.

I guess what’s going on here is a conflation of functions and (top-level) definitions. Maybe because these two notions are tightly connected in C programming, where functions are always named (defined) and always at the top level and are always immutable, whereas non-functions can be defined in a nested scope and historically were mutable (before const).

Church’s lambda calculus

Well, blame Church.

In Church’s original lambda calculus, everything is a function. For instance,

type Bool = ∀ a. a -> a -> a

false, true :: Bool
true  t e = t
false t e = e

type Nat = ∀ a. (a -> a) -> (a -> a)

three :: Nat
three f = f ∘ f ∘ f

Do some folks believe we’re still doing what Church did, i.e., encoding all data as functions and building all types out of “->”?
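The encodings quoted above can be made into compilable Haskell with the RankNTypes extension; a minimal sketch, with conversion functions (my additions) back to ordinary Haskell values:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church encodings of booleans and naturals, as in the passage above.
type CBool = forall a. a -> a -> a

true, false :: CBool
true  t _ = t
false _ e = e

type CNat = forall a. (a -> a) -> (a -> a)

three :: CNat
three f = f . f . f

-- Converting back to ordinary Haskell values:
toBool :: CBool -> Bool
toBool b = b True False

toInt :: CNat -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toBool true, toInt three)  -- (True,3)
```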

Operational thinking

Haskell’s laziness by default was what made me think of constants as functions that take no parameters.

A thunk at the operational level (language implementation) is vaguely reminiscent of a function at the level of language specification and semantics.

This bit of operational reasoning had not occurred to me as a source of thinking that even non-functions are “functions”. I was looking for denotational reasons, i.e., what things mean rather than how an implementation might work. One of my recurring blind spots when trying to understand others’ thought processes is this sort of operational-by-default thinking.

(Edit: added the following paragraph.)

Lazy evaluation replaces thunks with more evaluated forms (pure values for simple types but otherwise an evaluated outer wrapper and possibly unevaluated inner contents, i.e., weak head normal form). That replacement is done destructively, i.e., as a side-effect of evaluation. So the thunk-focused operational-thinking explanation of “everything is a function in Haskell” leads to the interesting conclusion that Haskell has pervasive (and difficult to predict) side-effects and that these side-effects cause changes to types, not just to values (going from function to non-function). That’s a lot of mutability and dynamic typing for a purely functional, statically typed language!
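The destructive thunk update is indirectly observable with Debug.Trace; a sketch, assuming the let-binding is shared (which holds in GHC without aggressive optimization):

```haskell
import Debug.Trace (trace)

-- The trace message fires at most once: after the first evaluation the
-- thunk is destructively overwritten with its weak-head-normal-form result,
-- so the second use of x finds a plain value, not a suspended computation.
main :: IO ()
main = do
  let x = trace "evaluating the thunk" (2 + 2 :: Int)
  print x  -- forces the thunk; the trace message appears here
  print x  -- already evaluated; no second trace message
```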

Interested in a different conversation

This one was the hardest of all for me to get so far, and it was a real lightbulb moment: for some people who say that everything is a function in Haskell, it isn’t a belief at all. They don’t believe it. They’d rather talk about something else, such as formalisms in which everything is a function. Or maybe they even have a different notion of what topic is being discussed.

And I realize I often do the same thing. I hear a discussion topic that overlaps with one interesting to me (such as how do people form, spread, and solidify beliefs), and I focus more on the topic that interests me and less on the original topic. And if someone doesn’t notice the switch, they can get confused.

Wow — this reason was in another blind spot for me.

Is everything a function in Haskell?

conal, that’s a funny one. in Haskell, static types give a pretty sharp objective determination of what is a function and what isn’t

Haskell has a precisely specified type system that defines what it means for (the value of) an expression to have function type.
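A small illustrative example (mine, not from the conversation) of how the types draw that line:

```haskell
-- negate has a function type; 3 does not.
f :: Integer -> Integer
f = negate

n :: Integer
n = 3

-- Uncommenting the next line gives a compile-time type error,
-- because Integer is not a function type:
-- bad = n 5

main :: IO ()
main = print (f n)  -- applying f is fine; applying n would not typecheck
```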

Reasons vs rationalizations

Most of our so-called reasoning consists in finding arguments for going on believing as we already do. (James Harvey Robinson)

Although I keep hearing “everything is a function” (and 3 is a nullary or constant function), I don’t hear people say “everything is a list”, and 3 is really the singleton list [3]. Or “everything is a pair”, and 7 is really (7,⊥) or some such. Or “everything is a Maybe”, and True is really Just True. Personally, I don’t like to equate non-functions (numbers, bools, trees, etc.) with 0-ary functions any more than I like to equate them with singleton lists (or trees, …) or non-Nothing Maybe values, etc.

So my best guess is that a statement like “well, 3 is a constant function” is not so much a reason as a rationalization. In other words, not so much a means of arriving at a belief as a means of holding onto a belief in the face of evidence to the contrary. After all, one could use the same style of “reasoning” to arrive at quite different beliefs, namely that 3 is a list, pair, or Maybe in Haskell. On the other hand, if the task at hand is to solidify a given belief rather than arrive at an understanding, then one would be less likely to notice that a given rationalization can be easily adapted to alternative beliefs.
