Let's assume we have a language without recursion or anything like it. This means no looping structures either, and it means our types must be non-recursive as well, so that we can't form a Y-combinator and escape. In this language we are truly weak, separated from so many of our tools.

But we can ask a very good question about this language. Namely, what is the smallest thing we must give it in order for it to become just as powerful as a language with no such restrictions?

It turns out there are two answers.

1. We can introduce recursive binders, like a `let rec` construct, or something like Haskell's `let`, which is always `let rec`. In other words, a structure which lets us define `let x = e in b` such that if `x` is free in `e` then it is computed as a fixed point of the equation `x = e`.
2. We can introduce the function `fix :: (a -> a) -> a` such that `fix f` reduces in one step to `f (fix f)`.
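To make the second answer concrete, here is a sketch in Haskell, which already exposes `fix` through `Data.Function`; `factorial` is just an illustrative name, not anything from the text above:

```haskell
import Data.Function (fix)

-- factorial written with no recursive binding of its own:
-- the self-reference arrives as fix's argument (`self`),
-- so the definition itself never mentions `factorial`.
factorial :: Integer -> Integer
factorial = fix (\self n -> if n == 0 then 1 else n * self (n - 1))
```

Unfolding one step at a time: `fix f` becomes `f (fix f)`, so `factorial 3` becomes `3 * factorial 2`, and so on down to the base case.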

It should be clear from the above presentation that fix can be implemented using recursive binders. What's a little less clear is that recursive binders can be implemented from non-recursive ones using fix, but here we are:

let x = fix $ \this -> e

The value `this` refers to the whole expression that ends up bound as `x`, so every free occurrence of `x` in `e` is replaced by `this`, which is just what we want.
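Both directions of this equivalence can be sketched in Haskell, whose `let` is already recursive (the names `fixViaLet` and `fact` are illustrative):

```haskell
-- The easy direction: fix implemented with a recursive binder.
-- The binding of x may mention x itself, which is exactly a fixed point.
fixViaLet :: (a -> a) -> a
fixViaLet f = let x = f x in x

-- The other direction: a "recursive" definition that never binds
-- its own name recursively, with `this` standing in for `fact`.
fact :: Integer -> Integer
fact = fixViaLet (\this n -> if n == 0 then 1 else n * this (n - 1))
```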

So why did I go out of my way to say all of the above?

Essentially, I'd like to argue that it's not possible to claim recursion must be implemented via HOF combinators like `map`, so long as you're willing to count `fix` among those combinators. I'd also like to argue that any recursion implemented by combinators in that set can be done "explicitly" using recursive binders. The two are equally powerful.
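As one small illustration of that claim, `map` itself can be expressed through `fix` rather than through a directly recursive definition (`mapFix` is a hypothetical name for this sketch):

```haskell
import Data.Function (fix)

-- The standard map, but with the self-reference routed through fix:
-- `self` plays the role the name `mapFix` would play in an
-- explicitly recursive definition.
mapFix :: (a -> b) -> [a] -> [b]
mapFix = fix (\self f xs -> case xs of
                              []     -> []
                              y : ys -> f y : self f ys)
```

Reading it in the other direction, any use of `fix` here could be rewritten as an explicitly recursive binding, which is the equivalence argued above.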