Until recently, it was hard to call functional programming a mainstream practice. It is still not well understood and is often perceived as an obscure topic, surrounded by an enigmatic halo of awe.

Even though respected gurus like Robert Martin, Rich Hickey, Mark Seemann, and others promote it, not many developers practice it consciously. Nevertheless, the functional paradigm itself is slowly but surely sneaking into our general-purpose languages and tools, sometimes giving new names and applications to fairly old concepts. Just consider Reactive Extensions, React, Redux, LINQ, Java Streams, and C# 8 as surface-level examples of this process. In this article, we will try to figure out what the foundation of the functional paradigm is. Why does it really matter? And what does it have in common with the still-dominant object-oriented programming?

A Brief History of Functional Programming

Looking at functional programming as a modern trend, it is interesting to note that it is in fact one of the oldest paradigms. It is even older than structured programming and the time when the Go To statement came to be considered harmful! Here is a short list of programming paradigms with their respective times of appearance:

Imperative:

Machine (1940s);

Procedural (1960s – FORTRAN, ALGOL, COBOL, BASIC);

Structured (1966–1968 – “Go To Statement Considered Harmful” by Dijkstra);

Object-oriented (1967 – Simula, 1972 – Smalltalk);

Event-driven (1970s);

Declarative:

Functional (1958 – LISP);

Logic (1972 – Prolog);

Domain-specific (1974 – SQL);

Functional reactive (1997–2000 – Paul Hudak).



So, the story begins back in 1958 with the creation of LISP, which was and remains an overwhelmingly powerful language; you can learn more about it in our special article. LISP was created by John McCarthy, a pioneer of Artificial Intelligence who coined the very term “AI”. As the foundation of his new language, he took the concept of Lambda Calculus, a distinctive formal notation for functions. That is where the term “lambda” originally came from before it spread to various languages.

Lambda Calculus was developed in the 1930s by Alonzo Church and can be considered an alternative to the Turing machine as a model of computation. In fact, both scientists were solving the same problem at the same time, though they approached it differently. A Turing machine is essentially a state machine: it has a set of states and rules for transitioning between them. Lambda calculus, on the other hand, does not operate on state at all; it is stateless. This difference in theoretical approaches led to fundamentally different designs of future programming languages and was naturally reflected in the first two high-level languages: Fortran and LISP, respectively.
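To give a flavor of stateless computation, here is a small sketch of Church numerals in Python (the names `zero`, `succ`, `add`, and `to_int` are ours): natural numbers and addition expressed purely through function application, with nothing ever being mutated.

```python
# Church numerals: a natural number n is encoded as a function that
# applies another function f to an argument x exactly n times.
zero = lambda f: lambda x: x                         # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))      # one more application
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a numeral by counting how many times it applies (+1) to 0."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```

Everything here, including the numbers themselves, is a function; there is no state to track, only expressions to evaluate.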

Programming in FORTRAN and its successors (including C, C++, and Java), or in any other imperative language, is in effect programming a state machine, while a program in LISP or any other functional language reads as a composition of recursive functions.
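To make the contrast concrete, here is an illustrative sketch (our own example, in Python) of the same computation written both ways: as a little state machine mutating an accumulator, and as a composition of recursive calls.

```python
# Imperative style: program a state machine.
def sum_imperative(n):
    total = 0                 # mutable state
    for i in range(1, n + 1):
        total += i            # a state transition on every step
    return total

# Functional style: no mutable state; the answer is built up
# from the results of recursive calls.
def sum_recursive(n):
    return 0 if n == 0 else n + sum_recursive(n - 1)

print(sum_imperative(100), sum_recursive(100))  # 5050 5050
```

The imperative version asks "what is the machine's state now?"; the functional version asks "what is this expression's value?".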

This difference is so crucial that it actually affects the way of thinking.

At this point I should admit that, in reality, most functional languages allow a programmer to write some imperative code, though this is not the idiomatic way and is rarely justified.

Over time, many functional languages were developed; let's mention the most prominent ones:

LISP – 1958;

Scheme – 1970;

Racket – 1994;

Common Lisp – 1984;

Clojure – 2007;

ML (Meta Language) – 1973;

OCaml – 1996;

F# – 2005;

Erlang – 1986;

Haskell – 1990;

Scala – 2004.

There is a troubling question: why were functional languages almost completely wiped out of the industry, ceding the market to imperative (procedural and, later, object-oriented) languages? Wasn't that evidence of their inferiority, and if so, why are we witnessing a comeback of functional practices today?

To answer it, we should point out that functional languages, being declarative, are indeed more abstract and expressive, but such qualities cannot be achieved without trade-offs in performance. Imperative languages therefore proved superior at the time when personal computers were on the rise and every byte counted.

Nevertheless, functional languages did survive. Firstly, they continued to evolve in academia, for the obvious reasons of their closeness to mathematics and the availability of powerful mainframes. Secondly, functional languages kept finding application in cutting-edge problems, including at private companies. One prominent adopter was Xerox, which employed LISP in its pioneering workstations featuring one of the first graphical user interfaces. Another, truly remarkable case was Ericsson's use of the Erlang system for building distributed telecom systems: building the same system in functional Erlang rather than C++ reportedly shrank the codebase to a third of its size while almost doubling the performance!






This is an important point. Earlier we admitted that declarative expressiveness comes with a trade-off in performance, and this is still true; but another truth is that imperative approaches stop working beyond a certain degree of system complexity. Simply put, this can be explained in two ways: either human intelligence fails to efficiently model a distributed system as a state machine, or a state machine is the wrong model for such systems in the first place. With that in mind, consider the radical change in hardware over the last decade and a half (Images 1–3). We have gone from single-processor workstations to smartphones with dozens of cores, while the line between fast RAM and formerly slow persistent storage is blurring. Modern software should use this power efficiently and reliably. While the imperative approach naturally fits programming a single sequential computer such as a Turing machine, it seems ill-suited to handling parallel, distributed, heterogeneous computations.

Image 1. CPU cores trend (cores/year)

Image 2. GPU cores trend (cores/year)

Image 3. RAM price trend ($ per Mbyte/year)

The Pillars of Functional Programming

Every object-oriented developer should be well familiar with the so-called Four Pillars of OOP:

Encapsulation;

Inheritance;

Polymorphism;

Abstraction.

Since it is a familiar approach, it would be suitable to represent the key features of the functional paradigm in the same way.

So, the Four Pillars of Functional Programming would be:

Immutability;

Purity;

First-class functions;

Recursion.

Below we will describe each of them in detail and see that, beyond these four, some OO features are not just achievable but turn out to be a natural part of functional languages as well.

Immutability

Immutability is the default property of all functional data structures and variables: an object's state cannot be changed once the object has been created.
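As a minimal illustration (our own example, using Python's immutable tuples as a stand-in for functional data structures): an "update" builds a new value and leaves the original untouched.

```python
# Tuples are immutable: there is no way to modify one in place.
original = (1, 2, 3)

# "Prepending" an element yields a brand-new tuple;
# `original` still refers to the old, unchanged value.
extended = (0,) + original

print(original)  # (1, 2, 3)
print(extended)  # (0, 1, 2, 3)
```

In a functional language this is how all data behaves by default, not just a handful of chosen types.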

Though it looks like a harsh and strange restriction at first glance, immutability turns out to be a really valuable property, with the following benefits: