There are two main questions to be asked regarding computation:

What can be computed?

What is easy/hard to compute?

The first question concerns computability theory, and the second question computational complexity theory.

To base those questions on solid foundations though, a more fundamental question should be answered first: What is computation?

That question leads us to automata theory, and the simplest model of computation: the finite automaton.

Finite Automata 101

We first have to cover some basic terminology. A finite automaton operates on symbols. A symbol can be any character ( 0, 1, a, b, $ , …). A set of symbols makes up an alphabet: {0,1} and {a, b} are alphabets. A finite sequence of symbols from an alphabet makes up a string over that alphabet (e.g. 0100 and 111 are strings over the alphabet {0,1} ). Finally, a set of strings forms a language. A finite automaton (FA) simply reads an input string, accepts it if the string is part of the language the FA is programmed to recognize, and rejects it otherwise.

An FA (finite automaton) has a set of states, and transitions between those states depending on the symbol being read. It starts at a designated start state, reads individual symbols from the input string, and moves from state to state as specified by the transitions. If an FA is in one of its accept states when it reaches the end of the string, it is said to accept or recognize that string. If not, it rejects the string.

There are different ways to represent an FA. The most intuitive way is to draw a directed graph similar to the following examples, called a state diagram. Another method is to draw a table with a row per state, column per symbol, and each cell containing the next state.
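As a sketch of the table representation in code, here is a minimal DFA simulator in Python. The transition table and state names below are illustrative, encoding the example DFA that accepts binary strings ending in a 1:

```python
# A DFA as a transition table: one row per state, one column per symbol.
# This example accepts binary strings that end in a 1.
TRANSITIONS = {
    ("q0", "0"): "q0",
    ("q0", "1"): "q1",
    ("q1", "0"): "q0",
    ("q1", "1"): "q1",
}
START = "q0"
ACCEPT = {"q1"}

def dfa_accepts(string: str) -> bool:
    state = START
    for symbol in string:          # read one symbol at a time
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPT         # accept iff we end in an accept state

print(dfa_accepts("0100"))  # False: ends in 0
print(dfa_accepts("111"))   # True: ends in 1
```

The loop mirrors the machine exactly: no extra memory, just the current state and the next symbol.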

Exercise: The FA on the left recognizes all strings ending with a 1 . Can you figure out what language the one on the right recognizes?

The examples above show deterministic FA’s. An FA can also be non-deterministic. In a DFA (d stands for deterministic), there can be only one transition per input symbol per state. An NFA (non-deterministic) can have zero, one, or many transitions for a single input symbol from each state.

Think of an NFA as creating copies of itself and the computation going on independently on each of them if there are multiple possible transitions. If any one computation terminates on an accept state, the NFA accepts the string.
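This copying behavior is easy to simulate directly: keep the set of all states the machine could currently be in. A rough Python sketch (the transition table and state names are illustrative; this particular NFA accepts binary strings containing 11 as a substring):

```python
# NFA transitions: (state, symbol) -> set of possible next states.
# This example accepts binary strings containing "11" as a substring.
NFA = {
    ("q0", "0"): {"q0"},
    ("q0", "1"): {"q0", "q1"},   # two choices: the NFA "forks" here
    ("q1", "1"): {"q2"},
    ("q2", "0"): {"q2"},
    ("q2", "1"): {"q2"},
}
START = {"q0"}
ACCEPT = {"q2"}

def nfa_accepts(string: str) -> bool:
    current = set(START)
    for symbol in string:
        # Advance every "copy" in parallel; copies with no transition die off.
        current = {nxt for state in current
                       for nxt in NFA.get((state, symbol), set())}
    return bool(current & ACCEPT)  # accept if any copy ends in an accept state

print(nfa_accepts("0110"))  # True: contains 11
print(nfa_accepts("0101"))  # False
```

If the set of live copies ever becomes empty, every branch of the computation has died and the string is rejected.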

There is a special symbol for NFA’s, ε (epsilon). A transition labeled ε is followed automatically, without consuming any symbol from the input.

Here is an NFA that accepts all strings containing 101 or 11 as a substring.

Notice that it transitions to the accept state when it encounters the sequence 101 , but the 0 in the middle is optional since the same transition also has ε as a symbol, meaning it can be taken without reading any input.

An important conclusion is that NFA’s and DFA’s are equivalent. Any NFA can be converted to a DFA that recognizes the same language.
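The standard conversion is the subset construction: each DFA state corresponds to a set of NFA states the machine could be in. A minimal sketch in Python (ε-transitions are omitted for brevity, and the helper names are my own):

```python
def nfa_to_dfa(nfa, start, accept):
    """Subset construction (without ε-transitions).
    nfa maps (state, symbol) -> set of possible next states."""
    symbols = {sym for (_, sym) in nfa}
    start_set = frozenset([start])
    dfa, todo, seen = {}, [start_set], {start_set}
    while todo:
        states = todo.pop()
        for sym in symbols:
            # The DFA moves to the union of all NFA moves from `states`.
            nxt = frozenset(n for s in states for n in nfa.get((s, sym), set()))
            dfa[(states, sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    accepting = {s for s in seen if s & accept}
    return dfa, start_set, accepting

# Convert an NFA for "contains 11 as a substring" and run the resulting DFA:
NFA = {("q0", "0"): {"q0"}, ("q0", "1"): {"q0", "q1"},
       ("q1", "1"): {"q2"},
       ("q2", "0"): {"q2"}, ("q2", "1"): {"q2"}}
dfa, start, accepting = nfa_to_dfa(NFA, "q0", {"q2"})

def run_dfa(string):
    state = start
    for sym in string:
        state = dfa[(state, sym)]
    return state in accepting

print(run_dfa("0110"))  # True: contains 11
print(run_dfa("0101"))  # False
```

In the worst case the construction produces 2ⁿ DFA states for an n-state NFA, which is why the DFA equivalent of a small NFA can be much larger.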

Adding some more jargon, a language is a regular language if it can be recognized by some finite automaton. Question: Are there non-regular languages?

Any regular language can also be represented by some regular expression. Hence, we say that regular expressions and finite automata are equivalent in their power, even though they serve their purpose in different ways.
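For example, the “ends with a 1” language from the earlier exercise corresponds to a one-line regular expression. In Python, using the standard re module (keeping in mind that practical regex engines add features beyond the theoretical regular expressions):

```python
import re

# The language "binary strings ending in 1" as a regular expression.
ends_in_one = re.compile(r"[01]*1")

print(bool(ends_in_one.fullmatch("0101")))  # True: ends in 1
print(bool(ends_in_one.fullmatch("0100")))  # False: ends in 0
```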

Limitations

The main limitation of a finite automaton is its lack of memory. An FA doesn’t keep track of how it reached a state; it only knows the current state it is in. It can be argued that the current state in itself encodes some information about the past, but that depends on the meaning we attach to it (e.g. a state can mean the FA has read three consecutive 1 ’s, but the FA still doesn’t remember its previous state). There is no undo operation, and no way to take different transitions from a state depending on how that state was reached.

Consider this language: all strings consisting of some number of 0 ’s followed by an equal number of 1 ’s, i.e. {0ⁿ1ⁿ | n ≥ 0}.

A finite automaton would need to count how many 0 ’s it has read so far and then decrement that count for each 1 it reads. Since the count can grow without bound while an FA has only a fixed, finite number of states, no FA can recognize this language. This is an example of a non-regular language.
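A machine with even one unbounded counter handles such matching easily. Here is a minimal Python sketch, assuming the language in question is the classic {0ⁿ1ⁿ | n ≥ 0} (the function name is mine; a counter like this is effectively what a pushdown automaton’s stack would provide):

```python
def accepts_0n1n(string: str) -> bool:
    """Accept strings of the form 0^n 1^n, n >= 0."""
    count = 0
    seen_one = False
    for symbol in string:
        if symbol == "0":
            if seen_one:        # a 0 after a 1: wrong shape, reject
                return False
            count += 1          # unbounded counting: no FA can do this
        elif symbol == "1":
            seen_one = True
            count -= 1
            if count < 0:       # more 1's than 0's so far
                return False
        else:
            return False        # symbol outside the alphabet {0,1}
    return count == 0           # counts must match exactly

print(accepts_0n1n("0011"))  # True
print(accepts_0n1n("0101"))  # False
```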

So What?

I won’t spend much time telling you about the practical applications of finite automata, but there is one general point. When faced with a complex problem, thinking in terms of states is a very strong tool. Countless natural phenomena and electrical devices can be represented as a set of states and transitions. I am waiting for a productivity guru to tell people to draw their ideal self as a state diagram (they would probably name it something fancier) and obey it recklessly.

Another point is that it is easy to start with finite automata and build more complicated and powerful computational models. Add a stack, and you get a pushdown automaton. Add an endless tape, and you get a Turing machine. A DFA is in a sense a decapitated Turing machine. It is interesting that historically, Turing machines came before the rigorous study of finite automata, though concepts similar to finite automata existed before (Markov chains).

Then there is the distinction between computability and complexity. Complexity theory emerged after the theory of computability for a reason. From a practical perspective, knowing that a problem was computable was not enough on its own. Was there a faster way to compute the same thing, or was the effort spent searching for one pointless?

The classification of problems into various complexity classes is one of the greatest achievements of computer science. The field of cryptography for example, would hardly be where it is now without such hardness distinctions.

The distinction between computability and complexity is also observable with finite automata. NFA’s and DFA’s are equivalent in power, but an NFA is often much smaller than its equivalent DFA: converting an n-state NFA can require a DFA with up to 2ⁿ states. So does this mean nondeterminism in general leads to more efficient models than deterministic ones? What advantage do deterministic models provide if they are less efficient?

Finally, from a more philosophical point of view, stepping away from the hype-driven daily cycle of life and contemplating the fundamental aspects of computation is an exercise everyone should do once in a while. These questions will keep computer scientists busy whether practical computers run on vacuum tubes, transistors, or some quantum system.

This essay by Scott Aaronson on the philosophical implications of computational complexity theory hits a similar note on a different but related topic. It is amazing that a process as seemingly complex and ever-changing as computation can be viewed and analyzed in the form of these simple structures.

Further Reading

The book Introduction to the Theory of Computation by Michael Sipser is the de facto text on the theory of computation.

Introduction to Automata Theory, Languages and Computation by Hopcroft, Motwani, and Ullman is another comprehensive text.





Follow The Computation on Twitter, subscribe on Substack, or support on Patreon.