Post systems (think of generative grammars) are string transformations based on repeated, possibly context-sensitive, substitutions -- replacements of substrings by other strings. The substitutions are expressed as a finite sequence of rules (grammar productions). The order of applying the rules is not defined. Normal Markov algorithms, true to their name, are a restricted, `normal' form of such rule-based substitution systems. Substitutions are performed in a strictly defined order and regardless of the surrounding context. The whole string transformation process runs deterministically and can be carried out by a simple mechanism, a `machine'. Normal Markov algorithms therefore deserve to be called `algorithms'. (There is a play on words that is lost in English: `algorithm' is `алгоритм' (algoritm) in Russian; Markov called his system `алгорифм' (algorifm).)

A normal Markov algorithm is hence a machine that repeatedly rewrites the given input string according to an ordered sequence of rewriting rules. Each rule is a pair of strings: the source src and the replacement rplc, each of which may be empty. Some rules may be marked as terminal. The work cycle of the machine is: find the first rule in the sequence whose src appears as a substring of the input string, and replace the leftmost occurrence of that src with the rplc of the rule. If the rule is terminal, stop. Otherwise, start the cycle anew, with the just-rewritten string as the input. If no rule applies, stop.
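
To make the work cycle concrete, here is one way to render such a machine in OCaml. This is an illustrative sketch, not necessarily the definitions used elsewhere: rule builds an ordinary rule, rule_t a terminal one, and run performs the cycle just described.

(* A sketch of the machine in OCaml. This is an illustrative implementation
   whose rule and run have the shapes assumed by the examples below. *)
type rule = {src : string; rplc : string; terminal : bool}

let rule   src rplc = {src; rplc; terminal = false}  (* an ordinary rule *)
let rule_t src rplc = {src; rplc; terminal = true}   (* a terminal rule  *)

(* The index of the leftmost occurrence of pat in s, if any.
   An empty pat matches at position 0. *)
let find_substring (s : string) (pat : string) : int option =
  let n = String.length s and m = String.length pat in
  let rec go i =
    if i + m > n then None
    else if String.sub s i m = pat then Some i
    else go (i + 1)
  in go 0

(* One work cycle: take the first rule whose src occurs in s and rewrite
   the leftmost occurrence of that src. Return the rewritten string and
   the terminal flag of the rule, or None if no rule applies. *)
let step (rules : rule array) (s : string) : (string * bool) option =
  let nrules = Array.length rules in
  let rec try_rule i =
    if i >= nrules then None
    else match find_substring s rules.(i).src with
      | None   -> try_rule (i + 1)
      | Some j ->
          let len    = String.length rules.(i).src in
          let before = String.sub s 0 j in
          let after  = String.sub s (j + len) (String.length s - j - len) in
          Some (before ^ rules.(i).rplc ^ after, rules.(i).terminal)
  in try_rule 0

(* Repeat the cycle until a terminal rule fires or no rule applies *)
let rec run (rules : rule array) (s : string) : string =
  match step rules s with
  | None             -> s
  | Some (s', true)  -> s'
  | Some (s', false) -> run rules s'

With these definitions, the examples below can be tried directly in the OCaml toplevel (terminating each phrase with ;;).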

As an example, the following sequence of rules, written as an OCaml array, converts a big-endian binary number (a string of 0s and 1s) into the string of vertical bars encoding the same number in unary.

let bin_to_unary = [| rule "1" "0|"; rule "|0" "0||"; rule "0" ""; |]

run bin_to_unary "110"

"||||||"

Curious readers may want to work out how the following rules compute the greatest common divisor of two numbers written in unary (as runs of the letter a) and separated by #:

let gcd = [| rule "aA" "Aa"; rule "a#a" "A#"; rule "a#" "#B"; rule "B" "a"; rule "A" "C"; rule "C" "a"; rule "#" ""; |]

run gcd "aa#aaaa"

"aa"

Markov showed that all known classes of string transformations performed by a finite sequence of rewriting rules can be expressed as normal algorithms (that is, are `normalizable'). This led him to posit that all algorithms are normalizable. He justified his thesis more thoroughly than either Turing or Church justified the corresponding Church-Turing thesis. Here is how V.M.Glushkov (cited from the English translation) describes this justification:

The validity of this [Markov normalization] principle is based first of all on the fact that all the algorithms known at the present time are normalizable. Since in the course of the long history of the development of the exact sciences a considerable number of different algorithms have been devised, this statement is convincing in itself. In actuality it is even more convincing. We can show that all the methods known at the present time for the composition of algorithms which make it possible to construct new algorithms from the already known ones do not go beyond the limits of the class of normalizable algorithms. ... However this is not all. A whole series of scientists have undertaken special attempts to construct algorithms of a more general form and all these attempts have not been carried beyond the limits of the class of normalizable algorithms.

I'd like to clear up a common misunderstanding about the exact period when A.A.Markov developed his theory of algorithms. All the English sources that I have seen claim that Markov algorithms were the work of the 1960s -- most likely based on the publication date of the very brief English translation of Markov's work. However, the theory was published in Russian quite a bit earlier. According to the bibliography of A.A.Markov (referenced below), the first paper, `Theory of algorithms', appeared in the journal of the Steklov Mathematical Institute in 1951. Three years later, in 1954, A.A.Markov published a 376-page book with the same title. He had been working on string algorithms already in 1947, when he published his famous result: the undecidability of the word problem for semigroups.