$\begingroup$

In the cognitive sciences Alan Turing is best known for launching AI with his Computing machinery and intelligence (1950). However, this was not his first contribution to the cognitive sciences: in his unpublished 1948 technical report Intelligent Machinery he foresaw connectionism with his B-type neural networks.

The model is a recurrent neural network that is wired at random and synchronized by a global clock. The neurons are two-input $\mathrm{NAND}$-gates. The connections have one of two states: they either forward their signal perfectly ($0 \mapsto 0$ and $1 \mapsto 1$), or replace it by $1$ ($0 \mapsto 1$ and $1 \mapsto 1$). The learning algorithm adjusts the states of the connections.
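To make the dynamics concrete, here is a minimal sketch of a synchronous update step for such a network. The class name, the random-wiring scheme, and the 50/50 initialization of connection states are my own illustrative assumptions (the original report does not prescribe them); Turing's actual training procedure is not implemented, only the clocked NAND dynamics with pass-through versus interrupted connections:

```python
import random

def nand(a, b):
    # Two-input NAND: output is 0 only when both inputs are 1
    return 0 if (a == 1 and b == 1) else 1

class BTypeNetwork:
    """Toy B-type-style network (illustrative sketch, not Turing's exact construction).

    Each neuron has two incoming connections chosen at random. A connection
    is in one of two states: it passes its signal unchanged, or it replaces
    the signal by 1. Learning (not shown) would flip these connection states.
    """
    def __init__(self, n, seed=0):
        rng = random.Random(seed)
        # For each neuron: two (source_neuron, passes_signal) pairs;
        # passes_signal=False means the connection forces its output to 1.
        self.wiring = [
            [(rng.randrange(n), rng.random() < 0.5) for _ in range(2)]
            for _ in range(n)
        ]
        self.state = [rng.randrange(2) for _ in range(n)]

    def step(self):
        # Global clock: every neuron updates simultaneously from the old state
        new_state = []
        for (s0, p0), (s1, p1) in self.wiring:
            in0 = self.state[s0] if p0 else 1
            in1 = self.state[s1] if p1 else 1
            new_state.append(nand(in0, in1))
        self.state = new_state

net = BTypeNetwork(8, seed=42)
for _ in range(5):
    net.step()
print(net.state)
```

Because NAND is universal, fixing the connection states appropriately lets such a network realize any Boolean function, which is why adjusting only the connections (and never the neurons) suffices as a learning mechanism.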

Unfortunately, the director of the National Physical Laboratory rejected Turing's work, and it was not published until well after Turing's death. The original manuscript, though, predates Hebbian learning (1949) and Rosenblatt's perceptrons (1957; and those weren't as sophisticated, being only feedforward as opposed to recurrent).

Were Turing's B-type neural networks the earliest neural-like models of computation capable of learning?

By modern standards Turing's approach is dated, and it has been supplanted by more realistic and general treatments (for instance, ones that incorporate dynamic Hebbian updating on weighted connections without the need for central clock synchronization). When did the state of connectionism first surpass Turing's B-type neural networks? Are there modern treatments of B-type neural networks and their learning abilities?

Notes

I am interested in this mostly from the historical perspective, and not in how accurate Turing's model was under current interpretations. Although current knowledge would help answer when other models surpassed Turing's.