Computational learning theory (CoLT) is a branch of theoretical computer science concerned with the mathematical analysis of machine learning. Many of the field's early ideas take inspiration from human learning. The field has developed into a very rigorous, mathematical, and precise science, but I have not seen it used much in the cognitive sciences directly. There is some indirect use through CoLT's interaction with statistics and machine learning algorithms (say, analyzing neural networks through VC-dimension).
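To give a sense of the flavor of result I have in mind, here is a classic PAC-learning bound (stated as a sketch, for the simple case of a finite hypothesis class $H$): a learner that outputs any hypothesis consistent with

$$ m \ \ge\ \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right) $$

labeled examples will, with probability at least $1-\delta$, have true error at most $\epsilon$. Note that this is an asymptotic, algorithm-independent guarantee about learnability itself, not a simulation of any particular learner.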

Are there examples of rigorous uses/applications of CoLT to build theories in psychology, neuroscience, and/or cognitive science?

Notes:

The only two examples I am familiar with are:

The first made quite a stir in the poverty-of-the-stimulus debate, and the second has gone unnoticed in cognitive science.

I am interested in approaches of this flavor. I am relatively comfortable with CoLT as it is studied in mathematics, and am only interested (for this question) in approaches that bear directly on theories of human/animal cognition and learning, not in classic machine learning results. I am looking for general mathematical and asymptotic approaches, not the running of specific types of algorithms (be they neural nets, Bayesian models, or otherwise) to simulate human performance, as is typical in computational modeling in cogsci (which I am relatively familiar with).

I am not interested in arguments that try to trivially undercut the whole approach, even if they have empirical validity. For instance, the whole approach can be derailed by asserting that human brains are finite and that asymptotic arguments are therefore useless. This is the same as arguing that all of computational complexity theory is pointless because computers (and the whole universe, for that matter) are finite. It is a valid empirical argument, but a boring one from the point of view of theory building.

Related questions: