
I'd like to add to Chuck's excellent answer: the computational approach is very well represented in neuroscience, and it actually encompasses a large number of quite heterogeneous methods. Accordingly, a rather different set of neuroscientists and examples springs to mind for me.

To my mind, the best single example of the utility of a computational approach to interpreting neural data is the reward prediction error hypothesis of dopamine function. Early investigations in machine learning led to the development of the temporal-difference (TD) reinforcement learning algorithm. Wolfram Schultz then noticed that the pattern of firing in dopaminergic cells appeared to closely correspond to what would be expected of a temporal-difference error signal - the very core of this form of reinforcement learning. Since then, related reinforcement learning algorithms (e.g., Q-learning) have been applied to a variety of neural data - albeit at the much coarser scale of fMRI - and have provided considerable evidence in support of this hypothesis. This work has been absolutely crucial to progress over the last 10-15 years in cognitive and systems neuroscience.
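To make the "prediction error" idea concrete, here is a minimal sketch of tabular TD(0) learning on a toy two-state chain (the chain, learning rate, and discount factor are my own illustrative choices, not anything from Schultz's work). The quantity `delta` is the reward prediction error that dopaminergic firing appears to track: large when a reward is unexpected, and shrinking toward zero as the reward becomes predicted.

```python
import numpy as np

def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) update; returns the prediction error delta."""
    delta = r + gamma * V[s_next] - V[s]   # the TD / reward prediction error
    V[s] += alpha * delta                  # nudge the value toward the target
    return delta

# Toy chain: state 0 -> state 1 (no reward) -> terminal state 2 (reward 1)
V = np.zeros(3)
for episode in range(500):
    td_update(V, 0, 0.0, 1)
    td_update(V, 1, 1.0, 2)

print(V[1])   # approaches 1.0: the reward is now fully predicted
print(V[0])   # approaches gamma * V[1], i.e. about 0.9
```

Note how the error signal migrates backward in time with training, from the reward itself to the cue that predicts it; this is precisely the shift Schultz observed in dopaminergic firing.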

The second example that springs to mind also pertains to dopamine; in this case, the utility of the computational approach lay in understanding the pattern of effects produced by the release of dopamine, rather than in characterizing the pattern of dopamine release itself. At the cellular level, Jeremy Seamans' work is a great example of this approach: he has used Fisher discriminant analysis and other multivariate techniques to better characterize multi-unit activity recorded in prefrontal cortex under various dopamine conditions. Ultimately his work has been the primary computational contribution to what is now the modal understanding of dopamine's effect in the prefrontal cortex, namely that it enhances the signal-to-noise ratio.
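For readers unfamiliar with the technique, here is a minimal sketch of Fisher discriminant analysis applied to simulated multi-unit firing rates under two conditions. Everything here (the number of units, the synthetic rate distributions, the condition labels) is invented for illustration; none of it comes from Seamans' actual recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_trials = 10, 200

# Two hypothetical conditions with slightly shifted mean firing-rate vectors
mu_a = rng.normal(5.0, 1.0, n_units)
mu_b = mu_a + 0.8
X_a = rng.normal(mu_a, 1.0, (n_trials, n_units))   # trials x units, condition A
X_b = rng.normal(mu_b, 1.0, (n_trials, n_units))   # trials x units, condition B

# Fisher discriminant: the direction w maximizing between-class separation
# relative to within-class scatter, w = Sw^{-1} (mu_b - mu_a)
Sw = np.cov(X_a, rowvar=False) + np.cov(X_b, rowvar=False)
w = np.linalg.solve(Sw, X_b.mean(0) - X_a.mean(0))

# Project each trial onto w and classify with a midpoint threshold
proj_a, proj_b = X_a @ w, X_b @ w
threshold = (proj_a.mean() + proj_b.mean()) / 2
accuracy = np.mean(np.concatenate([proj_a < threshold, proj_b >= threshold]))
print(accuracy)
```

The point is that a linear readout over the whole population can separate conditions that no single unit distinguishes reliably, which is what makes such multivariate methods useful for characterizing population activity.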

A third example has to do with the application of graph theory to neural networks, and in particular our understanding of small-world connected graphs. The implication of this work for neuroscience was not lost on the mathematicians who originally pursued this line of work, but its actual utility for neuroscience has been most famously demonstrated by Olaf Sporns in understanding what is coming to be called the "connectome" - that is, the graph of structurally and functionally connected neural systems. Although this is still an active line of research, small-world connectivity appears to be a ubiquitous feature of cortex.
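A brief sketch of what "small-world" means in graph-theoretic terms may help: small-world networks combine the high clustering of a lattice with the short path lengths of a random graph. The Watts-Strogatz rewiring model below (with parameters chosen purely for illustration) interpolates between the two; the same clustering and path-length measures are the ones applied to connectome data.

```python
import networkx as nx

n, k, seed = 200, 6, 42
lattice = nx.connected_watts_strogatz_graph(n, k, p=0.0, seed=seed)  # ring lattice
small   = nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=seed)  # small world
rand    = nx.connected_watts_strogatz_graph(n, k, p=1.0, seed=seed)  # ~random

for name, G in [("lattice", lattice), ("small-world", small), ("random", rand)]:
    C = nx.average_clustering(G)              # local "cliquishness"
    L = nx.average_shortest_path_length(G)    # typical hop count between nodes
    print(f"{name:12s} clustering={C:.2f}  path length={L:.2f}")
```

With only a small fraction of edges rewired, clustering stays near the lattice value while the average path length collapses toward the random-graph value; that combination is the small-world signature reported for cortical networks.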

I would be remiss not to mention the work of a large group of researchers trained in the "parallel distributed processing" or connectionist tradition, including Matt Botvinick, Jon Cohen, Michael Frank, Ken Norman, Yuko Munakata, and Randy O'Reilly. There are too many excellent examples from their work to name, but all of these researchers have used behavioral and neural data to build neural network models of cortical and subcortical processing at varying levels of biological detail; validated these models in terms of their fit to existing behavioral and neural data; and used these models to derive predictions that have subsequently been tested with a variety of approaches. The following is a woefully incomplete, but at least representative, list of the progress made by these researchers:

- the discovery of parallels between the dimensionality-reduction or "abstraction" techniques useful in reinforcement learning and cortical processing (Matt Botvinick);
- the discovery of parallels between the exploration/exploitation dilemma in reinforcement learning and the function of the locus coeruleus/norepinephrine system (Jon Cohen);
- an understanding of the fine-grained functional consequences of differences in dopamine receptor subtypes, as well as of prefrontal/striatal interactions (Michael Frank);
- the discovery of the large-scale consequences of cellular-level long-term depression and potentiation for hemodynamic patterns in memory tasks (Ken Norman);
- the discovery of the role of prefrontal inhibitory neurotransmission in tasks requiring "selection" of information under competition (Yuko Munakata);
- and the discovery of the computational underpinnings of the tripartite division of labor among hippocampus, posterior cortex, and prefrontal cortex (Randy O'Reilly).
Jay McClelland, David Rumelhart, Mark Seidenberg, Geoff Hinton, and many others laid the groundwork for much of this progress, although their contact with detailed neural phenomena was more limited than that made by their trainees (and their trainees' trainees), above.

I would conclude by seconding Chuck's suggestion to do a PubMed search. Your question is too broad for a comprehensive answer conveyed in anything shorter than a book!