Birds are a surprisingly capable group of creatures: parrots are great mimics, and birds in the UK figured out long ago how to steal a free meal of milk. Some species are thought to recognize themselves in a mirror, and many can figure out how to pull a string up and trap it underfoot to gain a meal, even if the string is quite long and they have to repeat the action many times.

But among all these overachievers, crows seem to be the shining exemplar of intelligence. You see, a crow, when first faced with a bit of meat dangling from a string, figures out a solution pretty much instantly. This has led researchers to posit that crows build mental models that generate solutions, instead of relying on trial and error. Now, a bunch of Kiwis have published research in PLoS One that suggests crows don't actually build models.

What is the difference between model-based solutions and feedback-based solutions? When we rely on feedback, we first perform an action—pull on the string and trap it underfoot. If we perceive that we are closer to our goal (the meat is now closer), we repeat the action. A model-based solution, on the other hand, involves understanding that the meat is connected to a bit of string, and that to get the meat, we must pull the string up. In the second case, feedback after each step is not required, because we understand the problem and know that we will be rewarded in the end.
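To make the distinction concrete, here is a toy sketch of the two strategies in Python. This is my own illustration, not anything from the paper: the string length, function names, and the "give up without feedback" rule are all invented for the example. The only difference between the two strategies is whether each pull requires a visible payoff before the next one.

```python
STRING_LENGTH = 5  # hypothetical: number of pull-and-trap steps to reach the meat

def feedback_based(can_see_meat: bool) -> bool:
    """Pull only while each step visibly brings the meat closer."""
    distance = STRING_LENGTH
    while distance > 0:
        if not can_see_meat:   # no feedback after a pull -> abandon the attempt
            return False
        distance -= 1          # pull the string up, trap it underfoot
    return True                # meat reached

def model_based() -> bool:
    """The model 'string is connected to meat' predicts the reward,
    so the plan runs to completion with no per-step feedback."""
    for _ in range(STRING_LENGTH):
        pass                   # pull, trap, repeat, trusting the model
    return True

print(feedback_based(can_see_meat=True))   # succeeds: every pull is rewarded
print(feedback_based(can_see_meat=False))  # fails: no visible progress
print(model_based())                       # succeeds regardless of visibility
```

The board experiment described below effectively sets `can_see_meat` to false: a pure feedback strategy stalls, while a model-based one should not care.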

To build a model requires a deeper understanding of the situation: that there is string, and it is connected to food. We know that crows understand the connectivity between the string and the meat to a certain extent. For instance, if there are multiple strings, the crow will usually pull the correct one. If the strings are crossed, but have different colors, the crow will still usually pull the correct one.

The crows only become really confused when the strings are both crossed and the same color—mind you, having had to untangle numerous kite and yo-yo strings, I can understand the crows' feeling of confusion. But the point is that this last test suggests that the crow uses the continuity of the string running from the perch to the meat to conclude that the two are connected. That may not require a symbolic understanding of the connectivity.

What was missing in all these tests was control over the feedback. Each bird could see the meat throughout the test, and could judge success by the proximity of the food. To really distinguish between model-building and feedback-based solutions, the feedback needed to be restricted. The researchers did this by placing a large wooden board with a small hole in it between the perch and the meat. As the crows pulled the meat up, they would lose sight of it. Further, even when the meat was visible, the hole and restricted visibility made it difficult to judge distances, meaning that the crows didn't necessarily know that it was any closer.

A number of tests were run. Several birds were given the normal string test first, so that they were familiar with it. Others were kept unfamiliar with the test until their first exposure, when they were presented with the more difficult version, the one with the board. Following that, all were given a series of string tests involving angled and crossed strings.

The authors found, as expected, that the crows given the normal string test figured it out quite fast. Of those given the test involving pulling the meat up through a hole in a piece of wood, only one crow succeeded. Further, the birds that were experienced in solving the normal string test performed much worse when faced with the more difficult test. This combination suggests that even experienced crows really need constant feedback to solve the problem. Conclusion: crows don't make models.

I must admit to having a little problem with that conclusion. First, one crow did solve the problem; second, the crows all varied widely in their performance on all of the tests, suggesting that problem-solving abilities vary wildly between individuals—no surprise there. Finally, I think the distinction between model-building and feedback-based problem-solving skills is an artificial division of a mental toolkit that spans a continuum.

The point is that, even when we make models of the world, we rely on feedback to validate those models. Our previous experience also influences how quickly we abandon a model in the absence of positive feedback. These experiments show that, if crows do build models, they don't generalize them very well, and they require a fair bit of reinforcement before they'll abandon the model. But, other than the amount of reinforcement involved and the complexity of the model, is this that different from human behavior?

A relevant example is student learning. In first-year physics, students will generally make one, perhaps two, attempts to get an experiment to work before abandoning their efforts and seeking help. In other words, without positive feedback, the students don't trust their model. As they develop experience in experimental physics—even in the artificial environment of teaching labs—they become more patient and explore a greater range of possibilities, and less positive feedback is required for the student to trust the model.

Similarly, post-graduate students generally need to learn to explore a range of experimental parameters before deciding that an experiment is not going to work. Finally, an experienced researcher expects that they may not receive positive feedback from an experiment for a year or more. Further, they don't consider an experiment that didn't work a failure unless they can't understand why it didn't work. In those cases, they haven't learned anything from the failure and cannot use the failure to modify their model.

I think the research is really well done, and allows one to draw conclusions about the limits of a crow's model-building ability—if, indeed, they have any—and also their trust in those models. But I also feel that the researchers have over-interpreted their results, because they neglect the role of feedback in the construction of models.

I should also add that I am not an expert in the field, so if anyone knows why I am wrong, I welcome correction in the comments.

PLoS One, DOI: 10.1371/journal.pone.0009345