My last article, published on Free Code Camp, was called Don’t copy-paste code. Type it out. It includes a graph from the book Audio-Visual Methods in Teaching called the Cone of Experience. However, the initial draft of that article used a different graph: the Learning Pyramid.

The Learning Pyramid is a popular graph that is widely circulated on the internet, appearing in many articles about learning. It claims to show the average rate at which a person retains information through different learning methods.

One of the most popular Learning Pyramid graphs. It shows the average student retention rate from top to bottom: 10% Lecture, 20% Audiovisual, 30% Demonstration, 50% Discussion, 75% Practice doing and 90% Teach others.

The problem with the Learning Pyramid is that there is no evidence it is the result of proper scientific research. The links below sum up the arguments against it very well:

The TL;DR is:

There is no paper that provides the exact sections and numbers shown in any of the popular pyramids.

Inconsistency indicates a weak empirical base, and there are many variants of the pyramid with slightly different structures, albeit with consistent numbers.

The numbers, although consistent, are suspiciously round, and it is very unlikely for real research to produce such round numbers.

Edgar Dale's original work, which many people claim to be the source, calls the graph the "Cone of Experience" instead and provides no numbers at all.

The craziest thing is that 10 years ago Will Thalheimer, current president of Work-Learning Research, contacted the NTL Institute, which claimed that the round numbers were real but failed to provide any evidence of how they were obtained.

For anyone willing to look, there is enough evidence to conclude that the Learning Pyramid doesn't have a strong case for itself, given that many people have failed to find a credible source for the image. Unfortunately, I wasn't aware of this and used it in my article, guided by the huge number of (apparently) reputable sources that were using it in theirs, all repeating the same numbers with considerable consistency.

This comic might give an idea of why my confidence relied solely on the credibility of more than one source:

A comic where one developer tries to explain an abstract concept but is repeatedly interrupted by questions requiring additional explanations of something inside the description of that concept

Of course, those of us who work on real applications (like me) could always read all the existing research papers and base our conclusions on them. Most of the time, however, we need to rely on abstractions; otherwise we won't get anything done. In this case, the internet is an abstraction: we infer that something is true from the analysis of multiple independent sources, trusting that each source checked the sources before it, so that the information can be traced back to its origin.

Unfortunately, trusting the internet can have unintended side effects. If one source simply made up the numbers, and the sources that followed copied them without checking further back, then everybody who relies on that chain will copy the wrong information. The result is the gradual mutation of a concept, caused by many individuals who never consult the real research.

Relying on an abstraction implicitly means relying on every decision that abstraction made on top of its own dependencies
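For programmers, the idea can be made concrete with a toy sketch. All the names below are hypothetical and exist only for illustration; the point is that each "article" depends on the one before it, so consistency across them proves nothing about correctness:

```python
# A toy model of how an unverified claim propagates through a chain of
# sources. Each "article" copies its predecessor's numbers instead of
# consulting the original research.

def original_research():
    # The true finding: in the Learning Pyramid's case, no such
    # numbers were ever published.
    return None

def first_article():
    # Someone fills the gap with made-up round numbers.
    return {"lecture": 10, "teach_others": 90}

def copy_of(source):
    # Later articles just republish whatever their source says,
    # without tracing the claim back to the original research.
    return dict(source)

article_2 = copy_of(first_article())
article_3 = copy_of(article_2)
article_4 = copy_of(article_3)

# Every downstream source now agrees -- not because the numbers were
# verified, but because they all share the same unchecked dependency.
print(article_4 == first_article())  # True: consistency, not correctness
print(original_research())           # None: the real evidence never existed
```

Agreement among the copies is exactly what made the pyramid look trustworthy to me: every source I checked reported the same numbers, because every source had the same unverified ancestor.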

The telephone game, often played in schools, illustrates this concept very well:

… one person whispers a message to another, which is passed through a line of people until the last player announces the message to the entire group. Errors typically accumulate in the retellings, so the statement announced by the last player differs significantly, and often amusingly, from the one uttered by the first.

The Learning Pyramid could well be the result of a telephone game played unwittingly by all the articles that copied the latest version of the information (errors included) without bothering to consult the original source, relying on a gut feeling that the numbers and the graph were correct instead of verifying the scientific basis behind them.

The takeaway is that certain kinds of information, such as numbers, are dangerous to repeat without strong evidence that they were scientifically produced. When relying on an abstraction that provides such information, we should at least look for evidence that it isn't made up before using it, consulting the original sources to verify how the research was done and whether it complies with the standards of the scientific method.

Without at least some effort to trace a piece of information back to its source, there is always the risk that the information is wrong.

This is the danger of relying on abstractions.