Companies seem to become more obsessed with talking about innovation as they become less and less innovative. Today, the phrase “radical innovation” has become popular in contrast with incremental innovation. It describes some absurdly innovative idea that’s challenging for people to understand—if the idea were easily comprehensible it would hardly be radically innovative.

The real conundrum is how to “engineer radical innovation,” as a recent Harvard Business Review article put it. There used to be an answer to that question and it was Bell Labs.

The transistor, the bit, information theory, the laser, UNIX, the C programming language, cellular telephony. These revolutionary and transformative ideas, without which there would be no internet, much less Apple, Google or Facebook, were spawned in an environment of near total employee freedom at Bell Labs. Research projects could take decades, if necessary, with no quarterly deadlines or investment returns to satisfy shareholders, and no project risk assessments. Engineers like Claude Shannon, the founder of information theory, had one task—to figure out how to make things work. He had as long as he needed to accomplish a task however he saw fit. He even had the freedom to invent one of the first computer programs that could play chess, decades ahead of IBM’s Deep Blue, and to figure out how to get it working, which he did.

You could argue that the radical innovations that came out of Bell Labs beginning in the 1940s and 1950s were the low-hanging fruit of digital technology, and that the reason we don't have as many big breakthroughs today is that the problems are much harder and more complex. But it could also be that our focus on short-term profits means that ideas which take decades to develop are laughed out of conference rooms. Furthermore, many tech companies' focus on incremental improvement makes the free exchange of ideas and the co-location of engineers that were so vital to Bell Labs a practical impossibility.

Bell Labs had the luxury of a guaranteed funding stream from the regulated telephone monopoly that AT&T owned during the Labs' heyday. This ensured that researchers at Bell Labs never had to worry about writing grants or selling high-risk research ideas in PowerPoint slides to marketers. Academic freedom, too, is increasingly under pressure. In the 1970s, 67% of faculty were tenured or on the tenure track; today the figure is about 30%. The constant economic insecurity that academic researchers face forces them to focus on pumping out publications at a rapid pace rather than on long-term or risky areas of new research.

Interestingly, during this era of incredible innovation, the top officer at Bell Labs made about 12 times the salary of the lowest-paid worker. Today the CEO of a large company can make about 1,000 times the salary of an average worker, not even the lowest-paid one. This month, Switzerland voted down a referendum called "1:12," which would have legally capped the salary of the highest-paid worker at a company at twelve times that of the lowest-paid worker. One of the key arguments against the Swiss proposal was that it would kill innovation. But what if a more egalitarian distribution of salaries actually encourages innovation, as the example of Bell Labs suggests?

The Silicon Valley model of innovation has given us a few new billionaires but solved very few problems. It does not foster new knowledge; it focuses on "disruptive" innovation, which rewards the financiers who back the startups. Ostensibly, the egalitarianism and open-source ethos Silicon Valley inherited from Bell Labs should make it the ideal breeding ground for the kind of fundamental new technologies that came out of suburban New Jersey during the last century. But none of the engineers or scientists at Bell Labs were interested primarily in money. They were motivated by curiosity, and they were compulsive tinkerers. Returning to that would actually be radical.