
On the threshold of the Santa Fe Institute (SFI), the world’s epicenter for complexity science, a plaque in Greek characters shares Plato’s admonishment: “Let no man ignorant of geometry enter here.” While it’s far more impressive in Ancient Greek, few visitors to SFI can read it. Eventually the same will be true of computer code. It will become the Ancient Greek of the late 21st century.

STEM has become a ubiquitous call-to-action in response to technological change, and learning to code in particular is often positioned as a panacea. The Wall Street Journal recently suggested that coding bootcamps can “rapidly retrain American workers for the 21st century.” And Apple CEO Tim Cook has said that learning to code will eventually be more important than studying English as a second language.

I agree that coding skills are important. Groups such as Girls Who Code bring coding to young women, and school systems worldwide have added coding to their curriculums. Bravo!

But we’re overstating the benefits.

Though it’s certainly far better to know a computer language than not, remaining relevant will be a moving target as computer languages and programming environments arise, evolve, and in some cases die. A singular focus on “learning to code” can impede attention to the much more important skill of understanding how technology works, and the opportunities and risks within systems and society.

Coding As Ancient Greek

From the Renaissance onward, knowing Ancient Greek—not to be confused with Modern Greek—was essential to being considered part of the truly educated elite. This Greek bias persisted in academic circles into the early 20th century, long after the language had ceased to be necessary for anyone but specialists. It is still enlightening and impressive to understand Ancient Greek, but it’s not terribly useful. The same will, over time, happen to coding.

Evidence suggests that coding will increasingly be implemented, even planned, by AI systems. This is part of a natural progression from computer-friendly to human-friendly systems. Consider how the end user’s experience has evolved. The graphical user interface, developed at Xerox PARC in the 1970s and brought to market with the Apple Macintosh in the 1980s, overtook clunky, technical text-based interfaces with a much more intuitive approach. While operating a computer once required specific technical knowledge, today it requires almost none.

Programming languages and environments have reflected the same trend. Since their genesis mid-last century, programming platforms have become more abstracted from the underlying 1s and 0s. For instance, programming in Ruby, developed in the 1990s, is a far cry from composing in COBOL, which rose to prominence in the 1960s. The author of Ruby, Yukihiro Matsumoto, commented about his objective, “I really wanted a genuine … easy-to-use scripting language. I looked for but couldn’t find one. So I decided to make it.”
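The leap in abstraction is easy to see in practice. As a rough illustration (the numbers are invented for the example), a task such as summing the even values in a list takes a single expressive line in Ruby, whereas COBOL would require an identification division, explicit data declarations, and a PERFORM loop:

```ruby
# Sum the even numbers in a list.
# The COBOL equivalent would need an IDENTIFICATION DIVISION, a DATA
# DIVISION with typed declarations, and an explicit loop; the Ruby
# version reads almost like the English description of the task.
numbers = [3, 8, 15, 22, 41, 6]  # sample data, invented for illustration
even_sum = numbers.select(&:even?).sum
puts even_sum  # prints 36
```

The point is not the arithmetic but the register: each generation of languages lets the programmer say more of *what* they want and less of *how* the machine should do it.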

Though COBOL was partly an attempt to make programming more “English-like,” Ruby and other languages developed since have brought coding languages closer to natural human communication. Extrapolate this trend and you can imagine instructing a computational system in your native tongue (really a broader, messy form of human “code”).

Already we have seen companies make computational capabilities more accessible to non-specialists. Recent MIT spinoff pienso, for instance, seeks to make “AI for All” by empowering people with no data science background to create and train their own machine learning models. In this vision, domain expertise becomes more important than programming savvy.

The need for humans to code will gradually disappear for all but the most specialized situations. Platforms will enable humans to describe in natural spoken or written language what they’d like computers to accomplish. The coding will occur behind the computational scenes. We won’t code so much as direct and request. Ultimately, coding isn’t the point. The objective is to define and communicate what we want computational systems to do.

Discovering And Defining Problems To Solve

Beyond coding, humans will identify, define and prioritize problems for computers to solve. Over the coming decades, though, computational systems will become capable of defining problems of value and generating solutions with only limited human engagement.

While we’ll shift toward human-friendly approaches, understanding how computational systems work and what possibilities and risks they pose will remain essential. But so will having broad and deep exposure across disciplines and ways of thinking.

When technology can increasingly do anything, the question becomes, what should we do and why? We humans cannot be sufficiently equipped for the future without exposure to the social sciences, humanities and the arts. Well-functioning civil society depends on it. On the eve of World War II, Winston Churchill cautioned, “Ill fares the race which fails to salute the arts with the reverence and delight which are their due.”

While essential, STEM as a set of work skills (as opposed to research disciplines) harbors a Trojan horse. The STEM capabilities required to create technology will one day generate technologies that accomplish STEM far better than human beings. If we’re too focused on STEM skills, we’ll eventually STEM ourselves out of work.

While I admit my intellectual inferiority to anyone who can read Plato in his native language, this doesn’t matter much beyond my pride. My daughters will surely learn to code. More importantly, we’ll ensure they understand how computational systems operate and the concomitant opportunities and risks.

And they’ll learn geometry so they can visit the great people at SFI.

Robert Wolcott is a clinical professor of innovation and entrepreneurship in executive education at the Kellogg School of Management, Northwestern University.