“Artificial intelligence is suddenly everywhere. It’s still what the experts call soft A.I., but it is proliferating like mad.” So starts an excellent Vanity Fair article, Enthusiasts and Skeptics Debate Artificial Intelligence, by author and radio host Kurt Andersen. Artificial intelligence is indeed everywhere, but these days the term is used in so many different ways that it’s almost like saying that computers are now everywhere. It’s true, but so general a statement that we must probe a bit deeper to understand its implications, starting with what is meant by soft AI, versus its counterpart, strong AI.

Soft, weak or narrow AI is inspired by, but doesn’t aim to mimic, the human brain. These are generally statistically oriented, computational intelligence methods for addressing complex problems based on the analysis of vast amounts of information using powerful computers and sophisticated algorithms, whose results exhibit qualities we tend to associate with human intelligence.

This engineering-oriented AI is indeed everywhere, and is being increasingly applied to activities requiring intelligence and cognitive capabilities that not long ago were viewed as the exclusive domain of humans. AI-based tools are enhancing our own cognitive powers, helping us process vast amounts of information and make ever more complex decisions.

Soft AI was behind Deep Blue, IBM’s chess-playing supercomputer, which in 1997 won a celebrated chess match against then-reigning champion Garry Kasparov, as well as Watson, IBM’s question-answering system, which in 2011 won the Jeopardy! Challenge against the two best human Jeopardy! players. And, as Mr. Andersen notes in his article, it’s why “We’re now accustomed to having conversations with computers: to refill a prescription, make a cable-TV-service appointment, cancel an airline reservation - or, when driving, to silently obey the instructions of the voice from the G.P.S.”

Soft AI was nicely discussed in a recent Wired article, The Three Breakthroughs That Have Finally Unleashed AI on the World, by author and publisher Kevin Kelly, who called it a kind of “cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off.”

“It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ… Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization.”

“In the past, we would have said only a superintelligent AI could drive a car, or beat a human at Jeopardy! or chess,” writes Mr. Kelly. “But once AI did each of those things, we considered that achievement obviously mechanical and hardly worth the label of true intelligence. Every success in AI redefines it.” Such a redefinition is now taking place with data science, one of the hottest new professions and academic disciplines. It’s hard to tell where data science stops and AI starts. We’ve started to view former AI achievements as mere data science applications. The two disciplines are evolving in tandem, with AI leading the way and data science commercializing its advances.

Strong AI, on the other hand, aims to develop machines with a kind of artificial general intelligence that can successfully match or exceed human intelligence in cognitive tasks such as reasoning, planning, learning, vision and natural language conversations on any subject. Mr. Andersen’s Vanity Fair article discusses a group of strong AI advocates he refers to as the Singularitarians, who believe that beyond exceeding human intelligence, machines will someday become sentient, displaying a consciousness or self-awareness and the ability to experience sensations and feelings.

He turned to Siri to help him define the Singularity. “What is the Singularity?” he asked her. Siri answered: “A technological singularity is a predicted point in the development of a civilization at which technological progress accelerates beyond the ability of present-day humans to fully comprehend or predict.”

The term is most closely associated with Ray Kurzweil, author, computer scientist, inventor and presently director of engineering at Google. In his 2005 book The Singularity Is Near: When Humans Transcend Biology, Mr. Kurzweil predicted that the Singularity will be reached around 2045, at which time “machine intelligence will be infinitely more powerful than all human intelligence combined.”

“And, if the Singularity is near, will it bring about global techno-Nirvana or civilizational ruin?” Mr. Andersen asks. “Since the turn of this century, big-time tech-industry figures have taken sides: ultra-geeky masters of the tech universe versus other ultra-geeky masters of the tech universe. It’s a kind of Great Schism separating skeptics from true believers, dystopians from utopians, the cautious men from the giddy boys.”

Personally, I’m on the side of the skeptics, even though the true believers include a number of brilliant technologists and successful entrepreneurs. But I can understand neither Mr. Kurzweil’s utopian visions nor the dystopian fears of Tesla founder Elon Musk, who calls AI “our biggest existential threat” and “a demon” being summoned by foolish scientists and technologists. A similar concern has been expressed by world-renowned physicist Stephen Hawking, who recently told the BBC: “The development of full artificial intelligence could spell the end of the human race… Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Frankly, the potential advent of super-intelligent, sentient machines is not high on the list of things I worry about. What really concerns me are the highly complex IT systems that we’re increasingly dependent on in our everyday life. I worry whether we’ve taken the proper care in designing the powerful computer systems that have now penetrated just about every nook and cranny of our economies and societies.

Technology advances have enabled us to develop systems with seemingly unlimited capabilities. Highly sophisticated, software-intensive smart systems are being deployed in industry after industry, from energy and transportation to finance and entertainment. These complex systems are composed of many different kinds of components with intricate organizations and widely varying structures, all highly interconnected and interacting with one another. They exhibit dynamic, unpredictable behaviors as a result of the interactions of their various components, making them hard to understand and control.

Even more complex are socio-technical systems, which involve people as well as technology. Such systems have to deal not only with tough hardware and software issues, but with the even tougher issues involved in human behaviors, business organizations and economies. We are increasingly developing highly complex socio-technical systems in areas like health care, education, government and cities.

The very flexibility of software means that all the interactions among a system’s components, including people, cannot be adequately planned, anticipated or tested. That means that even if all the components are highly reliable, problems can still occur if a rare set of interactions arises that compromises the overall behavior and safety of the system.
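A toy back-of-the-envelope calculation illustrates the point. All the numbers below are hypothetical, and the independence assumption is a deliberate simplification: even when every individual component is highly reliable, the number of pairwise interactions grows quadratically with the number of components, so the chance that *some* rare, harmful interaction occurs can become substantial.

```python
def interaction_risk(n_components: int, p_bad_interaction: float) -> float:
    """Probability that at least one pairwise interaction misbehaves,
    assuming each pair fails independently (a simplification)."""
    pairs = n_components * (n_components - 1) // 2  # quadratic growth
    return 1.0 - (1.0 - p_bad_interaction) ** pairs

# 50 components; each pair has only a 0.01% chance of a harmful interaction,
# yet the system as a whole has roughly an 11.5% chance of hitting one.
print(round(interaction_risk(50, 1e-4), 3))  # -> 0.115
```

The sketch says nothing about any particular system; it only shows why component-level reliability alone cannot guarantee system-level safety.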

How can we best manage the risks involved in the design and operation of complex, software-intensive, socio-technical systems? How do we deal with a system that is working as designed but whose unintended consequences we do not like? How can we protect our mission-critical systems from cyberattacks? How can we make these systems as resilient as possible?

Human intelligence has evolved over millions of years. But humans have only been able to survive long enough to develop intelligence because of an even more fundamental evolution-inspired capability that’s been a part of all living organisms for hundreds of millions of years: the autonomic nervous system. This is the largely unconscious biological system that keeps us alive by controlling key vital functions, including heart rate, digestion, breathing and protections against disease.

Our highly complex IT systems must become much more autonomic and resilient, capable of self-healing when failures occur and self-protecting when attacked. Only then will they be able to evolve and incorporate increasingly advanced capabilities, including those we associate with human-like intelligence.
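One common engineering pattern behind the "self-healing" idea is a supervisor that watches a component, restarts it on transient failure, and fails safe after repeated crashes. The minimal sketch below is purely illustrative; the `Supervisor` class and the flaky task are invented for this example and do not come from any particular framework.

```python
import time

class Supervisor:
    """Restart a failing task, and fail safe after repeated crashes."""

    def __init__(self, task, max_restarts=3):
        self.task = task
        self.max_restarts = max_restarts
        self.restarts = 0

    def run(self):
        while True:
            try:
                return self.task()          # normal operation
            except Exception:
                self.restarts += 1          # self-healing: retry on failure
                if self.restarts > self.max_restarts:
                    # self-protection: stop retrying a persistently broken task
                    raise RuntimeError("failing safe after repeated crashes")
                time.sleep(0)               # a real system would back off here

# Example: a flaky task that succeeds on its third attempt
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise IOError("transient fault")
    return "ok"

print(Supervisor(flaky).run())  # -> ok
```

Production systems add exponential back-off, health checks and escalation on top of this basic loop, but the core control structure is the same.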

Irving Wladawsky-Berger worked at IBM for 37 years and was then strategic advisor to Citigroup for 6 years. He is affiliated with MIT, NYU and Imperial College, and is a regular contributor to CIO Journal.