Despite Elon Musk's warnings this summer, there's not a whole lot of reason to lose any sleep worrying about Skynet and the Terminator. Artificial Intelligence (AI) is far from becoming a maleficent, all-knowing force. The only "Apocalypse" on the horizon right now is an overreliance by humans on machine learning and expert systems, as demonstrated by the deaths of Tesla owners who took their hands off the wheel.

Examples of what currently pass for "Artificial Intelligence"—technologies such as expert systems and machine learning—are excellent for creating software that can help in contexts that involve pattern recognition, automated decision-making, and human-to-machine conversations. Both types have been around for decades. And both are only as good as the source information they are based on. For that reason, it's unlikely that AI will replace human beings' judgment on important tasks requiring decisions more complex than "yes or no" any time soon.

Expert systems, also known as rule-based or knowledge-based systems, are computer programs built from explicit rules written down by human experts. The computers can then apply those same rules, but much faster, 24x7, to come up with the same conclusions as the human experts. Imagine asking an oncologist how she diagnoses cancer and then programming medical software to follow those same steps. For a particular diagnosis, an oncologist can examine which of those rules were activated to validate that the expert system is working correctly.

However, it takes a lot of time and specialized knowledge to create and maintain those rules, and extremely complex rule systems can be difficult to validate. Needless to say, expert systems can’t function beyond their rules.
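To make that concrete, here is a minimal sketch of a rule-based system in Python. The rules, thresholds, and field names are invented for illustration and are not real medical criteria; the point is that every rule is explicit, and the system can report exactly which ones fired so an expert can audit the result.

```python
# A minimal sketch of a rule-based "expert system": every rule is an
# explicit if/then statement written down by a human expert, and the
# program records which rules fired so a specialist can audit the result.
# The rules, thresholds, and field names are invented, not real medicine.

def diagnose(findings):
    """Apply hand-written rules to a dict of findings; return
    (conclusion, fired_rules) so the reasoning can be inspected."""
    fired = []
    if findings.get("tumor_size_mm", 0) > 20:
        fired.append("R1: tumor larger than 20 mm")
    if findings.get("lymph_nodes_positive", 0) > 0:
        fired.append("R2: positive lymph nodes")
    if findings.get("biopsy_malignant", False):
        fired.append("R3: biopsy shows malignancy")
    conclusion = "refer to oncologist" if fired else "no rule matched"
    return conclusion, fired

conclusion, fired = diagnose(
    {"tumor_size_mm": 25, "lymph_nodes_positive": 1, "biopsy_malignant": False}
)
print(conclusion)   # refer to oncologist
for rule in fired:  # the audit trail: exactly which rules activated
    print(rule)
```

Note the last line of `diagnose`: a case outside the rules simply falls through to "no rule matched," which is the limitation described above.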

One-trick pony

Machine learning allows computers to come to a decision—but without being explicitly programmed. Instead, they are shown hundreds or thousands of sample data sets and told how they should be categorized, such as “cancer | no cancer,” or “stage 1 | stage 2 | stage 3 cancer.”

Sophisticated algorithms “train” on those data sets and “learn” how to make correct diagnoses. Machine learning can train on data sets where even a human expert can’t verbalize how the decision was made. Thanks to the ever-increasing quantity and quality of data being collected by organizations of all types, machine learning in particular has advanced AI technologies into an ever-expanding set of applications that will transform industries—if used properly and wisely.
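The idea of training on labeled examples can be sketched with a tiny nearest-centroid classifier, written in plain Python so nothing is hidden in a library. The feature vectors and labels below are toy values, not real medical data; the point is that no diagnostic rules are written down anywhere, and the behavior comes entirely from the labeled samples.

```python
# A minimal sketch of "learning from labeled examples": instead of
# hand-written rules, a nearest-centroid classifier averages the feature
# vectors of each labeled class and assigns new samples to the closest
# average. The features and labels are toy data, not medical records.

def train(samples):
    """samples: list of (features, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Pick the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

training_data = [
    ([1.0, 1.2], "no cancer"), ([0.9, 1.0], "no cancer"),
    ([3.1, 4.0], "cancer"),    ([2.9, 3.8], "cancer"),
]
model = train(training_data)
print(predict(model, [3.0, 3.9]))  # cancer
print(predict(model, [1.1, 1.1]))  # no cancer
```

Swapping in different training data changes the model's behavior without touching the code, which is exactly why retraining is required whenever the question changes.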

There are some inherent weaknesses to machine learning, however. For example, you can’t reverse-engineer the trained model’s reasoning. You can't ask it how a particular diagnosis was made. And you can’t ask machine learning about something it didn’t train on.

For instance, a classic example of machine learning is to show it pictures of pets and have it indicate “cat | dog | both | neither." Once you've done that, you can’t ask the resulting machine learning system to decide if an image contains a poodle or a cow—it can't adapt to the new question without retraining or adding another layer of machine learning.

Viewed as a type of automation, AI techniques can greatly add to business productivity. In some problem areas, AI is doing great, and that’s particularly true when the decision to be made is fairly straightforward and not heavily nuanced.

I’m beginning to see a pattern here

One of the most widely applied types of machine learning is pattern recognition, based on clustering and categorization of data. Amazon customers have already experienced how machine learning-based analytics can be used in sales: Amazon's recommendation engine uses "clustering" based on customer purchases and other data to determine products someone might be interested in.

Those sorts of analytics have been used in brick-and-mortar stores for years—some grocery stores place "clustered" products on display near frequently purchased items. But machine learning can automate those sorts of tasks in something approaching real time.
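The co-purchase idea can be illustrated with a toy basket-analysis script. This is not Amazon's actual recommendation algorithm, just a minimal co-occurrence counter, and the product names and baskets are made up.

```python
# A toy illustration of purchase-based recommendations: count how often
# pairs of products appear in the same basket, then suggest the items
# most often bought alongside what the customer already has. Real
# recommendation engines are far more sophisticated; this only shows
# the underlying idea of clustering by co-purchase.

from collections import Counter
from itertools import combinations

baskets = [
    {"camera", "sd_card", "tripod"},
    {"camera", "sd_card"},
    {"laptop", "sd_card", "mouse"},
    {"camera", "tripod"},
    {"camera", "sd_card"},
]

co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1   # count the pair in both directions
        co_counts[(b, a)] += 1

def recommend(item, k=2):
    """Top-k products most frequently co-purchased with `item`."""
    scores = Counter({b: n for (a, b), n in co_counts.items() if a == item})
    return [product for product, _ in scores.most_common(k)]

print(recommend("camera"))  # ['sd_card', 'tripod']
```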

Machine learning excels in all sorts of pattern recognition—in medical imaging, financial services (“is this a fraudulent credit-card transaction?”), and even IT management (“if the server workload is too high, try these things until the problem goes away”).

That sort of automation based on data is being used outside the retail world to drive other routine tasks. The startup Apstra, for example, has tools that use machine learning and real-time analytics to automatically fine-tune and optimize data center performance, not only reducing the need for some IT administrative staff but also reducing the need to upgrade hardware.

Another startup, Respond Software, has expert systems that corporate Security Operations Centers (SOCs) can use to automatically diagnose and escalate security incidents. And Darktrace, another security vendor, uses machine learning to identify suspicious behavior on networks—the company's Enterprise Immune System looks for activities that fall outside of previously observed behaviors, and it alerts SOC staffers to things that may be of interest. And a module called Antigena can automate response to detected problems, disrupting network connections that appear to be malicious.
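The "learn normal, flag anything new" idea behind such products can be sketched in a few lines. To be clear, this is not Darktrace's actual algorithm; it is an invented toy baseline detector that records which host-and-port connections were seen during a training window and alerts on anything outside that baseline.

```python
# A toy version of the "learn normal, flag abnormal" idea behind
# network anomaly detection (not any vendor's actual algorithm):
# record which (host, port) connections occur during a baseline
# period, then flag any connection not previously observed.

from collections import defaultdict

class BaselineDetector:
    def __init__(self):
        self.seen = defaultdict(set)   # host -> set of ports contacted

    def observe(self, host, port):
        """Record normal behavior during the training window."""
        self.seen[host].add(port)

    def is_anomalous(self, host, port):
        """Flag any connection not seen during the baseline."""
        return port not in self.seen[host]

det = BaselineDetector()
for host, port in [("web01", 443), ("web01", 80), ("db01", 5432)]:
    det.observe(host, port)

print(det.is_anomalous("web01", 80))  # False: observed during baseline
print(det.is_anomalous("db01", 22))   # True: SSH to the database is new
```

Real systems build statistical models rather than exact sets, but the trade-off is the same: anything genuinely new looks suspicious, whether it is malicious or not.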

Human intelligence

Machine learning has also been applied to the analysis of human communications. With a good bit of up-front work by data scientists and developers, machine learning algorithms have been able to fairly reliably detect the "sentiment" of a piece of text—determining whether the contents are positive or negative. That has begun to be applied to "text mining" in social media and to image processing as well.
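A bare-bones version of sentiment detection can be written with nothing but a word list. Production systems learn word weights from large labeled corpora rather than using a hand-built lexicon, and the word lists below are invented for illustration, but the sketch shows what "scoring the sentiment of text" means.

```python
# A bare-bones lexicon-based sentiment scorer: count positive and
# negative words from small hand-built lists. Real systems learn
# weights from labeled text; these word lists are invented examples.

POSITIVE = {"great", "love", "excellent", "happy", "fast"}
NEGATIVE = {"terrible", "hate", "slow", "broken", "awful"}

def sentiment(text):
    """Return 'positive', 'negative', or 'neutral' for a piece of text."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) \
          - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this product, shipping was fast!"))  # positive
print(sentiment("Terrible support and a broken screen."))    # negative
```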

Microsoft's Project Oxford created an application interface for checking the emotional expression of people in images and also created a text-processing API that detects sentiment. IBM's Watson also performs this sort of analysis with its Tone Analyzer, which can rank the emotional weight of tweets, e-mails, and other texts.

These types of technologies are being integrated into customer service systems, which identify customer complaints about products or services and prompt a human to respond to them. IBM partnered with Genesys to build Watson into Genesys' "Customer Experience Platform," providing a way to respond to customer questions directly and connect people with complaints to employees armed with the best information to resolve them. The system has to learn from humans along the way but gradually improves in responses—though the effectiveness of the system has yet to be fully tested.

Even the ultimate people field—human resources—is benefitting from AI in terms of measuring worker productivity and efficiency, conducting performance reviews, and even deploying intelligent chatbots that can help employees schedule vacations or express concerns to management using plain language. AI startups are optimizing mundane HR tasks: Butterfly offers coaching and mentoring, Entelo helps recruiters scour social media to find employment candidates, and Textio helps with writing more effective job descriptions.

But AI doesn’t do well with uncertainty, and that includes biases in the training data or in the expert rules. Different doctors, after all, might honestly make different diagnoses or recommend different treatments. So, what’s the expert diagnosis system to do?

An often-discussed case of machine learning is screening college admission applications. The AI was trained on several years of admissions files, such as school report cards, test scores, and even essays, and was told whether each student had been admitted or rejected by the human admissions officers.

The goal was to mimic those admissions officers, and the system worked—but also mimicked their implicit flaws, such as biases toward certain racial groups, socio-economic classes, and even activities like team sports participation. The conclusion: technical success but epic fail otherwise.

Until there are breakthroughs in handling ambiguity or disagreements in rules and implicit or explicit biases in training data, AI will struggle.

Help wanted

To get better, machine learning systems need to be trained on better data. But in order to understand that data, in many cases, humans have to pre-process the information—applying the appropriate metadata and formatting, then directing machine learning algorithms at the right parts of data to get better results.

Many of the advances being made in machine learning and artificial intelligence applications today are happening because of work done by human experts across many fields to provide more and better data.

Cheap historical satellite imagery and improved weather data, for example, make it possible for machine learning engines to forecast crop failures in developing countries. Descartes Labs was able, using LANDSAT 8 satellite data, to build a 3.1 trillion pixel mosaic of the world's arable land and track changes in plant growth. Combined with meteorological data, the company's machine learning-based system was able to accurately predict corn and soybean yield in the US, county by county. With the increasingly large volume of low-cost satellite imagery and pervasive weather sensors, forecasting systems will continue to become more accurate—with the help of data scientists and other human experts.

Forecasting of other sorts may well change the shape of businesses. A recent paper by researchers at Nanyang Technological University in Singapore demonstrated that machine learning forecasts using neural networks could predict manufacturing demand more accurately than expert systems or other forecasting methodologies that rely only on time-series data, allowing companies to better plan their inventory. The advantage was particularly strong in industries with "lumpy" demand—where demand is either high or low but seldom in between—because the systems can find patterns without being told how to model the data in advance.
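To see what a classical, expert-designed baseline for lumpy demand looks like, here is a sketch of Croston's method, the textbook approach for intermittent demand: it separately smooths the size of each demand spike and the interval between spikes. This is the kind of hand-crafted model the neural forecasts were compared against, not the paper's method; the smoothing factor and the demand history below are invented.

```python
# A sketch of Croston's classic method for intermittent ("lumpy") demand:
# exponentially smooth the demand size and the gap between demands
# separately, then forecast their ratio. An expert-designed baseline of
# the kind ML forecasts get compared against; alpha and the sample
# history are invented for illustration.

def croston(demand, alpha=0.2):
    """Forecast expected per-period demand for an intermittent series."""
    z = p = None      # smoothed demand size, smoothed demand interval
    interval = 1
    for d in demand:
        if d > 0:
            if z is None:           # first observed demand initializes
                z, p = float(d), float(interval)
            else:                   # exponential smoothing updates
                z += alpha * (d - z)
                p += alpha * (interval - p)
            interval = 1
        else:
            interval += 1
    return z / p if z is not None else 0.0

# Lumpy series: mostly zeros, occasional large orders.
history = [0, 0, 50, 0, 0, 0, 40, 0, 0, 60]
print(round(croston(history), 2))  # expected demand per period
```

The method has to be told, in advance, that the right way to model the series is "size times frequency"; the appeal of the neural-network approach in the paper is that no such modeling decision needs to be baked in.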

These sorts of systems, as they grow more complex and apply more types of data, could provide businesses and organizations with the power to find patterns in even more vast datasets. But while we can use AI to help humans make decisions about things we already know how to do, we can’t send AI-based agents into the true unknown without human oversight to provide expert rules or create new training data from scratch.

While some AI systems, like IBM’s Watson or Amazon’s Alexa, can hoover up huge amounts of unstructured data from the Internet and use it for text-based searches and building up a knowledge base to help answer questions, that won’t help in creating new training databases for pattern recognition, at least not yet. The science-fiction trope of computers intelligently and autonomously searching for their own data sources (and, for some inexplicable reason, flashing black-and-white battlefield pictures on a screen) is beyond today’s AI—and beyond tomorrow’s as well. The decisions—and the questions—will continue to have to be made by humans.

Alan Zeichick is principal analyst of Camden Associates, based in Phoenix, Arizona. A former developer and systems analyst, he was founding editor-in-chief of Software Development Times. Follow him @zeichick.