When British Prime Minister Theresa May bigged up AI at the World Economic Forum in Davos last month, it was as if she had nothing better to talk about.

Name-dropping DeepMind, perhaps the only justification for her claim that the UK can be a "world leader in artificial intelligence", seemed a little desperate, especially as DeepMind has been a Google company since 2014. Adding that "We have only just seen the beginning of what AI can achieve" was equally underwhelming but indicative of the general view of AI, as though it were a single technology that will solve everything in the near future.

May, or at least her advisers, have clearly been sucked in. It's not surprising: it's impossible to escape the AI hype, particularly around general AI and the concept of the singularity, when humans will be unemployed, if not killed off, by hordes of upwardly mobile machines. It's an old idea that has regained momentum in the past few years, fuelling more hype and perhaps undermining the real work being done in specialist applications, where machine learning is building relatively strong foundations.

Hype, of course, is dangerous. It can be counterproductive, and politicians spouting AI generalisms are not helping. The concern is that it will set AI development back and force it into a new winter of discontent.

Back in December, Francois Chollet, deep learning thinker and software engineer at Google, posted a prediction on Twitter: "There will not be a real AI winter, because AI is making real progress and delivering real value, at scale. But there will be an AGI winter, where AGI is no longer perceived to be around the corner. AGI talk is pure hype, and interest will wane."

There were two major AI winters, one at the end of the 1970s and another at the end of the 1980s.


What produced them was a collision between a collective underestimation of the difficulty of building intelligent machines and the realities of the technology of the time, which led to many research proposals and projects being abandoned.

Dr Anders Arpteg – principal data scientist at Peltarion, chairman of Machine Learning Stockholm, and former lead of a Spotify research group using machine learning and big data analytics – points to how AI was viewed in the 1950s and 1960s as an indicator of what unfulfilled ambition can do to a technology.

"People believed that intelligent machines would be easy to build and general AI would be 'solved' in a matter of years," says Arpteg. "A famous example was the 1954 Georgetown experiment that was able to partially translate around 60 sentences from Russian into English. They believed that general machine translations would be solved within five to six years. Obviously, this was not the case, and it turned out to be much harder than previously imagined."

Today we again have hype around general AI, and while the idea of the singularity may wane a little, it is being bolstered by real applications of machine learning within industry.

But things are different compared to those earlier times, we're told. Today it's not about "general purpose" AI.

"At times, many people believed it would be impossible to ever build intelligent machines," Arpteg says. "A lot has happened since then and recent advances have made huge improvements in many areas, including in machine translation. For example, the Google Neural Machine Translation that was released in 2016 can now translate between more than 100 languages and the quality of the translations has significantly improved."

Emma Kendrew, AI lead at Accenture Technology UK, supports this view. She talks about the growth of "intelligent automation" primarily in financial services back office functions, but also increasingly in helpdesks. This is not all about chat bots either, a term Kendrew feels "mischaracterises" the technology.

"We use the term 'virtual agents' because it's about more than just voice now," says Kendrew. "It's evolving into something more sophisticated than the automation of routine tasks."

Kendrew points to work being done in mortgage advice automation, where algorithms draw on personal financial history and the relevant products available on the market to offer mortgage advice to consumers. It's still early days, though, so is there enough technical robustness and public trust to sustain startups such as Nuvo or Habito?
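At its simplest, matching an applicant's financial profile against product criteria is a filtering exercise. The sketch below is a toy illustration of that idea only – the product data, thresholds and field names are invented, and real services such as Habito use far richer models than this:

```python
# Toy sketch of rule-based mortgage product filtering. All products
# and thresholds are invented for illustration, not any vendor's data.

def affordable_products(products, income, deposit, price):
    """Return names of products whose lending criteria this applicant meets."""
    loan = price - deposit
    ltv = loan / price  # loan-to-value ratio
    matches = []
    for p in products:
        # A product qualifies if the LTV and the income multiple both fit.
        if ltv <= p["max_ltv"] and loan <= p["max_loan_multiple"] * income:
            matches.append(p["name"])
    return matches

products = [
    {"name": "FixedSaver 2yr", "max_ltv": 0.90, "max_loan_multiple": 4.5},
    {"name": "LowRate 5yr",    "max_ltv": 0.75, "max_loan_multiple": 4.0},
]

# An 85 per cent LTV applicant qualifies for the first product only.
print(affordable_products(products, income=40_000, deposit=30_000, price=200_000))
```

A production system would layer credit history, affordability stress tests and regulatory checks on top – which is precisely where the "technical robustness and public trust" question bites.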

Nevertheless, it is this sort of special-purpose machine learning where AI could really start to learn. It will drive demand for, and improvement of, toolsets. It will help finance research and development and, more importantly, provide real-world feedback on what is culturally acceptable and technically possible. Understanding where to focus energies is half the battle.

Daniel Kroening, CEO of AI software startup Diffblue and professor of computer science at Oxford University, agrees, saying that special-purpose AI is designed to solve specific problems for specific domains, for example, an AI-powered software application which uses historical data to predict the performance of equities traded on stock markets. He says that while specialised AI systems are becoming increasingly ubiquitous, general AI still remains a lofty aspiration.


He sits in the same camp as Chollet and Arpteg when he talks about how special-purpose AI will ensure there is no real AI winter, despite the hype over general AI. But he also talks about how different approaches to AI development could yield different results, competing with but also feeding off each other.

"There are essentially two approaches to building special-purpose AI," says Kroening. "You can either take an existing library, such as TensorFlow, and use it within your problem domain by (effectively) tuning the parameters of the library, or you can endeavour to invent your own algorithms, in the hope that they will provide significantly better performance."

We have been here many times before, the proprietary versus standards argument. Will a set of open, standardised tools improve development and accelerate growth or do we need proprietary development to potentially break down barriers through off-piste innovation?

There are plenty of tools on the market at the moment, from TensorFlow and Microsoft's Azure tools through to Amazon's Deep Learning AMIs and open-source tools such as PredictionIO and Torch, which is used by Facebook. This is important for market development, but what you make with the tools is the big question, which is where special-purpose AI steps up.

Kendrew believes it is important to focus on joining up the technologies, how applications can bring together machine learning, robotic process automation and language translation to generate business-changing AI. She talks about how Accenture has seen a shift towards how AI can improve employee experience, automating the more mundane, repetitive tasks and giving employees the ammunition to be more efficient and effective.

"We are seeing this a lot in retail at the moment. Organisations with a lot of distributed sites have a high internal helpdesk volume, which is where AI can certainly help."

A recent study by Infosys bears this out: 90 per cent of C-level executives reported measurable benefits from deploying AI technologies within their organisation.

While helping employees is a natural progression, much of the special-purpose AI focus to date has been on applications targeting improved customer experience. Facebook is a good example here. It is using AI to analyse text and images, to deliver apparently more relevant content (including ads) to users, but also potentially to caption images automatically and warn users when their photos are used by others. The company is also using AI to analyse its own vast array of back office systems to find potential areas of efficiency.

This is surely where AI developers can learn about potential implementation. While financial services firms busy themselves with bots, it is surely the gains made in cost reduction, improved reliability of IT infrastructures and storage, as well as the ongoing cybersecurity fight that will keep AI development warm through a potential general AI winter.

Claus Jepsen, chief architect at Unit4, certainly thinks so. Jepsen helped to build the Wanda AI digital assistant, essentially a tool to help ERP users with HR, expenses, travel and purchasing tasks, among others. And unlike in the 1970s and the last AI winter, today we have plenty of storage and processing power, as well as a number of proven applications.


"Special-purpose AI or data driven algorithms (using machine learning) are already applied to an increasing number of tasks and use cases," says Jepson. "Within enterprise software such technologies are used to lower human interactions with enterprise applications, by automating self-service tasks previously done manually by employees, like filling in time sheets, doing expense reports, financial reconciliations, approvals, tasks management."

Like Kendrew, Jepsen sees opportunity in helping employees do their jobs. It makes sense. It adds value to a business. It can be justified through cost efficiency alone. For consumer-facing businesses, though, the outlook is more difficult to gauge. Recommendations and taste prediction are the only real examples of how early AI is being deployed on sites like Amazon or Netflix, and they are still crude, often inaccurate and unnecessary.

And then there is the definitional muddle: AI is being conflated with machine learning, while the popular perception of AI centres on the thinking machine, the science fiction of so many Hollywood movies. On the one hand there is plenty of development in special-purpose AI; on the other, there is the leading-edge research of DeepMind and OpenAI. Both demand investment.

"Much of the latest leaps in AI development is due to the idea of deep learning," says Kroening. "We see no reason to believe that such big leaps should continue forever."

The real question, he adds – and pretty much everyone is in agreement – is whether, before the next AI winter starts, the impact of AI on our everyday lives will be significant enough to be called an AI "revolution". ®