End users don't necessarily care where and how data is stored and processed. They just want to know the job will get done quickly, securely and with as few difficulties as possible. Oh, and inexpensively, too.

To meet those expectations, a major trend seems to be getting underway. Since the client-server era, when some questioned the need for a central IT function at all, the trend toward centralization has been predominant, abetted by the cloud. However, with so much data coming from or going to the edge, schemes are afoot to do more processing there, to extract value from the data sooner and to reduce the volume traveling over scarce network bandwidth.

Cisco, for one, is championing an awareness of "perishable" data, a concept that clarifies the evolution of edge computing. There are two main reasons why organizations may need to do more at the edge, according to Mike Flannagan, vice president and general manager of data and analytics at Cisco.

The first is that they don't have enough network bandwidth. He cites the example of an oil or gas company operating an offshore rig. The "downhole" sensors used to monitor progress can generate 10 terabytes of data per day per well.

"You are talking about tens to hundreds of terabytes a day, and since offshore sites are generally connected via satellite at around a megabyte per second, it would take a month to move a day's worth of data," he said. So, if they don't process the data closer to where it is generated, they don't really get the value of the data.
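A quick back-of-the-envelope sketch shows why that backlog is unmanageable. The volume and link rate below are round illustrative numbers, not Cisco's exact figures; the point is the ratio between what a well produces and what a satellite link can drain:

```python
def transfer_days(volume_bytes: float, rate_bytes_per_sec: float) -> float:
    """Days needed to move a data volume over a fixed-rate link."""
    return volume_bytes / rate_bytes_per_sec / 86_400  # 86,400 seconds per day

# One well's daily output (~10 TB) over a ~1 MB/s satellite uplink:
daily_volume = 10 * 10**12   # bytes
link_rate = 1 * 10**6        # bytes per second
print(f"{transfer_days(daily_volume, link_rate):.0f} days per day of data")
```

At these round numbers, each day of output takes months to backhaul; the exact figure depends on the effective link rate, but the link falls further behind every day either way.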

Such data is also often perishable, he noted. Should the drilling operators increase or decrease pressure to improve productivity? In this example, the data has its maximum value when it is freshest, allowing an operator to make an immediate operational decision. If time delays in data transmission and analysis stretch that decision out to a week or a month, the information is no longer useful.
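One way to make perishability concrete is for an edge node to act on a reading only while it is still fresh, and discard it afterward. A minimal sketch, in which the freshness window, reading format and pressure threshold are all hypothetical illustrations rather than anything Cisco describes:

```python
import time

FRESHNESS_WINDOW_S = 60  # hypothetical: a pressure reading is actionable for ~1 minute

def act_on_reading(reading: dict, now=None) -> str:
    """Decide on a downhole pressure reading only while it is still actionable."""
    now = time.time() if now is None else now
    age_s = now - reading["timestamp"]
    if age_s > FRESHNESS_WINDOW_S:
        return "discard"  # the value has perished; not worth acting on or backhauling
    # Hypothetical decision rule for illustration only:
    return "increase" if reading["pressure_psi"] < 3_000 else "decrease"

print(act_on_reading({"timestamp": 0, "pressure_psi": 2_500}, now=5))    # fresh: acted on
print(act_on_reading({"timestamp": 0, "pressure_psi": 2_500}, now=600))  # stale: discarded
```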

Gaining the edge

From an industrial or corporate perspective, there can be huge advantages to edge computing, said Sylvain Fabre, research director at Gartner. Processing data locally means not only faster results, but also less movement of data, since only "results," rather than raw data, are likely to be sent to a central location.

Another angle is security. Processing data locally means it is kept in an internal environment, Fabre explained. "Any industrial environment where latency is a problem, not to mention things like drones and self-driving cars, will find value in an edge approach," he said.

And, from a mobile infrastructure standpoint, there has been definite interest in building more compute and storage capacity near the edge. "A couple of years ago Nokia started offering to put functionality in base stations near the users; there were some interesting use cases, but it ultimately proved to be quite expensive," Fabre said.

Others, however, said the internet of things (IoT) is the big news when it comes to edge data. "This is not really a new concept because IoT is using much of the same topological paradigm as industrial controls," said Christian Renaud, research director for IoT at New York-based 451 Research. Locating compute resources "on the edge" was traditional in operational systems, "because you need local control, local action and faster response."

"If you look at the process control world, industrial automation, and energy sector, the legacy systems had already been using edge out of necessity," Renaud said. The difference today is the availability of cloud models and the almost infinite reach of IoT, ranging from fitness and factories to cars and healthcare. A good percentage of those actual or potential IoT functions require local compute and analysis to be viable, he noted. "I don't want a bad cable somewhere else to stop my assembly line from operating," he said.
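Fabre's point about sending only "results," rather than raw data, can be sketched as a simple edge aggregator that collapses an interval of raw sensor samples into a handful of summary statistics before anything crosses the network. The sample values and choice of statistics here are illustrative assumptions:

```python
import statistics

def summarize(samples: list[float]) -> dict:
    """Collapse raw sensor samples into the 'results' worth backhauling."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": statistics.fmean(samples),
    }

# e.g. one interval of vibration readings from a machine on the line:
raw = [20.1, 20.4, 19.8, 35.2, 20.0]
print(summarize(raw))  # four summary numbers cross the link instead of every raw sample
```

The same idea scales from five samples to millions: the central site sees the shape of the data (including the anomalous spike via "max") while the raw stream never leaves the edge.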
Additionally, bandwidth usage, security and data sovereignty have emerged as drivers for doing more locally. "If I have sensitive operational data from an electric utility substation, I don't want to risk transmitting that," Renaud said. The primary arguments for edge computing are reduced cost and bandwidth, survivability -- because there is no single point of failure -- and privacy and security, he said. "The arguments against are that we don't know if things like IoT gateways will be more expensive than centralized cloud functions, especially for applications that don't have demands such as ultra-low latency and strong privacy or security," he said.