Making use of machine learning technology sounds cutting edge, but for a business it's not easy to navigate a nascent industry and know what to look for. When Mazin Gilbert, assistant vice president of Intelligent Services Research at AT&T Labs, was first interviewed by TechEmergence in February 2016, he discussed AT&T's initial forays into using machine learning and the Internet of Things (IoT) to elevate its communications network into what he described as a "software-defined network" (SDN), capable of learning how to spot systemic anomalies and repair technical issues autonomously over time.

In a more recent conversation, Mazin compares searching for machine learning applications for business to uncovering the tip of an iceberg that is forming in real time, with few existing case studies or pre-packaged products and services to model and build on. There is no 'Gartner quadrant' for this industry: no grab bag of ideas, no easy way to make machine learning purchasing decisions, whether you're a small or large company (though having more resources does make experimentation more practical).

AT&T knew that they wanted to leverage this technology to help optimize their network management and business operations, but because of the lack of comparables available, they decided to suit up and solve the problem in a different way: by building their own virtual network and cloud-based system from the ground up. In March, AT&T introduced ECOMP (Enhanced Control, Orchestration, Management and Policy), an open software platform capable of supporting any business domain.

ECOMP is part of AT&T's bigger initiative to move to a software-centric network model; it allows for the quick on-boarding of new, customer-centric services created by AT&T or third-party developers, and also serves as a conduit for real-time data collection, analytics, and policy functions for better network management.

The company's primary driver in developing this system was the ability to virtually detect indicators of anomalies, both in individual sets of equipment and across the entire communications system, in order to reduce inefficiencies. When a human analyst boots up a machine and notices that indicator X seems correlated with problem Y, strategic solutions can be brainstormed and put into place more rapidly to prevent the same issue from recurring. A system that learns to spot errors before they happen can predict related problems and verify in real time that equipment and software are functioning.
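The indicator-spotting described above can be sketched as a simple statistical anomaly detector. This is a minimal illustration, not ECOMP's actual method; the latency metric, sample values, and threshold are all hypothetical.

```python
from statistics import mean, stdev

def find_anomalies(readings, threshold=2.0):
    """Flag indices of readings more than `threshold` standard
    deviations from the mean of the window."""
    mu = mean(readings)
    sigma = stdev(readings)
    return [i for i, r in enumerate(readings)
            if sigma > 0 and abs(r - mu) / sigma > threshold]

# Hypothetical latency samples (ms) from one piece of network equipment.
latencies = [12.1, 11.9, 12.3, 12.0, 48.7, 12.2, 11.8, 12.4]
print(find_anomalies(latencies))  # [4] -- the 48.7 ms spike
```

In production, a learned model would replace the fixed threshold, tightening or relaxing it per device as more labeled incidents accumulate.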

Take AT&T's fleet management, for example. AT&T started noticing that when technicians were sent to people's houses to fix and install services, a portion of the fleet experienced van battery failures en route to customers' locations. In such cases, not only was the technician stuck, but the customer often got upset, which cost AT&T resources both to fix the van and to retain the customer.

To help solve the issue, AT&T installed sensors in the vans, which transmitted signals about battery life to a database using the system's cloud-based technology. Applying predictive analytics to this information allowed the system to assess and learn from the data. Today, AT&T can better predict these types of automotive failures, saving money and creating a better customer experience. Future IoT and AI applications may make it possible within the next decade for a van to drive itself in for signalled repairs.
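The predictive-maintenance pattern here can be sketched in a few lines: collect per-van telemetry, then flag vans whose readings are trending toward failure so maintenance happens before a roadside breakdown. The van IDs, voltage values, and threshold below are hypothetical, and a real system would use a trained model rather than this hand-set rule.

```python
def flag_at_risk_vans(fleet_voltages, min_voltage=11.8):
    """Flag vans whose latest battery reading has dropped below a
    healthy floor and is trending downward over the window.

    `fleet_voltages` maps a van id to its recent voltage readings (volts).
    """
    at_risk = []
    for van_id, readings in fleet_voltages.items():
        if (len(readings) >= 2
                and readings[-1] < min_voltage
                and readings[-1] < readings[0]):
            at_risk.append(van_id)
    return at_risk

fleet = {
    "van-101": [12.6, 12.5, 12.6, 12.5],   # stable, healthy battery
    "van-202": [12.4, 12.1, 11.9, 11.6],   # steady decline: service it
}
print(flag_at_risk_vans(fleet))  # ['van-202']
```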

AT&T has also applied its virtual network applications to directly improve consumer relations. Using large-scale machine learning systems that pull data from contacts, chat, and customer service voice operations, the system learned how to make predictions about customer sentiment. AT&T is able to provide this big-data-based intelligence to managers and supervisors, who can monitor patterns to identify anomalies and answer a range of important questions, such as: 'Were my customers happy or not?' 'If we put them on hold, did that make them unhappy?' 'Did my agent solve their problem the first time?' 'Why did the customer call in the first place, and are they likely to call again?'

The two examples above are just a sampling of the predictive capabilities that can be scaled across a large company. AT&T has set an ambitious goal of virtualizing 30% of its network by 2016 and 75% by 2020 (they surpassed their 2015 goal of 5% by 0.7%). A key component of ECOMP is DCAE (Data Collection, Analytics, and Events), which handles the collection, management, storage, and analysis of data that is then fed into the ECOMP system, control-loop automation systems, and network cloud services.
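The control-loop automation DCAE feeds can be pictured as a single collect → analyze → decide → act cycle. The function and field names below are illustrative, not ECOMP's real interfaces, and the CPU threshold and actions are assumptions.

```python
def control_loop_step(event, policy):
    """One pass of a collect -> analyze -> decide -> act loop.

    `event` is one collected metric reading; `policy` maps an analysis
    verdict to a corrective action the orchestrator would execute.
    """
    verdict = "anomalous" if event["cpu_util"] > 0.9 else "normal"  # analyze
    return policy.get(verdict, "no-op")                             # decide/act

policy = {"anomalous": "restart-vnf", "normal": "no-op"}
print(control_loop_step({"cpu_util": 0.95}, policy))  # restart-vnf
print(control_loop_step({"cpu_util": 0.40}, policy))  # no-op
```

Keeping the policy as data, separate from the analysis code, is what lets operators change the automated response without redeploying the analytics.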

AT&T is thinking seriously about putting a portion of ECOMP into open source. This would open the platform to developers, third-party providers, data scientists, and engineers to contribute to a 'playbook' of network applications. Even if AT&T team members don't know all the immediate purposes of a particular experiment or application, says Mazin, it still goes into the playbook, with the long-term view of better understanding these applications as others use the technology in similar ways.

The platform works with OpenStack, but can also be extended to other cloud and computing environments. If AT&T decides to open source the APIs in the future, it would allow developers of all backgrounds to find ways to leverage and combine developed technologies and help drive the use of machine learning and cloud-based technologies in business forward.

For companies looking to break into the field and leverage machine learning to solve proprietary problems, Gilbert emphasizes starting at the ground level. Businesses need to own their data and information in a deep way: have a solid understanding of where data is located and how it is categorized (documenting this in a menu or guidebook helps), and refine or create a system for how data has been, and will be, sorted in the future.
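The 'menu or guidebook' Gilbert describes can start as something as simple as a data catalog: one entry per dataset recording where it lives, how it is categorized, and how it is sorted. Every name, path, and field below is hypothetical.

```python
# A minimal, hypothetical data-catalog: location, category, and sort
# order for each dataset a company owns.
catalog = {
    "fleet_telemetry": {
        "location": "s3://example-bucket/fleet/",        # hypothetical path
        "category": "operations",
        "sort_key": "timestamp",
    },
    "call_transcripts": {
        "location": "db.example.internal/transcripts",   # hypothetical host
        "category": "customer-experience",
        "sort_key": "call_id",
    },
}

def datasets_in(category):
    """List the catalogued datasets tagged with a given category."""
    return sorted(name for name, meta in catalog.items()
                  if meta["category"] == category)

print(datasets_in("operations"))  # ['fleet_telemetry']
```

Even this much structure answers the questions a machine learning project asks first: what data exists, where it is, and how to join it.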

Machine learning is data hungry, and companies need to prepare to feed the engine by doing their research, including talking with other companies and consultants about which kinds of data are suited to solving which types of problems. By opening up its own virtual network systems and letting anyone (not just researchers with PhDs) share their work, AT&T is helping bridge the gap between machine learning ideas and experimentation and actionable machine learning production that businesses across industries can use.