CenturyLink's network outage in December would have been much worse if the carrier's patchwork quilt of acquisitions and regional networks were fully integrated. (See Why CenturyLink's Network Suffered a Christmas Hangover).

That bit of reassurance came from Jeff Storey, CenturyLink's president and CEO, who spoke at the 2019 Citi Global TMT West Conference earlier this week in Las Vegas.

While addressing the question of whether merger activity and network integration were partially at fault for the outage, Storey said no. He explained that "our approach to integration made it [the outage] smaller than it would have been otherwise."

Storey added: "A lot of companies come in and integrate everything together and make one platform out of all the -- we don't do that. We segment our network because our network is so large, so significant, that we want to have it segmented because things happen. You get fiber cuts, you get equipment failures. And so, our approach to integration actually facilitated it not being bigger than it was by making sure that we don't just haphazardly integrate everything together. We operate a segmented network."

CenturyLink's Broomfield campus. Image courtesy of CenturyLink.

CenturyLink told Light Reading earlier that the outage originated in Denver. That's close to both Level 3's former headquarters and the assets of the former TW Telecom, which Level 3 purchased in 2014. Qwest (formerly US West), which CenturyLink acquired in 2011, is the other massive Denver-area company in the carrier's fold.

Without going into technical detail, Storey did explain what happened to the audience of investors in Las Vegas. He reiterated that the outage was caused by a single piece of equipment from a US vendor. He also noted that CenturyLink and its vendor apparently couldn't diagnose or troubleshoot the issue remotely.

"The source of the outage was a particular equipment vendor and a malfunction with one of those cards -- I'm not going to get into all the details, but it created an inability for the system to continue to process capacity and it blocked our ability to control those nodes," Storey said.

"And so, we had to physically go out and shut things down, restart them on that transport layer ... It wasn't something associated with human error; it wasn't an architectural issue. It was an equipment failure that had a more dramatic impact than we would have wanted it to have," Storey said.

He didn't name the vendor but said it "is a US-based company that has been part of our network for a long time. Several of our companies had bought equipment from them, historically, and they had been a great partner for us over a pretty long period of time."

Storey didn't frame the scope of the outage by talking about the number of customers, cities or states affected, and he didn't reference the number of 911 calls dropped during the outage. Instead, he compared it to the overall capacity of the carrier's network, a measure that made the outage sound more palatable. "It was a relatively small percentage of our capacity and infrastructure that was down," Storey said. "It was a single platform from a single vendor ... one of our single legacy companies, rather than anything affecting the rest of the transport systems."

Though the term "transport" can be used to generically refer to any internal network problem at a telco, Storey's remarks to investors seem to point to the carrier's optical transport equipment. The December 27 outage, which affected both IP and TDM services, was "at a transport layer and, if you think about the way networks are designed, transport is the fundamental element and other products right on top of that," Storey said.

— Phil Harvey, US News Editor, Light Reading