There’s been a longstanding belief that IT and security teams are at odds with each other, because their measures of performance are, on the surface, almost contradictory. IT must find ways to provide the applications the business needs. But business conditions change rapidly, and the applications an organization needs can shift on a dime. IT organizations must respond quickly and nimbly to new business drivers because no CIO wants to be the boardroom bottleneck for business change. Thus, IT tends to favor technologies that accelerate change, such as the rapid migration of virtualized business workloads to the cloud.

Security, on the other hand, operates on a different set of benchmarks and priorities. Security’s foremost concern is the protection of data by eliminating avenues of risk. As such, the general inclination of security tends to be conservative, valuing consistency over change. Introducing new applications and emerging technologies opens new vectors for risk and data loss, which is precisely what security is tasked to minimize.

Despite having a healthy appreciation for each other’s work, both sides feel conflicted. IT does not want to forsake security, and security does not want to slow down IT. Yet, it’s not uncommon to see IT and security teams working in completely different parts of the organization due to their conflicting missions.

I found a recent Dark Reading article, “How Security and IT Teams Can Get Along,” interesting because it shows there is precedent for change. It discusses several areas where change is occurring, including the emergence of new roles. For example, DevOps groups bridge the gap that traditionally separated application development (building new applications) and operations (keeping existing applications running at all times). Given that a similar divide exists between IT and security, perhaps the first step will come through shifts in the expectations of what each group should do.

The article goes into depth about how to bring the teams together, and one area it covers is the problem of measuring goals when the metrics are not meaningful. I agree: there is a major risk of losing sight of goals when your metrics track the symptom rather than the problem. For instance, incident response teams that investigate alerts often face a Sisyphean workload. There is no shortage of red alerts generated throughout the organization, and the quantity of alerts is seldom a good measure of a problem’s severity. A more patient attacker will not draw attention, so how do you find the events on which to focus? And how do you correlate that activity across systems that are traditionally unrelated to one another?
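To make the correlation idea concrete, here is a minimal, hypothetical sketch (all system names and data are invented for illustration): instead of ranking by raw alert volume, it groups alerts by a shared indicator and ranks by how many distinct systems reported it, so quiet activity that spans otherwise unrelated systems rises to the top.

```python
from collections import defaultdict

# Hypothetical alerts from unrelated systems: (system, severity, indicator)
alerts = [
    ("firewall", "low", "10.0.0.5"),
    ("endpoint", "low", "10.0.0.5"),
    ("email-gw", "low", "10.0.0.5"),
    ("firewall", "high", "10.0.0.9"),
    ("firewall", "high", "10.0.0.9"),
]

def rank_by_breadth(alerts):
    """Group alerts by shared indicator and rank by how many distinct
    systems saw it -- breadth across systems, not raw alert count."""
    systems_per_indicator = defaultdict(set)
    for system, _severity, indicator in alerts:
        systems_per_indicator[indicator].add(system)
    return sorted(systems_per_indicator.items(),
                  key=lambda kv: len(kv[1]), reverse=True)

for indicator, seen_on in rank_by_breadth(alerts):
    print(indicator, "seen on", len(seen_on), "systems")
```

In this toy data, 10.0.0.5 generated only low-severity alerts, yet it surfaces first because three unrelated systems saw it, while the noisier 10.0.0.9 touched only one.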

One area that I find particularly promising is the decoupling of security controls from the application. Put another way, one reason I see IT and security competing at times is that there’s no shortage of evidence of what can happen when you deploy an application first and bolt security on afterwards, typically with a one-off point product. The result is seldom as secure or as easily managed as if security had been designed to deploy together with the application in the first place. Policy becomes fragmented, with a different control point for every point product deployed. And it almost certainly creates the issue described above, where every point product generates red alerts with no correlation to indicate what to prioritize.

That’s why I believe the Palo Alto Networks Next-Generation Security Platform provides security controls that sit at the intersection of the interests of IT, security and DevOps. It does this by positioning critical security functions at the common denominator of all applications: the network. By seeing all traffic, and extending that visibility across all users, applications and devices, an organization can establish underlying security that applies to every application IT wants to deploy. The critical security controls for stopping an attack are in place ahead of the application, rather than trailing it.

It’s important to note that “network,” in this sense, does not solely mean the traditional perimeter because the platform extends to the mobile user (through GlobalProtect), the public cloud (through VM-Series on AWS and Azure) and the virtualized data center/private cloud. These baseline principles set the foundation for additional controls that the organization deploys along with the application.

Operationally, the use of the platform helps organizations get contextual views of network activity that bears investigation (through AutoFocus) as well as a deeper level of control through the enforcement of policy on the next-generation firewall.

These principles deliver on the promise of prevention first, breaking the lifecycle of an attack across all its stages, because protection is inherently baked into the platform rather than bolted onto the application. It was designed to do this from the ground up.

I think that, in the years ahead, there will be even greater discussion on how IT and security teams align in new ways, and every organization should be preparing for this conversation. Fortunately, the principles of the Next-Generation Security Platform can help pave the way.