Cloud-native environments and security policies have been on a collision course for quite some time, and the crisis remains largely unaddressed. The reality is that you don’t have to be a criminal mastermind to infiltrate many cloud-native deployments: the same standard oversights and organizational blunders repeat themselves. And although the most infamous cloud breaches are often infuriatingly avoidable, the challenge is systemic, extending far beyond any isolated incident. Either way, there’s no excuse for ignoring such a gaping attack surface.

Below is an account of the top cloud security mistakes DevOps teams still make, and what you can do to avoid them:

1. Enable runtime updates without going through the CI/CD pipeline

DevSecOps teams understand the limited power of bureaucracy-driven security governance. While everyone tends to agree that security would benefit from restricting runtime deployments, without exception, to workloads that passed through the CI/CD pipeline, enforcing such a requirement in practice is a challenge. A case in point is the use of open source libraries, which lets developers deploy code to runtime without ever touching the CI/CD pipeline. Developers will continue to ignore policies as long as security accountability is not an enforceable priority. As a result, workloads are consistently deployed without pipeline authorization, forcing security practitioners to run routine scans for “rogue” workloads. Worse yet, many DevOps teams have become complacent, accepting that there’s no effective way to eliminate unauthorized workloads altogether. Over time, this erodes the security posture, making it easier not just for unaccountable developers to ignore policies, but for malicious actors to leverage lax enforcement practices.
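The routine "rogue workload" scan mentioned above can be as simple as checking every running workload for a CI/CD provenance marker. The sketch below is a minimal, hypothetical version: the label name `ci-pipeline` and the workload dicts are assumptions, not a real orchestrator API; in practice you would feed in metadata from your scheduler (e.g., Kubernetes labels).

```python
# Hypothetical sketch: flag running workloads that lack a CI/CD provenance
# label. The label name "ci-pipeline" and the dict shape are assumptions;
# adapt them to whatever metadata your pipeline actually stamps on workloads.

def find_rogue_workloads(workloads, provenance_label="ci-pipeline"):
    """Return workloads deployed without a pipeline provenance label."""
    return [w for w in workloads if provenance_label not in w.get("labels", {})]

workloads = [
    {"name": "billing-api", "labels": {"ci-pipeline": "build-1421"}},
    {"name": "debug-shell", "labels": {}},  # deployed by hand, bypassing CI/CD
]

rogues = find_rogue_workloads(workloads)
print([w["name"] for w in rogues])  # -> ['debug-shell']
```

The point of the check is not the code but the convention: if the pipeline is the only thing that applies the label, anything unlabeled is, by definition, rogue.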

2. Leave a network “flat” with unrestricted public access

To save time tackling what often seems to be an insurmountable task, DevOps unwittingly configure entire networks to enable unrestricted access. Typically this means giving up on segmentation entirely by dumping all workloads into one unsegmented VPC (often granting access to third parties as well). This is obviously bad practice, as it increases time-to-isolate for both negligent and malicious activities. From there it’s a slippery slope to extreme oversights, such as enabling unrestricted public root access.
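A first defense against a flat, wide-open network is a trivial audit for ingress rules that admit traffic from anywhere. The sketch below is illustrative only: the rule dicts loosely mirror the shape of cloud security group entries, but the field names are assumptions, not a real provider API.

```python
# Minimal sketch of an open-ingress audit. The rule format here is an
# assumption; map it onto however your provider describes ingress rules.

def open_to_world(rules):
    """Return rules that admit traffic from any address (0.0.0.0/0 or ::/0)."""
    anywhere = {"0.0.0.0/0", "::/0"}
    return [r for r in rules if r.get("cidr") in anywhere]

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},    # public HTTPS: may be intentional
    {"port": 22, "cidr": "0.0.0.0/0"},     # public SSH: almost never intentional
    {"port": 5432, "cidr": "10.0.1.0/24"}, # segmented database access
]

for r in open_to_world(rules):
    print(f"port {r['port']} is open to the world")
```

Even a crude check like this separates the deliberately public (a web frontend on 443) from the accidentally public (SSH or a database listener), which is where flat networks tend to hurt most.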

3. Implement microsegmentation with faulty network rule configurations

Unfortunately, the most competent approaches to microsegmentation come with their own set of challenges. The more granular segmentation you introduce into a cloud deployment, the more likely you are to miss faulty rules. Even familiar habits can lead to gaping vulnerabilities. For example, enabling developers to connect to the production runtime environment using SSH via a specific IP could easily escalate to unrestricted public access to sensitive resources. Such faulty rule configurations often go unnoticed for weeks or even months at a time. A good first step is to run a rule auditing check using a utility such as Amazon Inspector’s Agentless Network Assessment tool.
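As a complement to a managed assessment tool, a quick self-audit can flag rules whose CIDR ranges are suspiciously broad. The sketch below is a hedged first pass, not a policy engine: the /16-equivalent cutoff and the rule format are illustrative assumptions.

```python
import ipaddress

# Hedged first-pass rule audit: flag ingress CIDRs broader than an assumed
# threshold. The 2**16-address cutoff and the rule dicts are illustrative;
# real policy should come from your own segmentation design.

def overly_broad(rules, max_hosts=2**16):
    """Return rules whose CIDR covers more than max_hosts addresses."""
    flagged = []
    for r in rules:
        net = ipaddress.ip_network(r["cidr"])
        if net.num_addresses > max_hosts:
            flagged.append(r)
    return flagged

rules = [
    {"desc": "dev SSH jump host", "cidr": "203.0.113.7/32"},
    {"desc": "mistyped office range", "cidr": "0.0.0.0/0"},  # typo'd wide open
]
print([r["desc"] for r in overly_broad(rules)])  # -> ['mistyped office range']
```

This is exactly the class of mistake described above: an SSH rule meant for one developer IP that, through a typo or a lazy edit, quietly widens into public access.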

4. Ignore managed service access

One of the most commonly cited causes of breaches is mismanaged service access, S3 buckets in particular. According to a study by HTTPCS, 58% of AWS S3 buckets are publicly accessible, and 20% aren’t write protected. If those figures aren’t alarming enough on their own, a quick review of some of the most critical data breaches reveals unrestricted access to S3 buckets as a recurring culprit. Enabling access to sensitive customer data is bad enough, but all too often the exposed data includes cloud credentials as well.
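Checking a bucket for anonymous access is mostly a matter of inspecting its ACL grants. The sketch below works on hand-written test data shaped like S3's GetBucketAcl response; in practice you would feed in the real ACL from your cloud SDK. The `AllUsers` group URI is the one AWS uses to denote anonymous access.

```python
# Sketch of a public-grant check against an S3-style ACL structure. The grant
# dicts below are hand-written test data, not a live API response.

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_permissions(grants):
    """Return the permissions granted to the anonymous AllUsers group."""
    return [g["Permission"] for g in grants
            if g["Grantee"].get("URI") == ALL_USERS]

grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "bucket-owner"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group", "URI": ALL_USERS},
     "Permission": "WRITE"},
]
print(public_permissions(grants))  # -> ['WRITE']: a world-writable bucket
```

A `READ` grant to AllUsers is the classic leaky bucket; a `WRITE` grant is worse, since it lets anyone tamper with or plant content.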

5. Let zombie workloads run unchecked

While zombie resources “only” consume idle capacity, they can also be an indication of (and an open invitation for) foul play. According to this report by Skybox Security, cryptojacking exceeded ransomware in popularity as the leading attack vector in 2018. If you let zombies run loose in your deployment, your chances of hosting a stray crypto-jacker increase significantly. Although zombie workloads are often perceived as little more than a nuisance, the idea that a perpetrator is consuming company resources to mine cryptocurrency is far from comforting.
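Hunting for zombies usually starts with utilization data. The heuristic below is an illustrative assumption, not a product feature: an instance whose CPU stays near zero for the whole sample window is a candidate for teardown, and a sudden CPU spike on a supposedly idle workload is itself a cryptojacking red flag.

```python
# Illustrative zombie-workload heuristic. The 2% idle threshold and the
# sample format are assumptions; in practice the samples would come from
# your monitoring system's CPU metrics.

def looks_like_zombie(cpu_samples, idle_threshold=2.0):
    """True if every CPU sample (percent) is under the idle threshold."""
    return bool(cpu_samples) and all(s < idle_threshold for s in cpu_samples)

fleet = {
    "web-1": [34.0, 41.2, 28.9],
    "batch-old": [0.3, 0.1, 0.4],  # forgotten job runner: zombie candidate
}
zombies = [name for name, cpu in fleet.items() if looks_like_zombie(cpu)]
print(zombies)  # -> ['batch-old']
```

Flagged candidates deserve inspection before deletion; the goal is to shrink the pool of unwatched compute where a crypto-jacker could hide.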

Unfortunately, many cloud-native deployments are prone to more than just one of the hazards listed above. Driven by agile business logic, cloud environments are decoupled from network infrastructure by design. Security groups and ACLs will continue to fall short as long as they remain tied to the very network infrastructure that business logic was meant to transcend. Merely managing network security more efficiently is not going to make a real dent in any cloud-native security posture. To address the root cause of the problem rather than the symptoms, we need to look at new technologies. One promising approach is to implement cloud workload identities across your entire application, bridging the rift between business logic and network infrastructure.

Learn more about how Portshift implements Cloud Workload Identity