Given the humongous amounts of data being generated on a daily basis, there is little debate that cloud computing is crucial to running a competitive, modern digital business. The benefits are manifold – agility in IT and enterprise business operations, reduced capital outlays, process efficiencies, speed and overall productivity gains. It is not surprising, then, that Gartner has projected that the worldwide public cloud services market will grow 18 percent in 2017 to a total of $246.8 billion, up from $209.2 billion in 2016. But despite this near-explosive growth, cloud security remains a concern.

In today’s IT threat landscape, keeping pace with attackers and ensuring security is more important than ever. Even though there is nothing inherently insecure about the cloud, the fact remains that responsibility for the apps that run on the cloud lies with the user, not with a cloud vendor such as AWS. In fact, AWS features a shared responsibility model: AWS takes responsibility for the facilities, the physical security of hardware and the virtualization infrastructure, but not for the apps that run on it.

Therefore, it really boils down to changing the mindset when it comes to thinking about cloud security. Here are some common misconceptions for enterprises to deliberate on.

Security is Just the Cloud Vendor’s Responsibility

A number of recent global industry surveys on cloud security have indicated that enterprises consider cloud security to be the sole responsibility of the cloud service providers. That is an extremely flawed and dangerous assumption. Cloud security is always a collective responsibility, shared by the vendor and the user. This is true irrespective of whether you talk about a public or a private cloud. Of course, it has not been easy for the IT security industry to keep up with the rapid growth of cloud computing.

That means organizations need to go the extra mile, for example by configuring apps to be compatible with the cloud infrastructure they are using. For their part, vendors need to ensure rigorous security on virtual machines (VMs), where storage space is shared by multiple clients, and in their data centres; much of the complexity of this challenge can be addressed successfully that way. Regulatory compliance also helps.

For instance, AWS is compliant with PCI DSS 3.2 and many other standards. This means that users can confidently leverage certified AWS products and services to meet security and compliance objectives from an infrastructure perspective while focusing on application-level security. Because Amazon has validated its PCI DSS compliance against the latest set of criteria, users, including early adopters of the standard, can benefit from it.

From a regulatory perspective, governments should work to mandate encryption and perhaps, in time, enforce penalties for companies that suffer data breaches. Security vendors have some catching up to do too, in developing new cloud-first products. Currently, a majority of the security tools on the market do not work with the cloud but are meant for traditional networking environments.

It’s important to establish who is responsible for which aspects of security, so that measures can be put in place to ensure the system and data remain safe. To take an example, the default configuration of the Amazon ELB service, a shared infrastructure service, is susceptible to some known SSL vulnerabilities, and web application firewalls need to be configured for application-level security. In this case, the responsibility to enforce a particular implementation doesn’t lie with Amazon but with the provisioner, who must ensure that adequate configuration and testing take place. Simply pinning the blame on the vendor is not an option. Ultimately, security should be a constant consideration for everyone involved.
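The provisioner-side check described above can be sketched in a few lines. This is an illustrative audit, not an AWS API call: it flags listeners whose SSL/TLS policy is not on an approved list. The policy names mimic AWS's predefined ELB security policy names but are treated here as plain strings, and the listener data is hypothetical.

```python
# Approved SSL/TLS policies (illustrative names, modeled on AWS's
# predefined ELB security policies).
APPROVED_POLICIES = {"ELBSecurityPolicy-TLS-1-2-2017-01"}

def insecure_listeners(listeners):
    """Return the names of listeners using an unapproved SSL policy."""
    return [l["name"] for l in listeners
            if l.get("ssl_policy") not in APPROVED_POLICIES]

listeners = [
    {"name": "web-https", "ssl_policy": "ELBSecurityPolicy-TLS-1-2-2017-01"},
    {"name": "legacy-https", "ssl_policy": "ELBSecurityPolicy-2011-08"},
]
print(insecure_listeners(listeners))  # -> ['legacy-https']
```

A scheduled job running a check like this against the live configuration is one way the user, rather than the vendor, can catch an insecure default before it is exploited.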

Why Keep the IT Department in the Loop?

Often, the biggest risk associated with the cloud is the result of human factors. For example, business functions sometimes sign up for cloud services on their own, without even involving the IT department or the CISO’s team.

There is a reason why IT departments are cautious about moving mission-critical applications to the cloud. There are some legitimate fears around security, downtime and control. Bypassing the IT department while making cloud investment decisions presents a genuine risk for enterprises. This is because such instances make it unlikely that security audits have been conducted, or safeguards have been put in place.

A sensible security policy is essential, to ensure that IT is involved in the decision-making process when it comes to any type of IT adoption. This information should be used to develop test cases, which can thoroughly test the status of cloud security. If there are any loopholes in security, they need to be addressed immediately via patches. At the same time, updates also need to be implemented on a regular basis.

Of course, an overly draconian cloud security policy too defeats the purpose since it risks being circumvented. Instead, the aim should be to build a solid security policy that empowers departments to achieve what they need without sacrificing security.

Regulated Bring Your Own Device (BYOD) Policies Will Not Help

BYOD can put a whole different spin on security. In fact, 44% of security professionals listed BYOD as their biggest security concern in one recent survey, more than any other aspect of security. On one hand, lost or stolen devices could give unauthorized persons access to cloud services, as well as to sensitive data stored locally or in caches. BYOD also makes it more challenging to diagnose data breaches, as filtering and monitoring systems may not be in place on employees’ own devices.

In addition, family members and friends of staff may have access to a device used at work, so measures need to be put in place to restrict access to sensitive data.

Of course, BYOD brings with it some huge advantages. It gives staff the freedom to use devices that they are comfortable with, often with more convenience and better features than the devices provided by their employers. Implementing an acceptable use policy, as well as controlling access to sensitive data with a password or PIN and Multi-Factor Authentication (MFA), can help.
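To make the MFA point concrete, here is a minimal sketch of HOTP (RFC 4226), the one-time-password algorithm underlying most authenticator apps used for MFA. The secret shown is the RFC's own test key, not a real credential.

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

TOTP, the time-based variant most MFA apps use, is the same computation with the counter derived from the current time, which is why a stolen password alone is not enough to reach the cloud service.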

Shared Resources on Public Clouds are Always Risky

Access to data on Virtual Machines (VMs) is a big concern in public clouds. By definition, public clouds share resources between different customers and use virtualization heavily, and this does create additional security vulnerabilities, both from access levels as well as from exploits in the virtualization software.

In theory, VMs hosted on the same physical server could suffer undetected network attacks between each other in the absence of suitable network detection. Hijacking VM hypervisors and exploiting data left in local storage or memory are also known attack vectors. Therefore, it is important to investigate the controls that providers have in place to secure the cloud environment. Vendors accredited to the highest industry standards, such as AWS and Azure, do make this information available. In general, most security issues are enterprise-specific, standard server-security or administration problems. Therefore, the best approach to mitigating security risks is tracking critical security patches at the VM level and using the latest version of machine images. For critical applications, external VAPT (vulnerability assessment and penetration testing) and application-level security protection are important.
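Tracking machine-image currency, as suggested above, can be as simple as comparing each VM against the latest approved image. The sketch below uses hypothetical image and instance identifiers; in practice the inventory would come from the provider's API.

```python
# Latest approved machine image (hypothetical identifier for illustration).
LATEST_AMI = "ami-2017-06"

def outdated_instances(fleet, latest_ami=LATEST_AMI):
    """Return IDs of VMs not running the latest approved machine image."""
    return [vm["id"] for vm in fleet if vm["ami"] != latest_ami]

fleet = [
    {"id": "i-web-1", "ami": "ami-2017-06"},
    {"id": "i-web-2", "ami": "ami-2016-11"},  # stale image, needs rebuild
]
print(outdated_instances(fleet))  # -> ['i-web-2']
```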

Data Security Breach Identification Solutions Are Scarce in the Market

The best way to ensure that data integrity is not compromised, whether through deliberate or accidental modification, is to use resource permissions that limit the scope of users who can modify the data, and to leverage data auditing controls that track who accessed what, from where and when, for compliance purposes.
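As an illustration of such resource permissions, here is a hedged sketch of an S3-style bucket policy that denies object writes and deletes to everyone except a single role. The account ID, role name and bucket name are placeholders, not real resources.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LimitWritesToDataTeam",
      "Effect": "Deny",
      "NotPrincipal": {"AWS": "arn:aws:iam::123456789012:role/data-team"},
      "Action": ["s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

Paired with access logging on the same bucket, a policy along these lines gives both halves of the recommendation: a narrow set of writers, and an audit trail of who touched the data.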

Even with this in place, the threat of accidental deletion by a privileged user remains. The threat could also be an attack in the form of a Trojan using the privileged user’s credentials. Measures such as performing data integrity checks, using Message Integrity Codes (parity, CRC), Message Authentication Codes or Hashed Message Authentication Codes (HMACs, typically built on MD5 or SHA hashes), to detect data integrity compromise are helpful. Beyond that, there are several solutions available solely for these purposes that provide Host-Level Intrusion Detection (HIDS) and File Integrity Monitoring Solutions (FIMS).
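The HMAC check mentioned above is straightforward with standard-library tools. A minimal sketch, assuming a shared secret (the key shown is a placeholder; in practice it would live in a secrets manager):

```python
import hmac
import hashlib

SECRET = b"shared-secret-key"  # placeholder; store securely in practice

def tag(data: bytes) -> str:
    """Compute an HMAC-SHA256 tag to store alongside the data."""
    return hmac.new(SECRET, data, hashlib.sha256).hexdigest()

def verify(data: bytes, stored_tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(tag(data), stored_tag)

record = b"customer,balance,2017-06-01"
t = tag(record)
print(verify(record, t))         # -> True: unmodified data verifies
print(verify(record + b"!", t))  # -> False: any modification is detected
```

Because the attacker would also need the secret key to forge a valid tag, an HMAC detects tampering that a bare checksum like CRC cannot.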

Ignoring DevOps, or Being Slow to Adopt It

In many ways, DevOps is bringing in a new wave when it comes to cloud security. In the DevOps automation cycle, for example, every code commit triggers a build that tests the security and functionality of the application bundles using tools like Amazon Inspector and Selenium. In fact, while Selenium was earlier used for test automation only, it has since emerged as one of the top DevSecOps tools, as it can easily trigger security scanning tests along with other application test scripts. The cycle also ensures that systems are always patched, scanned for vulnerabilities and checked for correct functioning before deployment.
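This is not the Inspector or Selenium API itself, but an illustrative sketch of the commit-triggered gate such tools plug into: every registered check must pass before a build is allowed to deploy. The check names and build fields are hypothetical.

```python
def gate(build, checks):
    """Run all checks against a build; return a status and any failures."""
    failures = [name for name, check in checks if not check(build)]
    return ("deploy" if not failures else "block", failures)

# Hypothetical checks standing in for functional tests, a vulnerability
# scan, and a patch-level verification in a real pipeline.
checks = [
    ("functional-tests", lambda b: b["tests_passed"]),
    ("vulnerability-scan", lambda b: not b["known_vulns"]),
    ("patch-level", lambda b: b["patched"]),
]

build = {"tests_passed": True, "known_vulns": ["example-cve"], "patched": True}
print(gate(build, checks))  # -> ('block', ['vulnerability-scan'])
```

The design point is that security checks sit in the same loop as functional tests, so a vulnerable build is blocked automatically rather than caught in a later manual review.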

DevOps is giving enterprises a way to make application quality and security testing more scripted, continuous and automated. DevSecOps enables an automated approach to security tests throughout development, even on the cloud. It even integrates security-feature design and implementation into the development lifecycle in ways that weren’t possible before.

In many ways, DevOps is helping application security reach a level that many security professionals have been advocating for years. The only way to get there is through automation of security and regulatory compliance tests throughout development and deployment. By leveraging automation tools to enforce security and compliance controls, DevSecOps empowers organisations to achieve regulatory compliance at speed and at scale. It also makes detecting and closing security vulnerabilities on the cloud faster than before.

Partial Knowledge of Risks Involved

Knowing your cloud compliance posture and understanding your security vulnerabilities completely, in real time, is the first and foremost step. Once you are aware, taking steps to ensure business continuity is relatively easy. A comprehensive security process includes enabling auditing controls, access logging, network security, IAM controls, data governance, and passive/active protection for VMs and applications.

It is possible to quickly assess and mitigate vulnerabilities in real time and adopt comprehensive security management for your cloud, for example with cloud management platforms like Botmetric Security and Governance. With such tools, it is possible to improve your AWS cloud security and quickly identify critical vulnerabilities at the cloud level from various perspectives – data security, disaster recovery, user access, network security, and so on.

Penetration (or VAPT) testing is another process traditionally used to understand risks and to test whether there is scope for hackers to gain access to the application environment. It is equally useful for cloud systems. And with the cloud come additional vectors for attack.

Integrating Security Across Your Processes is Secondary

In the initial stages of adoption, companies experimented with storing mostly non-strategic data in the cloud. But now that they have made the transition to moving business-critical apps and data into the cloud, processes to ensure compliance with legal and regulatory norms haven’t quite caught up yet.

Also, many organizations fail to integrate security seamlessly into their continuous methods like DevOps, and for some, security slows down the development process. In order to realize the full potential of the cloud, built-for-cloud security products must adhere to the DevOps process.

Amazon VPC is Not Safe

The Amazon Virtual Private Cloud (VPC) provides some great features that you can use to increase and monitor the security for your enterprise data and applications. For example, its security groups act as a firewall for associated Amazon EC2 instances, and they control both inbound and outbound traffic at the instance level. Its network access control lists (ACLs) act as a firewall for associated subnets, and control both inbound and outbound traffic at the subnet level. It also has flow logs that capture information about the IP traffic going to and from network interfaces in the organization’s VPC.

These tools make it possible to monitor the accepted and rejected IP traffic going to and from your instances by creating a flow log for a VPC, subnet, or individual network interface. Additionally, the organization can use AWS Identity and Access Management to control who in the organization has permission to create and manage security groups, network ACLs and flow logs.
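The security-group rules described above have a regular shape, which makes them easy to audit. The sketch below mirrors the field names of the EC2 API's ingress-rule structure (`IpProtocol`, `FromPort`, `ToPort`, `IpRanges`), with rule data invented for illustration, and flags any rule that opens SSH to the whole internet.

```python
def allows_open_ssh(rule):
    """True if the rule opens TCP port 22 to 0.0.0.0/0."""
    return (rule["IpProtocol"] == "tcp"
            and rule["FromPort"] <= 22 <= rule["ToPort"]
            and any(r["CidrIp"] == "0.0.0.0/0" for r in rule["IpRanges"]))

# A sensible public-web rule versus an overly permissive one.
https_only = {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
              "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}
open_ssh = {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}

print(allows_open_ssh(https_only), allows_open_ssh(open_ssh))  # -> False True
```

Run periodically against the rules pulled from each security group, a check like this turns the VPC features above from passive capabilities into an active control.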

To sum up, the risk isn’t in transitioning to the cloud; rather, it is a result of poor policies that are not conducive to securing your business, whether it is in the cloud or on-premises. What is your take?