Software containers are the latest in a long line of IT infrastructure technologies to be adopted at scale in enterprises. From the days of the mainframe, through virtualization and cloud, and now to containers, companies are always keen to get more from their IT.

Containerization is now moving from the early adopter phase into the early majority, according to surveys and studies. Gartner estimates that around 20% of companies are running containers in production at the moment, while our own estimate of adoption is around 18% today. However, the rate of adoption growth is high.



So what makes containers different, and why are they proving popular now? More importantly, how does container deployment differ from more traditional IT deployment, and how can you manage containers securely?



A new way to deploy software needs new security processes

Containers are deployed differently from virtual machines or cloud machine images. Whereas those machines require a full operating system to support any installed applications, containers are built to include only the components necessary to run an application component. This cuts the footprint of each image dramatically, so many more of them can be run.

If an application needs more compute power, new containers can be created quickly to provide that boost immediately. Developers appreciate this ability to scale up rapidly.

Containers are assembled as they are needed, based on standardized builds from a library of software components. These components have to be checked before they are added to the library, and rechecked regularly to make sure that no new vulnerabilities have been discovered in the meantime.
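That recurring recheck can be a simple automated pass over the library. The sketch below assumes a component library tracked as (name, version) pairs and an advisory feed held as a lookup table; the component names and the non-CVE advisory ID are illustrative, though CVE-2021-44228 is the real Log4Shell advisory affecting log4j 2.14.1.

```python
# Sketch: re-scan a component library against a vulnerability feed.
# The library contents and "EXAMPLE-ADV-1" advisory are hypothetical.

KNOWN_VULNERABILITIES = {
    ("log4j", "2.14.1"): "CVE-2021-44228",   # real advisory (Log4Shell)
    ("examplelib", "1.0"): "EXAMPLE-ADV-1",  # illustrative placeholder
}

def scan_library(components):
    """Return library components that now match a known advisory."""
    flagged = []
    for name, version in components:
        advisory = KNOWN_VULNERABILITIES.get((name, version))
        if advisory:
            flagged.append((name, version, advisory))
    return flagged

library = [("log4j", "2.14.1"), ("zlib", "1.2.13")]
print(scan_library(library))  # → [('log4j', '2.14.1', 'CVE-2021-44228')]
```

Run on a schedule, a check like this flags components that were clean when they entered the library but have since had vulnerabilities disclosed.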

Compared to traditional patching on virtual machines – which applies changes to each image to bring it up to date – containers are updated by changing the components in the software library. That one change is then propagated across all the containers built from those specific components. This can make updates much simpler than in traditional IT.
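The propagation model can be illustrated with a toy build step. Assuming containers are defined by specs that reference library components by name (the component and image names here are invented for illustration), patching the library once changes every image rebuilt from it:

```python
# Sketch: one component update in the library propagates to every
# container image built from it. All names and versions are illustrative.

library = {"openssl": "1.1.1k", "nginx": "1.20.1"}

container_specs = {
    "web-frontend": ["nginx", "openssl"],
    "api-gateway": ["openssl"],
}

def build(spec_name):
    """Resolve a container spec against the current library versions."""
    return {c: library[c] for c in container_specs[spec_name]}

library["openssl"] = "1.1.1w"  # patch once, in the library

# Every rebuilt image now carries the patched component.
images = {name: build(name) for name in container_specs}
```

This is the inverse of VM patching: instead of pushing a fix into many running images, you fix one library entry and rebuild.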

Ensuring that your containers are all up to date also means checking that no additional software assets are added after they are put together. For instance, a developer may want to add another software element and hack it in rather than updating the container image; this can be acceptable as a short-term fix or for a specific requirement, but it should not become standard policy or process. Detecting these kinds of changes should be part of the overall approach to security.
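One way to detect such changes is to diff what is actually present in a running container against its build manifest. This is a minimal sketch of that idea, with hypothetical package names; a real check would pull the observed list from the container runtime or an agent:

```python
# Sketch: detect software added to a container after it was built.
# The manifest and observed package sets are hypothetical examples.

def detect_drift(manifest, observed):
    """Return packages present in the container but absent from its build manifest."""
    return sorted(set(observed) - set(manifest))

manifest = {"nginx", "openssl", "zlib"}            # what the build produced
observed = {"nginx", "openssl", "zlib", "curl"}    # what is actually running
print(detect_drift(manifest, observed))  # → ['curl']
```

Anything the diff surfaces is either a sanctioned short-term fix to be folded back into the library, or a change that needs investigating.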

Alongside the containers themselves, you will also have to update your approach to managing workflows within the company. For developers who are running containers, deployments to hybrid or public cloud will be common.

However, managing security keys for these containers becomes a much bigger issue: keys will be needed for each container that is created, and they must be rotated regularly for existing containers as well. Given how quickly containers can be created in response to demand for services, this can scale up very quickly, so automating key management is essential. Strong access control is also necessary because of the size and scope of these installs – one account can have access to hundreds or thousands of images.
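The core of that automation is issuing a key per container and rotating any key past its age limit. A minimal sketch, assuming an in-memory store (a real deployment would hand this to a secrets manager rather than holding keys in process memory):

```python
# Sketch: per-container key issuance with scheduled rotation.
# Illustrative only; production systems should use a secrets manager.

import secrets
import time

class KeyStore:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.keys = {}  # container_id -> (key, issued_at)

    def issue(self, container_id):
        """Generate and record a fresh key for a container."""
        key = secrets.token_hex(32)
        self.keys[container_id] = (key, time.time())
        return key

    def rotate_expired(self):
        """Reissue any key older than the TTL; return the rotated container ids."""
        now = time.time()
        rotated = []
        for cid, (_, issued) in list(self.keys.items()):
            if now - issued > self.ttl:
                self.issue(cid)
                rotated.append(cid)
        return rotated

store = KeyStore(ttl_seconds=3600)
for cid in ("web-1", "web-2"):
    store.issue(cid)
```

Run `rotate_expired` on a timer and every container's key is refreshed without human involvement, which is the only way this stays manageable at hundreds or thousands of images.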

However, containers can actually provide a better layer of security if implemented correctly. They act as a forcing function to minimize the number of services running in the stack as standard. In itself, this can reduce the attack surface considerably.



Culture beats process if you let it

Following on from the technology management side, it’s also important to look at the culture that exists around security internally. Developers and IT operations teams should collaborate more closely around processes, while IT security departments are asked to ensure that compliance and security rules are followed.

Containers can potentially upset this by putting more control and automation into the hands of developers without IT security teams being involved.

To get over this, it is important to look at what developers are encouraged to do and how they are rewarded. For instance, we look at automation as a necessary element of all our activities and encourage our security, operations and development teams to automate processes wherever they can.

However, this is not just about personal productivity – the cultural step here is that we understand what a good process looks like first before we look to automate it.

Internally, we take part in code reviews that prioritize security responses and how incidents are managed over time. We have instituted more co-working and collaboration across teams, so that everyone works toward the same goals.

To reward those developers that supported our security goals, we also provided trips to Black Hat USA where they could learn from the best in the industry around security and software. These trips are really prized by the developers, and they bring back a lot of insight and best practice advice that can be shared across the team.



By encouraging more discussion and collaboration across our own teams, we can get developers to see the value that our security and compliance departments provide. From the security side, we can put guard rails up for our developers so they can move faster on deployments but not break things. By taking this approach, we have changed our culture to respect process and make the most of automation.