Challenges in Securing Containers

6point6
7 min read · Mar 5, 2020

According to the National Institute of Standards and Technology (NIST) publication [1], containers are a form of operating system virtualisation combined with application software packaging. They provide a portable, reusable and automatable way to package and run applications. Application containers are isolated from each other with separate binary/library files and share the underlying operating system (OS), allowing for efficient restart, scale-up or scale-out of applications across an infrastructure.

Digital transformation has enabled organisations to innovate and pivot quickly to meet the changing demands of their customers. The role of security is to manage risks. As businesses adopt new tooling to deliver value at speed, security must do the same to ensure timely and appropriate responses to risks.

This paper looks specifically at security considerations that need to be addressed when building a container-based infrastructure.

Introduction

Many businesses understand that securing enterprise infrastructure is what allows cyber threats to be managed. Risks posed to an organisation are often rated in terms of their impact on confidentiality, integrity and availability (CIA). To protect the CIA of an asset, a Defence in Depth strategy should be adopted, in which multiple layers of security controls are built around the asset to increase the effort or time a threat actor needs to compromise it. The primary motivations for security include regulatory compliance and an understanding of the risk posed to the assets a business holds. Maturity can also be a motivator, allowing appropriate controls to be put in place as security understanding develops over time.

The DevOps (development and operations) methodology combines practices, cultural philosophies and tools to deliver applications and services at high velocity. DevOps starts with the end product in mind, giving developers more autonomy to innovate [2].

Securing a containerised infrastructure requires a holistic view of all layers of the infrastructure. Security controls will need to be considered at the initial phase of the project and automated wherever possible.

Leading container technology options include Linux LXC, Docker, CoreOS rkt and Ubuntu LXD, with Docker being the most prominent of the container technologies. Docker uses a client-server architectural model made up of the Docker client, the Docker host and a registry. The Docker daemon listens for API requests and manages Docker objects such as images, containers, networks and volumes. Docker can be plugged into a continuous delivery pipeline by integrating it with tools such as Jenkins.

Figure 1: Docker Container Architecture.
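
As a minimal illustration of this client-server model, the sketch below uses the Docker SDK for Python (pip install docker; an assumption, as the article does not name a client library) to send an API request to a local daemon and list the objects it manages.

```python
# A minimal sketch using the Docker SDK for Python (pip install docker).
# Assumes a local Docker daemon is listening on its default socket.
import docker

# The client sends API requests to the Docker daemon, which manages
# objects such as images, containers, networks and volumes.
client = docker.from_env()

for container in client.containers.list():
    print(container.short_id, container.image.tags, container.status)
```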

Security Advantages of Running Containers

Containers add a level of security to applications by separating them from one another at runtime on the host. Another advantage is that resources are consumed only when required, as deployments take place as demand dictates. Containers are largely ephemeral and can be patched simply by rebuilding the image and redeploying the cluster with the updated version.

Challenges to Container Security

A disadvantage of Docker is that anyone using the Docker daemon’s control socket/API is effectively running as root. In addition, perimeter-based defences such as Intrusion Detection/Prevention Systems (IDS/IPS) and Web Application Firewalls (WAF) cannot detect issues within or between containers, as they have no visibility of the Docker environment.
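
To see why socket access is root-equivalent, consider the hedged sketch below: anyone who can reach the daemon’s API can ask it to bind-mount the host filesystem into a container. The image name is an assumption for illustration only.

```python
# Illustration only: the daemon runs as root, so any user who can reach
# its socket can mount the host filesystem into a container and read it,
# bypassing normal file permissions. Assumes the 'alpine' image is available.
import docker

client = docker.from_env()
output = client.containers.run(
    "alpine",
    "ls /host/root",                                # host paths, readable as root
    volumes={"/": {"bind": "/host", "mode": "ro"}},
    remove=True,
)
print(output.decode())
```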

Applications have become more dependent on third-party code and agents. Modules or packages from third parties are usually imported as part of application builds to speed up the development lifecycle. If imported code contains vulnerabilities that are not identified and mitigated early in the development lifecycle, they can become a threat actor’s point of entry into the infrastructure.

Orchestration tools for containers, such as Kubernetes, can also be a source of vulnerability. Access to these tools needs to be managed accordingly, as over-broad access can lead to malicious or accidental misuse.

Authorisation

According to Docker’s documentation [3], Docker’s out-of-the-box authorisation is all or nothing: any user with daemon access can effectively run any client command. For finer-grained access control, an authorization plugin can be used to configure granular access policies governing access to the Docker daemon. After authentication, Docker requests pass through the authorization plugin framework, which acts as a mediator that can allow or deny API requests based on those policies.
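
A minimal sketch of such a plugin is shown below, written as a small Flask service (Flask is an assumption; any HTTP server will do). The endpoint names and response fields follow the plugin protocol described in [3]; the read-only policy itself is purely illustrative.

```python
# A minimal Docker authorization plugin sketch (pip install flask).
# The daemon POSTs each API request to /AuthZPlugin.AuthZReq before
# executing it, and to /AuthZPlugin.AuthZRes before returning the response.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/AuthZPlugin.AuthZReq", methods=["POST"])
def authz_request():
    req = request.get_json(force=True)
    # Illustrative policy: permit read-only (GET) API calls, deny the rest.
    if req.get("RequestMethod") == "GET":
        return jsonify({"Allow": True})
    return jsonify({"Allow": False, "Msg": "only read-only access is permitted"})

@app.route("/AuthZPlugin.AuthZRes", methods=["POST"])
def authz_response():
    # Called on the way back out; this sketch always approves the response.
    return jsonify({"Allow": True})
```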

Secrets management in containers can also pose an issue: outside Swarm mode, Docker has no built-in facility for managing secrets.

Addressing Container Security

Securing containers should begin at the design phase: ensure that every layer of the infrastructure hosting the containerised applications is secured from the inception of the project.

Secrets Management

Docker’s native secrets handling is limited (its built-in secrets store requires Swarm mode), so best practice suggests using a dedicated secrets management tool such as HashiCorp’s Vault. Cloud platforms also provide managed options, such as AWS Secrets Manager and Azure Key Vault.
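
As a hedged sketch of the Vault approach, the snippet below reads a database credential at runtime with hvac, Vault’s Python client. The Vault address, token and secret path are placeholders, not values from this article.

```python
# A minimal sketch using hvac (pip install hvac), HashiCorp Vault's
# Python client. Address, token and path are illustrative placeholders.
import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="...")

# Fetch the secret from the KV v2 engine at runtime, instead of baking
# credentials into the image or passing them as environment variables.
secret = client.secrets.kv.v2.read_secret_version(path="myapp/db")
password = secret["data"]["data"]["password"]
```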

Container Isolation

Container isolation is essentially achieved at runtime through OS kernel namespaces. However, vulnerabilities in the kernel can allow this isolation to be broken, and configuration mistakes can let processes in one container interact with other containers or with the host. The mitigation for multi-tenant applications is to isolate containers at the virtual machine layer or on separate physical infrastructure, and the design should not mix tiers of differing sensitivity on the same physical host.
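
Namespace isolation can be observed directly: every Linux process carries namespace identifiers under /proc/&lt;pid&gt;/ns/, and processes in the same container share them while host processes do not. The Linux-only snippet below prints the current process’s namespace IDs.

```python
# Linux only: inspect the namespaces the current process belongs to.
# Run it inside and outside a container to see the identifiers differ.
import os

for ns in ("pid", "net", "mnt", "uts"):
    # Prints e.g. 'pid:[4026531836]'; the inode number identifies the namespace.
    print(ns, os.readlink(f"/proc/self/ns/{ns}"))
```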

Container Security

Container security should start with the application to be deployed in the container. Standard hardening procedures should be followed from the design stage: imported libraries, packages and modules should be versions free from known vulnerabilities, and secure protocols should be used throughout the infrastructure. The Open Web Application Security Project (OWASP) maintains a number of open source vulnerability scanning tools that help gain visibility of vulnerabilities being onboarded into the environment.
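
One way to catch vulnerable dependencies early is a build-time audit step. The sketch below uses pip-audit (not an OWASP project; it is an assumed stand-in for whichever scanner fits the ecosystem, with OWASP Dependency-Check playing a similar role for other stacks) to fail the build when known-vulnerable packages are found.

```python
# A sketch of a build-time dependency audit, assuming pip-audit is
# installed. pip-audit exits non-zero when it finds packages with known
# vulnerabilities, which lets the pipeline fail fast.
import subprocess
import sys

result = subprocess.run(["pip-audit"], capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    sys.exit("Vulnerable dependencies found; failing the build.")
```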

Within the container and during the container runtime, capabilities can be dropped by using the --cap-drop runtime option to limit the set of capabilities available, or added back with --cap-add where extra privileges are required [4].
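
A hedged sketch of this via the Docker SDK for Python: drop every capability, then add back only what the workload needs. The image, command and capability choice are illustrative assumptions.

```python
# Drop all capabilities, then grant back a single one (illustrative).
import docker

client = docker.from_env()
container = client.containers.run(
    "alpine", "sleep 300",
    detach=True,
    cap_drop=["ALL"],              # equivalent to --cap-drop ALL
    cap_add=["NET_BIND_SERVICE"],  # equivalent to --cap-add NET_BIND_SERVICE
)
```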

Control groups (cgroups) are a key component of Linux containers, used to implement resource accounting and limiting. They provide useful metrics and help ensure that each container gets its fair share of memory, CPU and disk I/O and, importantly, that a single container cannot bring down the host by exhausting its resources. Limits on CPU and RAM prevent runaway processes from consuming the host’s resources [4].
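
The same SDK exposes these cgroup controls as run options; the limits below are illustrative and should be tuned to the workload.

```python
# A sketch of cgroup-backed resource limits (values are illustrative).
import docker

client = docker.from_env()
container = client.containers.run(
    "alpine", "sleep 300",
    detach=True,
    mem_limit="256m",        # cap memory so one container cannot starve the host
    nano_cpus=500_000_000,   # half a CPU, equivalent to --cpus 0.5
    pids_limit=100,          # bound process count as a fork-bomb guard
)
```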

Secure computing mode (seccomp), also documented in [4], is a Linux kernel feature that acts as a system-call firewall between user-level processes and the kernel, and it can be used to restrict the actions available within a container.
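
The sketch below applies a custom seccomp profile through the engine API (which, unlike the CLI’s file-path flag, accepts the profile JSON inline). Docker’s default profile is a much stricter allow-list; this illustrative profile merely blocks a handful of dangerous syscalls on top of an allow-by-default policy.

```python
# A sketch of a custom seccomp profile (illustrative, not a hardened
# profile; prefer extending Docker's default allow-list in practice).
import json
import docker

profile = {
    "defaultAction": "SCMP_ACT_ALLOW",  # allow by default...
    "syscalls": [{
        "names": ["mount", "umount2", "reboot", "swapon", "swapoff"],
        "action": "SCMP_ACT_ERRNO",     # ...but deny these outright
    }],
}

client = docker.from_env()
client.containers.run(
    "alpine", "sleep 1",
    security_opt=["seccomp=" + json.dumps(profile)],
    remove=True,
)
```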

Image Poisoning

Images from public registries might contain vulnerabilities or might have been deliberately poisoned, so they should be vetted before use in any estate: download images only from official repositories or trusted registries, verify them with scanning tools or manual review, and enforce the use of digitally signed images to establish an end-to-end chain of trust from image publishers.
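
Signed-image enforcement can be switched on at pull time with Docker Content Trust. In the sketch below, the registry and tag are placeholders; with DOCKER_CONTENT_TRUST=1 the pull fails unless the tag carries valid trust data.

```python
# A sketch of enforcing signed images on pull. The image name is a
# placeholder; DOCKER_CONTENT_TRUST=1 makes the CLI refuse unsigned tags.
import os
import subprocess

env = {**os.environ, "DOCKER_CONTENT_TRUST": "1"}
subprocess.run(
    ["docker", "pull", "registry.example.com/app:1.2.3"],
    env=env, check=True,  # raises if the pull (or signature check) fails
)
```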

Automated tools should be used as part of any continuous delivery pipeline. Open source or commercial scanning tools such as Anchore, CoreOS Clair, Nessus and Twistlock can be integrated into delivery pipelines to assess images for known vulnerabilities.
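
A typical integration is a gate that fails the pipeline on serious findings. In the sketch below, the scan-image command and its JSON output shape are hypothetical stand-ins for whichever scanner the pipeline uses; only the gating pattern is the point.

```python
# A pipeline-gate sketch. 'scan-image' and its output format are
# hypothetical; substitute the real scanner's CLI and JSON schema.
import json
import subprocess
import sys

result = subprocess.run(
    ["scan-image", "--format", "json", "registry.example.com/app:1.2.3"],
    capture_output=True, text=True, check=True,
)
findings = json.loads(result.stdout)

# Fail the build if any finding is high severity or worse.
blocking = [f for f in findings if f.get("severity") in ("HIGH", "CRITICAL")]
if blocking:
    sys.exit(f"{len(blocking)} blocking vulnerabilities found")
```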

Monitoring

Logging and monitoring capabilities should be considered for containers during runtime. Process calls and system-wide activities should be logged to a secure, centralised location, where tools such as Security Information and Event Management (SIEM) systems are used to analyse, detect and respond to potential threats identified at runtime. Tools that scan for and monitor Common Vulnerabilities and Exposures (CVEs) in the runtimes should report discovered vulnerabilities so that affected hosts and containers can be upgraded, and all logs from these monitoring tools should be aggregated into the same centralised location.
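
As one hedged example of runtime event collection, the sketch below streams Docker daemon events and forwards them to a syslog collector; the collector address is an illustrative placeholder for the centralised location a SIEM would ingest from.

```python
# Forward Docker daemon events (container starts/stops, image pulls, etc.)
# to a central syslog endpoint; the address is an illustrative placeholder.
import json
import logging
import logging.handlers

import docker

logger = logging.getLogger("docker-events")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=("siem.example.com", 514)))

client = docker.from_env()
for event in client.events(decode=True):  # blocks, yielding events as dicts
    logger.info(json.dumps(event))
```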

Securing the Container Orchestration Platform

According to [1], controlling administrative access to the orchestration console is among the most important controls, given the span of control the console has across an infrastructure. Access should be granted only to skilled engineers/administrators, scoped to the sections they need to perform their role.

Cluster-wide administrative access should be controlled appropriately with the right authentication methods. Single Sign-On (SSO) should be considered as an addition to existing directory services so that access can be audited at a central location, and Multi-Factor Authentication (MFA) should also be applied to user login processes.
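
In Kubernetes terms, this usually means backing SSO identities with least-privilege RBAC. The sketch below uses the official Python client to create a read-only role; the role name, namespace and rules are illustrative assumptions.

```python
# A least-privilege RBAC sketch using the official client
# (pip install kubernetes). Names and namespace are illustrative.
from kubernetes import client, config

config.load_kube_config()

role = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="team-a"),
    rules=[client.V1PolicyRule(
        api_groups=[""],                 # "" is the core API group
        resources=["pods", "pods/log"],
        verbs=["get", "list", "watch"],  # read-only: no create/update/delete
    )],
)
client.RbacAuthorizationV1Api().create_namespaced_role(namespace="team-a", body=role)
```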

Orchestrators should be configured to isolate deployments to specific sets of hosts by sensitivity level, and orchestrator networks should be separated into discrete virtual networks, again by sensitivity level. Aggregation and consolidation of the orchestrator’s logs into a secure location should also be considered, as shown in the sketch after this paragraph for host isolation.
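
One hedged way to express host isolation by sensitivity in Kubernetes is a nodeSelector that pins sensitive workloads to a dedicated, labelled node pool (taints and tolerations can enforce the same separation from the node side). All names and labels below are illustrative.

```python
# Pin a sensitive workload to nodes labelled sensitivity=high
# (illustrative names; assumes the target nodes carry that label).
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="payments-api", namespace="restricted"),
    spec=client.V1PodSpec(
        node_selector={"sensitivity": "high"},  # schedule only on labelled nodes
        containers=[client.V1Container(
            name="app", image="registry.example.com/payments:1.0",
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="restricted", body=pod)
```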

Summary

A variety of factors have to be considered when securing container-based infrastructure; this paper touches on a number of these. As organisations innovate at a fast pace, security needs to be embedded into architecture from the design phase.

Applying security to containers may be a compliance requirement or an essential business requirement for an organisation to remain competitive. Security teams need visibility of the whole estate and must ensure that discovered risks are appropriately managed. Services that fall out of compliance must be reported quickly to allow for appropriate remediation, which can also be automated through event-driven security.

If you’d like more information on anything covered in this piece, please contact us.

References:

[1] NIST Special Publication 800-190, Application Container Security Guide: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-190.pdf

[2] AWS, What is DevOps?: https://aws.amazon.com/devops/what-is-devops/

[3] Docker documentation, Access authorization plugin: https://docs.docker.com/engine/extend/plugins_authorization/

[4] Docker documentation, Docker security: https://docs.docker.com/engine/security/security/

Originally published at https://6point6.co.uk.
