
Securing Containerized Applications in the Cloud

This article discusses the importance of container security and outlines common vulnerabilities, such as misconfigurations and insecure container images.

Christina Harker, PhD

So what will we be covering in this article? We'll explore strategies for securing containerized applications, including network segmentation, access controls, and runtime protection. We'll also address the challenges associated with implementing these strategies.

An Introduction to Securing Containerized Applications

Container technology has been a revolution in modern application development, facilitating the adoption of patterns like service-oriented architecture (SOA) and microservices. Containers are a natural fit for organizations looking to migrate their application workloads to the cloud, but this convenience comes with a new set of security challenges unlike those of traditional server or virtual machine workloads.

With the adoption of containers, the question arises: are organizations prepared and able to meet this challenge? In this technical blog article, we will delve into the unique security challenges that container technology poses and explore ways to mitigate these risks.

Six Common Mistakes and Vulnerabilities in Container Security

Containerization is a different paradigm for packaging and running software, so it's important to be aware of the common mistakes that can lead to security vulnerabilities. The type of mistakes that can leave containerized applications vulnerable share some overlap with those of traditional workloads, but containers present unique security challenges that engineers may not even be aware of. Although there are different container engines and runtimes available, this article will focus on Docker due to its broad adoption and usage across a variety of platforms.

1. Insecure/Compromised Base Images

Docker images are lightweight, self-contained packages that include all the essential components needed to run an application - code, runtime environment, libraries and system tools. These images are built based on a set of instructions in a specialized configuration file known as a Dockerfile:

Example Dockerfile contents:

FROM ubuntu:20.04 # base image, in this case the Linux OS Ubuntu 20.04

ADD ./some-files / # developer adds some files to their Docker image

...

CMD ["/foo"] # This is the process/binary the container will run when started

When building containerized applications, developers will often utilize a parent image downloaded from a public image repository such as Docker Hub. Unfortunately, these images can be insecure or outright compromised by a third party.
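One common mitigation is to pin the base image to an immutable content digest rather than a mutable tag, so the exact image content is reproducible. A minimal sketch (the digest below is a placeholder, not a real hash):

FROM ubuntu:20.04@sha256:<digest> # pin exact image content; find digests via `docker images --digests`

Combined with image scanning (covered later in this article), digest pinning makes it much harder for a tampered image to slip in unnoticed.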

2. Containers Running as Root

Containers still need a host system to actually run on. Although all major cloud providers offer managed container hosting, many organizations run containerized applications on vanilla compute infrastructure, often Linux-based virtual machines. In a Linux system, every process has a user associated with it as its owner. Core system processes are often owned by the root user, as they need full access to critical parts of the OS.

However, Docker containers run as the root user by default, giving them the same privileged access to system functions. If a developer mistakenly built their application on a compromised image, and the resulting container ran as root, that container would be a vector for attackers to take control of the entire system.
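As a quick illustration, running a stock Ubuntu image with no user configuration shows that the container's process runs as root by default:

$ docker run --rm ubuntu:20.04 whoami
root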

3. Containers Aren't Isolated Properly

In cloud architecture, the principle of zero trust is an important concept. In short, any other node, process, or user should be treated as an untrustworthy entity until authentication and authorization prove otherwise.

Zero trust provides an extra layer of defense in the event of a compromise, and containers are no exception. By default, Docker containers are allowed to communicate with each other without restriction. If one container is compromised, compromising the other containers on the host becomes much easier.

4. Containers Can Write To Host

Containers can operate as wholly self-contained application execution environments, with their own file systems isolated from the host's. However, in many use cases they are given access to part of the host server's file system through a bind mount.

This may be to access a shared configuration file or media resources like images that are inefficient to store within a container image. In these instances, the container may be able to both read and write to the shared resources. If a container is compromised, writing to the host file system could provide another pathway to container breakout and a compromise of the whole system.
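For illustration, a typical writable bind mount looks like the following (the paths and image name are hypothetical); anything the container writes to /data lands directly on the host:

$ docker run -v /srv/shared:/data my-app # container can both read and write /srv/shared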

5. Host VM/Server Isn't Updated

Since containers still depend on a host server or VM to function, the attack surface of containerized infrastructure will naturally have to include these host systems. One of the most common vulnerabilities with server infrastructure is outdated or insecure OS packages.

Attackers will often utilize exploits discovered in packages or dependencies with a large install base, searching for servers or nodes that have not been updated to a version in which the vulnerability has been fixed. The Heartbleed vulnerability, which affected the OpenSSL library, is one of the most well-known examples.

6. Docker Socket is Exposed

Docker uses a UNIX socket that acts as a sort of API for the Docker daemon. This socket is owned by the root user in most Linux systems, and has access to the same sensitive system components as any other process running as root. If the socket is exposed over a network protocol like TCP, then remote elevated access attacks are possible. If containers are allowed to write back to the socket, they could also act as a potential vector of compromise.
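The classic anti-patterns, shown here purely for illustration, are mounting the socket into a container or binding the daemon to a TCP port:

$ docker run -v /var/run/docker.sock:/var/run/docker.sock some-image # container can control the daemon
$ dockerd -H tcp://0.0.0.0:2375 # unencrypted remote access to the daemon

Either configuration effectively hands out root-level control of the host.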

How to Secure Containers and Containerized Applications in Six Steps

Despite the numerous potential vulnerabilities that can occur with containers, it is possible to create a secure environment in which to run containerized applications. However, it requires careful attention to detail, a sustained effort to apply automation liberally, and a willingness to adopt tools and methods significantly more advanced than legacy security tooling.

1. Audit Supply Chain/Base-Image Sources

Catching potential vulnerabilities before they are introduced into a development environment is one of the most efficient and least painful ways to address security issues. It is essential to audit the sources of container base images to ensure they come from trusted sources and are free of known vulnerabilities or malware. Tools like Trivy can be used to scan container images for potential vulnerabilities. They can also be integrated with CI/CD automation so that any new base image, or any change to one, is automatically scanned.
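As a minimal sketch, a Trivy scan can be wired into a pipeline so the build fails when serious issues are found (the image name and severity thresholds are illustrative):

$ trivy image --severity HIGH,CRITICAL --exit-code 1 my-registry/my-app:latest

The non-zero exit code causes most CI systems to mark the job as failed, blocking the vulnerable image from progressing.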

2. Ensure Containers Never Run as Root

Containers with access to root privileges represent a significant and dangerous attack vector; however, it's not feasible to manually check and disable this setting across every container host. The best way to address this issue is through automation: container infrastructure should be configured and deployed using infrastructure-as-code. These configurations can be automatically checked for correctness prior to deployment; if the containers or the Docker process are not running as a separate user with limited privileges, the deployment is not allowed to proceed.
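In practice, this is often enforced in the Dockerfile itself. The sketch below (the user name is illustrative) creates an unprivileged user and switches to it before the application starts:

FROM ubuntu:20.04
RUN useradd --create-home appuser # create an unprivileged user
USER appuser # the container process now runs as appuser, not root
CMD ["/foo"]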

3. Ensure Containers Can't Communicate With Each Other Directly

This setting is available as a configuration option for the dockerd daemon. Setting --icc=false will disable inter-container communication. Enforcing this in a distributed environment again depends on CI/CD-based automation to ensure all host systems are configured consistently.
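For example, the flag can be passed on the command line or set persistently in the daemon's configuration file:

$ dockerd --icc=false

# or, equivalently, in /etc/docker/daemon.json:
{
  "icc": false
}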

4. Use Read-Only Mounts

If an application needs to persist data between container invocations, consider using a Docker volume. Volumes are managed storage spaces that containers can access but that remain isolated from the host filesystem.

If a bind mount (access to the host filesystem) is needed, it should almost always be read-only. Most cloud platforms allow users to configure compute nodes to perform certain behaviors or automatically start processes on boot via "user-data" scripts. The commands that start application containers can be placed here, and should be checked to ensure that they enforce read-only directives.
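A brief sketch of both approaches (the names and paths are illustrative): a named volume for persistent writes, and a read-only bind mount for shared host files:

$ docker volume create app-data # managed volume, isolated from the host filesystem
$ docker run -v app-data:/data -v /srv/config:/app/config:ro my-app # the :ro flag makes the bind mount read-only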

5. Keep The Container Host Updated

Avoiding OS and package vulnerabilities means keeping container hosts up to date with the latest software versions. Although that sounds like a simple task, the reality is a bit messier. Keeping a few servers hosting a basic application updated is fairly trivial, but what about 1,000 servers running multiple microservices? 10,000? In some cases, new packages may not be backwards compatible, and installing them could break the core application. Just maintaining awareness of package installations across 10,000 servers is a non-trivial exercise.

Configuration management tools can be employed to help keep compute nodes up to date automatically; however, in the cloud, immutable infrastructure is a better pattern. Rather than being updated in place, hosts are destroyed and redeployed anew whenever new packages or configurations are available, and integration testing can be used to ensure that new packages don't break application compatibility.
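For hosts that are updated in place, a minimal cloud-init user-data snippet (shown here as a sketch) can at least ensure packages are upgraded when a node boots:

#cloud-config
package_update: true # refresh the package index on first boot
package_upgrade: true # upgrade all installed packages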

6. Never Expose The Docker Socket

Container and daemon commands should be checked to ensure they are not exposing the socket over TCP or to individual containers. Utilize CI/CD automation to lint and test configuration files, and flag any configurations that violate this rule.
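A simple CI check might scan deployment scripts and Compose files for the risky patterns (the paths and patterns are illustrative):

# fail the pipeline if any config mounts the socket or binds the daemon to TCP
if grep -rn -e 'docker.sock' -e 'tcp://' deploy/; then
  echo "ERROR: possible Docker socket exposure found" >&2
  exit 1
fi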

Challenges of Securing Containers in the Cloud

Taking application workloads to the cloud adds even more wrinkles to the challenge of securing containers, multiplying the complexity of nearly every aspect of application infrastructure.

  • Complex network topology: A misconfiguration of any of the vulnerability surfaces discussed in this article could lead to much broader exposure than in a more traditional, on-premises computing environment. In the cloud there is no strongly defined network border, and if configurations aren't audited regularly, organizations may find that their insecure application infrastructure also happens to be publicly accessible.

  • Access control is difficult: In most cloud environments, there are a large number of users and systems that require access to a variety of resources. Sometimes these entities require varying access levels to the same resources. Customers may also be utilizing multiple accounts and multiple cloud services, requiring a complex web of interconnected and interdependent identity systems. Containers present an additional technical challenge as they still need to utilize these same IAM systems to function properly inside a cloud platform.

  • Monitoring and observability take time to get right: Having visibility into production systems is absolutely critical for security. In on-premises environments, the network boundary at least mitigated some of the risk; in the cloud, organizations must be aware of what's happening across all systems at all times. Deploying effective monitoring is not a quick process; it takes time to configure, deploy, and optimize these platforms.

  • Managed services offload some of the burden, for a price: Most cloud platforms offer managed container hosting, meaning the customer does not have to worry about maintaining, patching, and securing the underlying hosts. However, these services typically carry a significant cost premium compared to standard compute infrastructure, and they may be restrictive about what kinds of workloads and configurations they support.

The Cloud Adds Another Dimension to Securing Containers

Containers have offered a massive boost to developer productivity and the overall efficiency of application delivery. However, they introduce unique security challenges that organizations may not be prepared to confront. The use of cloud platforms adds further technical complexity to securing application environments; organizations need to make significant investments in their engineering capabilities to address these challenges now and in the future.

Experience Divio's Open Cloud with our 30-day Free Trial! Easily deploy your web applications and explore customized containerization services and solutions. Sign up now!