Kubernetes Security

Pius Dan's Blog
Apr 6, 2020 · 17 min read

Kubernetes has become the de facto orchestration tool for enterprises running their workloads with high availability, largely because it abstracts away the complexities of working in distributed environments.

Just as in traditional models, where enterprises deployed applications on bare-metal servers, applications running on Kubernetes are still vulnerable to attacks and exploits from malicious actors. The need to secure Kubernetes is therefore as crucial as on any other deployment infrastructure.

First, let’s look at some of the known Kubernetes attack vectors and the underlying security principles you should consider when setting up your cluster:

Kubernetes attack vectors

In Kubernetes, attack vectors fall into two categories: cluster-wide vectors, where an attacker could compromise the entire Kubernetes cluster, and pod-wide vectors, where an attacker compromises an application or the host on which it runs.

Cluster Wide Attack Vectors

  • Access to the nodes
  • Access via Kubernetes API or Proxy.
  • Access to etcd API

Attackers often exploit the above vectors to intercept/modify/inject control plane traffic.

Pod or Application Attack Vectors

  • Exploiting a vulnerability in the application's code.
  • Access via the kubelet API

Attackers can exploit the above vectors to escape from a container onto the host or to intercept application traffic.

Kubernetes Security Principles

To fully secure a Kubernetes cluster, the cluster administrator has to have an understanding of the underlying security principles. These principles include:

  • Least privilege

Under the principle of least privilege, a component has access only to the resources it needs to perform its functions. In Kubernetes, use ServiceAccounts, Roles, and RoleBindings to restrict access to cluster components and resources.

  • Limiting the attack surface

Here we try to reduce the number of ways a system can be compromised. In most software systems, this means reducing the amount of code deployed.
Reduce the size of your container images to limit the attack surface.
As a rule of thumb, only include the binaries necessary to run your application in your images. Many developers prefer building containers from Alpine-based images.

  • Multiple defensive measures

Layering various defensive mechanisms can help thwart attacks on your cluster: attackers now have to defeat more than one defense to gain access, which lowers their chances of success. For example, you could layer Kubernetes RBAC with mutual TLS for service-to-service authentication.

Securing the Cluster

We have discussed attack vectors and security principles; we now know where attackers are likely to hit and the various ways we can limit their chances of success. But how do we implement these principles?

That brings us to the actual steps of securing our Kubernetes environment while applying the security principles discussed above.

In Kubernetes, we use appropriate configuration settings for the various cluster components to implement the discussed security principles. Below we discuss the individual cluster components and how to secure them.

API Server
This component offers a REST API used to control Kubernetes. Gaining access to the API is equivalent to gaining root access to every node in the cluster.

Kubernetes also ships with kubectl, the CLI used to manage Kubernetes resources through the API server.

To secure the API server:

  • Close the insecure port (the plain-HTTP port the API server listens on by default) by setting the --insecure-port flag to 0 and ensuring that --insecure-bind-address is not set.
    You can verify whether the insecure port is open on the default port using a simple curl command, as shown below, where <ip address> is the host on which the API server is running.
$ curl <ip address>:8080
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    ...

A response like the one above indicates that the port is open. If the insecure port is disabled, you should instead get a connection-refused error.

  • Restrict access to the API server to authenticated users only by setting the --anonymous-auth=false flag on the API server.
    You could, however, allow anonymous requests on clusters that use RBAC, since RBAC denies them access by default.
    To enable RBAC in the control plane:
  • Set --authorization-mode on the API server to include the RBAC authorization module.
  • Include the Node authorizer in the --authorization-mode list. A sketch of these flags in an API server manifest follows below.
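A minimal sketch of what these settings look like in practice, assuming a kubeadm-style cluster where the API server runs as a static pod defined in /etc/kubernetes/manifests/kube-apiserver.yaml (the image tag and paths are illustrative; the flag names are real kube-apiserver options, and a real manifest carries many more flags):

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.18.0   # illustrative version
    command:
    - kube-apiserver
    - --insecure-port=0                # close the unauthenticated HTTP port
    - --anonymous-auth=false           # reject requests from anonymous users
    - --authorization-mode=Node,RBAC   # enable the Node and RBAC authorizers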

Kubelet

The Kubelet is an agent that runs on every node and uses the container runtime to manage the lifecycle of pods. It also reports the status of the nodes to the Kubernetes control plane.
Kubelet exposes an API used to start and stop pods. This post explains how unauthorized access to the kubelet API can enable hackers to compromise the entire cluster.

With proper configuration, however, you can lock down the kubelet API and thus minimize the risk of compromise.
Below we discuss the most important configuration options.

  • Disable anonymous access by setting --anonymous-auth=false. You will also need to set the --kubelet-client-certificate and --kubelet-client-key flags on the API server so that it can authenticate itself to the kubelet.
  • Set --authorization-mode to something other than AlwaysAllow, to ensure that all requests are authorized.
  • Add NodeRestriction to the API server's admission-control settings, to scope each kubelet's control to the node on which it runs.
  • Prevent anonymous users from accessing information about workloads by setting --read-only-port=0.
  • Furthermore, enable the --rotate-certificates flag to automatically renew kubelet certificates as their expiry approaches. This feature is only supported on Kubernetes 1.8 and above. The equivalent settings in a KubeletConfiguration file are sketched below.
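If you prefer configuring the kubelet through a config file rather than flags, the same settings map onto the KubeletConfiguration API. A minimal sketch (the client CA path is an assumption based on a typical kubeadm layout):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false                 # same effect as --anonymous-auth=false
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt   # CA used to verify client certificates
authorization:
  mode: Webhook                    # anything other than AlwaysAllow
readOnlyPort: 0                    # close the unauthenticated read-only port
rotateCertificates: true           # renew kubelet client certificates automatically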

Etcd
etcd is a distributed key-value store that Kubernetes uses to store cluster state and configuration information. An attacker who can compromise your etcd store can effectively compromise your entire cluster.
To secure etcd, you will need to restrict access to authenticated users only. You can achieve this as follows:

  • Only allow connections over HTTPS by setting the --cert-file and --key-file flags.
  • Set --client-cert-auth=true and --trusted-ca-file, to ensure that every client uses a certificate from a specific certificate authority to verify its identity.
  • Set --peer-client-cert-auth=true and --peer-auto-tls=false.
  • You will also need to specify --peer-trusted-ca-file, to enable etcd nodes to communicate securely (see the manifest sketch after this list).
  • Set --etcd-certfile and --etcd-keyfile on the API server so that it can identify itself to etcd.
  • As with any other store holding sensitive information, it's also vital to encrypt the data stored in etcd. This post offers a thorough walkthrough on how to achieve this.
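A sketch of those flags on an etcd static pod (the certificate paths follow a typical kubeadm layout and the image tag is illustrative; the flag names are real etcd options):

apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  containers:
  - name: etcd
    image: k8s.gcr.io/etcd:3.4.3-0                            # illustrative version
    command:
    - etcd
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt         # serve clients over HTTPS
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --client-cert-auth=true                                 # require client certificates
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt      # secure peer-to-peer traffic
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-client-cert-auth=true
    - --peer-auto-tls=false
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt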

The above configuration ensures restricted access to the etcd store. However, you should also take additional measures to encrypt etcd's data at rest. This is especially important if you plan to use etcd to store Kubernetes Secrets.
This tutorial offers a deep dive into how to encrypt etcd's data.

You could also use a network firewall to prevent traffic to etcd from sources other than the Kubernetes control plane.

Kubernetes Dashboard

Attackers have in the past used the Kubernetes dashboard to gain access to clusters, made possible by default settings in older Kubernetes versions that gave the dashboard full admin privileges.

Using appropriate configuration, you could shield your Kubernetes dashboard from attackers. These configuration options include:

  • Restrict access to authenticated users only.
  • Don't expose your dashboard directly to the public internet.
    Instead, use kubectl proxy to access the dashboard securely.
  • Use RBAC
    Limit privileges so that users can manage only the resources they need to.
  • Ensure the Dashboard Service Account has limited access.

Most importantly, always refer to the Kubernetes Dashboard recommended setup.

Validation

Validation aims to assert that your cluster is secure. There are two options for this:

  • Penetration testing
    Testing your cluster from the perspective of an attacker to establish any vulnerabilities in the cluster setup.
    You might want to employ the services of a pen-tester to probe your cluster for vulnerabilities.
    Open-source tools like Kube Hunter can help with this.
  • Configuration testing
    Running your deployments against the latest published CIS benchmarks.
    For this, you can use open-source tools like kube-bench.

Authentication and Authorization in Kubernetes

Authentication is the process through which the API Server determines the identity of an entity that wants to communicate with it.

Identities in Kubernetes are either normal users or ServiceAccounts. By default, Kubernetes does not manage normal (human) users. Instead, it assumes you are using an independent external service such as LDAP or a single-sign-on standard like Kerberos; various other authentication strategies also exist.

For programmatic access, Kubernetes manages identities using ServiceAccounts. A ServiceAccount is a namespaced resource that your applications can use if they want to communicate with the API server (query, create, update resources like pods, services, etc.).

A default ServiceAccount is automatically created by the API server and associated with every running pod via the ServiceAccount admission controller.

By default, Kubernetes mounts a Secret containing the ServiceAccount's credentials into the running pod.

To verify that your pod contains the necessary authentication credentials, exec into a running pod and list the files under the /var/run/secrets/kubernetes.io/serviceaccount directory.

$ kubectl run -it --rm --image=alpine sh -- sh
~$ ls /var/run/secrets/kubernetes.io/serviceaccount
ca.crt namespace service-ca.crt token

The token file contains a JWT-encoded bearer token.

You can also explicitly assign a pod to a serviceAccount by specifying the serviceAccountName field in the pod spec.
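A minimal sketch of a pod bound to a specific ServiceAccount (the pod, ServiceAccount, and image names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app-sa    # run the pod with this ServiceAccount's identity
  containers:
  - name: app
    image: alpine:3.11
    command: ["sleep", "3600"]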

The general format for a ServiceAccount's name is system:serviceaccount:(NAMESPACE):(SERVICEACCOUNT); this is the username that the ServiceAccount uses to authenticate itself.

Authentication Concepts in Kubernetes

Kubernetes performs authentication using plugins. These plugins look at the username, UID, and group attributes of an identity in order to authenticate it.

Kubernetes supports multiple authentication strategies, each provided by a particular authentication provider.

Below we discuss the common authentication strategies available in Kubernetes

  • Static password or token file
    An authentication pattern based on the Basic HTTP authentication scheme.
  • X.509 certificates
    Here every user has their own X.509 certificate that the API server verifies against a valid CA. If the certificate is valid, then the common name of the certificate is used as the username and any defined organizations used as groups.
  • OpenID Connect
    This is a flavor of OAuth2 supported by some OAuth2 providers, notably Azure Active Directory, Salesforce, and Google. The protocol's main extension of OAuth2 is an additional field returned with the access token called an ID Token. This token is a JSON Web Token (JWT) with well-known fields, such as a user's email, signed by the server. The relevant API server flags are sketched after this list.
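To use OpenID Connect, you point the API server at your provider. A sketch of the flags you would add to the kube-apiserver command (as in the earlier manifest sketch); the flag names are real kube-apiserver options, while the issuer URL and client ID are hypothetical values you would get from your provider:

    - --oidc-issuer-url=https://accounts.example.com   # hypothetical OIDC issuer
    - --oidc-client-id=my-cluster                      # hypothetical OAuth2 client ID
    - --oidc-username-claim=email                      # claim to use as the Kubernetes username
    - --oidc-groups-claim=groups                       # claim to use as the user's groups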

Authentication best practices.

  • Use third-party providers
    Integrate Kubernetes with third-party identity providers such as GitHub, Microsoft, Google, or AWS, unless you really want to roll your own.
  • Don’t use static files
    If you cannot use a third-party provider, avoid static username/password files for authentication. Instead, use X.509 certificates, as this limits an individual user's access to the lifespan of their certificate.
  • Life cycle
    Always remember to revoke access for people that leave the organization.

Authorization flow

[Figure: Kubernetes authentication flow]

After authentication, the username, UID, and group information of the user, together with the path, resource, verb, and namespace attributes of the request, is passed to the authorization module, which determines whether the user is permitted to perform the specified action.

Kubernetes implements various modes to enforce permissions as listed below:

  • Node authorization
  • Attribute-based access (ABAC)
  • Webhook
  • Role-based Access (RBAC)

Role-Based Access Deep Dive

Role-based access control (RBAC) became stable in upstream Kubernetes as of version 1.8.

By definition, it involves four parts, as discussed below:

  • Entity
    An entity is a group, user, or service account that wants to carry out an operation.
  • Resource
    A Kubernetes resource that the entity wants to access.
  • Role
    Set of rules that restrict or allow specified action on a resource.
  • RoleBinding
    Put simply, a mapping of a role to an entity, specifying that the entity is allowed to perform specific actions.

Actions on Kubernetes are the so-called verbs and include:

  • Get, list (read-only)
  • Create, update, patch, delete, delete collection (read-write)

A role can be of two types:

  • Cluster-Wide
    These are cluster roles and cluster role bindings
  • Namespace-Wide
    These are roles and role bindings and scoped to a namespace.

When not clear whether to use a Cluster-Wide or Namespace-Wide role, use the following rule of thumb:

  • Use a role and role binding if you want to grant access to a namespaced resource, e.g., a pod (see the manifests sketched after this list).
  • To reuse a role across namespaces, define a cluster role and bind it to an entity (user or service account) with a role binding in each namespace.
  • To grant access to cluster-wide resources, use a cluster role and cluster role binding.
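A sketch of the namespaced case: a Role granting read access to pods and a RoleBinding attaching it to a ServiceAccount (all names are hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: my-ns
rules:
- apiGroups: [""]                 # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: my-ns
subjects:
- kind: ServiceAccount
  name: my-app-sa
  namespace: my-ns
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io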

However, before proceeding to create your roles, be sure to have exhausted the default Kubernetes roles that include:

  • User-facing roles
    cluster-admin, admin (which is namespaced), edit, and view, which you can assign to your entities out of the box.
  • Core components
    Kubernetes components ship with appropriate roles that define only the permissions they need to perform their functions. An example is the system:kube-controller-manager role, which defines the actions allowed for the kube-controller-manager component.
  • Other components
    Kubernetes also defines roles for non-core components that aren't part of the core. An example is system:persistent-volume-provisioner.

To query for all out-of-the-box roles that Kubernetes ships with, run:

$ kubectl get roles,clusterroles -l kubernetes.io/bootstrapping=rbac-defaults --all-namespaces

RBAC in action
Let's say you have an application that needs access to Service information.
The view cluster role would enable it to do so, but it also grants access to various other resources, as shown below.

$ kubectl describe clusterrole view 
Name: view
Labels: kubernetes.io/bootstrapping=rbac-defaults
rbac.authorization.k8s.io/aggregate-to-edit=true
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
---------  -----------------  --------------  -----
bindings [] [] [get list watch]
configmaps [] [] [get list watch]
endpoints [] [] [get list watch]
events [] [] [get list watch]
limitranges [] [] [get list watch]
namespaces/status [] [] [get list watch]
namespaces [] [] [get list watch]
persistentvolumeclaims [] [] [get list watch]
pods/log [] [] [get list watch]
pods/status [] [] [get list watch]
pods [] [] [get list watch]
replicationcontrollers/scale [] [] [get list watch]
replicationcontrollers/status [] [] [get list watch]
replicationcontrollers [] [] [get list watch]
resourcequotas/status [] [] [get list watch]
resourcequotas [] [] [get list watch]
serviceaccounts [] [] [get list watch]
services [] [] [get list watch]
controllerrevisions.apps [] [] [get list watch]
daemonsets.apps [] [] [get list watch]
deployments.apps/scale [] [] [get list watch]
deployments.apps [] [] [get list watch]
replicasets.apps/scale [] [] [get list watch]
replicasets.apps [] [] [get list watch]
statefulsets.apps/scale [] [] [get list watch]
statefulsets.apps [] [] [get list watch]
horizontalpodautoscalers.autoscaling [] [] [get list watch]
cronjobs.batch [] [] [get list watch]
jobs.batch [] [] [get list watch]
daemonsets.extensions [] [] [get list watch]
deployments.extensions/scale [] [] [get list watch]
deployments.extensions [] [] [get list watch]
ingresses.extensions [] [] [get list watch]
networkpolicies.extensions [] [] [get list watch]
replicasets.extensions/scale [] [] [get list watch]
replicasets.extensions [] [] [get list watch]
replicationcontrollers.extensions/scale [] [] [get list watch]
ingresses.networking.k8s.io [] [] [get list watch]
networkpolicies.networking.k8s.io [] [] [get list watch]
poddisruptionbudgets.policy [] [] [get list watch]

Following the principle of least privilege, you'd want to limit its access to only the resources it needs:

  • Create a serviceAccount you’d use to represent the application’s identity to the API server.
$ {
> kubectl create namespace my-ns
> kubectl --namespace=my-ns create serviceaccount my-app-sa
}
namespace "my-ns" created
serviceaccount "my-app-sa" created
  • Create a role svc-view that allows viewing and listing of services.
    $ kubectl -n my-ns create role svc-view --verb=get --verb=list --resource=services
    Verify that the operation to create the role succeeded.
$ kubectl -n my-ns describe role/svc-view
Name: svc-view
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
---------  -----------------  --------------  -----
services [] [] [get list]

Now the svc-view role only allows viewing services.

  • Next, we attach the role to our application, which is, in turn, represented by the ServiceAccount my-app-sa.

    $ kubectl -n my-ns create rolebinding my-app-viewer --role=svc-view --serviceaccount=my-ns:my-app-sa
  • To verify that our application runs with only the required permissions:
$ kubectl -n my-ns auth can-i --as=system:serviceaccount:my-ns:my-app-sa list services
yes
$ kubectl -n my-ns auth can-i --as=system:serviceaccount:my-ns:my-app-sa list pods
no

Tooling

There also exist open-source tools that focus on auditing and visualizing RBAC authorization and are worth exploring.

Authorization Best practices

  • Always use RBAC
  • Disable automounting of the service account token.
    Some applications never need to access the API server via the service account token. You can disable mounting of the ServiceAccount token for such scenarios (see the sketch after this list) by running:
    $ kubectl patch serviceaccount default -p '{"automountServiceAccountToken": false}'
  • Use dedicated service accounts
    Create a dedicated service account per application and configure it with the least privilege it needs to carry out its functions.
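The same idea expressed declaratively: a dedicated, per-application ServiceAccount with token automounting disabled, for an application that never talks to the API server (the names are hypothetical):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: batch-worker-sa               # hypothetical, dedicated to a single application
  namespace: my-ns
automountServiceAccountToken: false   # this app never calls the API server, so mount no token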

Securing your containers

So far, we have looked at security from a cluster perspective. Now we look at how to ensure that the containers you run within your cluster do not introduce vulnerabilities.

Best practices

  • Scanning your container images
    Scan your container images regularly to detect any vulnerabilities that may be present.
    Most private registries, like Google Container Registry and Docker Trusted Registry, provide insights into the state and health of your container images that you can leverage.
  • Patching container images
    Once you discover a vulnerability in your container image, it's imperative that you rebuild the image with a fix for the vulnerability.
  • Storing images securely
    Ensure your images are stored securely, with proper authentication and authorization measures to control who can query or update your container images. Several public registries exist, but most enterprises prefer a private registry. Whichever path you choose, ensure that you can control who can access and update your images.
  • Using the correct image version
    Kubernetes enables us to define which image to run on a pod using the image property of the pod spec.
    You must be as explicit as possible when defining the image tag to ensure that your pod always runs the appropriate version of the image.
    The rule of thumb is to avoid mutable image tags like latest, dev, or test and instead use the image's unique digest to refer to a container image (see the sketch after this list). You could also tag your images using semantic versioning; that way, you can ensure you are running the correct version of the image.
    This blog post provides excellent insight into why it is essential to pin the correct image version.
  • Using lightweight images to reduce the attack surface
    Using the principle of limiting the attack surface, ensure that your resulting images do not include any unnecessary code or packages.
    For example, for applications that compile down to a single static binary, you can build the binary and then copy only that file into the container image, with no other packages.
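A sketch of pinning a pod's image by digest instead of a mutable tag (the registry, image name, and digest below are hypothetical placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    # Refer to the image by its immutable digest rather than a tag such as :latest
    # (the digest here is a placeholder, not a real image).
    image: registry.example.com/my-app@sha256:0000000000000000000000000000000000000000000000000000000000000000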

Running containers securely

While ensuring your containers don’t introduce vulnerabilities results in generally secure container images, how you run those containers could also impact how secure your cluster can ultimately be.
To ensure that your containers run securely, always aim to:

  • Use the least privileges to carry out specific tasks.
  • Avoid unnecessary communication between applications and to and from the external world. Any communication that must happen should go through a controlled and deterministic set of connections.
  • Only do minimal host mounts

As a rule of thumb, you should never run your containers as root unless

  • Your container needs to modify the host system.
  • Container binds to privileged ports.
  • You need to install software into a container at runtime, which is itself an anti-pattern; container images should be immutable, and any software installation should happen at image build time.

Kubernetes also provides tools and concepts, known as policies, that you can use to ensure your containers run securely. To appreciate how policies take effect, we first need to discuss admission control.

Admission Control in Kubernetes

An admission controller is a piece of code that intercepts requests to the Kubernetes API server before the persistence of the object, but only after authentication and authorization.

Kubernetes provides over 30 admission controllers that cluster admins can leverage.

Below we discuss admission controllers with a focus on how to configure them to run containers securely; a sketch of how to enable them on the API server follows the list.

  • AlwaysPullImages
    Modifies the pod specification to set the imagePullPolicy property to Always, ensuring that a fresh image is pulled whenever a pod is created or restarted, bypassing the locally cached image. Thus you always run the exact version of the image specified.
  • DenyEscalatingExec
    Denies exec and attach commands to pods running with escalated privileges, preventing attackers from using such pods to escape from a container onto the host machine.
  • PodSecurityPolicy
    Acts on creation and modification of the pod and determines if it should be admitted based on the requested security context and the available Pod Security Policies.
  • LimitRanger and ResourceQuota
    Observe the incoming request and ensure that it does not violate any of the constraints enumerated in the LimitRange and ResourceQuota objects in a namespace.
  • NodeRestriction
    Limits the Node and Pod objects that a kubelet can modify to those belonging to its own node.
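A sketch of turning these on: the controllers named above are enabled through the API server's --enable-admission-plugins flag (a real kube-apiserver option; on older clusters the flag was --admission-control), added to the kube-apiserver command as in the earlier manifest sketch. The exact list to enable is up to you:

    - --enable-admission-plugins=NodeRestriction,AlwaysPullImages,DenyEscalatingExec,PodSecurityPolicy,LimitRanger,ResourceQuota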

The above concepts define and enforce security boundaries.
Security boundaries are sets of controls that isolate resources from each other.
In Kubernetes, these boundaries include:

  • Cluster
    A cluster comprises all the nodes and control-plane components and forms the topmost-level unit.
  • Node
    These are the (virtual) machines that host the Kubernetes components. It's possible to isolate workloads onto different nodes using nodeSelectors or node/pod affinity.
  • Namespace
    A virtual cluster that consists of pods and services, and the basic unit for RBAC, as discussed above. Here you can use the admission controllers above to enforce ResourceQuota and LimitRange objects that prevent namespaces from starving each other of resources, as in a denial-of-service attack (see the sketch after this list).
  • Pod
    The basic unit of deployment in Kubernetes.
  • Container
    A collection of application code and necessary files/packages required to run the application.
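A sketch of a namespace-level quota that keeps one team's workloads from starving the rest of the cluster (the name, namespace, and limits are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: my-ns
spec:
  hard:
    pods: "20"               # at most 20 pods in the namespace
    requests.cpu: "4"        # total CPU requests across all pods
    requests.memory: 8Gi
    limits.cpu: "8"          # total CPU limits across all pods
    limits.memory: 16Gi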

These security boundaries need to be enforced and established using Kubernetes Policies.

Policies that control what a process can do within a pod.
Some policies define security contexts that control privilege and access control settings at the container or pod level.
Define these policies by setting the securityContext field on either the pod or container level.

Let's say you want to define a pod with the following security context:

  • All containers in the pod must run as user 10001.
  • Prevent files within the container (e.g., setuid binaries) from granting processes extra privileges.
    The above constraints lead to a pod specification like the one sketched below.
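A minimal sketch (the pod and image names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsUser: 10001                      # every container in the pod runs as UID 10001
  containers:
  - name: app
    image: alpine:3.11
    command: ["sleep", "3600"]
    securityContext:
      allowPrivilegeEscalation: false     # processes cannot gain more privileges than their parent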

Now we can ensure that our containers only run as the intended users.

Furthermore, we also need to ensure that our pods run with the intended cluster or namespace roles, and that developers run pods within the appropriate security context. The latter constraint is achieved using pod security policies: a cluster-scoped resource that controls security-sensitive aspects of the pod specification. PodSecurityPolicy objects define a set of conditions that a pod must satisfy to be accepted into the system, as well as defaults for related fields.

Kubernetes will then refuse to accept pods that violate the pod security policy. You will, however, need to enable the PodSecurityPolicy admission plugin for this to take effect.

Pod security policies not only define the security context that an application must run with, but also other security-related settings such as seccomp and AppArmor profiles.

For example, to create a policy that restricts the creation of privileged pods, we could use a YAML definition like the one below.
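The referenced example is along these lines (a sketch; see the linked file for the canonical version):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false            # don't allow privileged pods
  # The rest fills in some required fields.
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'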

snippet from https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/example-psp.yaml

Some other settings on security policies to consider include but are not limited to:

  • Limiting host volume mounts
  • Disallowing privileged access.

Policies that control how pods are allowed to communicate.

Kubernetes uses NetworkPolicies to restrict communication between pods.

These policies protect your cluster by:

  • Preventing an attacker who infiltrates your cluster from sending network traffic to the applications running in your pods.
  • Limiting the blast radius of an attack to a single pod, since they restrict an attacker from moving laterally through the network from a compromised pod.

This resource on the Sysdig blog provides an excellent overview of how you can leverage network policies to enforce security boundaries.
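As a starting point, a common pattern is a default-deny ingress policy for a namespace, to which you then add explicit allow rules. A sketch (the policy name and namespace are hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-ns
spec:
  podSelector: {}        # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress              # with no ingress rules listed, all incoming traffic is denied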

Conclusion

This article has tried to cover the steps to take to secure your Kubernetes cluster; it has not, however, exhaustively covered every edge case that pertains to securing a cluster.

Cluster administrators must be on the lookout for new vulnerabilities and patch or update their clusters as soon as possible. Having an elaborate recovery plan in case of a breach is also very important.

Finally, while no security is 100% foolproof, incorporating best practices when setting up your Kubernetes cluster will go a long way toward ensuring your workloads are secure and, in turn, saving your enterprise from the embarrassing situations that arise from breaches.
