» Running Vault on Kubernetes

Vault can run directly on Kubernetes in several modes: dev, standalone, and ha. For pure-Kubernetes workloads, this allows Vault to exist entirely within Kubernetes.

This page begins with a how-to section covering common operational tasks.

» Helm Chart

The Vault Helm chart is the recommended way to install and configure Vault on Kubernetes. In addition to running Vault itself, the Helm chart is the primary method for installing and configuring Vault to integrate with other services such as Consul for High Availability deployments.

While the Helm chart exposes dozens of useful configurations and automatically sets up complex resources, it does not automatically operate Vault. You are still responsible for learning how to monitor, backup, upgrade, etc. the Vault cluster.

The Helm chart has no required configuration and will install a Vault cluster with sane defaults out of the box. Prior to going to production, it is highly recommended that you learn about the configuration options.

» How-To

» Installing Vault

To install Vault, clone the vault-helm repository, checkout the latest release, and install Vault. You can run helm install with the --dry-run flag to see the resources it would configure. In a production environment, you should always use the --dry-run flag prior to making any changes to the Vault cluster via Helm. See the chart configuration values for additional configuration options.

# Clone the chart repo
$ git clone https://github.com/hashicorp/vault-helm.git
$ cd vault-helm

# Checkout a tagged version
$ git checkout v0.2.1

# Run Helm
$ helm install --name vault ./
...

If standalone or ha mode is being used, the Vault pods must be initialized and unsealed.
For HA deployments, only one of the Vault pods needs to be initialized.

$ kubectl exec -ti vault-0 -- vault operator init
$ kubectl exec -ti vault-0 -- vault operator unseal

For HA deployments, unseal the remaining pods:

$ kubectl exec -ti <NAME OF POD> -- vault operator unseal
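If the deployment uses the chart's default naming and three replicas, the remaining standbys can be unsealed in a short loop (a sketch; the pod names and replica count are assumptions, and each invocation submits a single key share, so repeat the loop until each pod's unseal threshold is met):

```shell
# Unseal the remaining standby pods after initializing vault-0.
# Assumes a 3-replica HA deployment with the chart's default pod naming.
for pod in vault-1 vault-2; do
  kubectl exec -ti "$pod" -- vault operator unseal
done
```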

» Viewing the Vault UI

The Vault UI is enabled by default when using the Helm chart. For security reasons, it isn't exposed via a Service by default so you must use kubectl port-forward to visit the UI. Once the port is forwarded as shown below, navigate your browser to http://localhost:8200.

$ kubectl port-forward vault-0 8200:8200
...

The UI can also be exposed via a Kubernetes Service. To do this, configure the ui.service chart values.
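As a sketch, assuming the chart version you are using supports the ui.serviceType value, the UI can be exposed at install time:

```shell
# Expose the Vault UI through a LoadBalancer Service instead of
# relying on kubectl port-forward (value name per the chart's ui block).
helm install --name vault ./ \
  --set ui.serviceType="LoadBalancer"
```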

» Upgrading Vault on Kubernetes

To upgrade Vault on Kubernetes, follow the same pattern as a general Vault upgrade, except the Helm chart is used to update the Vault server Statefulset. It is important to understand the general Vault upgrade process before reading this section.

The Vault Statefulset uses the OnDelete update strategy. It is critical to use OnDelete instead of RollingUpdate because standbys must be updated before the active primary; a failover to an older version of Vault must always be avoided.
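You can confirm the strategy on a running cluster (a sketch, assuming the Statefulset is named vault):

```shell
# Print the Statefulset's update strategy; expected to be OnDelete.
kubectl get statefulset vault \
  -o jsonpath='{.spec.updateStrategy.type}'
```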

» Upgrading Vault Servers

To initiate the upgrade, change the global.image value to the desired Vault version. For illustrative purposes, the example below will use vault:123.456.

global:
  image: "vault:123.456"

Next, run the upgrade with the --dry-run flag first to verify the changes that will be sent to the Kubernetes cluster.

$ helm upgrade vault ./ --dry-run
...

The dry run should show no unexpected changes (although the Statefulset resource will be updated). If everything looks correct, run the upgrade without the flag:

$ helm upgrade vault ./
...

The helm upgrade command updates the Statefulset template for the Vault servers; however, no pods are deleted. The pods must be deleted manually to complete the upgrade. Deleting the pods will not delete any persisted data.

If Vault is not deployed using ha mode, the single Vault server may be deleted by running:

$ kubectl delete pod <name of Vault pod>

If Vault is deployed using ha mode, the standby pods must be upgraded first. To identify which pod is currently the active primary, run the following command on each Vault pod:

$ kubectl exec -ti <name of pod> -- vault status | grep "HA Mode"
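With the default three replicas, a short loop reports each pod's mode at once (a sketch; adjust the pod names to match your deployment):

```shell
# Report whether each Vault pod is the active node or a standby.
for pod in vault-0 vault-1 vault-2; do
  echo -n "$pod: "
  kubectl exec -ti "$pod" -- vault status | grep "HA Mode"
done
```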

Next, delete every pod that is not the active primary:

$ kubectl delete pod <name of Vault pods>

If auto-unseal is not being used, the newly scheduled Vault standby pods will need to be unsealed:

$ kubectl exec -ti <name of pod> -- vault operator unseal

Finally, once the standby nodes have been updated and unsealed, delete the active primary:

$ kubectl delete pod <name of Vault primary>

Similar to the standby nodes, the former primary will also need to be unsealed:

$ kubectl exec -ti <name of pod> -- vault operator unseal

After a few moments the Vault cluster should elect a new active primary. The Vault cluster is now upgraded!
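Once the cluster has settled, you can confirm the rollout by checking the running version on each pod (a sketch; pod names assume the default three-replica deployment):

```shell
# Verify every pod is now running the upgraded Vault version.
for pod in vault-0 vault-1 vault-2; do
  kubectl exec -ti "$pod" -- vault version
done
```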

» Google KMS Auto Unseal

The following example demonstrates configuring Vault Helm to use Google KMS for Auto Unseal.

In order to authenticate and use KMS in Google Cloud, Vault Helm needs credentials. The credentials.json file will need to be mounted as a secret to the Vault container.

» Create the Secret

First, create the secret in Kubernetes:

$ kubectl create secret generic kms-creds --from-file=credentials.json

Vault Helm will mount this to /vault/userconfig/kms-creds/credentials.json.
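Once the pods are running, you can verify the mount (a sketch, assuming the first pod is named vault-0):

```shell
# Confirm the service account key is mounted where
# GOOGLE_APPLICATION_CREDENTIALS points.
kubectl exec -ti vault-0 -- \
  ls /vault/userconfig/kms-creds/credentials.json
```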

» Config Example

The following is an example of how to configure Vault Helm to use Google KMS:

global:
  enabled: true
  image: "vault:1.2.4"

server:
  extraEnvironmentVars:
    GOOGLE_REGION: global
    GOOGLE_PROJECT: <PROJECT NAME>
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/kms-creds/credentials.json

  extraVolumes:
    - type: "secret"
      name: "kms-creds"

  ha:
    enabled: true
    replicas: 3

    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      seal "gcpckms" {
        project     = "<NAME OF PROJECT>"
        region      = "global"
        key_ring    = "<NAME OF KEYRING>"
        crypto_key  = "<NAME OF KEY>"
      }

      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }

» AWS KMS Auto Unseal

The following example demonstrates configuring Vault Helm to use AWS KMS for Auto Unseal.

In order to authenticate and use KMS in AWS, Vault Helm needs credentials. The AWS access key ID and secret access key will be mounted as secret environment variables in the Vault pods.

» Create the Secret

First, create a secret containing your AWS access key ID and secret access key:

$ kubectl create secret generic eks-creds \
  --from-literal=AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID?}" \
  --from-literal=AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY?}"

» Config Example

The following is an example of how to configure Vault Helm to use AWS KMS:

global:
  enabled: true
  image: "vault:1.2.4"

server:
  extraSecretEnvironmentVars:
  - envName: AWS_ACCESS_KEY_ID
    secretName: eks-creds
    secretKey: AWS_ACCESS_KEY_ID
  - envName: AWS_SECRET_ACCESS_KEY
    secretName: eks-creds
    secretKey: AWS_SECRET_ACCESS_KEY

  ha:
    enabled: true
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      seal "awskms" {
        region     = "KMS_REGION_HERE"
        kms_key_id = "KMS_KEY_ID_HERE"
      }

      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }

» Architecture

We recommend running Vault on Kubernetes with the same general architecture as running it anywhere else. There are benefits Kubernetes can provide that ease operating a Vault cluster, and we document those below. The standard production deployment guide is still an important read even when running Vault within Kubernetes.

» Production Deployment Checklist

End-to-End TLS. Vault should always be used with TLS in production. If intermediate load balancers or reverse proxies are used to front Vault, they should not terminate TLS. This way traffic is always encrypted in transit to Vault, which minimizes risks introduced by intermediate layers. See the official documentation for an example of configuring Vault Helm to use TLS.

Single Tenancy. Vault should be the only main process running on a machine. This reduces the risk that another process running on the same machine is compromised and can interact with Vault. This can be accomplished by using Vault Helm's affinity settings. See the official documentation for an example of configuring Vault Helm to use affinity rules.

Enable Auditing. Vault supports several auditing backends. Enabling auditing provides a history of all operations performed by Vault and provides a forensics trail in the case of misuse or compromise. Audit logs securely hash any sensitive data, but access should still be restricted to prevent any unintended disclosures.
Vault Helm includes a configurable auditStorage option that will provision a persistent volume to store audit logs. See the official documentation for an example on configuring Vault Helm to use auditing.
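For example, assuming auditStorage is enabled and the chart mounts the audit volume at /vault/audit, a file audit device can be enabled after unsealing (a sketch; requires a token with permission to manage audit devices):

```shell
# Enable the file audit device, writing to the persistent audit
# volume provisioned by the chart's auditStorage option.
kubectl exec -ti vault-0 -- \
  vault audit enable file file_path=/vault/audit/vault_audit.log
```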

Immutable Upgrades. Vault relies on an external storage backend for persistence, and this decoupling allows the servers running Vault to be managed immutably. When upgrading to new versions, new servers with the upgraded version of Vault are brought online. They are attached to the same shared storage backend and unsealed. Then the old servers are destroyed. This reduces the need for remote access and upgrade orchestration which may introduce security gaps. See the upgrade section for instructions on upgrading Vault on Kubernetes.

Upgrade Frequently. Vault is actively developed, and updating frequently is important to incorporate security fixes and any changes in default settings such as key lengths or cipher suites. Subscribe to the Vault mailing list and GitHub CHANGELOG for updates.

Restrict Storage Access. Vault encrypts all data at rest, regardless of which storage backend is used. Although the data is encrypted, an attacker with arbitrary control can cause data corruption or loss by modifying or deleting keys. Access to the storage backend should be restricted to only Vault to avoid unauthorized access or operations.