When running Jenkins jobs using the Kubernetes plugin, there are many ways to fetch secrets from HashiCorp Vault. In the previous post we stored the secrets in Kubernetes, but let’s look at options that don’t persist secrets in the cluster.
In the earlier post the secrets were not stored as plain text on the Jenkins controller, but they still sat next to the encryption key on the persistent volume. Using a secret manager improves the security posture further and makes our lives easier: there is one central location to store and modify secrets, and it records an audit trail of all activity. With Vault as the secret manager, the secrets are only fetched into the running worker pods at the beginning of a pipeline. When the pipeline finishes, the pod is terminated and the secrets are no longer anywhere in the cluster.
Options for Integrating Jenkins with Vault
Here are some of the most popular options for integrating Jenkins with Vault:
1. Jenkins HashiCorp Vault plugin

Yet another plugin for your Jenkins, yikes! This is a good choice when not using Kubernetes, but it requires providing static credentials to the Jenkins controller, which we can avoid in Kubernetes by leveraging the built-in authentication mechanism: service accounts.
2. HashiCorp Vault Agent sidecar injector

The sidecar injector is a mutating admission webhook controller that is installed into the Kubernetes cluster using the official Vault Helm chart. Authentication requires configuring the Kubernetes auth method and uses the pod's service account as its identity.
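To give a feel for what this option involves, here is a rough sketch. The chart values and annotations below are the documented ones, but the release name and the external Vault address are placeholders for illustration:

```bash
# install only the injector, pointed at a Vault running outside the cluster
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install vault hashicorp/vault \
  --set "injector.enabled=true" \
  --set "global.externalVaultAddr=http://external-vault:8200"

# workloads then opt in to injection with pod annotations such as:
#   vault.hashicorp.com/agent-inject: "true"
#   vault.hashicorp.com/role: "jenkins-dev"
#   vault.hashicorp.com/agent-inject-secret-dev: "secret/data/jenkins/dev"
```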
3. Use the Vault API directly

Unlike the first two, this option has no external dependencies and no software to install into the Kubernetes cluster. To use the API we must of course authenticate; for the Kubernetes workers we can use the Kubernetes auth method. If running VM-based Jenkins workers on EC2, Azure or GCE, there are corresponding auth methods for authenticating against Vault. Since option two also requires configuring the Kubernetes auth method, this seems like the minimalistic option that meets the requirements.
Which option to go with?

There's no one-size-fits-all answer, but here's a (non-exhaustive) flow chart to help make the decision:

Based on this we decide not to use the Helm chart or the Jenkins plugin, and instead just set up the Kubernetes auth method for accessing the Vault API directly. Let's see how that can be configured next.
Demo: Using Kubernetes auth method in Jenkins pipelines
Tools and versions
- Docker 20.10.17
- k3d v5.3.0
- kubectl v1.22.0
- Terraform v1.2.6
- Vault provider 3.4.1
- jq-1.6
Where possible, the versions of images etc. are embedded in the code snippets.
Setting up a local Kubernetes cluster and Vault

A lot of tutorials install Vault inside the Kubernetes cluster, but we feel this is a naive scenario that skips much of the setup needed when integrating with an external Vault. To make the example more interesting, let's run an external Vault inside a Docker container. It's important to note that Vault and the k3d cluster run inside the same Docker network, so Vault can be reached using the name of the Docker container. Neat!

Let's start the Vault container and then create the k3d cluster:
```bash
export DOCKER_NETWORK=k3d-vault-net
export VAULT_TOKEN=root

# create the shared network first so both Vault and k3d can join it
docker network create ${DOCKER_NETWORK}

docker run -d \
  --cap-add=IPC_LOCK \
  -p 8200:8200 \
  --name=dev-vault \
  -e "VAULT_DEV_ROOT_TOKEN_ID=$VAULT_TOKEN" \
  --network ${DOCKER_NETWORK} \
  vault:1.11.0

export VAULT_ADDR=http://$(docker inspect dev-vault | jq -r ".[0].NetworkSettings.Networks.\"$DOCKER_NETWORK\".IPAddress"):8200

# write a secret into Vault KV engine for our demo
docker exec dev-vault /bin/sh -c "VAULT_TOKEN=$VAULT_TOKEN VAULT_ADDR=http://127.0.0.1:8200 vault kv put secret/jenkins/dev secretkey=supersecretvalue"

k3d cluster create jenkins \
  --network $DOCKER_NETWORK \
  --api-port $(ip route get 8.8.8.8 | awk '{print $7}'):16550
k3d kubeconfig get jenkins > jenkins-kubeconfig.yaml
export KUBECONFIG=$PWD/jenkins-kubeconfig.yaml
```
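Before continuing, a quick sanity check that both halves are up (optional; `jq` is only used here for pretty-printing):

```bash
# the dev-mode Vault should report initialized and unsealed
curl -s $VAULT_ADDR/v1/sys/health | jq
# the k3d node(s) should reach Ready state shortly
kubectl get nodes
```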
After the cluster is provisioned and nodes are in `Ready` state, create a service account which will be used by Vault to verify the service account tokens by calling the Kubernetes API:
```bash
kubectl create serviceaccount vault
kubectl create clusterrolebinding vault-reviewer-binding \
  --clusterrole=system:auth-delegator \
  --serviceaccount=default:vault

# set the variables needed to configure the auth method
export VAULT_SA_NAME=$(kubectl get secrets --output=json | jq -r '.items[].metadata | select(.name|startswith("vault-token-")).name')
export TF_VAR_token_reviewer_jwt=$(kubectl get secret $VAULT_SA_NAME --output json | jq -r .data.token | base64 --decode)
export TF_VAR_kubernetes_ca_cert=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.certificate-authority-data}')
export TF_VAR_kubernetes_host=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.server}')
```
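It's worth confirming the variables were actually populated; the jq filter above returns nothing if the token secret is named differently (for example, on Kubernetes versions that no longer auto-create service account token secrets):

```bash
# each of these should print a non-empty value
echo "${TF_VAR_kubernetes_host}"
echo "${TF_VAR_kubernetes_ca_cert}" | head -c 40; echo
echo "${TF_VAR_token_reviewer_jwt}" | head -c 40; echo
```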
Configuring Vault using Terraform

To configure Vault we will use Terraform instead of just running a bunch of commands. We already set the environment variables needed to reach our Vault instance in the code above; next, copy the following code into a `main.tf` file:
```hcl
terraform {
  required_providers {
    vault = {
      source  = "hashicorp/vault"
      version = "3.4.1"
    }
  }
}

provider "vault" {
  # Configured with environment variables:
  # VAULT_ADDR
  # VAULT_TOKEN
}

variable "kubernetes_host" {
  type        = string
  description = "URL for the Kubernetes API."
}

variable "kubernetes_ca_cert" {
  type        = string
  description = "Base64 encoded CA certificate of the cluster."
}

variable "token_reviewer_jwt" {
  type        = string
  description = "JWT token of the Vault Service Account."
}

resource "vault_auth_backend" "this" {
  type = "kubernetes"
}

resource "vault_kubernetes_auth_backend_config" "example" {
  backend                = vault_auth_backend.this.path
  kubernetes_host        = var.kubernetes_host
  kubernetes_ca_cert     = base64decode(var.kubernetes_ca_cert)
  token_reviewer_jwt     = var.token_reviewer_jwt
  issuer                 = "api"
  disable_iss_validation = "true" # k8s API checks it
}

resource "vault_policy" "jenkins-dev" {
  name = "jenkins-dev"

  policy = <<EOT
path "secret/data/jenkins/dev" {
  capabilities = ["read"]
}
EOT
}

resource "vault_kubernetes_auth_backend_role" "jenkins-dev" {
  backend                          = vault_auth_backend.this.path
  role_name                        = "jenkins-dev"
  bound_service_account_names      = ["jenkins-dev"]
  bound_service_account_namespaces = ["jenkins-dev"]
  token_ttl                        = 3600
  token_policies                   = ["jenkins-dev"]
}
```

The heart of the configuration is in the `vault_kubernetes_auth_backend_role` resource:
```hcl
  bound_service_account_names      = ["jenkins-dev"]
  bound_service_account_namespaces = ["jenkins-dev"]
  token_ttl                        = 3600
  token_policies                   = ["jenkins-dev"]
```
This configuration is what glues the Kubernetes auth and the Vault policy together for AuthN and AuthZ. We can also make the Vault token short-lived here, as we likely only need it when starting a new Jenkins pipeline.
When done digesting the configuration, initialize and apply it:

```bash
terraform init && terraform apply
```
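To double-check that everything landed in Vault, the role can be read back through the dev container (purely a sanity check, not part of the setup):

```bash
# should show the bound service account, namespace, TTL and policies
docker exec -e VAULT_TOKEN=$VAULT_TOKEN -e VAULT_ADDR=http://127.0.0.1:8200 dev-vault \
  vault read auth/kubernetes/role/jenkins-dev
```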
Installing Jenkins and example pipeline
Spin up Jenkins with the Helm chart:
```bash
# add the Jenkins chart repository if it isn't configured yet
helm repo add jenkins https://charts.jenkins.io && helm repo update

helm install jenkins jenkins/jenkins --namespace jenkins-dev --create-namespace --version 4.1.14
```
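The controller can take a minute or two to start; waiting for the pod to become ready avoids confusion in the next steps:

```bash
kubectl get pods --namespace jenkins-dev --watch
```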
Create a service account in the namespace created by Helm as part of the Jenkins installation:

```bash
kubectl create serviceaccount jenkins-dev --namespace jenkins-dev
```
In order to access the Jenkins UI, follow the instructions printed by the Helm install:

```
1. Get your 'admin' user password by running:
  kubectl exec --namespace jenkins-dev -it svc/jenkins -c jenkins -- /bin/cat /run/secrets/additional/chart-admin-password && echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
  echo http://127.0.0.1:8080
  kubectl --namespace jenkins-dev port-forward svc/jenkins 8080:8080
```
Here’s an example pipeline for reading the secrets in Vault using the Vault CLI:
```groovy
pipeline {
  agent {
    kubernetes {
      yaml '''
apiVersion: v1
kind: Pod
spec:
  serviceAccount: jenkins-dev
  containers:
  - name: build
    image: ubuntu
    command:
    - sleep
    args:
    - infinity
  - name: vault
    image: hashicorp/vault:1.11.0
    env:
    - name: VAULT_ADDR
      value: "http://dev-vault:8200"
    command:
    - sleep
    args:
    - infinity
'''
      defaultContainer 'build'
    }
  }
  stages {
    stage('Main') {
      steps {
        container('vault') {
          sh '''
            SA_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
            export VAULT_TOKEN=$(vault write -field=token auth/kubernetes/login role=jenkins-dev jwt=$SA_TOKEN)
            vault kv get secret/jenkins/dev > secrets.txt
          '''
        }
        container('build') {
          sh 'cat secrets.txt'
        }
      }
    }
  }
}
```
Note the `serviceAccount` field and the role name in the Vault CLI command: these are the only pieces of Jenkins-side configuration needed to access the Vault secrets. (The containers in the agent pod share the Jenkins workspace volume, which is why `secrets.txt` written in the `vault` container is readable from the `build` container.) For ease of use and brevity we use the Vault CLI instead of the API directly in the pipeline, but to drop the dependency on an additional container/binary the Vault API can be used directly, for example with `curl`.
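For reference, here is roughly what the same login and secret read look like against the HTTP API. This sketch assumes `jq` is available in the image; the endpoints are the standard Kubernetes auth login and KV version 2 read paths:

```bash
SA_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# exchange the service account JWT for a Vault token
VAULT_TOKEN=$(curl -s --request POST \
  --data "{\"jwt\": \"$SA_TOKEN\", \"role\": \"jenkins-dev\"}" \
  "$VAULT_ADDR/v1/auth/kubernetes/login" | jq -r '.auth.client_token')

# read the secret from the KV v2 engine
curl -s --header "X-Vault-Token: $VAULT_TOKEN" \
  "$VAULT_ADDR/v1/secret/data/jenkins/dev" | jq '.data.data'
```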
Summary/Conclusion
Looking at the pipeline, there's actually nothing Jenkins-specific about the solution presented here. We could run any pod in the `jenkins-dev` namespace with the service account `jenkins-dev` to access the secrets that the policy in Vault allows. No hard-coded, long-lived credentials lying around! No need for complex Groovy syntax, Helm charts or Jenkins plugins. This keeps the pipeline flexible in case you (or the management) decide to ditch Jenkins and go for another tool to run your CI/CD pipelines.
This is a follow-up to the blog Secrets handling in Kubernetes - A Jenkins story, where we explore some ways of getting secrets into Jenkins deployed in Kubernetes, without the use of an external secrets manager.