Quickstart - Jenkins (JCasC) in Kubernetes

Andreas Lärfors • 6 minutes • 2022-09-27

Jenkins is a well-proven CI tool and a key link in the software delivery chain of many organisations, but managing one or more Jenkins instances adds an overhead to your teams. Defining your Jenkins instances as code and automating their scaling is a great way to reduce this management overhead, while improving the maintainability, stability and disaster recovery of your deployments.

In this blog we use Terraform to create a Kubernetes Cluster in Azure Cloud onto which we install our Jenkins Controller using Helm. Both the Jenkins Controller and the Agents are defined as code; our Jenkins Controller using JCasC (Jenkins Configuration as Code), and our Jenkins Agents as part of our Declarative Pipelines.

Introduction

Jenkins Configuration as Code (JCasC) has been covered in great detail already; this blog will focus on running both Jenkins Controllers (previously Masters) and Jenkins Agents in Kubernetes. The really exciting thing about this approach is that Jenkins Agents can be defined per job and spun up only when needed. Together with an auto-scaling Kubernetes cluster, this gives us an auto-scaling Jenkins deployment that runs at minimal cost while idling and scales up when builds demand it.

Jenkins JCasC in Kubernetes illustration 1

Jenkins JCasC in Kubernetes illustration 2

Terraforming our Kubernetes Cluster

In this blog, we will use Terraform to create our Kubernetes Cluster in Azure Cloud, but the approach will be the same for whichever IaC tool and Cloud Platform you choose.

We will skip over provider configuration, remote state, etc. in this blog to keep things focused on the problem at hand. You can look at the complete example in the GitHub repository if you are interested in seeing the full configuration:

GitHub - Jenkins in Kubernetes example

 1resource "azurerm_kubernetes_cluster" "jenkinsk8sexample" {
 2  name                = "aks-jenkinsk8sexample"
 3  location            = azurerm_resource_group.jenkinsk8sexample.location
 4  resource_group_name = azurerm_resource_group.jenkinsk8sexample.name
 5  dns_prefix          = "aks-jenkinsk8sexample"
 6
 7  default_node_pool {
 8    name                  = "default"
 9    node_count            = 1
10    vm_size               = "Standard_B2s"
11    enable_auto_scaling   = true
12    min_count             = 1
13    max_count             = 4
14  }
15
16  service_principal {
17    client_id     = var.client_id
18    client_secret = var.client_secret
19  }
20}

As seen above, we’re creating a Kubernetes Cluster with a single auto-scaling node pool.
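The configuration references a resource group and two service principal variables defined elsewhere in the example repository. A minimal sketch of those definitions, with names assumed to match the references above:

variable "client_id" {
  description = "Client ID of the service principal used by the cluster"
  type        = string
}

variable "client_secret" {
  description = "Client secret of the service principal used by the cluster"
  type        = string
  sensitive   = true
}

resource "azurerm_resource_group" "jenkinsk8sexample" {
  name     = "rg-jenkinsk8sexample"
  location = "westeurope"
}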

Once we’ve run

terraform apply

and the cluster is created, we can create the Jenkins namespace:

kubectl create namespace jenkins
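For kubectl to reach the new cluster, your kubeconfig needs its credentials. With the Azure CLI this is a single command; a sketch assuming the resource group name from the Terraform configuration above:

az aks get-credentials --resource-group rg-jenkinsk8sexample --name aks-jenkinsk8sexample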

Our cluster is now ready to have Jenkins installed using Helm.

Installing our Jenkins Controller

We will use Helm to install our Jenkins Controller in the Kubernetes Cluster. Helm is best thought of as a package manager for Kubernetes. First we need to add the Jenkins repo to our local Helm configuration:

helm repo add jenkins https://charts.jenkins.io
helm repo update

We can then run the install (upgrade) command:

helm upgrade --install jenkins jenkins/jenkins \
  --namespace jenkins \
  --version 4.1.8 \
  --set controller.adminUser="admin" \
  --set controller.adminPassword=$JENKINS_ADMIN_PASSWORD \
  -f custom_values.yaml

Note that our admin password is pre-configured in the environment variable JENKINS_ADMIN_PASSWORD. Our custom_values.yaml file contains the JCasC values:

controller:
  installPlugins:
    - kubernetes:3600.v144b_cd192ca_a_
    - workflow-aggregator:581.v0c46fa_697ffd
    - git:4.11.3
    - configuration-as-code:1429.v09b_044a_c93de
    - job-dsl:1.79
  JCasC:
    defaultConfig: true
    configScripts:
      welcome-message: |
        jenkins:
          systemMessage: Welcome to Jenkins, you are. Managed as code, this instance is.
      example-job: |
        jobs:
          - script: >
              multibranchPipelineJob('jenkins-in-kubernetes-example-pipeline') {
                branchSources {
                  git {
                    id('jenkins-in-kubernetes-example-pipeline')
                    remote('https://github.com/verifa/jenkins-in-kubernetes-example-pipeline.git')
                  }
                }
              }
    securityRealm: |-
      local:
        allowsSignup: false
        enableCaptcha: false
        users:
        - id: "admin"
          name: "Jenkins Admin"
          password: "${chart-admin-password}"
    authorizationStrategy: |-
      loggedInUsersCanDoAnything:
        allowAnonymousRead: false

Each Helm chart ships with a default set of values (values.yaml), which is also the set of supported values that can be overridden. This is what our custom_values.yaml file does; for example, we override the list of plugins to be installed, adding the job-dsl plugin so that we can declare our pipeline as code.
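If you want to inspect the full set of default values for the chart version used here, Helm can print them for you:

helm show values jenkins/jenkins --version 4.1.8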

Of most interest in the custom_values.yaml file is probably the example-job value, which lists the jobs to be created when the Jenkins Controller is instantiated. As you can see, we are creating a Multi-branch Pipeline Job whose Git source is another repository.

Once you have executed the helm upgrade command, the Jenkins Controller StatefulSet should be created in your cluster, which will trigger the creation of the Jenkins Controller Pod hosting your Jenkins Controller instance. Once Jenkins finishes starting up, it will scan the Git repo defined in the Multi-branch Pipeline Job and run a build on each discovered branch.
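To follow the rollout, you can watch the resources in the jenkins namespace and tail the controller logs. With the release name jenkins used above, the chart names the StatefulSet jenkins and its Pod jenkins-0 (the container name jenkins is an assumption based on the chart's defaults):

kubectl get statefulsets,pods --namespace jenkins
kubectl logs jenkins-0 --container jenkins --namespace jenkins --follow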

The Jenkins deployment above will be deployed with ClusterIP services only, meaning it is not accessible from outside the cluster. This is fine for testing purposes, and the simplest way to access Jenkins locally is to run a Kubernetes port-forward:

kubectl port-forward --namespace jenkins svc/jenkins 8080:8080

This will forward traffic from 127.0.0.1:8080 (localhost) to svc/jenkins:8080 (Jenkins Controller ClusterIP), so while the kubectl port-forward command is running, you can navigate to http://localhost:8080 to access your Jenkins instance.
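For anything beyond local testing you would typically expose Jenkins through an Ingress instead. The chart supports this through its controller.ingress values; a sketch for custom_values.yaml, where the host name and ingress class annotation are assumptions (check the chart's values.yaml for the full set of options):

controller:
  ingress:
    enabled: true
    hostName: jenkins.example.com
    annotations:
      kubernetes.io/ingress.class: nginx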

Configuring our Jenkins Agent

Let’s inspect the master branch of the Git repo configured as the source of our Multi-branch Pipeline Job:

pipeline {
  agent {
    kubernetes {
      yaml '''
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: maven
            image: maven:alpine
            command:
            - cat
            tty: true
          - name: busybox
            image: busybox
            command:
            - cat
            tty: true
        '''
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('maven') {
          sh 'mvn -version'
        }
        container('busybox') {
          sh '/bin/busybox'
        }
      }
    }
  }
}

The pipeline block above consists of two blocks:

  • agent
  • stages

The agent block defines the agent which will execute the steps in the stages block. As you can see in the agent block, we are able to use YAML to declare the Kubernetes Pod which will execute the job. Note that we declare two separate containers, one running the maven:alpine image and one running the busybox image.

In the stages block you can see that we run one command in each container. This is an example of how you can break up your pipelines into small tasks which can run in specialized containers, instead of building bloated container images containing all the tools you need.

Auto-scaling

Let’s take a look at the Jenkinsfile in the large-pod branch of the pipeline Git repo:

pipeline {
  agent {
    kubernetes {
      yaml '''
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: busybox
            image: busybox
            command:
            - cat
            tty: true
            resources:
              requests:
                memory: "2Gi"
                cpu: "1000m"
        '''
    }
  }
  stages {
    stage('Run') {
      steps {
        container('busybox') {
          sh '/bin/busybox'
        }
      }
    }
  }
}

In the example above, a single busybox container is declared with large resource requests (2 GiB of memory and 1 CPU core). This exceeds the resources initially available on the single Node in the Kubernetes cluster, so we can see in the log that a second Node is created, on which the Pod is then scheduled and the job run:

Started by user Jenkins Admin
[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Created Pod: kubernetes jenkins/test-2-v4rqd-2w5b2-nvfs2
[Warning][jenkins/test-2-v4rqd-2w5b2-nvfs2][FailedScheduling] 0/1 nodes are available: 1 Insufficient memory.
Still waiting to schedule task
'test-2-v4rqd-2w5b2-nvfs2' is offline
[Normal][jenkins/test-2-v4rqd-2w5b2-nvfs2][TriggeredScaleUp] pod triggered scale-up: [{aks-default-19934684-vmss 1->2 (max: 4)}]
[Warning][jenkins/test-2-v4rqd-2w5b2-nvfs2][FailedScheduling] 0/1 nodes are available: 1 Insufficient memory.
[Warning][jenkins/test-2-v4rqd-2w5b2-nvfs2][FailedScheduling] 0/1 nodes are available: 1 Insufficient memory.
[Normal][jenkins/test-2-v4rqd-2w5b2-nvfs2][Scheduled] Successfully assigned jenkins/test-2-v4rqd-2w5b2-nvfs2 to aks-default-19934684-vmss000001
[Normal][jenkins/test-2-v4rqd-2w5b2-nvfs2][Pulling] Pulling image "busybox"
[Normal][jenkins/test-2-v4rqd-2w5b2-nvfs2][Pulled] Successfully pulled image "busybox" in 2.945562307s
...

Once the job is complete, the Pod is destroyed. The AKS cluster auto-scaler will then, after a period of low pod activity, scale the node pool back down to the single node hosting the Jenkins Controller.
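You can watch the scale-up and the subsequent scale-down from a second terminal while the job runs:

kubectl get nodes --watch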

Going Further

Security

The example shown here is still far from production-ready, and there are many considerations to make, such as:

  • Persistent storage - what size and performance are required, and how should the volumes be backed up?
  • Network security - what kind of Ingress/Egress rules should we apply? What other restrictions should be put in place?
  • Container security - we should (at minimum) prevent any container from running as root. How do we prevent potentially harmful images from being run?

Many of these considerations can be implemented using policies. Perhaps your team or organization has a set of default cluster policies to use; if not, there’s no time like the present to create them.
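As a starting point for the container security item above, a securityContext can be declared directly in the agent Pod YAML in your pipeline. A minimal sketch; the user and group IDs are assumptions and must suit the images you run:

apiVersion: v1
kind: Pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
  containers:
  - name: busybox
    image: busybox
    command:
    - cat
    tty: true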

Automation

Although we have automated many steps in a typical Jenkins deployment, we are still triggering the deploy itself manually. The deployment commands themselves can be automated as part of a Continuous Delivery setup.

One commonly used setup is that each commit to the main branch in your Git repository triggers a deployment, as sketched after the list below. This is a great way to

  • enforce good SCM practices (developers never commit directly to main)
  • ensure “what you see in main is what is deployed in production”
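As a sketch of what such a trigger could look like with GitHub Actions (the workflow, the secret name and the credential step are assumptions, not part of the example repository; the runner would also need credentials for the cluster before Helm can deploy):

name: deploy-jenkins
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Cluster credentials omitted here, e.g. azure/login followed by az aks get-credentials
      - name: Deploy Jenkins with Helm
        env:
          JENKINS_ADMIN_PASSWORD: ${{ secrets.JENKINS_ADMIN_PASSWORD }}
        run: |
          helm repo add jenkins https://charts.jenkins.io
          helm repo update
          helm upgrade --install jenkins jenkins/jenkins \
            --namespace jenkins \
            --version 4.1.8 \
            --set controller.adminUser="admin" \
            --set controller.adminPassword=$JENKINS_ADMIN_PASSWORD \
            -f custom_values.yaml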

Summary

In this minimalist example, we’ve looked at how you can define both your infrastructure and your Jenkins deployment as code, how to auto-scale both the infrastructure and the Jenkins agents, and how Jenkins agents can be defined as part of your build pipelines, giving developers complete freedom over what kind of Jenkins agent their build pipeline needs. Hopefully this has given you the information you need to develop a solution like this on your own.


