Karpenter is an exciting Kubernetes autoscaler that can be used to provision “nodeless” AWS EKS clusters. Nodeless means the EKS cluster can be provisioned with zero worker nodes to start with, and the Fargate-hosted Karpenter pods scale up the actual worker nodes. Here are my learnings from setting it up.
There are plenty of “getting started” and “how to” posts available online for getting Karpenter running. However, most use eksctl or the Terraform module to do all the heavy lifting. In this post we pull back the covers and set it up without eksctl and without the community Terraform modules, yet still using Terraform.
## Background
Karpenter is a project by AWS which they announced as ready for production in November 2021. It is a Kubernetes operator that manages Kubernetes worker nodes directly, based on provisioner requirements, rather than scaling existing node groups, which is how the popular Cluster Autoscaler works.
## Why Karpenter?
Karpenter promises to:
- Minimize operational overhead by not having to manage static infrastructure (such as node groups), making everything declarative using Kubernetes Custom Resource Definitions.
- Lower compute costs by removing under-utilised nodes and replacing them with cheaper, more efficient compute.
- Improve application availability by responding quickly and automatically to changes in application load, scheduling and resource requirements.
The above promises are taken from the Karpenter website but I have reordered them according to my own personal priority.
### 1. Minimize operational overhead
Cluster Autoscaler requires you to manage static node groups as part of the cluster infrastructure, making it more cumbersome to configure “the right” compute for your workloads. Karpenter, on the other hand, does this declaratively using Provisioners (and Node Templates), allowing you to manage the set of available nodes the same way you manage the workloads that will use them: via the Kubernetes API. This is especially useful if you work with a wide variety of workload types (e.g. compute- or memory-intensive ones).
Karpenter also makes it easier to use AWS Spot instances. If you plan to use spot instances, it is recommended to enable interruption handling using AWS SQS, which we will also cover in this post.
### 2. Lower compute costs
By default, Karpenter is more aggressive than Cluster Autoscaler when it comes to consolidating workloads and reducing the number of nodes you require, thereby reducing your cloud costs. This is great for the most part, and if you have applications that should not be de-provisioned you can configure that. It's something I really like; just make sure Pod Disruption Budgets are set so that your workloads are not disrupted when Karpenter drains nodes.
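For example, a minimal Pod Disruption Budget (the name and label selector below are placeholders for your own workload) that keeps at least one replica available while Karpenter consolidates and drains nodes:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app # hypothetical workload
spec:
  minAvailable: 1 # always keep at least one pod running during node drains
  selector:
    matchLabels:
      app: my-app
```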
### 3. Improve application availability
I don’t have much to add here, other than that Karpenter is fast (relatively speaking). Cluster Autoscaler on AWS EKS is slow; in my experience it takes several minutes to provision new nodes (measured from deploying a pod that requires a scale-up). Karpenter has consistently provisioned new nodes in under a minute during my testing. That is a significant improvement!
## Fargate
Before we get stuck in, let’s address the most common question I have received: why Fargate? Fargate on EKS is a nice idea, but in practice there are a lot of considerations to take into account. For this reason I have not used Fargate on EKS before: I am mostly involved in building platforms on EKS, and Fargate would be too restrictive. Using Fargate just for Karpenter, however, is a win-win, as Karpenter needs to run somewhere in order to provision the nodes for the general workloads that will run on the Karpenter-provisioned EC2 instances.
Now, let’s get to the topic: Karpenter on AWS EKS with Fargate for a nodeless cluster (that is, nodeless until Karpenter provisions EC2 instances for your real workloads). And because Karpenter is fast, your cluster is only nodeless for a very short period of time!
## Setting it up
### Three IAM Roles to Rule Them All
Karpenter requires three IAM roles, and creating them is most of the work involved in getting Karpenter up and running.
- Karpenter Controller: used by the Karpenter pods to be able to interact with AWS services (e.g. to manage EC2 instances)
- Fargate Profile: the EKS Fargate Profile requires a pod execution role to provision and connect Fargate nodes to the cluster.
- Karpenter Instance Profile: used by the EC2 instances that Karpenter launches. The instance profile itself requires an IAM role.
#### 1. Karpenter Controller
For the Karpenter controller we will need an IAM role that the Karpenter pods will assume. We use IAM Roles for Service Accounts (IRSA) to give Kubernetes service accounts access to AWS. If you need help setting that up, check out my other blog post on the topic. In the example below, the trust policy document allows the service account `karpenter` in the namespace `karpenter` to assume the IAM role that we attach this policy to.
Regarding the actual permissions we give to the IAM role, I used the official CloudFormation template and Terraform module as references and came up with the below. Please review it yourself before putting this in production.
```hcl
#
# IAM Role
#
resource "aws_iam_role" "karpenter" {
  description        = "IAM Role for Karpenter Controller (pod) to assume"
  assume_role_policy = data.aws_iam_policy_document.karpenter_assume_role.json
  name               = "${var.cluster_name}-karpenter-controller"

  inline_policy {
    policy = data.aws_iam_policy_document.karpenter.json
    name   = "karpenter"
  }
}

#
# IRSA policy
#
data "aws_iam_policy_document" "karpenter_assume_role" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    condition {
      test     = "StringEquals"
      values   = ["system:serviceaccount:karpenter:karpenter"]
      variable = "${var.cluster_oidc_url}:sub"
    }

    condition {
      test     = "StringEquals"
      values   = ["sts.amazonaws.com"]
      variable = "${var.cluster_oidc_url}:aud"
    }

    principals {
      type        = "Federated"
      identifiers = [var.cluster_oidc_arn]
    }
  }
}

#
# Inline policy
#
data "aws_iam_policy_document" "karpenter" {
  statement {
    resources = ["*"]
    actions = [
      "ec2:DescribeImages",
      "ec2:RunInstances",
      "ec2:DescribeSubnets",
      "ec2:DescribeSecurityGroups",
      "ec2:DescribeLaunchTemplates",
      "ec2:DescribeInstances",
      "ec2:DescribeInstanceTypes",
      "ec2:DescribeInstanceTypeOfferings",
      "ec2:DescribeAvailabilityZones",
      "ec2:DeleteLaunchTemplate",
      "ec2:CreateTags",
      "ec2:CreateLaunchTemplate",
      "ec2:CreateFleet",
      "ec2:DescribeSpotPriceHistory",
      "pricing:GetProducts",
      "ssm:GetParameter",
    ]
    effect = "Allow"
  }

  statement {
    resources = ["*"]
    actions   = ["ec2:TerminateInstances", "ec2:DeleteLaunchTemplate"]
    effect    = "Allow"

    # Make sure Karpenter can only delete nodes that it has provisioned
    condition {
      test     = "StringEquals"
      values   = [var.cluster_name]
      variable = "ec2:ResourceTag/karpenter.sh/discovery"
    }
  }

  statement {
    resources = [var.cluster_arn]
    actions   = ["eks:DescribeCluster"]
    effect    = "Allow"
  }

  statement {
    resources = [aws_iam_role.eks_node.arn]
    actions   = ["iam:PassRole"]
    effect    = "Allow"
  }

  # Optional: interruption queue permissions (AWS SQS)
  statement {
    resources = [aws_sqs_queue.karpenter.arn]
    actions   = ["sqs:DeleteMessage", "sqs:GetQueueUrl", "sqs:GetQueueAttributes", "sqs:ReceiveMessage"]
    effect    = "Allow"
  }
}
```
#### 2. Karpenter Fargate Profile
To use Fargate (and this is nothing specific to Karpenter), you need a Fargate Profile (which selects the pods to run on Fargate), an IAM role for Fargate to run as (a.k.a. the pod execution role), and the necessary policy attachments for that role.
The Terraform blocks below create our AWS EKS Fargate Profile and the pod execution (IAM) role with the necessary policy attachments.
```hcl
#
# Fargate profile
#
resource "aws_eks_fargate_profile" "karpenter" {
  subnet_ids             = var.cluster_subnet_ids
  cluster_name           = var.cluster_name
  fargate_profile_name   = "karpenter"
  pod_execution_role_arn = aws_iam_role.fargate.arn

  selector {
    namespace = "karpenter"
  }
}

#
# IAM Role
#
resource "aws_iam_role" "fargate" {
  description        = "IAM Role for Fargate profile to run Karpenter pods"
  assume_role_policy = data.aws_iam_policy_document.fargate.json
  name               = "${var.cluster_name}-karpenter-fargate"
}

#
# Assume role policy document
#
data "aws_iam_policy_document" "fargate" {
  statement {
    actions = ["sts:AssumeRole"]
    effect  = "Allow"

    principals {
      type        = "Service"
      identifiers = ["eks-fargate-pods.amazonaws.com"]
    }
  }
}

#
# Role attachments
#
resource "aws_iam_role_policy_attachment" "fargate_attach_podexecution" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
  role       = aws_iam_role.fargate.name
}

resource "aws_iam_role_policy_attachment" "fargate_attach_cni" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.fargate.name
}
```
#### 3. Karpenter Instance Profile
An Instance Profile is required for each node that Karpenter provisions. You can configure the `aws.defaultInstanceProfile` setting in the Karpenter ConfigMap as a default for all nodes, or set it on the Node Template that is referenced by your Provisioner.
```hcl
#
# Instance profile
#
resource "aws_iam_instance_profile" "karpenter" {
  role = aws_iam_role.eks_node.name
  name = "${var.cluster_name}-karpenter-instance-profile"
}

#
# IAM Role
#
resource "aws_iam_role" "eks_node" {
  description        = "IAM Role for Karpenter's InstanceProfile to use when launching nodes"
  assume_role_policy = data.aws_iam_policy_document.eks_node.json
  name               = "${var.cluster_name}-karpenter-node"
}

#
# Assume role policy document (the launched nodes are EC2 instances)
#
data "aws_iam_policy_document" "eks_node" {
  statement {
    actions = ["sts:AssumeRole"]
    effect  = "Allow"

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

#
# Policy attachments
#
resource "aws_iam_role_policy_attachment" "eks_node_attach_AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_node.name
}

resource "aws_iam_role_policy_attachment" "eks_node_attach_AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_node.name
}

resource "aws_iam_role_policy_attachment" "eks_node_attach_AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_node.name
}

resource "aws_iam_role_policy_attachment" "eks_node_attach_AmazonSSMManagedInstanceCore" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  role       = aws_iam_role.eks_node.name
}
```
### Enable Interruption Handling (optional)
Karpenter supports native interruption handling, which requires an AWS Simple Queue Service (SQS) queue that interruption events are written to and that the Karpenter controller watches. It’s quite easy to set up (note: we already added permissions for the Karpenter Controller IAM role to access the SQS queue above).
```hcl
#
# SQS Queue
#
resource "aws_sqs_queue" "karpenter" {
  message_retention_seconds = 300
  name                      = "${var.cluster_name}-karpenter"
}

#
# Node termination queue policy
#
resource "aws_sqs_queue_policy" "karpenter" {
  policy    = data.aws_iam_policy_document.node_termination_queue.json
  queue_url = aws_sqs_queue.karpenter.url
}

data "aws_iam_policy_document" "node_termination_queue" {
  statement {
    resources = [aws_sqs_queue.karpenter.arn]
    sid       = "SQSWrite"
    actions   = ["sqs:SendMessage"]

    principals {
      type        = "Service"
      identifiers = ["events.amazonaws.com", "sqs.amazonaws.com"]
    }
  }
}
```
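One thing to note: the queue only receives events if something forwards them there. In the official getting-started setup this is done with EventBridge rules for Spot interruption warnings, rebalance recommendations, scheduled changes and instance state changes, all targeting the interruption queue. Here is a sketch of the Spot interruption rule in Terraform (the resource names are my own; repeat the pattern for the other event types):

```hcl
#
# EventBridge rule: forward Spot interruption warnings to the Karpenter queue
#
resource "aws_cloudwatch_event_rule" "karpenter_spot_interruption" {
  name = "${var.cluster_name}-karpenter-spot-interruption"

  event_pattern = jsonencode({
    source        = ["aws.ec2"]
    "detail-type" = ["EC2 Spot Instance Interruption Warning"]
  })
}

resource "aws_cloudwatch_event_target" "karpenter_spot_interruption" {
  rule = aws_cloudwatch_event_rule.karpenter_spot_interruption.name
  arn  = aws_sqs_queue.karpenter.arn
}
```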
## Deploying Karpenter
So far we have only looked at creating the necessary AWS cloud resources that Karpenter requires. We need to deploy Karpenter on our cluster as well. You can do this with a Helm chart, like the Terraform example does. The deployment is fairly simple and most of the configuration is done using the Karpenter ConfigMap.
Pick your poison and get Karpenter running! In my setup, values like the IAM role ARNs, SQS queue name and cluster endpoint are computed dynamically by Terraform, so here’s the (not-so-useful) Helm command I would run if I had to do it by hand.
```bash
KARPENTER_VERSION="v0.25.0"

CLUSTER_NAME=...               # Name of the EKS Cluster
CLUSTER_ENDPOINT=...           # Endpoint for the EKS Cluster
KARPENTER_IAM_ROLE_ARN=...     # IAM Role ARN for the Karpenter Controller
KARPENTER_INSTANCE_PROFILE=... # InstanceProfile name for Karpenter nodes
KARPENTER_QUEUE_NAME=...       # Name of the SQS queue for Karpenter

helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --version "${KARPENTER_VERSION}" \
  --namespace karpenter \
  --create-namespace \
  --include-crds \
  --set settings.aws.clusterName=${CLUSTER_NAME} \
  --set settings.aws.clusterEndpoint=${CLUSTER_ENDPOINT} \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=${KARPENTER_IAM_ROLE_ARN} \
  --set settings.aws.defaultInstanceProfile=${KARPENTER_INSTANCE_PROFILE} \
  --set settings.aws.interruptionQueueName=${KARPENTER_QUEUE_NAME} # Optional
```
As per the Fargate Profile above, pods in the `karpenter` namespace will be scheduled onto Fargate nodes, and this can take a minute or so. So once you have deployed Karpenter and there are no noticeable errors, give it a moment for the Fargate nodes to appear. Once the Karpenter pods are running, Karpenter is ready to start provisioning (and managing) nodes for you.
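A quick sanity check with kubectl (nothing Karpenter-specific here):

```bash
# The Karpenter pods should be Running, hosted on Fargate nodes
kubectl get pods -n karpenter -o wide

# The fargate-* nodes show up alongside any other nodes in the cluster
kubectl get nodes
```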
## Configuring a Provisioner
In order for Karpenter to provision nodes you must define at least one Provisioner, and the Provisioner requires a Node Template (multiple Provisioners can reuse the same Node Template). The Node Template to use is referenced by the Provisioner’s `spec.providerRef`, which is not so obvious and not that well documented, but anyway. Also, both Provisioners and Node Templates are cluster-scoped resources (not namespaced).
You can pretty much copy the default example on the Provisioners page and remove the parts you do not care about, ending up with something like this.
```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # References the AWSNodeTemplate called "default"
  providerRef:
    name: default
  requirements:
    - key: "karpenter.k8s.aws/instance-category"
      operator: In
      values: ["m"]
    - key: "karpenter.k8s.aws/instance-cpu"
      operator: In
      values: ["4", "8"]
    - key: "karpenter.k8s.aws/instance-hypervisor"
      operator: In
      values: ["nitro"]
    - key: "topology.kubernetes.io/zone"
      operator: In
      values: ["eu-north-1a", "eu-north-1b", "eu-north-1c"]
    # Let's go all in on spot instances
    - key: "karpenter.sh/capacity-type"
      operator: In
      values: ["spot"]

  # Resource limits constrain the total size of the cluster.
  # Limits prevent Karpenter from creating new instances once the limit is exceeded.
  limits:
    resources:
      cpu: "1000"
      memory: 1000Gi

  # Kill each node after one hour just for kicks
  ttlSecondsUntilExpired: 3600
```
We also need to create our `AWSNodeTemplate` resource. Note that the subnet selector and security group selector are required. I use the default Karpenter discovery tag, `karpenter.sh/discovery: <CLUSTER_NAME>`, on the EKS cluster’s private subnets and on the worker node security group(s) (make sure you add these tags when creating your EKS cluster, or tag the resources afterwards as in the Terraform sketch further down). This way Karpenter provisions nodes in the correct subnets with the appropriate security group(s) attached.
```yaml
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  securityGroupSelector:
    karpenter.sh/discovery: <CLUSTER_NAME>
  subnetSelector:
    karpenter.sh/discovery: <CLUSTER_NAME>
  tags:
    karpenter.sh/discovery: <CLUSTER_NAME>
```
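If the subnets and security groups are managed in Terraform and you cannot easily add tags where they are defined, one way to add the discovery tag after the fact is the `aws_ec2_tag` resource. A minimal sketch, assuming hypothetical variables `var.private_subnet_ids` and `var.node_security_group_id`:

```hcl
# Tag each private subnet so the subnetSelector above can discover it
resource "aws_ec2_tag" "karpenter_subnet_discovery" {
  for_each    = toset(var.private_subnet_ids)
  resource_id = each.value
  key         = "karpenter.sh/discovery"
  value       = var.cluster_name
}

# Tag the worker node security group for the securityGroupSelector
resource "aws_ec2_tag" "karpenter_sg_discovery" {
  resource_id = var.node_security_group_id
  key         = "karpenter.sh/discovery"
  value       = var.cluster_name
}
```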
Once you apply the `Provisioner` and `AWSNodeTemplate` (with your updated values) and deploy some pods, you should see Karpenter provision and connect new nodes to your cluster, and it’s fast! Consistently under 1 minute when I have been testing it.
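If you want something to test with, a throwaway deployment along these lines (the name is arbitrary, and the pause image mirrors the kind used in Karpenter’s getting-started examples) requests enough CPU that the pods stay Pending until Karpenter launches a node:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate # hypothetical test workload
spec:
  replicas: 5
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: pause
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: "1" # large enough that the pods do not fit until Karpenter adds capacity
```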
AWS also open sourced a tool they developed internally while testing Karpenter that you might find interesting when playing around with it: https://github.com/awslabs/eks-node-viewer.
## Conclusion
Personally I really like Karpenter. Not only does it shift the configuration of Kubernetes worker nodes to the Kubernetes control plane, making it easier and more flexible to configure “the right” nodes, but it also reduces cloud costs by consolidating pods and draining nodes that are under-utilised. And this is right out of the box.
One incredible feature to highlight is the TTL for node expiry. This is really useful for making sure nodes are recycled at a specified interval, as well as preventing costly compute types from running for long periods of time. The longer a node is running, the greater the chance of it being compromised or failing, so setting a reasonable value is highly recommended (if omitted, nodes never expire).
I think Karpenter is especially interesting if you are working with a wide range of different compute types, e.g. traditional web services, machine learning models or other compute/memory intensive workloads.
One point to consider is that (as of this writing) Karpenter only works on AWS EKS. There is an open issue for Azure AKS, but I have not found anything online about Google’s GKE and Karpenter. This might affect you if you want to keep your clusters across clouds as similar as possible (e.g. by using Cluster Autoscaler everywhere), or it might not.
I hope you found this post useful and please reach out to me if you have questions!