AWS Karpenter : Everything You Need To Know

AWS Karpenter - a new open-source autoscaling tool

Amazon Web Services (AWS) launched Karpenter, an open-source autoscaling tool for Kubernetes clusters released under the Apache 2.0 license, at the tenth edition of AWS re:Invent, the learning conference AWS conducts for the global cloud computing community. Kubernetes users can start testing the waters with Karpenter version 0.5, which is now ready for use in production environments.

What is AWS Karpenter?

Karpenter is a high-performance Kubernetes cluster autoscaler that improves resource utilization and application availability. It responds rapidly to changing application loads and launches right-sized Amazon EC2 instances based on a cluster's workload needs, such as compute, acceleration, storage, and scheduling requirements.

Problems Faced By Kubernetes Users Prior to the Launch of Karpenter

1) To avoid service breakdowns and ensure the right amount of resources, administrators managing Kubernetes clusters had to monitor them frequently.

2) Kubernetes users had to continuously and dynamically adjust the compute capacity of their clusters to support their workloads, typically by combining Amazon EC2 Auto Scaling groups with the Kubernetes Cluster Autoscaler.

3) To take advantage of the elasticity of the cloud, users had to create multiple EC2 Auto Scaling groups for a Kubernetes cluster. As clusters grew, this took a toll on operations and performance.

4) When training machine learning models, users had to provision hundreds of diverse EC2 instances, which increased scheduling latency.

Why Karpenter?

  • Karpenter was designed to deliver on one of the most often touted yet rarely realized advantages of cloud computing: automatic scaling to meet a user's resource needs. Because Karpenter is open source, it can be extended to send information about the underlying Kubernetes cluster to any cloud provider.
  • Karpenter can provision new EC2 instances and schedule the Kubernetes pods within a minute.
  • With minimal infrastructure and configuration overhead, Karpenter dynamically selects the EC2 instance types best suited to the needs of the Kubernetes pods.

  • Karpenter automatically adds or removes instances as workloads scale, eliminating the need for manual over-provisioning and scale-down.
  • Karpenter integrates directly with EC2, ensuring clusters get the capacity they need at the right time.
  • Karpenter can work with Kubernetes clusters in any environment, whether on premises or in the cloud.

How Does Karpenter Work?

According to Channy Yun, Principal Developer Advocate for AWS, Karpenter launches right-sized compute resources in response to changing application loads, thereby improving application availability and cluster efficiency. It improves performance by automatically optimizing a cluster's compute footprint, providing just-in-time compute resources based on the needs of the application.

Karpenter itself is installed with Helm, the Kubernetes package manager. Once running, it analyzes the user's Kubernetes workloads and determines what resources are required by looking at pods that cannot be scheduled due to resource constraints. That information is then used to tell the cloud provider to add or remove compute resources.

Once you install Karpenter in your cluster, it observes events within the Kubernetes cluster and sends commands to the cloud provider's compute service. By aggregating the resource requests of unschedulable pods, Karpenter decides when to launch new nodes and when to terminate them, reducing both infrastructure costs and scheduling latency.
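To make this concrete, here is an illustrative workload (the name, image, and sizes are placeholders, not from the Karpenter project) whose aggregate CPU requests may exceed current cluster capacity. The replicas that cannot be scheduled stay Pending, and Karpenter reacts by launching a right-sized node for them:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate            # hypothetical workload name
spec:
  replicas: 10
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
          resources:
            requests:
              cpu: "1"     # per-pod request; 10 replicas may exceed capacity
```

Scaling the deployment down again leaves nodes empty, which Karpenter can then terminate.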

AWS Karpenter vs Cluster Autoscaler

1) Because Karpenter manages nodes directly, it can bind pods to new nodes on the fly without waiting on the scheduler.

With Cluster Autoscaler, by contrast, pods rely on the kube-scheduler to be placed on new nodes.

2) Karpenter does not need any node-group-based orchestration mechanism, as it manages instances directly and launches the right instances for the workload.

Cluster Autoscaler, being Kubernetes native, must be told when a new node group is to be added, and it requires EC2 Auto Scaling groups to scale those node groups.

3) Karpenter manages nodes directly and has direct control over EC2 instances, so it can choose instance types and availability zones based on the specifics of a workload.

Cluster Autoscaler, on the other hand, lacks direct control over EC2 instances and must work through Auto Scaling Groups.

4) Karpenter can provision instances in specific availability zones based on the nature of the workloads, enabling it to handle workloads of any size.

With Cluster Autoscaler, we need to create new node groups to accommodate workloads that cannot fit into the current node group. Scaling is also comparatively slow because Cluster Autoscaler works through Auto Scaling Groups.

Getting Started with Karpenter

Install Karpenter in your Kubernetes cluster using its Helm chart. Before you do, ensure there is enough compute capacity available, and note that Karpenter requires permissions to provision compute resources from your chosen cloud provider.

After successful installation, the default Karpenter provisioner observes incoming Kubernetes pods that could not be scheduled due to insufficient compute resources in the cluster and launches new resources to meet their scheduling requirements.
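A typical Helm-based installation looks something like the following sketch. The chart repository matches the Karpenter docs for the 0.x releases; the cluster name, endpoint, and IAM role values are placeholders you must supply for your own environment:

```shell
# Add the Karpenter Helm repository and install the chart.
# CLUSTER_NAME, CLUSTER_ENDPOINT, and KARPENTER_IAM_ROLE_ARN are
# placeholders for values specific to your cluster.
helm repo add karpenter https://charts.karpenter.sh
helm repo update
helm upgrade --install karpenter karpenter/karpenter \
  --namespace karpenter --create-namespace \
  --set clusterName=${CLUSTER_NAME} \
  --set clusterEndpoint=${CLUSTER_ENDPOINT} \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=${KARPENTER_IAM_ROLE_ARN}
```

On AWS, the IAM role referenced above is what grants Karpenter permission to launch and terminate EC2 instances.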

Features of Karpenter

1) Accelerated Computing

Karpenter performs well for use cases that demand rapid provisioning and de-provisioning of large numbers of compute resources, such as training machine learning models or running complex financial calculations. For use cases that require accelerated EC2 instances, you can request custom resources for hardware such as NVIDIA or AMD Graphics Processing Units (GPUs).
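As an illustrative sketch (the pod name and image are placeholders), a pod that requests a GPU via the standard `nvidia.com/gpu` extended resource will prompt Karpenter to provision an accelerated instance type that satisfies it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training              # hypothetical name
spec:
  containers:
    - name: trainer
      image: my-training-image:latest   # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1       # one NVIDIA GPU; drives instance selection
```

When the pod completes and the node drains empty, Karpenter can terminate the expensive accelerated instance rather than leave it idle.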

2) Provisioner Compatibility

Provisioners support a mixed model of dynamically and statically managed capacity. Karpenter provisioners can work alongside static capacity management solutions such as Amazon EKS managed node groups and EC2 Auto Scaling groups, or you can manage the entire capacity through provisioners alone. It is not recommended to run the Kubernetes Cluster Autoscaler and Karpenter at the same time, as both systems spin up nodes in response to unschedulable pods. This would create a race condition in which both systems try to launch or terminate instances.
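A minimal provisioner for the Karpenter 0.5-era `v1alpha5` API might look like the sketch below; the capacity-type values, CPU limit, and TTL are illustrative choices, not required settings:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]   # allow both purchase options
  limits:
    resources:
      cpu: "1000"            # cap total CPU this provisioner may create
  ttlSecondsAfterEmpty: 30   # terminate nodes 30s after they become empty
```

The `limits` block is what makes a provisioner safe to run alongside statically managed capacity: it bounds how much the dynamic side can grow.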

Final Thoughts

Karpenter paves the way for automatic resource scaling, freeing users from the intricacies of managing and monitoring Kubernetes clusters and manually adjusting their compute capacity. Users can also leverage it to develop and train machine learning models that demand a significant resource footprint.

Activelobby provides services around cloud adoption, management, and migration. As part of cloud adoption, we analyze your business requirements and current workloads. We offer platform management and monitoring as part of our cloud managed services, and we migrate workloads to, from, and across different cloud platforms. We support all major public and private clouds.


Rohith Krishnan

Rohith SK is an MSc Computer Science graduate living in Cochin, Kerala. A technology enthusiast, he is always on the lookout for the latest trends and developments in the field, with a particular interest in cloud computing and DevOps. Outside of technology, he is an avid reader who enjoys exploring different genres of literature, believing that reading is one of the best ways to expand one's knowledge and understanding of the world. He regularly contributes well-researched, accessible articles and blog posts on cloud computing and DevOps that offer valuable insights to readers of all levels of technical knowledge.
