Ante Miličević
January 22, 2024

ResourceQuota Objects in Kubernetes

In this article, we'll walk through ResourceQuota configurations with a code demo and explore how to use YAML and kubectl to constrain hardware usage.

In Kubernetes, namespaces create virtual clusters within a physical cluster. This helps avoid naming collisions between resources and makes it possible to divide capacity between tenants. To manage resource usage within these namespaces, Kubernetes provides ResourceQuota: an object that administrators use to restrict each tenant's resource consumption per namespace.

Resource Quota Overview

In Kubernetes, hardware resources are provided by worker nodes, each with a specific number of CPU cores and a fixed amount of RAM, so every cluster has a finite resource pool. In a shared environment, if resources are not allocated in advance, tenants end up competing for the same resources.

ResourceQuota allows each tenant in a shared Kubernetes cluster to use only a specific number of resources. This is done by using resource requests and limits associated with pods within each namespace.

Namespace Overview

A typical Kubernetes environment has multiple namespaces to separate workloads at the administrative level. Each namespace contains multiple pods, and both namespaces and pods have their own resource configuration. A ResourceQuota defines the total computing resources available to a namespace, while pod requests and limits define the computing resources available to the containers within each pod.

Resource requests define the amount of CPU or RAM guaranteed to a pod when it is created; the Kubernetes scheduler uses them to place each pod on a node with enough free capacity. The resource limit is the maximum CPU or RAM usage allowed for a pod.
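As a minimal illustration (the pod and container names here are hypothetical), requests and limits are declared per container in the pod manifest:

<pre class="codeWrap"><code>apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "100m"       # scheduler reserves 0.1 CPU core for this container
        memory: "128Mi"
      limits:
        cpu: "250m"       # container may never use more than 0.25 CPU core
        memory: "256Mi"</code></pre>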

Image: How to Use Resource Requests to Manage Container Resource Usage

The request and limit values defined in a pod's YAML manifest, submitted through the Kubernetes API, are what count against the pod's quota.

How Resource Quota Limits Work

The Kubernetes API server handles resource creation, deletion, and updates. Admission controllers inspect and filter these requests, allowing them through until the quota's resource limit is reached.

In the namespace, the resource quota object is defined by the cluster administrator. After the definition of the resource quota, the admission controller starts monitoring and tracking new objects created in that namespace.

Image: Bespoke Admission Control Policies in Kubernetes

When the resource quota limit has been reached and a user requests more resources, the admission controller rejects the request with an error, and the operation is not allowed.

Practical Usage of Resource Quotas

To use resource quotas, the administrator first assigns each user team its own namespace in which to deploy resources, then creates a ResourceQuota object per namespace. Users can only create resources in their assigned namespace. Resource usage is monitored and tracked against the quota; if creating or updating a resource would exceed the defined limit, the request fails with an error.

In a quota-enabled namespace, users must specify CPU and RAM requests and limits for their workloads, and pod creation is only allowed for authorized users. The ResourceQuota admission controller can be enabled manually on the API server.

You can enable it with the API server's admission plugins flag:

<pre class="codeWrap"><code>kube-apiserver --enable-admission-plugins=ResourceQuota</code></pre>

However, it is enabled by default in most Kubernetes setups. A quota is only enforced in namespaces where a ResourceQuota object has been created.
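To check whether any quotas are already active in a given namespace, you can list its ResourceQuota objects (the namespace name here is just an example):

<pre class="codeWrap"><code>kubectl get resourcequota --namespace default</code></pre>

An empty result means no quota is being enforced in that namespace.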

How to Use the Resource Quota

Let's create a CPU resource quota in a namespace with requests and limits. In this example, we will exceed the quota limit and see how Kubernetes responds to requests once the limit is reached. First, get the status of the Kubernetes cluster to make sure it's up and running. To follow along, make sure at least one worker node is available in the cluster.

Create a namespace.

<pre class="codeWrap"><code>C02W84XMHTD5:terraform-dev iahmad$ kubectl create namespace quota-demo
namespace/quota-demo created</code></pre>

Define the CPU quota in the newly created namespace.

<pre class="codeWrap"><code>C02W84XMHTD5:terraform-dev iahmad$ cat cpu-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-cpu-quota
  namespace: quota-demo
spec:
  hard:
    requests.cpu: "200m"
    limits.cpu: "300m"
C02W84XMHTD5:terraform-dev iahmad$ kubectl create -f cpu-quota.yaml
resourcequota/test-cpu-quota created</code></pre>

Before moving ahead, make sure that the ResourceQuota object has been created.

<pre class="codeWrap"><code>C02W84XMHTD5:terraform-dev iahmad$ kubectl describe resourcequota/test-cpu-quota --namespace quota-demo
Name:         test-cpu-quota
Namespace:    quota-demo
Resource      Used  Hard
--------      ----  ----
limits.cpu    0     300m
requests.cpu  0     200m</code></pre>

Here, the Used column shows 0, since none of the quota has been consumed yet.


Now create a test pod with the following requests and limits:

<pre class="codeWrap"><code>C02W84XMHTD5:terraform-dev iahmad$ kubectl create -n quota-demo -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: testpod1
spec:
  containers:
  - name: quota-test
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo Pod is Running ; sleep 5000']
    resources:
      requests:
        cpu: "100m"
      limits:
        cpu: "200m"
  restartPolicy: Never
EOF
pod/testpod1 created</code></pre>

Notice that the namespace's quota status is updated and the Used column now has values.

<pre class="codeWrap"><code>C02W84XMHTD5:terraform-dev iahmad$ kubectl describe resourcequota/test-cpu-quota --namespace quota-demo
Name:         test-cpu-quota
Namespace:    quota-demo
Resource      Used  Hard
--------      ----  ----
limits.cpu    200m  300m
requests.cpu  100m  200m</code></pre>

Repeat the process and create another pod.

<pre class="codeWrap"><code>C02W84XMHTD5:terraform-dev iahmad$ kubectl create -n quota-demo -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: testpod2
spec:
  containers:
  - name: quota-test
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo Pod is Running ; sleep 5000']
    resources:
      requests:
        cpu: "10m"
      limits:
        cpu: "20m"
  restartPolicy: Never
EOF
pod/testpod2 created
C02W84XMHTD5:terraform-dev iahmad$ kubectl describe resourcequota/test-cpu-quota --namespace quota-demo
Name:         test-cpu-quota
Namespace:    quota-demo
Resource      Used  Hard
--------      ----  ----
limits.cpu    220m  300m
requests.cpu  110m  200m</code></pre>

You can observe that the second pod fits within the remaining quota, and the Used column is updated again. At this point only 80m of the limits quota and 90m of the requests quota remain, so if we try to create another pod that asks for more than that, the admission controller will reject the request with an error saying the quota has been exceeded.
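For example, a third pod with the same requests and limits as the first one no longer fits (this pod and its error output are illustrative; the exact message wording varies by Kubernetes version):

<pre class="codeWrap"><code>C02W84XMHTD5:terraform-dev iahmad$ kubectl create -n quota-demo -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: testpod3
spec:
  containers:
  - name: quota-test
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo Pod is Running ; sleep 5000']
    resources:
      requests:
        cpu: "100m"
      limits:
        cpu: "200m"
  restartPolicy: Never
EOF
Error from server (Forbidden): error when creating "STDIN": pods "testpod3" is forbidden: exceeded quota: test-cpu-quota, requested: limits.cpu=200m,requests.cpu=100m, used: limits.cpu=220m,requests.cpu=110m, limited: limits.cpu=300m,requests.cpu=200m</code></pre>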

Best Practices for Using ResourceQuota

Let's look at some best practices for configuring the ResourceQuota object. To avoid disruption, always define resource requests and limits in your workload manifests when a namespace has a ResourceQuota, since quota-enabled namespaces reject pods that omit them. This helps achieve a more stable Kubernetes environment.

Communication between developers and DevOps teams before setting resource values is important. Both teams should profile CPU and memory usage in order to set optimal request and limit values for runtime behavior. To avoid wasting resources, allocated and actually used resources should be compared and monitored regularly.
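A simple way to compare allocated quota against usage (the namespace name comes from the demo above) is to describe the quota, and, if the metrics-server add-on is installed in your cluster, check actual pod consumption alongside it:

<pre class="codeWrap"><code># quota allocation vs. what workloads have claimed
kubectl describe resourcequota --namespace quota-demo

# actual CPU/memory consumption (requires metrics-server)
kubectl top pods --namespace quota-demo</code></pre>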
