🇬🇧 Kubernetes Part 3

🇧🇷 To read this article in Portuguese, click here

In this series of articles, we explore Kubernetes in 3 parts. This is the third and final part (so far), where I will explain Kubernetes components used to automate tasks, manage configurations, handle security, and ensure that services run in an organized and scalable manner within the cluster.
If you haven’t checked out the previous articles, take a look before starting this one. They help provide a better understanding of what’s covered here.

We’ve already discussed:
Control plane, kube-apiserver, cloud-controller-manager, etcd, kube-proxy, pods, kubelet, kube-scheduler, Minikube, YAML, ReplicaSets, CNI, namespaces, volumes, services, liveness probes, and more.

Shall we continue?

DaemonSet

A DaemonSet in Kubernetes ensures that a specific type of pod runs on all (or some) nodes in the cluster. It’s useful for things that need to be always available on each node, like monitoring agents, log collectors, or networking tools.

Example 1: Log Collector (Fluentd)

Want to install Fluentd to capture logs from all nodes? A DaemonSet is perfect.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.14
          resources:
            limits:
              memory: "200Mi"
              cpu: "0.5"
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log

In this example, a pod running Fluentd will be created on every node in the cluster to collect local logs.
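If you want to try it, a minimal workflow (assuming the manifest above is saved as fluentd-daemonset.yaml) would be:

kubectl apply -f fluentd-daemonset.yaml
kubectl get daemonset fluentd -n kube-system
kubectl get pods -n kube-system -l app=fluentd -o wide

The last command should show one Fluentd pod per node.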

Example 2: Node Checker (Node Exporter)

If you want to monitor node resources using something like Node Exporter:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
        - name: node-exporter
          image: prom/node-exporter:latest
          ports:
            - containerPort: 9100
              name: metrics

Here, each node will have a Node Exporter pod exposing metrics to a monitoring system like Prometheus.

Usage

Use a DaemonSet whenever you need something to run on all nodes (or specific nodes).
Examples include monitoring, proxying, or essential local services.

Example: Using Node Selectors

If you want to run a DaemonSet only on nodes labeled as disktype=ssd:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: custom-daemonset
spec:
  selector:
    matchLabels:
      app: custom-daemon
  template:
    metadata:
      labels:
        app: custom-daemon
    spec:
      nodeSelector:
        disktype: ssd
      containers:
        - name: custom-container
          image: custom/image:latest

In this case, the DaemonSet will only be deployed on nodes with the label disktype=ssd.
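For this to match anything, the target nodes need the label. A quick sketch (the node name is just a placeholder):

kubectl label nodes <node-name> disktype=ssd
kubectl get nodes -l disktype=ssd

After labeling, the DaemonSet controller schedules a pod onto each matching node.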

Example: Using Node Affinity

If you need something more expressive than a plain nodeSelector, for example targeting nodes in a specific region (region=us-east), you can use node affinity:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: affinity-daemonset
spec:
  selector:
    matchLabels:
      app: affinity-daemon
  template:
    metadata:
      labels:
        app: affinity-daemon
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: region
                    operator: In
                    values:
                      - us-east
      containers:
        - name: affinity-container
          image: custom/image:latest

Here, the DaemonSet will only be deployed on nodes in the us-east region. If you wanted a soft preference instead of a hard requirement, you would use preferredDuringSchedulingIgnoredDuringExecution instead.

Example: Using Tolerations

If you have tainted nodes for specific workloads, you can use tolerations to allow the DaemonSet to run on those nodes:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: tainted-daemonset
spec:
  selector:
    matchLabels:
      app: tainted-daemon
  template:
    metadata:
      labels:
        app: tainted-daemon
    spec:
      tolerations:
        - key: "dedicated"
          operator: "Equal"
          value: "special-nodes"
          effect: "NoSchedule"
      containers:
        - name: tainted-container
          image: custom/image:latest

This toleration allows the DaemonSet's pods to be scheduled on nodes carrying the taint dedicated=special-nodes:NoSchedule. Note that a toleration alone does not restrict the pods to those nodes; combine it with a nodeSelector or node affinity if you want the DaemonSet to run only there.
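For reference, a taint like the one tolerated above could be applied to a node with (the node name is a placeholder):

kubectl taint nodes <node-name> dedicated=special-nodes:NoSchedule

Without the matching toleration, regular pods would be kept off that node.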

Using DaemonSets on Specific Nodes

To deploy agents on nodes with specific configurations (e.g., GPUs, SSDs).

To isolate workloads in hybrid environments.

To run services that only make sense on particular types of nodes.

These filters help prevent resources from being wasted on nodes where the DaemonSet is unnecessary.

Summary

DaemonSets are simple: you write them once, and Kubernetes takes care of deploying them to all the necessary parts of the cluster.

Jobs

Jobs in Kubernetes are like tasks that you execute once or occasionally, unlike normal pods that run continuously. It’s like asking the system to perform a specific job, such as processing a file, generating a report, or running a script. Once the job is completed, it stops.

How It Works in Practice

You create the Job, and it spins up a pod to perform the task.

When the task completes successfully, it’s marked as “done.”

If there’s an error, Kubernetes retries the task based on the configuration you define.


Importance of the Job Spec

The spec (technical specification) of a Job in Kubernetes is like a detailed, clear instruction list that tells the cluster exactly what to do and how to do it. It is essential for ensuring the task is executed correctly, predictably, and efficiently. Without a well-defined spec, it's like assigning a task without explaining the steps or the necessary tools: the result can be unpredictable.


restartPolicy

The restartPolicy in Kubernetes tells the cluster what to do if the pod fails. It’s like giving clear instructions to Kubernetes on when to retry and when to stop. This setting is crucial for jobs, as it ensures you avoid unnecessary loops or tasks stopping without retries.

The Three Main Values

Always

Makes Kubernetes always restart the pod, regardless of how it ended (error or success).

Commonly used in Deployments to ensure the service is always running.

Not allowed for Jobs: the Job API only accepts OnFailure or Never, since endlessly restarting a finite task makes no sense.

OnFailure

Restarts the pod only if it fails (exit code not 0).

Ideal for jobs, as it retries if something went wrong, like a network issue or a temporary bug.

Never

If the pod fails, it leaves it as is without retrying.

Useful when you want to manually debug what went wrong without Kubernetes intervening.
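Related to restartPolicy, the Job spec also has a backoffLimit field that caps how many times Kubernetes retries a failing Job (the default is 6). A minimal sketch, with an illustrative name and a deliberately failing command:

apiVersion: batch/v1
kind: Job
metadata:
  name: retry-limited-job
spec:
  backoffLimit: 3  # Give up after 3 failed attempts
  template:
    spec:
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "exit 1"]  # Always fails, just to illustrate retries
      restartPolicy: Never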


Example: Lottery Simulation Job

This example demonstrates a Job in Kubernetes that simulates a lottery draw. It generates random numbers and logs them.

YAML File for the Job

apiVersion: batch/v1
kind: Job
metadata:
  name: lottery-job
spec:
  template:
    spec:
      containers:
      - name: lottery
        image: python:3.9
        command: ["python", "-c"]
        args:
          - |
            import random
            numbers = sorted(random.sample(range(1, 61), 6))
            print(f"Drawn numbers: {numbers}")
      restartPolicy: Never


Explanation

metadata.name:

The name of the job. In this case, it’s lottery-job to identify it as the lottery simulation task.

image:

Uses the official Python image (python:3.9) to directly run the script. No need to build anything, just plug and play.

command and args:

This is where the magic happens. It uses the command python -c to run the script that generates 6 random numbers between 1 and 60 (similar to a lottery draw).

random.sample(range(1, 61), 6) selects 6 unique numbers in the range.

sorted() arranges the numbers in ascending order for better readability.

restartPolicy:

Set to Never, as a lottery draw makes sense to run only once. If there’s an error, you fix it manually without automatic retries.


How to Run It in Kubernetes

Save the file as lottery-job.yaml.

Run the command:
kubectl apply -f lottery-job.yaml

To check the results:

kubectl logs job/lottery-job

You’ll see output similar to:

Drawn numbers: [5, 12, 23, 34, 45, 56]
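You can also check whether the Job finished successfully:

kubectl get jobs lottery-job
kubectl describe job lottery-job

When you are done, kubectl delete job lottery-job cleans up the Job and its pod.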

CronJobs

A CronJob in Kubernetes is like setting an alarm on your phone to remind you to drink water, but for cluster tasks. It schedules tasks to run automatically at specific times, using the same syntax as a Linux crontab. Examples include running a backup every midnight or cleaning logs every Friday.

Practical Example:

Suppose you want to run a script that deletes old files every morning at 2 a.m.:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: clean-logs
spec:
  schedule: "0 2 * * *"  # Runs daily at 2 a.m.
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: clean-logs
            image: python:3.9
            command: ["python", "-c"]
            args:
              - |
                import os
                import time
                folder = "/tmp/logs"
                now = time.time()
                for file in os.listdir(folder):
                    path = os.path.join(folder, file)
                    if os.path.isfile(path) and now - os.path.getmtime(path) > 7 * 24 * 60 * 60:
                        os.remove(path)
                        print(f"Deleted: {path}")
          restartPolicy: Never

Explanation:

schedule:

“0 2 * * *” specifies the CronJob runs daily at 2 a.m.

First number: Minutes.

Second number: Hours.

The remaining fields represent the day of the month, the month, and the day of the week.

jobTemplate:

Defines the task, which here deletes files in the /tmp/logs folder that haven’t been modified in the past 7 days.

restartPolicy:

Set to Never to avoid automatic retries. You can check logs manually if something goes wrong.
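For reference, a few other schedule expressions in the same minute-hour-day-month-weekday format:

"*/15 * * * *" runs every 15 minutes.
"0 0 * * 0" runs every Sunday at midnight.
"30 3 1 * *" runs at 03:30 on the first day of each month.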


Suspend a CronJob:

To pause a CronJob without modifying its YAML file:

Suspend:
kubectl patch cronjob clean-logs -p '{"spec": {"suspend": true}}'

Resume:
kubectl patch cronjob clean-logs -p '{"spec": {"suspend": false}}'
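To confirm the change took effect, check the SUSPEND column:

kubectl get cronjob clean-logs

You can also list the Jobs the CronJob has created so far with kubectl get jobs.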


Parallelism

In Kubernetes, parallelism for CronJobs (and Jobs) works like assigning multiple workers to a single task. It defines how many pods can run simultaneously to execute the same job. This is useful for breaking a large task into smaller chunks and processing them faster.

How it Works:

spec.parallelism: Specifies the number of pods that can run in parallel.

Default: 1 (runs one pod at a time).

Higher values launch multiple pods simultaneously.

spec.completions: Specifies how many pods must finish to consider the job complete.

For example, if you need five tasks done, set completions: 5.

Teamwork Example: Suppose you have a job to process 100 files and want to split it among 5 pods, with 2 running at the same time:

apiVersion: batch/v1
kind: Job
metadata:
  name: process-files
spec:
  parallelism: 2  # Number of pods running simultaneously
  completions: 5  # Total pods that must finish for the Job to complete
  template:
    spec:
      containers:
      - name: processor
        image: python:3.9
        command: ["python", "-c"]
        args:
          - |
            import os
            import random
            import time
            file = f"file_{random.randint(1, 100)}.txt"
            print(f"Processing: {file}")
            # Simulate work
            time.sleep(10)
      restartPolicy: Never

Here:

Parallelism: Limits the number of pods to 2 running simultaneously.

Completions: Ensures the job ends after 5 pods finish processing.
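To watch this in action (assuming the manifest is saved as process-files-job.yaml):

kubectl apply -f process-files-job.yaml
kubectl get pods -l job-name=process-files -w

You should see at most two pods Running at a time, until five completions are reached.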

ConfigMaps

ConfigMaps and Secrets are Kubernetes resources used to store and manage configurations and sensitive information separately from application logic.


ConfigMaps

Purpose: Stores data such as configurations, parameters, or file paths.

Common Uses: File paths, API URLs, default environment variables.

Basic Example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  config.json: |
    {
      "db_host": "localhost",
      "db_port": 5432
    }
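You can also create an equivalent ConfigMap straight from the command line, without writing YAML (assuming a local config.json file exists):

kubectl create configmap my-app-config --from-file=config.json

or from literal values:

kubectl create configmap my-app-config --from-literal=db_host=localhost --from-literal=db_port=5432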


Accessing ConfigMaps

ConfigMaps can be used as environment variables or mounted as volumes in a pod. Below are examples of both methods.


Example 1: Accessing ConfigMap as Environment Variables

This example uses the my-app-config ConfigMap to populate environment variables in a pod. It assumes the ConfigMap exposes db_host and db_port as individual keys (rather than the single config.json file shown above):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-config
spec:
  containers:
  - name: my-app
    image: my-app-image
    env:
      - name: DB_HOST
        valueFrom:
          configMapKeyRef:
            name: my-app-config
            key: db_host
      - name: DB_PORT
        valueFrom:
          configMapKeyRef:
            name: my-app-config
            key: db_port

Here, the DB_HOST and DB_PORT values are taken from the db_host and db_port keys of the my-app-config ConfigMap.
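If you want to load every key of a ConfigMap as environment variables at once, envFrom is a shorter alternative (a sketch reusing the same ConfigMap name; the pod name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-envfrom
spec:
  containers:
  - name: my-app
    image: my-app-image
    envFrom:
      - configMapRef:
          name: my-app-config

Each key in the ConfigMap becomes an environment variable with the same name.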


Updating a ConfigMap

Updating a ConfigMap involves modifying its data or configurations to reflect necessary changes.

Example: Updating a ConfigMap with YAML

Create an updated ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  config.json: |
    {
      "db_host": "new-server",
      "db_port": 5432
    }

Apply the updated ConfigMap:

kubectl apply -f updated-configmap.yaml

Verify the update:

kubectl get configmap my-app-config -o yaml


Example: Partial Update of a ConfigMap

To update specific fields, use the kubectl patch command:

kubectl patch configmap my-app-config -p '{"data": {"config.json": "{\"db_host\": \"new-host\"}"}}'

In this case, only the db_host field is updated.
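Keep in mind that environment variables read from a ConfigMap are only evaluated when a container starts, so after an update you typically restart the consuming workload, for example:

kubectl rollout restart deployment my-app

(Here my-app is a placeholder for whatever Deployment consumes the ConfigMap.) ConfigMaps mounted as volumes, on the other hand, are eventually refreshed by the kubelet without a restart.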


Using ConfigMap as a Volume

You can mount a ConfigMap as a volume to use its content directly in your application:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-config
spec:
  containers:
  - name: my-app
    image: my-app-image
    command: [ "cat", "/etc/config/config.json" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: my-app-config

In this example, each key of the ConfigMap becomes a file under /etc/config, and the command cat /etc/config/config.json prints the ConfigMap content from inside the container.

Secrets

Imagine you have an application running in containers on Kubernetes that needs to access sensitive information like passwords, API tokens, or SSH keys. This is where Secrets come into play.

Secrets are Kubernetes objects designed to securely store sensitive information. You can create a Secret and use it in your pods, volumes, or applications to access this data in a controlled and secure way.


Basic Example

Creating a Secret:

You can create a Secret using the kubectl command:

kubectl create secret generic my-secret --from-literal=username=admin --from-literal=password=1234

In this example, a Secret named my-secret is created with two fields: username and password.
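You can inspect the stored (base64-encoded) values afterwards:

kubectl get secret my-secret -o yaml

and decode a single field with, for example:

echo 'MTIzNA==' | base64 --decode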


YAML Example:

You can also define a Secret using a YAML file:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  username: YWRtaW4=  # 'admin' in base64
  password: MTIzNA==  # '1234' in base64


Using Secrets in a Pod

Once the Secret is created, you can link it to your pod. This can be done via environment variables or volumes.

Example: Using Secret as Environment Variables

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app-container
    image: my-app-image
    envFrom:
      - secretRef:
          name: my-secret

In this case, the my-app pod receives username and password as environment variables, without hard-coding them in the image or the manifest.


Example: Using Secrets as Volumes

Another approach is to mount the Secret as a volume instead of using environment variables.

apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-volume
spec:
  containers:
  - name: my-app-container
    image: my-app-image
    volumeMounts:
      - mountPath: /etc/secrets
        name: my-secret-volume
  volumes:
    - name: my-secret-volume
      secret:
        secretName: my-secret

Here, the Secret’s data is available as files at /etc/secrets.


Advantages of Secrets

Security: Keeps sensitive information away from exposed configurations or hard-coded values.

Access Control: Manage who can access these secrets using RBAC (Role-Based Access Control).

Ease of Use: Accessing Secrets is as simple as referencing them in a pod or application.


This is the general idea behind Secrets in Kubernetes. They help keep your sensitive information secure and organized in a containerized environment. 

StatefulSets

StatefulSets in Kubernetes are used to manage stateful applications, i.e., those that require unique and persistent identities across pods. Unlike Deployments, StatefulSets ensure that pods are created, updated, and deleted in a specific order while maintaining their associated storage volumes.


When to Use StatefulSets?

Applications requiring persistent data, such as databases (e.g., PostgreSQL, MySQL, MongoDB).

Services needing unique identification, like ZooKeeper, Kafka, or Redis.

Apps that require a specific startup order.


Key Features:

Persistent Identity: Each pod has a unique name that remains consistent, even if recreated.

Order of Creation/Deletion: Pods are created and removed sequentially.

Persistent Storage: Each pod can have its own PersistentVolume that is not shared with others.


Basic Example of a StatefulSet:

Here is an example of a StatefulSet that creates two pods for a simple app:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  replicas: 2
  serviceName: "my-app-service"
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx
        ports:
        - containerPort: 80
  volumeClaimTemplates:
  - metadata:
      name: my-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi


Explanation of the Example:

replicas: 2: Creates two pods.

serviceName: Should be associated with a Headless Service to enable communication between pods.

volumeClaimTemplates: Each pod gets its own 1Gi volume for persistence.


Scenario:

You are responsible for managing a web system that uses Nginx as a server. Each server instance must retain its own configuration files or specific data, which must persist even if the pods are recreated.


What Happens When You Apply the Example?

1. Creation of the StatefulSet:

Kubernetes creates two pods named my-app-0 and my-app-1 (in order and based on the StatefulSet name).

Each pod has its unique identity, which is important for tracking logs or maintaining unique data.

2. Headless Service:

The serviceName: “my-app-service” field requires a Headless Service (without a cluster IP) to enable direct communication between pods using DNS names like:

my-app-0.my-app-service

my-app-1.my-app-service

This is useful if the application needs inter-pod communication.

3. Persistent Volumes:

For each pod, Kubernetes provisions a PersistentVolumeClaim (PVC) based on the defined volumeClaimTemplates.

Pod my-app-0 gets a volume named my-storage-my-app-0.

Pod my-app-1 gets a volume named my-storage-my-app-1.
These volumes are independent and persist even if the pods are restarted.

4. Execution of Nginx:

Each pod runs a container with the nginx image and exposes port 80.


Step-by-Step in the Cluster:

Kubernetes Creates the Volumes:

The StatefulSet controller checks the volumeClaimTemplates and creates PVCs for each pod, ensuring persistence.

Kubernetes Creates the Pods in Order:

First, it creates my-app-0, waits for it to be ready, and then creates my-app-1.

This ensures the required order in case the application depends on it.

Pods Communicate with Each Other (if needed):

With the Headless Service, pods can communicate directly via DNS (useful for distributed databases or clustered services).

Pod Recreation:

If my-app-0 fails, Kubernetes recreates the same pod (with the same name and associated volume) without affecting my-app-1.


Real-World Behavior Example:

Imagine that my-app-0 writes specific data to its volume, such as:

/data/logs/nginx.log

These logs remain intact even if the pod restarts. Meanwhile, my-app-1 maintains a different set of data in its own persistent volume.


Benefits of StatefulSets:

Separate Logs: Ensures that each instance retains its own logs.

Exclusive Persistent Storage: Guarantees that each pod has its dedicated storage.

Order and Identity: Provides ordered creation and unique identities for stateful applications.

Headless Service

A Headless Service in Kubernetes is a Service that does not get a cluster IP. Instead of providing a single virtual IP for load balancing, it creates DNS records that point directly at the backing pods; for a StatefulSet, each pod gets a stable name in the format <pod-name>.<service-name>.

This is useful for distributed applications, such as databases, where each instance needs to be uniquely identified. You enable this by setting clusterIP: None in the service's YAML file. As a result, pods can communicate directly without relying on a centralized IP.

Example of a Headless Service

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  clusterIP: None  # Makes the service headless
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80

Explanation:

This service selects pods with the label app: my-app.

It does not create a single IP for the service.

It allows direct access to pods via DNS:

my-app-0.my-app-service

my-app-1.my-app-service

Ideal for applications that need direct communication between instances, such as database clusters.
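A quick way to see this in practice is to resolve those names from inside the cluster (a sketch; the dns-test pod is just a throwaway utility run in the same namespace):

kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup my-app-0.my-app-service

The answer should contain the pod's IP rather than a single service IP.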


PersistentVolume (PV) and PersistentVolumeClaim (PVC)

PersistentVolume (PV): A storage resource provisioned by the Kubernetes cluster administrator. It exists independently of pods and provides persistent storage.

PersistentVolumeClaim (PVC): A request for storage by a user. The PVC specifies requirements such as size and access mode (e.g., ReadWriteOnce, ReadOnlyMany).

The PVC automatically binds to a suitable PV in the cluster.

Practical Examples

Persistent Volume (PV)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data"

Creates a 1Gi volume on the local directory /data.

Persistent Volume Claim (PVC)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

Requests 500Mi of storage from an available PV.

Pod Using the PVC

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      volumeMounts:
        - mountPath: "/app-data"
          name: my-volume
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-pvc

Mounts the volume requested by the PVC at /app-data.
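After applying the three manifests, you can check that the claim was bound to the volume:

kubectl get pv my-pv
kubectl get pvc my-pvc

Both should show STATUS Bound, and kubectl describe pod my-pod lists the volume under Volumes.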


Pod Management Policy

Pod Management Policy in Kubernetes defines how the pods of a StatefulSet are managed in terms of creation, deletion, and updates. The default policy, “OrderedReady”, is set in the podManagementPolicy field of a StatefulSet.

What is “OrderedReady”?

Ensures that pods in a StatefulSet are created, updated, and deleted in sequential order.

A pod is only created or updated after the previous pod is in a Ready state.

When to Use?

When applications rely on a specific startup order, such as distributed databases (e.g., MongoDB, Cassandra).

When ensuring that each instance is ready before proceeding to the next.

Example with a StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  replicas: 3
  podManagementPolicy: OrderedReady  # Default policy
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: nginx
        ports:
        - containerPort: 80

Explanation:

Creates 3 pods in sequence, ensuring that one pod is ready before creating the next.

Ensures the required order for applications with initialization dependencies.

What Happens?

The pod my-app-0 is created and waits until it is in the “Ready” state.

Then, my-app-1 is created and also waits for the same state.

Finally, my-app-2 is created.

Alternative: “Parallel”

If you want all pods to be created or updated simultaneously, use the “Parallel” policy instead of “OrderedReady”.
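A minimal sketch of that variation, changing only the relevant field:

spec:
  podManagementPolicy: Parallel  # Create and delete pods without waiting for each other

Everything else in the StatefulSet stays the same.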

Endpoints

What Are Endpoints in Kubernetes?

Endpoints in Kubernetes represent the IP addresses and ports of one or more pods associated with a Service. They are automatically created when a Service is configured to route traffic to corresponding pods.
Endpoints connect a Service to the pods it manages, ensuring traffic reaches the correct pods.


How It Works:

The Service selects pods based on the labels configured in the selector.

Kubernetes creates Endpoints to list the IPs and ports of the pods matching the selector.

When you access the Service, it uses these Endpoints to route traffic.


Example:

Pod Configuration:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
  - name: my-container
    image: nginx
    ports:
    - containerPort: 80

Service Configuration:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Generated Endpoints:

apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
    - ip: 192.168.1.10  # Pod IP
  ports:
    - port: 80


What Happens:

The my-service Service uses the Endpoints to route traffic to the my-pod at IP 192.168.1.10 on port 80.

If another pod with the same label (app: my-app) is added, it will automatically be included in the Endpoints.
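You normally never write Endpoints by hand; to see what Kubernetes generated for a Service, run:

kubectl get endpoints my-service -o yaml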

EndpointSlices

What Are EndpointSlices?

EndpointSlices are a more efficient alternative to traditional Endpoints in Kubernetes. They are designed to better handle large clusters and services with many pods, avoiding the size limits of traditional Endpoints objects.


Characteristics:

Slicing: Each EndpointSlice can hold up to 100 endpoints by default.

Scalable: Reduces overhead in large clusters.

Automatic: Managed by Kubernetes when a Service is created.

Compatible: Works seamlessly with existing services without manual adjustments.


Example:

Pod Configuration:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
  - name: my-container
    image: nginx
    ports:
    - containerPort: 80

Service Configuration:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Generated EndpointSlice:

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-xyz
addressType: IPv4
endpoints:
- addresses:
  - 192.168.1.10
ports:
- port: 80


What Happens:

The EndpointSlice lists the IP and port of the my-pod more efficiently.

For services with many pods, Kubernetes splits the Endpoints into multiple slices to avoid overload.

This improves performance in large clusters, ensuring services remain responsive.
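To list the slices backing a Service, you can filter by the label Kubernetes adds automatically:

kubectl get endpointslices -l kubernetes.io/service-name=my-service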


Labels and Addressing in EndpointSlices

EndpointSlice Labels

Labels in EndpointSlices allow for grouping and optimizing how endpoints are accessed and managed in Kubernetes.


Uses:

Segmentation: Group endpoints by specific categories like regions or availability zones.

Discovery: Help controllers identify and access specific endpoints (e.g., based on version or service).

Example with Labels:

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-xyz
  labels:
    version: v1
    zone: us-west
addressType: IPv4
endpoints:
  - addresses:
      - 192.168.1.10
ports:
  - port: 80

This EndpointSlice groups endpoints with labels version: v1 and zone: us-west.


Supported Address Types:

IPv4: Traditional IPv4 addresses (e.g., 192.168.1.10).

IPv6: IPv6 addresses (e.g., fd00::1).

Example with IPv6 Address:

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-xyz
addressType: IPv6
endpoints:
  - addresses:
      - fd00::1
ports:
  - port: 80


Summary:

Labels in EndpointSlices enable better organization and traffic management.

Supported address types (IPv4 and IPv6) allow for flexible pod networking configurations in Kubernetes clusters.

RBAC


RBAC (Role-Based Access Control) in Kubernetes is a way to control who can do what within the cluster, based on roles. It allows defining permissions for users, groups, or services to access Kubernetes resources, such as pods, deployments, or namespaces.

How it works?

Roles: Define a set of permissions within a namespace or at the cluster level.

RoleBindings and ClusterRoleBindings: Assign roles to users or groups.

Verbs: Permissions are granted using verbs like get, list, create, delete, etc.

Types of RBAC:

Role: Defined within a single namespace. Controls access to resources within that namespace.

ClusterRole: Controls access to resources across the entire cluster (in any namespace).

Examples:

Role (for a specific namespace):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]

What it does: Creates a role called pod-reader that allows reading (get, list) pods in the my-namespace namespace.

RoleBinding (to bind Role to a user):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: my-namespace
subjects:
  - kind: User
    name: "gustavo-santana"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

What it does: Binds the pod-reader role to the user gustavo-santana within the my-namespace namespace.
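You can verify the result with kubectl auth can-i, impersonating the user:

kubectl auth can-i list pods --namespace my-namespace --as gustavo-santana    # yes
kubectl auth can-i delete pods --namespace my-namespace --as gustavo-santana  # no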

ClusterRole (for global access):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-cluster-admin
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "create", "delete", "update"]

What it does: Creates a ClusterRole named my-cluster-admin that allows full access to pods, services, and deployments across the entire cluster. The apps API group is included because Deployments belong to it, and the custom name avoids clashing with the built-in cluster-admin role.

ClusterRoleBinding (to bind ClusterRole globally):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-binding
subjects:
  - kind: User
    name: "admin-user"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: my-cluster-admin
  apiGroup: rbac.authorization.k8s.io

What it does: Binds the my-cluster-admin ClusterRole to the user admin-user across the entire cluster.

Summary:
RBAC in Kubernetes controls who can access and perform actions within a cluster. Roles define permissions, and RoleBindings or ClusterRoleBindings connect those permissions to users or groups.


Generating Key and Security Certificate for RBAC
To work with RBAC in Kubernetes, you can generate a key and certificate that identify a client to the API server. Client certificates are typically used to authenticate users, while ServiceAccounts, which are also commonly combined with RBAC, authenticate with tokens; the steps below cover generating the certificate and wiring a ServiceAccount into RBAC.

Here is a simple process to generate and use a key and certificate.

Generate the Private Key and Certificate
You can use OpenSSL to generate the key and certificate.

Generate the Private Key:

openssl genpkey -algorithm RSA -out my-key.pem

Generate the Certificate Signing Request (CSR):

openssl req -new -key my-key.pem -out my-csr.csr

During execution, you will be prompted to provide information such as country, state, and common name. For a certificate for specific use like Kubernetes, fill in these fields as necessary.

Generate the Signed Certificate:

openssl x509 -req -in my-csr.csr -signkey my-key.pem -out my-cert.crt

Now you have:

my-key.pem: The private key.

my-cert.crt: The certificate.
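If you want to double-check what was generated, OpenSSL can print the certificate details:

openssl x509 -in my-cert.crt -text -noout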

Create the ServiceAccount in Kubernetes
With the key and certificate generated, you can create a ServiceAccount in Kubernetes. Here is an example of how to create the ServiceAccount that will use this certificate.

Example of ServiceAccount:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account

Save this in a file, for example service-account.yaml, and apply it with:

kubectl apply -f service-account.yaml

Create a Role or ClusterRole and RoleBinding or ClusterRoleBinding
Now, you need to configure the permissions for this ServiceAccount. Here is an example of a Role with read permissions for pods and how to bind it to the ServiceAccount:

Example of Role (in the default namespace):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]

Example of RoleBinding (binding the Role to the ServiceAccount):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: my-service-account
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Save this in a file role-binding.yaml and apply it with:

kubectl apply -f role-binding.yaml

Using the ServiceAccount with the Key and Certificate
Now that the ServiceAccount has the Role associated, your pods can authenticate to the Kubernetes API using the ServiceAccount's token, which Kubernetes mounts into the pod automatically. The key and certificate generated earlier are typically referenced from a kubeconfig to authenticate an external client or user, rather than being attached to the ServiceAccount itself.
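For example, on recent Kubernetes versions (v1.24+) you can request a short-lived token for the ServiceAccount:

kubectl create token my-service-account

That token can then be sent as a Bearer token (Authorization: Bearer <token>) when calling the Kubernetes API, and RBAC will limit it to getting and listing pods in the default namespace, as defined by the Role above.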

Summary:

Generate the key and certificate with OpenSSL.

Create a ServiceAccount for Kubernetes.

Create a Role or ClusterRole and bind it to the ServiceAccount with RoleBinding or ClusterRoleBinding.

Use the ServiceAccount (via its token) in your pods, or use the key and certificate to authenticate external clients.

This key and certificate process is often used for secure authentication and access-level authorizations.

Conclusion:
Here, I conclude our series of articles on Kubernetes. I hope I have helped you better understand this tool and its components. If I made any mistakes or if you would like to contribute with more information, please feel free to reach out to me. Thank you for reading, and I look forward to seeing you in the next articles. 🙂