KEDA (Kubernetes Event-driven Autoscaling) is a tool that helps Kubernetes automatically start or stop workers based on external metrics, such as the length of a queue.

Imagine you have some background jobs like Celery tasks. If there are no tasks, you don't need any workers running, which saves money.
But when new tasks come in, KEDA wakes up the workers to handle them.

Example config:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: celery-worker-scaledobject
spec:
  scaleTargetRef:
    name: celery-worker
  minReplicaCount: 0
  maxReplicaCount: 2
  pollingInterval: 15 # seconds
  cooldownPeriod: 300 # seconds
  triggers:
    - type: redis
      metadata:
        address: 10.147.77.27:6379
        listName: dev_process_files_queue # your custom Celery queue
        listLength: "8"
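With this config, KEDA scales the celery-worker deployment between 0 and 2 replicas, targeting roughly 8 pending items per replica. You can inspect the queue depth KEDA watches with redis-cli, assuming the Redis address above is reachable from where you run it:

```shell
# LLEN returns the number of pending items in the Celery queue list
redis-cli -h 10.147.77.27 -p 6379 LLEN dev_process_files_queue
```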


Automatically scale pods up or down based on CPU/RAM utilization with a HorizontalPodAutoscaler (HPA):

- Scales up: when average utilization across the pods rises above the target.
- Scales down: when average utilization falls back below the target.

(Not to be confused with the cluster autoscaler, which adds nodes when pods are unschedulable due to lack of resources, and removes nodes when they are underutilized and workloads can be moved elsewhere.)

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
  namespace: dev
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
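Under the hood, the HPA computes desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization), per the Kubernetes documentation. A quick sketch with the 60% target above:

```shell
# HPA formula (from the Kubernetes docs):
#   desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)
# Example: 1 replica averaging 150% CPU against the 60% target above.
awk 'BEGIN {
  current = 1; utilization = 150; target = 60
  d = current * utilization / target         # 2.5
  desired = (d == int(d)) ? d : int(d) + 1   # ceil -> 3
  print desired
}'
```

The result is then clamped to minReplicas/maxReplicas, so with the config above it can never exceed 3.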

You can use the command:
kubectl logs <pod-id> --previous
to show the logs from the previous (crashed) container, or use the command:
kubectl get pod celery-worker-5558fbffb-25dmw -o jsonpath="{.status.containerStatuses[0].lastState.terminated.reason}"
to show only the termination reason (e.g. OOMKilled).
To create a new secret, use the command:
kubectl create secret generic <secret-name> --from-file=key.json=gcloud_keys.json -n <namespace>
where we set the key key.json from our local file gcloud_keys.json.

After configuring it, you can verify it works by running:
kubectl describe secret <secret_name> -n <namespace>
Or see the content with:
kubectl get secret <secret_name> -n <namespace> -o jsonpath="{.data.key\.json}" | base64 --decode
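Secret values are stored base64-encoded (not encrypted), which is why the --decode above is needed. A quick local illustration of the round trip:

```shell
# Encode a value the way the API server stores it, then decode it back.
encoded=$(printf 'my-secret-value' | base64)
echo "$encoded"                           # bXktc2VjcmV0LXZhbHVl
printf '%s' "$encoded" | base64 --decode  # my-secret-value
```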

And finally, reference the secret in your deployment files.
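One common way to reference the secret in a deployment is to mount it as a volume and point GOOGLE_APPLICATION_CREDENTIALS at the file; the container and volume names below are illustrative:

```yaml
# Illustrative fragment of a Deployment pod spec (names are assumptions):
spec:
  containers:
    - name: api
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /secrets/key.json
      volumeMounts:
        - name: gcloud-key
          mountPath: /secrets
          readOnly: true
  volumes:
    - name: gcloud-key
      secret:
        secretName: <secret-name>
```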
You can use the -f flag to stream the logs:
-f, --follow=false: Specify if the logs should be streamed.

kubectl logs -f <pod_name>
To restart the pods for all deployments you can run:
kubectl rollout restart deployment
To get the yaml for a deployment (service, pod, secret, etc):

kubectl get deploy deploymentname -o yaml
A ConfigMap in Kubernetes is an API object used to store non-confidential data in key-value pairs.
Create your configmap by running:
kubectl create configmap <name-configmap> --from-env-file=.env
And finally, view your config:
kubectl get configmap <name-configmap> -o yaml
If you need to edit a value, you can run:
kubectl edit configmap <name-configmap>
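With a .env file containing, say, DEBUG=true and LOG_LEVEL=info (illustrative values), the ConfigMap created above looks roughly like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <name-configmap>
data:
  DEBUG: "true"
  LOG_LEVEL: info
```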

Set the config in your Kubernetes deployment files by adding, under the container spec:
          envFrom:
            - configMapRef:
                name: <name-configmap>
The Kubernetes concept (and term) context only applies on the client side, i.e. the place where you run the kubectl command, e.g. your command prompt. The Kubernetes server side doesn't recognise the term 'context'.

As an example, in the command prompt, i.e. as the client:

  • when calling kubectl get pods -n dev, you're retrieving the list of pods located under the namespace 'dev'.
  • when calling kubectl get deployments -n dev, you're retrieving the list of deployments located under the namespace 'dev'.
If you know that you're basically targeting only the 'dev' namespace at the moment, then instead of adding "-n dev" every time in each of your kubectl commands, you can just:

  1. Create a context named 'context-dev'.
  2. Specify the namespace='dev' for this context.
  3. Set the current-context='context-dev'.
This way, your commands above are simplified as follows:

  • kubectl get pods
  • kubectl get deployments
You can set different contexts, such as 'context-dev', 'context-staging', etc., where each targets a different namespace. By the way, it's not obligatory to prefix the name with 'context-'; you can simply name them 'dev', 'staging', etc.
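Steps 1-3 above can be sketched as the following commands (the cluster and user names are placeholders from your kubeconfig):

```shell
# 1-2. Create a context that pins the 'dev' namespace
kubectl config set-context context-dev \
  --namespace=dev --cluster=<your-cluster> --user=<your-user>

# 3. Make it the current context
kubectl config use-context context-dev

# Shortcut: just change the namespace of the current context
kubectl config set-context --current --namespace=dev
```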

Source: https://stackoverflow.com/questions/61171487/what-is-the-difference-between-namespaces-and-contexts-in-kubernetes
Verify the available contexts with:
kubectl config get-contexts
And then delete by name running:
kubectl config delete-context <context-name>
Create a namespace with the following command:
kubectl create namespace <name>

Namespaces

In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc.) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc.).

When to Use Multiple Namespaces
Namespaces are intended for use in environments with many users spread across multiple teams, or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. Start using namespaces when you need the features they provide.

Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces. Namespaces cannot be nested inside one another and each Kubernetes resource can only be in one namespace.
Namespaces are a way to divide cluster resources between multiple users (via resource quota).

It is not necessary to use multiple namespaces to separate slightly different resources, such as different versions of the same software: use labels to distinguish resources within the same namespace.
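For example, assuming pods carry app and version labels (the label keys and values here are illustrative):

```shell
# Select only the v2 pods of the api app within the dev namespace
kubectl get pods -n dev -l app=api,version=v2
```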

Source: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/


To configure a cluster created on GCP, you only need to run the following commands to authenticate, set the project ID, and then get the credentials by cluster name and zone:
gcloud auth login
gcloud config set project YOUR_PROJECT_ID
gcloud container clusters get-credentials YOUR_CLUSTER_NAME --zone YOUR_CLUSTER_ZONE
  • kubectl get deployments: List all the running deployments.
  • kubectl describe deployment <deploy name>: Print out details about a specific deployment.
  • kubectl apply -f <config file name>: Create a deployment out of a config file.
  • kubectl delete deployment <deploy name>: Delete a deployment.
  • kubectl rollout restart deployment <deploy name>: Restart all pods created by the deployment.
  • Cluster: A collection of nodes plus a master to manage them.
  • Node: A virtual machine that runs our containers.
  • Pod: More or less a running container. Technically, a pod can run multiple containers.
  • Deployment: Monitors a set of pods, making sure they are running and restarting them if they crash.
  • Service: Provides an easy-to-remember URL to access a running container.