The CAP Theorem is a principle that says a distributed database can only guarantee two of the following three things at the same time:
  1. Consistency (C): Every read gets the most recent write.
  2. Availability (A): Every request gets a response (either success or failure).
  3. Partition Tolerance (P): The system keeps working even if parts of the network fail (e.g., servers can't talk to each other).

Example:
Imagine you have a system with multiple servers that store user data.
  • Consistency (C): If you update your email address on one server, all other servers will immediately reflect that change.
  • Availability (A): Even if one server goes down, the system will still respond to your requests.
  • Partition Tolerance (P): Even if a network issue occurs and some servers can't communicate with others, the system will continue working.

The CAP Theorem says:
You can pick two of the three properties (Consistency, Availability, Partition Tolerance), but not all three at once. For example:
  • CA (Consistency + Availability): If you want consistency and availability, the system might fail when the network partitions.
  • CP (Consistency + Partition Tolerance): If you want consistency and partition tolerance, the system might not be available (i.e., not respond to requests).
  • AP (Availability + Partition Tolerance): If you want availability and partition tolerance, the system might serve stale data (not consistent).


Why You Can't Have All Three
1. Consistency and Availability without Partition Tolerance (CA):
  • Scenario: Imagine a system where you have two servers. You want to guarantee that every request gets a response (Availability) and that all users see the same data (Consistency).
  • Problem: If a network partition occurs (e.g., one server can't communicate with the other), the system has to choose between responding to requests (Availability) and returning the most recent data (Consistency). It can't do both, because if one server isn't reachable, the system can't guarantee that the data will be the same on all servers.
  • Conclusion: In the case of network partitions, you must choose Availability over Consistency or Consistency over Availability. Therefore, you can't have both consistency and availability without partition tolerance.

2. Consistency and Partition Tolerance without Availability (CP):
  • Scenario: Suppose you want to ensure that your system is always consistent (all data is synchronized across servers) and can still function if parts of the network fail (Partition Tolerance).
  • Problem: In the event of a network partition, some servers might be unreachable, but the system will still ensure that all nodes have consistent data by refusing to answer requests until the partition is resolved. This means Availability is sacrificed — the system might not respond to some requests because it is waiting for the partition to be resolved to keep consistency.
  • Conclusion: If the system guarantees Consistency and Partition Tolerance, it may not be able to respond to requests during a partition, thus sacrificing Availability.

3. Availability and Partition Tolerance without Consistency (AP):
  • Scenario: You want your system to respond to every request (availability) and keep working even if some parts of the network fail (partition tolerance).
  • Problem: If there is a network partition, different parts of the system may start serving different versions of the data because the system will continue to process requests even when some servers can't talk to each other. This can result in inconsistent data between nodes.
  • Conclusion: In this case, you give up Consistency to ensure both Availability and Partition Tolerance (see the sketch below).
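To make the trade-off concrete, here is a minimal Python sketch (a toy model, not a real database): two hypothetical in-memory replicas, a flag that simulates a network partition, and a mode switch between CP-style and AP-style behaviour.

# Toy model of one record stored on two replicas, with a simulated partition.
class TinyStore:
    def __init__(self, mode):
        self.mode = mode                        # "CP" or "AP" (hypothetical modes)
        self.replica_a = {"email": "old@example.com"}
        self.replica_b = {"email": "old@example.com"}
        self.partitioned = False                # True = replicas cannot talk to each other

    def write(self, key, value):
        if self.partitioned and self.mode == "CP":
            # CP: refuse the request rather than let the replicas diverge.
            raise RuntimeError("unavailable during partition")
        self.replica_a[key] = value
        if not self.partitioned:
            self.replica_b[key] = value         # replication only succeeds without a partition

    def read_from_b(self, key):
        # AP: always answers, but may return stale data during a partition.
        return self.replica_b[key]

store = TinyStore(mode="AP")
store.partitioned = True
store.write("email", "new@example.com")
print(store.read_from_b("email"))               # 'old@example.com' -> available but stale
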
Tuples are used to store multiple items in a single variable.

Example:
thistuple = ("apple", "banana", "cherry")
print(thistuple)  # ('apple', 'banana', 'cherry')

In Python, lists (often called arrays) and tuples are both used to store collections of data, but they have distinct characteristics and use cases.

A tuple is an ordered and immutable collection of items. Once a tuple is created, its elements cannot be changed. Tuples are defined using parentheses () and can store elements of different data types.
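A short sketch of the difference, using a plain list for the mutable case:

fruits_list = ["apple", "banana", "cherry"]
fruits_list[0] = "orange"            # fine: lists are mutable

fruits_tuple = ("apple", 1, True)    # tuples can mix data types
try:
    fruits_tuple[0] = "orange"       # not allowed: tuples are immutable
except TypeError as err:
    print(err)                       # 'tuple' object does not support item assignment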
https://github.com/topics/cloudflare-firewall-rules
To create a new secret use the command:
kubectl create secret generic <secret-name> --from-file=key.json=gcloud_keys.json -n <namespace>
 Here the key key.json inside the secret is created from our local file gcloud_keys.json.

After creating it, you can verify that it is working by running:
kubectl describe secret <secret_name> -n <namespace>
Or see the content with:
kubectl get secret <secret_name> -n <namespace> -o jsonpath="{.data.key\.json}" | base64 --decode

And finally, reference the secret in your deployment files, for example as in the sketch below.
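A minimal sketch of what that could look like under spec.template.spec of a Deployment; the mount path, container name, and the GOOGLE_APPLICATION_CREDENTIALS variable are assumptions about how the application consumes the key:

      volumes:
        - name: gcloud-key
          secret:
            secretName: <secret-name>
      containers:
        - name: <container-name>
          volumeMounts:
            - name: gcloud-key          # mounts the secret as /var/secrets/google/key.json
              mountPath: /var/secrets/google
              readOnly: true
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
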
The unnest() function in PostgreSQL is used to expand an array into a set of rows. It takes an array as input and returns a new table where each element of the array occupies a separate row. This function is particularly useful for normalizing denormalized data stored in array formats and for performing operations that require each array element to be processed individually.

Uses of the PostgreSQL UNNEST() Function
  • Normalize Data: Transform array data into individual rows for easier processing.
  • Facilitate Joins: Enable joins with other tables by expanding arrays into rows.
  • Aggregate Data: Perform aggregate functions on individual array elements.
  • Filter Array Elements: Apply filters to specific elements within an array.
  • Convert Arrays to Tables: Turn arrays into tabular format for better data manipulation.
  • Combine with Other Functions: Use in conjunction with other PostgreSQL functions for advanced data operations.

Example of use
-- Selecting and unnesting an array of integers
SELECT 
    unnest(ARRAY[1, 2]); -- Expands the array [1, 2] into individual rows
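
A slightly larger sketch, with a hypothetical orders table, showing unnest() used to aggregate over individual array elements:

-- Hypothetical table: each order stores its item prices in an array
CREATE TABLE orders (id int, prices numeric[]);
INSERT INTO orders VALUES (1, ARRAY[9.99, 5.00]), (2, ARRAY[20.00]);

-- Expand the arrays into rows, then aggregate per order
SELECT id, sum(price) AS total
FROM orders, unnest(prices) AS price
GROUP BY id
ORDER BY id;
-- id | total
--  1 | 14.99
--  2 | 20.00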

Source: https://www.w3resource.com/PostgreSQL/postgresql_unnest-function.php
To stream (follow) a pod's logs, you can use the -f flag:
-f, --follow=false: Specify if the logs should be streamed.

kubectl logs -f <pod_name>
To restart the pods for all deployments in the current namespace, you can run:
kubectl rollout restart deployment
To get the yaml for a deployment (service, pod, secret, etc):

kubectl get deploy deploymentname -o yaml
A ConfigMap in Kubernetes is an API object used to store non-confidential data in key-value pairs.
Create your configmap by running:
kubectl create configmap <name-configmap> --from-env-file=.env
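Here .env is assumed to be a plain key=value file, for example:
DB_HOST=localhost
DB_PORT=5432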
And finally, view your config:
kubectl get configmap <name-configmap> -o yaml
If you need to edit a value, you can run:
kubectl edit configmap <name-configmap>

Set the config in your Kubernetes deployment files by adding:
          envFrom:
            - configMapRef:
                name: <name-configmap>
The Kubernetes concept (and term) 'context' only applies on the Kubernetes client side, i.e. the place where you run the kubectl command, e.g. your command prompt. The Kubernetes server side doesn't recognise the term 'context'.

As an example, in the command prompt, i.e. as the client:

  • when calling kubectl get pods -n dev, you're retrieving the list of the pods located under the namespace 'dev'.
  • when calling kubectl get deployments -n dev, you're retrieving the list of the deployments located under the namespace 'dev'.
If you know that you're targeting basically only the 'dev' namespace at the moment, then instead of adding "-n dev" all the time in each of your kubectl commands, you can just:

  1. Create a context named 'context-dev'.
  2. Specify the namespace='dev' for this context.
  3. Set the current-context='context-dev' (see the example commands below).
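Assuming the cluster and user entries already exist in your kubeconfig (their names below are placeholders), steps 1-3 map to commands like:

kubectl config set-context context-dev --namespace=dev --cluster=<cluster-name> --user=<user-name>
kubectl config use-context context-dev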
This way, your commands above will be simplified as follows:

  • kubectl get pods
  • kubectl get deployments
You can set different contexts, such as 'context-dev', 'context-staging', etc., whereby each of them targets a different namespace. BTW it's not obligatory to prefix the name with 'context'. You can also name them just 'dev', 'staging', etc.

Source: https://stackoverflow.com/questions/61171487/what-is-the-difference-between-namespaces-and-contexts-in-kubernetes
Verify the available contexts with:
kubectl config get-contexts
And then delete by name running:
kubectl config delete-context <context-name>
Create a namespace with the following command:
kubectl create namespace <name>

Namespaces

In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc.) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc.).

When to Use Multiple Namespaces
Namespaces are intended for use in environments with many users spread across multiple teams, or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. Start using namespaces when you need the features they provide.

Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces. Namespaces cannot be nested inside one another and each Kubernetes resource can only be in one namespace.
Namespaces are a way to divide cluster resources between multiple users (via resource quota).

It is not necessary to use multiple namespaces to separate slightly different resources, such as different versions of the same software: use labels to distinguish resources within the same namespace.
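
For example, a sketch of the name-uniqueness rule (the namespace and deployment names here are just placeholders):

kubectl create namespace dev
kubectl create namespace staging
kubectl create deployment web --image=nginx -n dev      # same Deployment name...
kubectl create deployment web --image=nginx -n staging  # ...is fine in a different namespace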

Source: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/


To configure kubectl for a cluster created on GCP, you only need to run the following commands to authenticate, set the project ID, and then get the credentials by cluster name and zone:
gcloud auth login
gcloud config set project YOUR_PROJECT_ID
gcloud container clusters get-credentials YOUR_CLUSTER_NAME --zone YOUR_CLUSTER_ZONE
You can run an LLM locally with the Ollama package.
Ollama supports a list of models available at ollama.com/library.

Pull a model
ollama pull llama3.2
Run the model
ollama run llama3.2
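You can also pass the prompt directly as an argument for a one-shot answer (assuming the model has already been pulled):
ollama run llama3.2 "Why is the sky blue?"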
Source: https://github.com/ollama/ollama