
Namespaces

In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc.) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc.).

When to Use Multiple Namespaces
Namespaces are intended for use in environments with many users spread across multiple teams, or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. Start using namespaces when you need the features they provide.

Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces. Namespaces cannot be nested inside one another and each Kubernetes resource can only be in one namespace.
Namespaces are a way to divide cluster resources between multiple users (via resource quota).

It is not necessary to use multiple namespaces to separate slightly different resources, such as different versions of the same software: use labels to distinguish resources within the same namespace.

Source: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/


To configure access to a cluster created on GCP, you only need to authenticate, set the project ID, and then get the credentials by cluster name and zone:
gcloud auth login
gcloud config set project YOUR_PROJECT_ID
gcloud container clusters get-credentials YOUR_CLUSTER_NAME --zone YOUR_CLUSTER_ZONE
You can run an LLM locally with the Ollama package.
Ollama supports the list of models available on ollama.com/library

Pull a model
ollama pull llama3.2
Run the model
ollama run llama3.2
Source: https://github.com/ollama/ollama
  • kubectl get deployments: List all the running deployments.
  • kubectl describe deployment <deploy name>: Print out details about a specific deployment.
  • kubectl apply -f <config file name>: Create a deployment out of a config file.
  • kubectl delete deployment <deploy name>: Delete a deployment.
  • kubectl rollout restart deployment <deploy name>: Restart all pods created by the deployment.
  • Cluster: A collection of nodes plus a master (control plane) to manage them.
  • Node: A virtual machine that runs our containers.
  • Pod: More or less a running container. Technically, a pod can run multiple containers.
  • Deployment: Monitors a set of pods, makes sure they are running, and restarts them if they crash.
  • Service: Provides an easy-to-remember URL to access a running container.
  • Pending: The pod has been accepted by the Kubernetes system but hasn't started running yet. This could mean that Kubernetes is still scheduling the pod to a node, or it’s waiting for resources (like storage or network) to become available.
  • Running: The pod has been successfully scheduled to a node, and at least one of its containers is running.
  • Succeeded: All containers in the pod have completed successfully, and the pod’s restartPolicy is set to Never or OnFailure.
  • Failed: All containers in the pod have terminated, and at least one container exited with a non-zero status, indicating an error.
  • CrashLoopBackOff: A container in the pod is repeatedly failing to start. Kubernetes is trying to restart it, but it continues to fail, resulting in a “crash loop.”

You can multiply inside SUM and get decimal precision by casting to numeric, as in the following weighted-average example:

Example:
select
  p.product_id,
  ROUND(SUM(us.units * p.price)::numeric / SUM(us.units), 2) AS average_price
from Prices p
left join UnitsSold us on us.product_id = p.product_id and us.purchase_date between p.start_date and p.end_date
group by p.product_id
order by p.product_id;
A Cross Join in PostgreSQL is a join without any matching condition. It simply combines every row from the first table with every row from the second table, creating a result set that is the Cartesian product of the two tables: the number of rows is the product of the row counts of both tables.

Example:

Let's say you have two tables:
  • Customers with columns customer_id and name
  • Products with columns product_id and name

A cross join between these two tables would produce a result set where every customer is paired with every product. This could be useful if you want to generate a list of all possible combinations of customers and products, perhaps for a recommendation system or a marketing campaign.

SELECT customers.customer_id, customers.name AS customer_name, products.product_id, products.name AS product_name
FROM customers
CROSS JOIN products;

This query would create a table with a row for every combination of customer and product, showing each customer's ID and name along with each product's ID and name.
If you need to create an initial array with a specific length and default values, you can use:
const arr = new Array(3).fill(0); // [0, 0, 0]
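One pitfall worth noting: fill() stores the same value in every slot, so filling with an object shares a single reference across the whole array. A minimal sketch of the difference (variable names are illustrative):

```javascript
// fill() reuses the same value for every slot — fine for primitives:
const nums = new Array(3).fill(0);   // [0, 0, 0]

// ...but with an object, all three slots point to ONE shared object:
const rows = new Array(3).fill({});
rows[0].x = 1;                        // mutates every element

// To get independent objects per slot, build each one with Array.from:
const safeRows = Array.from({ length: 3 }, () => ({}));
safeRows[0].x = 1;                    // only the first element changes
```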
Push your Docker image using its tag.
docker push NAME[:TAG]


Tagging an image in Docker is important for the following reasons:
  • Version control: Tags help identify different versions of an image.
  • Traceability: Tags allow you to track changes and updates to an image.
  • Automation: Tags are used in automated processes, such as deploying new versions.
  • Troubleshooting: Tags help pinpoint specific image versions when debugging.
  • Managing large-scale projects: Tags organize images in complex projects.

docker tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]

Example (repository names must be lowercase):
docker tag 518a41981a6a myregistry.com/myimage


A GET request expresses the user's intent to not have any side effects. Naturally, there will always be side effects on the server like log entries for example, but the important distinction here is whether the user had asked for a side effect or not.

Another reason to stay away from GET surfaces if you respond with the recommended 201 Created for a request where the resource is being created on the server: the next identical request would get a different response with status 200 OK, so the response cannot be cached as is usually the case with GET requests.

Instead, I would suggest using PUT, which is defined as:

The PUT method requests that the enclosed entity be stored under the supplied Request-URI. If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server. If the Request-URI does not point to an existing resource, and that URI is capable of being defined as a new resource by the requesting user agent, the origin server can create the resource with that URI.

If a new resource is created, the origin server MUST inform the user agent via the 201 (Created) response. If an existing resource is modified, either the 200 (OK) or 204 (No Content) response codes SHOULD be sent to indicate successful completion of the request. If the resource could not be created or modified with the Request-URI, an appropriate error response SHOULD be given that reflects the nature of the problem.
In the above form, it should be considered a "create or update" action.

To implement pure "get or create" you could respond with 409 Conflict in case an update would result in a different state.
However, especially if you are looking for idempotence, you might find that "create or update" semantics could actually be a better fit than "get or create". This depends heavily on the use case though.

Source: https://stackoverflow.com/questions/21900868/best-http-method-for-get-or-create
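The "create or update" semantics above can be sketched with a hypothetical in-memory store (the store, URIs, and handler name are illustrative, not tied to any framework):

```javascript
// Minimal sketch of PUT "create or update" semantics.
// The resource store and putResource() helper are hypothetical.
const store = new Map();

function putResource(uri, entity) {
  const existed = store.has(uri);
  store.set(uri, entity);
  // 201 when the PUT created the resource, 200 when it modified it.
  // Repeating the same PUT leaves the store in the same state: idempotent.
  return { status: existed ? 200 : 201, body: entity };
}

console.log(putResource("/users/42", { name: "Ada" }).status); // 201 (created)
console.log(putResource("/users/42", { name: "Ada" }).status); // 200 (updated, same state)
```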
You can intercept requests or responses before they are handled by then or catch.
// Add a request interceptor
axios.interceptors.request.use(function (config) {
    // Do something before request is sent
    return config;
  }, function (error) {
    // Do something with request error
    return Promise.reject(error);
  });

// Add a response interceptor
axios.interceptors.response.use(function (response) {
    // Any status code that lies within the range of 2xx causes this function to trigger
    // Do something with response data
    return response;
  }, function (error) {
    // Any status code that falls outside the range of 2xx causes this function to trigger
    // Do something with response error
    return Promise.reject(error);
  });
In software and tech, idempotency typically refers to the idea that you can perform an operation multiple times without triggering its side effects more than once.

Here are the main facts you need to know about idempotency:

  • Idempotency is a property of operations or API requests that ensures repeating the operation multiple times produces the same result as executing it once.
  • Safe methods are idempotent, but not all idempotent methods are safe.

  • HTTP methods like GET, HEAD, PUT, DELETE, OPTIONS, and TRACE are idempotent, while POST and PATCH are generally non-idempotent.


Source: https://blog.dreamfactory.com/what-is-idempotency
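A tiny sketch of the distinction in plain JavaScript (the operation names are illustrative): a delete-by-key is idempotent because repeating it leaves the same state, while an append-style operation is not:

```javascript
const items = new Map([["a", 1], ["b", 2]]);

// Idempotent: deleting the same key again changes nothing further.
function deleteItem(key) { items.delete(key); }

// Not idempotent: each repeat adds another entry (POST-like behavior).
const log = [];
function appendEntry(entry) { log.push(entry); }

deleteItem("a");
deleteItem("a");       // second call leaves the same state
appendEntry("hello");
appendEntry("hello");  // second call adds a duplicate
```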
You can use the debugger statement directly in your Node code to create breakpoints. When the Node runtime hits the debugger statement, it will pause execution if a debugger is attached.
function exampleFunction() {
    const value = 42;
    debugger; // Execution will pause here if a debugger is attached
    console.log(value);
}

exampleFunction();
Start your application in debug mode using the --inspect or --inspect-brk flag when running your app.
node --inspect lib/app.js
If all goes as expected, you will see the WebSocket endpoint the debugger is listening on, for example:
Debugger listening on ws://127.0.0.1:9229/65b96f6d-6202-49db-bbe8-63b706a580a2
Open chrome://inspect in Chrome and click "inspect" under your target to open the Node DevTools.
More info: https://nodejs.org/en/learn/getting-started/debugging