A headless application is a software application that separates the front-end (user interface) from the back-end (server-side logic). This separation allows for greater flexibility and scalability, as the front-end and back-end can be developed, maintained, and updated independently.

Key characteristics of headless applications:
  • Decoupled front-end and back-end: The front-end and back-end components are separate entities that communicate through APIs.
  • API-driven: The back-end provides APIs that the front-end can use to fetch data and perform actions.
  • Multiple front-ends: A headless application can support multiple front-ends, such as web, mobile, and desktop applications.
  • Flexibility: This architecture allows for rapid changes to the front-end without affecting the back-end, and vice versa.
  • Scalability: The front-end and back-end can be scaled independently to meet changing demands.

Benefits of headless applications:
  • Faster development: The decoupled architecture allows for parallel development of the front-end and back-end.
  • Improved flexibility: The ability to update the front-end without affecting the back-end enables rapid changes and experimentation.
  • Enhanced scalability: The ability to scale the front-end and back-end independently ensures optimal performance and resource utilization.
  • Reusability: The back-end can be reused for multiple front-ends, reducing development effort.
  • Improved user experience: Headless applications can provide a more consistent and responsive user experience across different platforms.

Example:
A headless e-commerce application could have a back-end that manages product information, orders, and inventory. The front-end could be a web application, a mobile app, or even a voice-activated assistant. The front-end would communicate with the back-end through APIs to fetch product data, place orders, and manage the shopping cart.
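
As a minimal sketch of that API-driven communication, the Python client below fetches product data from a hypothetical /api/products endpoint; the URL and the response fields (name, price) are assumptions for illustration:
import json
import urllib.request

# Any front-end (web, mobile, voice assistant) could consume this same API
API_URL = "http://localhost:8000/api/products"  # hypothetical back-end endpoint

with urllib.request.urlopen(API_URL) as response:
    products = json.loads(response.read())

for product in products:
    print(product["name"], product["price"])  # assumed response fields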

An API gateway acts as a single entry point for clients to access and interact with multiple backend services. It provides a unified interface, abstracting the complexity of the underlying systems and simplifying communication.

Key functions of an API gateway:
  • Unified interface: It presents a consistent API to clients, regardless of the backend services being accessed.
  • Authentication and authorization: It can handle authentication and authorization, ensuring that only authorized users can access specific resources.
  • Rate limiting: It can implement rate limiting to prevent abuse and protect backend services from overload.
  • Caching: It can cache frequently accessed data to improve performance and reduce load on backend services.
  • Transformation: It can transform data between different formats, ensuring compatibility between clients and backend services.
  • Security: It can provide security features like encryption, data validation, and protection against common attacks.

Benefits of using an API gateway:
  • Simplified development: It reduces the complexity of client-side development by providing a single endpoint.
  • Improved performance: It can improve performance by caching data and optimizing communication with backend services.
  • Enhanced security: It can strengthen security by implementing authentication, authorization, and rate limiting.
  • Scalability: It can handle increased traffic by distributing load across multiple backend services.
  • Flexibility: It can adapt to changes in backend services without affecting clients.

Example:
In a mobile application, an API gateway can act as a central point for clients to access different backend services, such as user authentication, product catalog, and order processing. The API gateway can handle authentication, rate limiting, and data transformation, providing a consistent and secure interface for the mobile app.
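
To make these functions concrete, here is a heavily reduced API gateway sketch in Python (standard library only); the routes, ports, token check, and rate limit are made-up values for illustration, and real gateways do far more:
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical routing table: path prefix -> backend service base URL
ROUTES = {
    "/auth": "http://localhost:9001",
    "/products": "http://localhost:9002",
    "/orders": "http://localhost:9003",
}

RATE_LIMIT = 100   # illustrative: max requests per client per minute
request_log = {}   # client IP -> timestamps of recent requests

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        client = self.client_address[0]

        # Rate limiting: reject clients that exceed the per-minute budget
        now = time.time()
        recent = [t for t in request_log.get(client, []) if now - t < 60]
        if len(recent) >= RATE_LIMIT:
            self.send_error(429, "Too Many Requests")
            return
        request_log[client] = recent + [now]

        # Authentication: require a bearer token (validation stubbed out here)
        if not self.headers.get("Authorization", "").startswith("Bearer "):
            self.send_error(401, "Unauthorized")
            return

        # Unified interface: forward the request to whichever backend owns the path
        for prefix, backend in ROUTES.items():
            if self.path.startswith(prefix):
                with urllib.request.urlopen(backend + self.path) as resp:
                    body = resp.read()
                self.send_response(resp.status)
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_error(404, "No route for path")

HTTPServer(("localhost", 8080), GatewayHandler).serve_forever()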

A load balancer can be a device or software that distributes incoming network traffic across multiple servers to improve performance, reliability, and scalability. It acts as a traffic manager, ensuring that workload is evenly distributed among available servers, preventing any single server from becoming overloaded.

Key functions of a load balancer:
  • Traffic distribution: It directs incoming traffic to the most appropriate server based on various factors, such as server load, health status, and application requirements.
  • Failover: If a server fails or becomes unresponsive, the load balancer can automatically redirect traffic to a working server, ensuring uninterrupted service.
  • Session persistence: It can maintain session affinity, ensuring that requests from a particular client are always routed to the same server, preserving the state of the session.
  • Performance optimization: Load balancers can improve performance by reducing latency, increasing throughput, and enhancing overall system responsiveness.
  • Security: Some load balancers can provide security features like SSL termination, DDoS protection, and access control.


Example:
In an e-commerce website, a load balancer can distribute incoming traffic from customers across multiple web servers, ensuring that the website remains responsive even during peak traffic periods. If one server fails, the load balancer can automatically redirect traffic to another server, preventing service interruptions.
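
The core ideas of traffic distribution and failover fit in a tiny round-robin sketch; the server addresses below are made up:
import itertools

# Hypothetical pool of web servers behind the balancer
SERVERS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

class RoundRobinBalancer:
    def __init__(self, servers):
        self.healthy = list(servers)
        self.cycle = itertools.cycle(self.healthy)

    def pick(self):
        # Traffic distribution: hand out servers in round-robin order
        return next(self.cycle)

    def mark_down(self, server):
        # Failover: remove an unresponsive server and rebuild the rotation
        self.healthy.remove(server)
        self.cycle = itertools.cycle(self.healthy)

balancer = RoundRobinBalancer(SERVERS)
for _ in range(4):
    print("routing request to", balancer.pick())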

FaaS (Function as a Service) is a type of serverless computing where developers can write and deploy individual functions without having to manage the underlying infrastructure. These functions are typically small, single-purpose pieces of code that are triggered by events.

Key characteristics of FaaS:
  • Function-level granularity: You can deploy and manage functions independently, without having to package them into larger applications.
  • Event-driven: FaaS functions are triggered by events, such as HTTP requests, API calls, or messages from other services.
  • Automatic scaling: The platform automatically adjusts the number of instances based on demand, ensuring optimal performance and cost-efficiency.
  • Pay-per-use: You only pay for the resources consumed by your functions, rather than paying for a fixed server capacity.

Example:
Imagine you want to create a web application that processes images. Instead of setting up and managing a server to handle the image processing, you could use a FaaS platform to deploy a function that triggers when an image is uploaded. The function would then process the image and store the result in a cloud storage service.
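
As a sketch of what such a function might look like, here is a handler shaped like an AWS Lambda Python handler reacting to a storage-upload event; the event structure and the processing step are simplified assumptions:
# Minimal FaaS-style handler sketch (AWS Lambda-like signature).
def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Hypothetical processing step: resize the image and store the result
        print(f"Processing image {key} from bucket {bucket}")
    return {"status": "done"}

# Simulate the platform invoking the function on an upload event:
fake_event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                                  "object": {"key": "cat.png"}}}]}
print(handler(fake_event, context=None))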

Serverless computing is a cloud computing model where developers can build, run, and manage applications without having to provision or manage servers. Instead of managing infrastructure, developers can focus on writing code and deploying it to a serverless platform.

Key characteristics of serverless computing:
  • Pay-per-use: You only pay for the resources consumed by your application, rather than paying for a fixed server capacity.
  • Automatic scaling: The platform automatically adjusts the number of instances based on demand, ensuring optimal performance and cost-efficiency.
  • Event-driven: Serverless functions are typically triggered by events, such as HTTP requests, API calls, or messages from other services.
  • Managed infrastructure: The platform handles all the underlying infrastructure, including servers, networking, and security.

Example:
Imagine you want to create a web application that processes images. Instead of setting up and managing servers to handle the image processing, you could use a serverless platform to deploy a function that triggers when an image is uploaded. The function would then process the image and store the result in a cloud storage service.
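
Since serverless platforms also commonly trigger functions from HTTP requests, here is a second minimal sketch, this time HTTP-shaped; the event fields (body, statusCode) are assumptions modeled on common serverless platforms:
import json

def handle_request(event, context):
    # The platform maps an incoming HTTP request to an event object
    name = json.loads(event.get("body") or "{}").get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"})}

# Simulate an invocation locally:
print(handle_request({"body": '{"name": "Ada"}'}, context=None))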

In short, O(1) means that it takes constant time, like 14 nanoseconds or three minutes, no matter the amount of data in the set:
def add_items(n):
    return n + n + n   # a fixed number of operations regardless of n: O(1)

print(add_items(10))
O(n) means it takes an amount of time linear in the size of the set, so a set twice the size will take twice the time. You probably don't want to put a million objects into one of these.
O(1), by contrast, is always the best possible scenario, and is drawn as a constant (flat) line when complexities are plotted.
O(n^2) is also known as quadratic time complexity. Here, the running time of the algorithm grows as a square function of the input size, which can cause very long run times for large inputs. The O in O(n^2) represents the worst-case complexity.

O(n^2) time complexity means that the running time of an algorithm grows quadratically with the size of the input. It often involves nested loops, where each element in the input is compared with every other element.

A code example is:
def print_items(n):
    for i in range(n):
        for j in range(n):
            print(i, j)   # runs n * n times: O(n^2)

print_items(10)

When you have an algorithm like the following one, where there are multiple parts and the raw notation would be O(n^2 + n), the rule is to keep only the dominant term, resulting in O(n^2):
def print_items(n):
    for i in range(n):
        for j in range(n):
            print(i, j)    # nested loops: n * n operations, O(n^2)

    for k in range(n):
        print(k)           # single loop: n operations, O(n)

print_items(10)
Big O notation focuses on the worst case, which is O(n) for the simple search. It's a guarantee that the simple search will never be slower than O(n) time. In code where the worst scenario is going through all the elements, the notation is O(n):
def print_items(n):
    for i in range(n):
        print(i)   # runs n times: O(n)

print_items(10)

Its time complexity increases proportionally to the size of the input.
Big O notation is a way of describing the speed or complexity of a given algorithm.

Simply, Big O notation tells you the number of operations an algorithm will perform. It takes its name from the “Big O” in front of the estimated number of operations.

What the Big O notation does not tell you is how fast the algorithm will be in seconds. There are too many factors that influence how long it takes for an algorithm to run. Instead, you will use the Big O notation to compare different algorithms by the number of operations they do.
The time complexity of an algorithm can be defined as a measure of the amount of time taken to run the algorithm as a function of the size of the input. In simple words, it describes how the running time grows as a function of the input data.

Time complexity is categorized into three types (illustrated by the linear search sketch after this list):
  • Best-case time complexity: the minimum amount of time required by an algorithm to run, given the most favorable input.
  • Average-case complexity: the average amount of time required by an algorithm to run, taking all possible inputs into consideration.
  • Worst-case complexity: the maximum amount of time required by an algorithm to run, considering the worst possible input.
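
As a quick illustration of how these three cases can differ, here is a linear search sketch in Python (the sample list is made up):
def linear_search(items, target):
    for i, value in enumerate(items):
        if value == target:
            return i   # best case: target is the first element, O(1)
    return -1          # worst case: target is absent, O(n)

print(linear_search([4, 8, 15, 16, 23, 42], 15))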
Install Redis from terminal using brew:
brew install redis

Run service:
brew services start redis

Verify service is working:
redis-cli ping

If it is configured properly, you will get the response PONG.
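
Once the server answers PONG you can talk to it from code as well; here is a minimal sketch using the redis-py client (an assumption: it is installed via pip install redis):
import redis

# Connect to the local Redis server started above (default port 6379)
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("greeting", "hello")   # store a value
print(r.get("greeting"))     # -> "hello"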
import { useState, useEffect } from "react";

// Custom Hook
const useProducts = () => {
  const [products, setProducts] = useState([]);

  useEffect(() => {
    fetch("/api/products")
      .then((res) => res.json())
      .then((data) => setProducts(data));
  }, []);

  return products;
};

// Container Component
function ProductContainer() {
  const products = useProducts();

  return <ProductList products={products} />;
}

// Presentational Component
function ProductList({ products }) {
  return (
    <ul>
      {products.map((product) => (
        <li key={product.id}>{product.name}</li>
      ))}
    </ul>
  );
}
1. Generate a New SSH Key
Create a new SSH key using the ed25519 algorithm to securely identify yourself:
ssh-keygen -t ed25519 -C "brisamedina05@gmail.com"


2. Start the SSH Agent
Activate the SSH agent to manage your SSH keys:
eval "$(ssh-agent -s)"


3. Add Your SSH Key to the Agent
Add the newly generated SSH key to the SSH agent for authentication:
ssh-add ~/.ssh/id_ed25519


4. Copy the Public SSH Key
Retrieve your public SSH key, which you'll need to add to your GitHub account:
cat ~/.ssh/id_ed25519.pub


5. Set Up the Project Directory
Create a directory for your project and navigate into it to organize your files:
mkdir project-directory
cd project-directory


6. Clone the Repository
Clone the Git repository into your project directory:
git clone git@github.com:your-organization/your-repository.git


7. Install Project Dependencies
Navigate to the cloned repository's folder and install all necessary dependencies:
cd your-repository
npm install


8. Run the Project
Start your project in development mode using the appropriate command:
npm run dev


9. Install Node.js (If Needed)
If Node.js isn't already installed, you can install it using:
sudo apt install nodejs


10. Install Node Version Manager (NVM)
If you prefer to manage Node.js versions with NVM, install it using:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash


11. Manage Dependencies
If necessary, clean up old dependencies and reinstall them to ensure everything is up to date:
rm -rf node_modules
npm install


12. Run the Project Again
Finally, run your project again to ensure everything is working:
npm run dev


These steps guide you through setting up your project environment after configuring SSH access.

MongoDB's aggregation framework is a pipeline of stages that runs on your data to transform it into the format you need. It provides a set of stages that you can chain together to create pipelines that process your data in a series of steps.

Here are some of the common aggregation stages:
  • $match: Filters documents based on specified criteria.
  • $project: Selects or excludes fields from documents.
  • $group: Groups documents by a specified field and calculates aggregate values.
  • $sort: Sorts documents by a specified field.
  • $limit: Limits the number of documents returned.
  • $skip: Skips a specified number of documents.
  • $unwind: Unwinds an array field, creating a new document for each element in the array.
  • $lookup: Joins two collections based on a specified field.
  • $redact: Redacts fields in documents based on specified criteria.
  • $bucket: Buckets documents into groups based on a specified field.
  • $sample: Samples a random subset of documents.
  • $geoNear: Finds documents near a specified point.


Example: Calculating Average Order Value
Scenario: Let's assume we have a collection named orders with documents representing individual orders. Each document has fields like order_id, customer_id, product_name, and price. We want to calculate the average order value for each customer.
db.orders.aggregate([
  {
    $group: {
      _id: "$customer_id",
      total_spent: { $sum: "$price" },
      total_orders: { $count: {} } // requires MongoDB 5.0+; use { $sum: 1 } on older versions
    }
  },
  {
    $project: {
      average_order_value: { $divide: ["$total_spent", "$total_orders"] }
    }
  }
])

Explanation:
  1. $group:
    • Groups the documents by customer_id.
    • Calculates the total spent for each customer using $sum.
    • Counts the total number of orders for each customer using $count.
  2. $project:
    • Calculates the average order value by dividing total_spent by total_orders.

Result:
The aggregation pipeline will return a list of documents, each containing the customer_id (as _id) and the calculated average_order_value.
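
The same pipeline can be run from application code as well; here is a sketch using the pymongo driver (assuming pip install pymongo, and a hypothetical shop database on a local server):
from pymongo import MongoClient

# Hypothetical connection string and database/collection names
client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

pipeline = [
    {"$group": {
        "_id": "$customer_id",
        "total_spent": {"$sum": "$price"},
        "total_orders": {"$count": {}},
    }},
    {"$project": {
        "average_order_value": {"$divide": ["$total_spent", "$total_orders"]},
    }},
]

for doc in orders.aggregate(pipeline):
    print(doc)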

When you open a new `mongosh` connection from your terminal, you will find the following commands really useful:

First of all, connect to a server:
mongosh "mongodb+srv://{MONGO_COLLECTION_USER}:{MONGO_COLLECTION_PASSWORD}@{MONGO_APP_NAME}.yng1j.mongodb.net/?appName={MONGO_APP_NAME}"

Then the basic commands to start interacting (most of them are fairly self-explanatory):
// show databases
show dbs
// use database
use <db_name>
// show collections
show collections
// finally interact with them, for example
db.users.findOne()