FaaS (Function as a Service) is a type of serverless computing where developers can write and deploy individual functions without having to manage the underlying infrastructure. These functions are typically small, single-purpose pieces of code that are triggered by events.

Key characteristics of FaaS:
  • Function-level granularity: You can deploy and manage functions independently, without having to package them into larger applications.
  • Event-driven: FaaS functions are triggered by events, such as HTTP requests, API calls, or messages from other services.
  • Automatic scaling: The platform automatically adjusts the number of instances based on demand, ensuring optimal performance and cost-efficiency.
  • Pay-per-use: You only pay for the resources consumed by your functions, rather than paying for a fixed server capacity.

Example:
Imagine you want to create a web application that processes images. Instead of setting up and managing a server to handle the image processing, you could use a FaaS platform to deploy a function that triggers when an image is uploaded. The function would then process the image and store the result in a cloud storage service.
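As a hedged sketch (not tied to any one provider's exact setup), such a function could look like an AWS Lambda-style Node.js handler reacting to an S3 upload event; the sharp library, the thumbnails/ prefix, and the resize width are illustrative assumptions:
// Sketch of a FaaS image-processing function (AWS Lambda-style handler).
// Assumes the 'aws-sdk' (v2) and 'sharp' packages; names and sizes are illustrative.
const AWS = require("aws-sdk");
const sharp = require("sharp");
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // The upload trigger delivers the bucket name and object key in the event payload
  const bucket = event.Records[0].s3.bucket.name;
  const key = event.Records[0].s3.object.key;

  // Download the original image, resize it, and store the thumbnail alongside it
  const original = await s3.getObject({ Bucket: bucket, Key: key }).promise();
  const thumbnail = await sharp(original.Body).resize(200).toBuffer();
  await s3.putObject({ Bucket: bucket, Key: `thumbnails/${key}`, Body: thumbnail }).promise();

  return { statusCode: 200 };
};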

Serverless computing is a cloud computing model where developers can build, run, and manage applications without having to provision or manage servers. Instead of managing infrastructure, developers can focus on writing code and deploying it to a serverless platform.

Key characteristics of serverless computing:
  • Pay-per-use: You only pay for the resources consumed by your application, rather than paying for a fixed server capacity.
  • Automatic scaling: The platform automatically adjusts the number of instances based on demand, ensuring optimal performance and cost-efficiency.
  • Event-driven: Serverless functions are typically triggered by events, such as HTTP requests, API calls, or messages from other services.
  • Managed infrastructure: The platform handles all the underlying infrastructure, including servers, networking, and security.

Example:
Imagine you want to create a web application that processes images. Instead of setting up and managing servers to handle the image processing, you could use a serverless platform to deploy a function that triggers when an image is uploaded. The function would then process the image and store the result in a cloud storage service.

In short, O(1) means an operation takes constant time, like 14 nanoseconds or three minutes, no matter the amount of data in the set. This is always the best possible scenario and shows up as a flat, constant line on a complexity chart.
def add_items(n):
    return n + n + n

print(add_items(10))

O(n) means it takes an amount of time linear with the size of the set, so a set twice the size will take twice the time. You probably don't want to put a million objects into one of these.
O(n^2) is also known as quadratic time complexity: the running time of the algorithm grows as a square function of the input size, which can lead to very long run times for large inputs. Here, the O in O(n^2) denotes its worst-case complexity.

The O(n^2) time complexity means that the running time of an algorithm grows quadratically with the size of the input. It often involves nested loops, where each element in the input is compared with every other element.

A code example is:
def print_items(n):
    for i in range(n):
        for j in range(n):
            print(i,j) 

print_items(10)

When you have an algorithm like the following one, where there are several parts and the full notation would be O(n^2 + n), the rule is to keep only the dominant term, resulting in O(n^2):
def print_items(n):
    for i in range(n):
        for j in range(n):
            print(i,j)
    
    for k in range(n):
        print(k)

print_items(10)
Big O notation focuses on the worst case, which is O(n) for a simple search. It's a guarantee that the simple search will never be slower than O(n) time. In code where the worst-case scenario is going through all the elements, the notation is O(n):
def print_items(n):
    for i in range(n):
        print(i)

print_items(10)

The time complexity increases proportionally to the size of the given input.
Big O notation is a way of describing the speed or complexity of a given algorithm.

Simply, Big O notation tells you the number of operations an algorithm will perform. It takes its name from the “Big O” in front of the estimated number of operations.

What the Big O notation does not tell you is how fast the algorithm will be in seconds. There are too many factors that influence how long it takes for an algorithm to run. Instead, you will use the Big O notation to compare different algorithms by the number of operations they do.
The time complexity of an algorithm can be defined as a measure of the amount of time it takes to run as a function of the size of the input. In simple words, it describes how the running time grows as the input data grows.

Time complexity is categorized into three types, illustrated with a short linear search sketch after this list:
  • Best-Case Time Complexity: the minimum amount of time an algorithm needs to run, given the most favorable input.
  • Average-Case Time Complexity: the average amount of time an algorithm needs to run, taken over all possible inputs.
  • Worst-Case Time Complexity: the maximum amount of time an algorithm needs to run, given the least favorable input.
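As a hedged illustration of the three cases, a plain linear search in JavaScript works well: the best case finds the target at the first position, the worst case scans the whole array, and the average case falls somewhere in between.
// Linear search: O(1) best case, O(n) average and worst case
function linearSearch(items, target) {
  for (let i = 0; i < items.length; i++) {
    if (items[i] === target) {
      return i; // best case: target sits at index 0, one comparison
    }
  }
  return -1; // worst case: target is missing, n comparisons
}

console.log(linearSearch([4, 8, 15, 16, 23, 42], 4));  // best case
console.log(linearSearch([4, 8, 15, 16, 23, 42], 99)); // worst case
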
Install Redis from terminal using brew:
brew install redis

Run service:
brew services start redis

Verify service is working:
redis-cli ping

If it is configured properly, you will get the response PONG.
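To go one step further, you can try a quick write and read through redis-cli (the key name is just an example):
redis-cli set greeting "hello"
redis-cli get greeting
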
// Custom Hook
const useProducts = () => {
  const [products, setProducts] = useState([]);

  useEffect(() => {
    fetch("/api/products")
      .then((res) => res.json())
      .then((data) => setProducts(data));
  }, []);

  return products;
};

// Container Component
function ProductContainer() {
  const products = useProducts();

  return <ProductList products={products} />;
}

// Presentational Component
function ProductList({ products }) {
  return (
    <ul>
      {products.map((product) => (
        <li key={product.id}>{product.name}</li>
      ))}
    </ul>
  );
}
1. Generate a New SSH Key
Create a new SSH key using the ed25519 algorithm to securely identify yourself:
ssh-keygen -t ed25519 -C "brisamedina05@gmail.com"


2. Start the SSH Agent
Activate the SSH agent to manage your SSH keys:
eval "$(ssh-agent -s)"


3. Add Your SSH Key to the Agent
Add the newly generated SSH key to the SSH agent for authentication:
ssh-add ~/.ssh/id_ed25519


4. Copy the Public SSH Key
Retrieve your public SSH key, which you'll need to add to your GitHub account:
cat ~/.ssh/id_ed25519.pub


5. Set Up the Project Directory
Create a directory for your project and navigate into it to organize your files:
mkdir project-directory
cd project-directory


6. Clone the Repository
Clone the Git repository into your project directory:
git clone git@github.com:your-organization/your-repository.git


7. Install Project Dependencies
Navigate to the cloned repository's folder and install all necessary dependencies:
cd your-repository
npm install


8. Run the Project
Start your project in development mode using the appropriate command:
npm run dev


9. Install Node.js (If Needed)
If Node.js isn't already installed, you can install it using:
sudo apt install nodejs


10. Install Node Version Manager (NVM)
If you prefer to manage Node.js versions with NVM, install it using:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash


11. Manage Dependencies
If necessary, clean up old dependencies and reinstall them to ensure everything is up to date:
rm -rf node_modules
npm install


12. Run the Project Again
Finally, run your project again to ensure everything is working:
npm run dev


These steps guide you through setting up your project environment after configuring SSH access.

The aggregation framework is a pipeline of stages that runs on your data to transform the output into the format you need. It provides a set of stages that you can chain together to build pipelines that process your data in a series of steps.

Here are some of the common aggregation stages:
  • $match: Filters documents based on specified criteria.
  • $project: Selects or excludes fields from documents.
  • $group: Groups documents by a specified field and calculates aggregate values.
  • $sort: Sorts documents by a specified field.
  • $limit: Limits the number of documents returned.
  • $skip: Skips a specified number of documents.
  • $unwind: Unwinds an array field, creating a new document for each element in the array.
  • $lookup: Joins two collections based on a specified field.
  • $redact: Redacts fields in documents based on specified criteria.
  • $bucket: Buckets documents into groups based on a specified field.
  • $sample: Samples a random subset of documents.
  • $geoNear: Finds documents near a specified point.


Example: Calculating Average Order Value
Scenario: Let's assume we have a collection named orders with documents representing individual orders. Each document has fields like order_id, customer_id, product_name, and price. We want to calculate the average order value for each customer.
db.orders.aggregate([
  {
    $group: {
      _id: "$customer_id",
      total_spent: { $sum: "$price" },
      total_orders: { $count: {} }
    }
  },
  {
    $project: {
      average_order_value: { $divide: ["$total_spent", "$total_orders"] }
    }
  }
])

Explanation:
  1. $group:
    • Groups the documents by customer_id.
    • Calculates the total spent for each customer using $sum.
    • Counts the total number of orders for each customer using $count.
  2. $project:
    • Calculates the average order value by dividing total_spent by total_orders.

Result:
The aggregation pipeline will return a list of documents, each containing the customer_id (as _id) and the calculated average_order_value.
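The other stages listed above chain together in the same way. As a hedged sketch against the same orders collection, the pipeline below filters expensive orders, sorts them by price, and keeps the top five (the 100 threshold is just an illustrative value):
db.orders.aggregate([
  { $match: { price: { $gt: 100 } } }, // keep only orders above the threshold
  { $sort: { price: -1 } },            // most expensive first
  { $limit: 5 }                        // return at most five documents
])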

When you open a new connection with `mongosh` from your terminal, you will find the following commands really useful:

First of all, connect to a server:
mongosh "mongodb+srv://{MONGO_COLLECTION_USER}:{MONGO_COLLECTION_PASSWORD}@{MONGO_APP_NAME}.yng1j.mongodb.net/?appName={MONGO_APP_NAME}"

Then the basic commands to start interacting (most of them are fairly self-explanatory):
// show databases
show dbs
// use database
use <db_name>
// show collections
show collections
// finally interact with them, for example
db.users.findOne()
Stored Procedures are precompiled code blocks that reside within a database. They provide a way to encapsulate and reuse frequently executed SQL statements, improving performance, maintainability, and security.

Benefits of Stored Procedures:
  • Performance: Precompiled code executes faster than executing statements directly.
  • Modularity: Encapsulate complex logic, making code more organized and reusable.
  • Security: Centralize security rules and permissions.
  • Data validation: Enforce data integrity and consistency.

Example:
CREATE PROCEDURE GetCustomers
AS
BEGIN
    SELECT CustomerID, CustomerName, City
    FROM Customers;
END;

-- How to use?

EXEC GetCustomers;

Example using params:
CREATE PROCEDURE GetCustomersByCity
    @City nvarchar(50)
AS
BEGIN
    SELECT CustomerID, CustomerName
    FROM Customers
    WHERE City = @City;
END;

-- How to use?

EXEC GetCustomersByCity @City = 'London';
Time to Live (TTL) is a feature in MongoDB that allows you to automatically expire documents after a specified amount of time. This is useful for scenarios where you want to keep data for a limited duration, such as temporary data, session data, or cache entries.

How TTL works:
  1. Index creation: To enable TTL for a collection, you create an index on a field that represents the expiration time. This field must be of type Date.
  2. Expiration time setting: When creating the index, you specify the TTL value in seconds.
  3. Document expiration: MongoDB periodically scans the collection and deletes documents whose expiration time has passed.

Example:
db.sessions.createIndex({ expiresAt: 1 }, { expireAfterSeconds: 3600 });
This creates an index on the expiresAt field and sets the TTL to 1 hour (3600 seconds). Any documents in the sessions collection with an expiresAt value that is older than 1 hour will be automatically deleted.
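To round out the example, documents only need to carry that Date field; a minimal hedged sketch with illustrative values:
// Each session document carries the Date field that the TTL index watches
db.sessions.insertOne({
  userId: "user123",
  token: "abc123",
  expiresAt: new Date()
});
// With expireAfterSeconds: 3600, this document is removed roughly one hour after expiresAt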

Use cases for TTL:
  • Session management: Store session data with a TTL to automatically expire inactive sessions.
  • Temporary data: Keep temporary data for a limited time, such as cached results or temporary files.
  • Data retention policies: Implement data retention policies by setting appropriate TTL values for different types of data.
The explain("executionStats") command in MongoDB provides detailed information about the execution plan and performance metrics of a query.
When used with the find() method, it returns a document containing statistics about how MongoDB executed the query.

Example:
db.products.explain("executionStats").find({ price: { $gt: 10 } });
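The most useful numbers usually sit under executionStats, such as nReturned, totalKeysExamined, totalDocsExamined, and executionTimeMillis. A hedged sketch of pulling them out in mongosh:
// Run the explain and print a few of the headline metrics
const stats = db.products.find({ price: { $gt: 10 } }).explain("executionStats").executionStats;
printjson({
  nReturned: stats.nReturned,                     // documents returned by the query
  totalKeysExamined: stats.totalKeysExamined,     // index keys scanned
  totalDocsExamined: stats.totalDocsExamined,     // documents scanned
  executionTimeMillis: stats.executionTimeMillis  // total execution time in milliseconds
});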