Install Redis from the terminal using Homebrew:
brew install redis

Start the service:
brew services start redis

Verify the service is running:
redis-cli ping

If it is configured properly, you will get the response PONG.

// Custom Hook: fetches the product list once on mount
import { useState, useEffect } from "react";

const useProducts = () => {
  const [products, setProducts] = useState([]);

  useEffect(() => {
    fetch("/api/products")
      .then((res) => res.json())
      .then((data) => setProducts(data))
      .catch((err) => console.error("Failed to load products:", err));
  }, []);

  return products;
};

// Container Component
function ProductContainer() {
  const products = useProducts();

  return <ProductList products={products} />;
}

// Presentational Component
function ProductList({ products }) {
  return (
    <ul>
      {products.map((product) => (
        <li key={product.id}>{product.name}</li>
      ))}
    </ul>
  );
}

1. Generate a New SSH Key
Create a new SSH key using the ed25519 algorithm to securely identify yourself:
ssh-keygen -t ed25519 -C "brisamedina05@gmail.com"


2. Start the SSH Agent
Activate the SSH agent to manage your SSH keys:
eval "$(ssh-agent -s)"


3. Add Your SSH Key to the Agent
Add the newly generated SSH key to the SSH agent for authentication:
ssh-add ~/.ssh/id_ed25519


4. Copy the Public SSH Key
Retrieve your public SSH key, which you'll need to add to your GitHub account:
cat ~/.ssh/id_ed25519.pub


5. Set Up the Project Directory
Create a directory for your project and navigate into it to organize your files:
mkdir project-directory
cd project-directory


6. Clone the Repository
Clone the Git repository into your project directory:
git clone git@github.com:your-organization/your-repository.git


7. Install Project Dependencies
Navigate to the cloned repository's folder and install all necessary dependencies:
cd your-repository
npm install


8. Run the Project
Start your project in development mode using the appropriate command:
npm run dev


9. Install Node.js (If Needed)
If Node.js isn't already installed, you can install it using:
sudo apt install nodejs


10. Install Node Version Manager (NVM)
If you prefer to manage Node.js versions with NVM, install it using:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash


11. Manage Dependencies
If necessary, clean up old dependencies and reinstall them to ensure everything is up to date:
rm -rf node_modules
npm install


12. Run the Project Again
Finally, run your project again to ensure everything is working:
npm run dev


These steps guide you through setting up your project environment after configuring SSH access. You can verify that GitHub accepts your key at any time with `ssh -T git@github.com`.

The aggregation framework is a pipeline that runs on your data to transform it into the format you need. It provides a set of stages that you can chain together to process documents in a series of steps.

Here are some of the common aggregation stages:
  • $match: Filters documents based on specified criteria.
  • $project: Selects or excludes fields from documents.
  • $group: Groups documents by a specified field and calculates aggregate values.
  • $sort: Sorts documents by a specified field.
  • $limit: Limits the number of documents returned.
  • $skip: Skips a specified number of documents.
  • $unwind: Unwinds an array field, creating a new document for each element in the array.
  • $lookup: Joins two collections based on a specified field.
  • $redact: Redacts fields in documents based on specified criteria.
  • $bucket: Buckets documents into groups based on a specified field.
  • $sample: Samples a random subset of documents.
  • $geoNear: Finds documents near a specified point.


Example: Calculating Average Order Value
Scenario: Let's assume we have a collection named orders with documents representing individual orders. Each document has fields like order_id, customer_id, product_name, and price. We want to calculate the average order value for each customer.
db.orders.aggregate([
  {
    $group: {
      _id: "$customer_id",
      total_spent: { $sum: "$price" },
      total_orders: { $count: {} }
    }
  },
  {
    $project: {
      average_order_value: { $divide: ["$total_spent", "$total_orders"] }
    }
  }
])

Explanation:
  1. $group:
    • Groups the documents by customer_id.
    • Calculates the total spent for each customer using $sum.
    • Counts the total number of orders for each customer using the $count accumulator (equivalent to { $sum: 1 }; available as a $group accumulator since MongoDB 5.0).
  2. $project:
    • Calculates the average order value by dividing total_spent by total_orders.

Result:
The aggregation pipeline will return a list of documents, each containing the customer_id and the calculated average_order_value.
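The arithmetic the pipeline performs can be sketched in plain JavaScript (the sample orders below are invented for illustration):

```javascript
// Hypothetical sample data mirroring the orders collection
const orders = [
  { customer_id: "c1", price: 10 },
  { customer_id: "c1", price: 30 },
  { customer_id: "c2", price: 5 },
];

// Stage 1 ($group): accumulate total_spent and total_orders per customer
const groups = {};
for (const order of orders) {
  const g = (groups[order.customer_id] ??= { total_spent: 0, total_orders: 0 });
  g.total_spent += order.price;
  g.total_orders += 1;
}

// Stage 2 ($project): divide total_spent by total_orders
const result = Object.entries(groups).map(([customerId, g]) => ({
  _id: customerId,
  average_order_value: g.total_spent / g.total_orders,
}));
// c1 → (10 + 30) / 2 = 20, c2 → 5 / 1 = 5
```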

When you open a new `mongosh` connection from your terminal, you will find the following commands really useful:

First of all, connect to a server:
mongosh "mongodb+srv://{MONGO_COLLECTION_USER}:{MONGO_COLLECTION_PASSWORD}@{MONGO_APP_NAME}.yng1j.mongodb.net/?appName={MONGO_APP_NAME}"

Then the basic commands to start interacting (most of them are fairly self-explanatory):
// show databases
show dbs
// use database
use <db_name>
// show collections
show collections
// finally interact with them, for example
db.users.findOne()

Stored Procedures are precompiled code blocks that reside within a database. They provide a way to encapsulate and reuse frequently executed SQL statements, improving performance, maintainability, and security.

Benefits of Stored Procedures:
  • Performance: Precompiled code executes faster than executing statements directly.
  • Modularity: Encapsulate complex logic, making code more organized and reusable.
  • Security: Centralize security rules and permissions.
  • Data validation: Enforce data integrity and consistency.

Example:
CREATE PROCEDURE GetCustomers
AS
BEGIN
    SELECT CustomerID, CustomerName, City
    FROM Customers;
END;

How to use it:

EXEC GetCustomers;

Example using parameters:
CREATE PROCEDURE GetCustomersByCity
    @City nvarchar(50)
AS
BEGIN
    SELECT CustomerID, CustomerName
    FROM Customers
    WHERE City = @City;
END;

How to use it:

EXEC GetCustomersByCity @City = 'London';

Time to Live (TTL) is a feature in MongoDB that allows you to automatically expire documents after a specified amount of time. This is useful for scenarios where you want to keep data for a limited duration, such as temporary data, session data, or cache entries.

How TTL works:
  1. Index creation: To enable TTL for a collection, you create an index on a field that represents the expiration time. This field must be of type Date.
  2. Expiration time setting: When creating the index, you specify the TTL value in seconds via the expireAfterSeconds option.
  3. Document expiration: A background task runs roughly every 60 seconds and deletes documents whose expiration time has passed, so removal is not instantaneous.

Example:
db.sessions.createIndex({ expiresAt: 1 }, { expireAfterSeconds: 3600 });
This creates an index on the expiresAt field and sets the TTL to 1 hour (3600 seconds). Any documents in the sessions collection with an expiresAt value that is older than 1 hour will be automatically deleted.
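The expiration rule can be sketched in plain JavaScript (the isExpired helper and the sample session document are hypothetical, for illustration only):

```javascript
// TTL rule: a document becomes eligible for deletion once
// expiresAt + expireAfterSeconds has passed. MongoDB's background
// monitor then removes it on one of its periodic passes.
const expireAfterSeconds = 3600;

function isExpired(doc, now) {
  return doc.expiresAt.getTime() + expireAfterSeconds * 1000 <= now.getTime();
}

const session = { expiresAt: new Date("2024-01-01T00:00:00Z") };
isExpired(session, new Date("2024-01-01T00:30:00Z")); // false: only 30 minutes elapsed
isExpired(session, new Date("2024-01-01T01:00:01Z")); // true: more than an hour elapsed
```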

Use cases for TTL:
  • Session management: Store session data with a TTL to automatically expire inactive sessions.
  • Temporary data: Keep temporary data for a limited time, such as cached results or temporary files.
  • Data retention policies: Implement data retention policies by setting appropriate TTL values for different types of data.

The explain("executionStats") command in MongoDB provides detailed information about the execution plan and performance metrics of a query.
When used with the find() method, it returns a document with statistics such as nReturned, executionTimeMillis, totalKeysExamined, and totalDocsExamined; a large gap between totalDocsExamined and nReturned usually indicates a missing index.

Example:
db.products.explain("executionStats").find({ price: { $gt: 10 } });

MongoDB Semantic Search refers to the ability to search for documents in a MongoDB collection based on the meaning or context of the data, rather than just exact keyword matches. Traditional database searches are often keyword-based, which means they only return documents that contain an exact match to the search query. Semantic search, on the other hand, aims to understand the intent behind the query and return results that are contextually similar, even if the exact keywords aren't present. In MongoDB Atlas this is typically built with Atlas Vector Search, which stores embeddings of your data and queries them via the $vectorSearch aggregation stage.


Single-Threaded Execution: JavaScript operates on a single thread, meaning it executes one task at a time using the call stack, where functions are processed sequentially.

Call Stack: Picture the call stack as a stack of plates. Each time a function is invoked, a new plate (function) is added to the stack. Once a function completes, the plate is removed.

Web APIs: Asynchronous tasks like setTimeout, DOM events, and HTTP requests are managed by the browser’s Web APIs, operating outside the call stack.

Callback Queue: After an asynchronous task finishes, its callback is placed in the callback queue, which waits for the call stack to clear before moving forward.

Event Loop: The event loop constantly monitors the call stack. When it's empty, the loop pushes the next callback from the queue onto the stack.

Microtasks Queue: Tasks like promises are placed in a microtasks queue, which has higher priority than the callback queue. The event loop checks the microtasks queue first to ensure critical tasks are handled immediately.

Priority Handling: To sum up, the event loop prioritizes microtasks before handling other callbacks, ensuring efficient execution.
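The priority described above can be observed with a minimal sketch: the synchronous code runs first, then the promise callback (microtasks queue), and only then the setTimeout callback (callback queue):

```javascript
const order = [];

setTimeout(() => order.push("timeout"), 0);          // callback queue (macrotask)
Promise.resolve().then(() => order.push("promise")); // microtasks queue
order.push("sync");                                  // call stack, runs immediately

// Once the call stack is empty, the event loop drains the microtasks
// queue before the callback queue, so order ends up as
// ["sync", "promise", "timeout"]
```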

Page to compare different LLMs:

https://arena.lmsys.org/

We can delete a property from a dictionary in Python with `.pop('property_name', None)`; passing `None` as the default avoids a KeyError if the key does not exist.

Source: https://www.javatpoint.com/difference-between-del-and-pop-in-python#:~:text=In%20Python%2C%20%22del%22%20can,removes%20an%20object%20from%20memory

The $lookup aggregation stage performs a left outer join to another collection in the same database.

There are four required fields:
  • from: The collection in the same database to join with.
  • localField: The field in the input documents whose value is matched against foreignField in the from collection.
  • foreignField: The field in the from collection whose value is matched against localField.
  • as: The name of the new array field that will contain the matching documents from the from collection.

Example:
db.comments.aggregate([
  {
    $lookup: {
      from: "movies",
      localField: "movie_id",
      foreignField: "_id",
      as: "movie_details",
    },
  },
  {
    $limit: 1
  }
])

Within the breakpoint() debugger, type dir(object) to list all available attributes and methods of the object.
This will give you an overview of what you can access.

Shallow copy
A shallow copy duplicates only the top-level properties. If those properties are references (like objects or arrays), the copy will reference the same objects.
let original = { a: 1, b: { c: 2 } }
let copy = { ...original }
copy.b.c = 3 // also changes "original.b.c", because b is a shared reference
When to use:
- Small objects with primitive data types.
- Situations where performance is critical.
- Cases where changes to nested objects should reflect in all copies.

Deep copy
A deep copy creates a complete clone of the original object, duplicating all nested objects and arrays.
let original = { a: 1, b: { c: 2 } }
let copy = JSON.parse(JSON.stringify(original)); // note: drops functions and undefined, and turns Dates into strings
copy.b.c = 3 // The "original.b.c" remains 2
When to use:
- Complex objects with nested structures.
- Scenarios where complete independence from the original object is needed.
- Preventing unintended side-effects from shared references.
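In modern runtimes (Node 17+ and current browsers), structuredClone is a built-in alternative for deep copies; unlike the JSON round-trip, it preserves Dates, Maps, and Sets (though it still cannot copy functions). A minimal sketch:

```javascript
let original = { a: 1, b: { c: 2 }, createdAt: new Date(0) };
let copy = structuredClone(original);

copy.b.c = 3; // original.b.c remains 2
// copy.createdAt is still a Date, not a string as it would be
// after JSON.parse(JSON.stringify(original))
```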