Run Llama models locally
LLM
You can run an LLM locally with the Ollama package. Ollama supports the models listed at ollama.com/library.
Pull a model:
ollama pull llama3.2
Run the model:
ollama run llama3.2
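Beyond the interactive CLI, Ollama also serves a local REST API (by default at http://localhost:11434) that you can call from the terminal. A minimal sketch with curl, assuming the default server address and the llama3.2 model pulled above; the prompt is just an illustrative example:
# Query the local Ollama server via its generate endpoint.
# "stream": false returns the whole response as a single JSON object
# instead of streaming it token by token.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'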
Source: https://github.com/ollama/ollama
Compare Language Models
LLM
A page to compare different LLMs:
https://arena.lmsys.org/