Run Llama models locally
LLM
You can run an LLM locally with the Ollama package. Ollama supports a wide range of models; the full list is available at ollama.com/library
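If Ollama isn't installed yet, the project provides an install script for Linux (macOS and Windows installers are available from ollama.com); a minimal sketch:

curl -fsSL https://ollama.com/install.sh | sh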
Pull a model
ollama pull llama3.2
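Once the download finishes, you can check which models are available locally:

ollama list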
Run the model
ollama run llama3.2
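This opens an interactive chat session in the terminal. While Ollama is running it also serves a local HTTP API (on port 11434 by default), so you can send prompts programmatically; a minimal sketch against the generate endpoint, assuming the default port:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'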
Source: https://github.com/ollama/ollama