Getting Started with Ollama
How to install and run local LLMs using Ollama and integration with OpenClaw
February 27, 2026
Ollama is an open-source tool that makes it easy to install and run large language models (LLMs) such as Llama 3, Mistral, and Gemma in a local environment.
It provides an optimized runtime out of the box, with no complex GPU configuration required.
Installation
In a Linux environment, you can install Ollama with a single curl command.
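The one-line install looks like the following (this is the install endpoint Ollama publishes on its website; verify it against the official docs before piping to a shell):

```shell
# Download and run the official Ollama install script
curl -fsSL https://ollama.com/install.sh | sh
```

On macOS and Windows, a desktop installer is also available from the Ollama website instead of the script.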
Verification
Once installation is complete, run the
`ollama` command in the terminal to verify the installation and see the available subcommands.
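Running the binary with no arguments prints a usage summary, and `--version` confirms which release is installed (the version string will differ on your machine):

```shell
# Show available subcommands and usage
ollama

# Print the installed Ollama version
ollama --version
```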
Running Models
To run a desired model, use the
`ollama run` command. If the model file is not available locally, it will automatically be downloaded from the Ollama Library before the conversation starts.
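For example, to start an interactive chat with Llama 3 (any model name from the Ollama Library can be substituted):

```shell
# Pulls the model on first use, then opens an interactive chat session
ollama run llama3
```

Type `/bye` inside the session to exit.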
Core Commands
The key commands for model management and server control are as follows.
| Command | Description |
|---|---|
| `ollama list` | Check the list of models installed locally |
| `ollama pull <model>` | Pre-download a model file without running it |
| `ollama rm <model>` | Delete a specific installed model |
| `ollama serve` | Run the background server for API usage |
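With `ollama serve` running, models can also be queried over Ollama's local REST API, which listens on port 11434 by default. A minimal sketch, assuming the `llama3` model has already been pulled:

```shell
# Start the server in the background (skip if Ollama already runs as a system service)
ollama serve &

# Send a one-shot, non-streaming generation request to the local API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

Setting `"stream": false` returns the full response as a single JSON object instead of a token-by-token stream.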
Using with OpenClaw
OpenClaw is a tool that provides extended features and a UI on top of Ollama.
While the Ollama server is active, you can launch its interface from the command line.
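Assuming OpenClaw installs a command-line entry point named `openclaw` (a hypothetical name here; check the OpenClaw documentation for the exact binary and flags), launching it might look like:

```shell
# Hypothetical: start the OpenClaw interface against the local Ollama server
openclaw
```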