Getting Started with Ollama

How to install and run local LLMs with Ollama, and how to integrate it with OpenClaw
February 27, 2026


Ollama is an open-source tool that makes it easy to install and run large language models (LLMs) such as Llama 3, Mistral, and Gemma locally.
It provides an optimized runtime out of the box, with no complex GPU configuration required.

Installation

On Linux, Ollama can be installed with a single curl command that downloads and runs the official install script.
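The one-line installer from Ollama's official install endpoint:

```sh
# Download and run the official Ollama install script
curl -fsSL https://ollama.com/install.sh | sh
```

On Linux this also registers Ollama as a systemd service, so the server starts automatically after installation.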

Verification

Once installation completes, run the ollama command in a terminal; with no arguments it prints the available subcommands, which confirms the binary is on your PATH.
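To verify the installation:

```sh
# Print the usage/help text to confirm the binary is installed
ollama

# Or check the installed version explicitly
ollama --version
```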

Running Models

To run a model, use the run subcommand.
If the model file is not available locally, Ollama automatically downloads it from the Ollama Library before starting an interactive chat session.
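For example, to start an interactive session with Llama 3 (llama3 is one model tag from the Ollama Library; substitute any model you prefer):

```sh
# Downloads the model on first use, then opens an interactive prompt
ollama run llama3

# Type /bye at the prompt to exit the session
```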

Core Commands

The key commands for model management and server control are as follows.
| Command | Description |
| --- | --- |
| `ollama list` | List the models installed locally |
| `ollama pull <model>` | Download a model file without running it |
| `ollama rm <model>` | Delete a specific installed model |
| `ollama serve` | Run the background server for API usage |
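With the server running (via ollama serve, or the service set up by the Linux installer), installed models can also be called over Ollama's HTTP API, which listens on port 11434 by default. A minimal sketch using the /api/generate endpoint, assuming llama3 has already been pulled:

```sh
# Send a one-shot prompt to a local model over the REST API
# (assumes the server is listening on the default port 11434)
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```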

Using with OpenClaw

OpenClaw is a tool that builds on Ollama to provide extended features and a user interface.
With the Ollama server running, you can launch the OpenClaw interface from the terminal.
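A minimal launch sketch; the openclaw command name below is an assumption, since the article does not show the exact invocation, and the actual CLI depends on how OpenClaw is installed. Consult the project's own documentation for the real command and flags.

```sh
# Hypothetical invocation: the 'openclaw' command name is an assumption.
# The Ollama server must already be serving on its default port (11434).
openclaw
```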
Jooojub
System S/W engineer
    © 2026. jooojub. All rights reserved.