⚡ Automation

Local Model Optimizer

Auto-detect hardware, recommend the best Ollama models for your setup, and configure hybrid cloud/local routing. Cut API costs by 50–90% with one command.

What it does

Local Model Optimizer scans your hardware — GPU VRAM, system RAM, CPU architecture — and matches you with the best Ollama-compatible model for your tier. It installs Ollama if needed, pulls the recommended model, and configures OpenClaw's provider system to route intelligently: simple tasks go local (free), complex reasoning stays in the cloud.
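The routing idea can be sketched in a few lines. This is an illustrative heuristic under our own assumptions, not OpenClaw's actual routing logic: cheap, short prompts go to the free local model, while long or reasoning-heavy prompts stay with the cloud provider.

```python
def route_request(prompt: str, max_local_words: int = 500) -> str:
    """Illustrative routing heuristic (not OpenClaw's real implementation):
    return "local" for simple prompts, "cloud" for heavy ones."""
    # Keywords that suggest complex reasoning (hypothetical list)
    reasoning_markers = ("prove", "analyze", "architect", "refactor")
    if len(prompt.split()) > max_local_words:
        return "cloud"  # long context: send to the cloud model
    if any(marker in prompt.lower() for marker in reasoning_markers):
        return "cloud"  # reasoning-heavy: send to the cloud model
    return "local"      # simple task: free local Ollama model

print(route_request("list the files in this directory"))  # local
```

The real optimizer configures this split in OpenClaw's provider system; the function above just shows the shape of the decision.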

The cost-analysis command reads your actual OpenClaw usage logs and calculates your projected monthly savings, with break-even projections. Most setups see a 50–90% reduction in API costs after one run.
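The savings math is straightforward. The sketch below uses hypothetical names and example numbers of our own, not the script's actual API: it estimates the monthly saving from moving a fraction of requests to a free local model, plus the break-even point if you bought hardware for the job.

```python
def estimate_savings(monthly_api_cost: float, local_share: float) -> dict:
    """Estimate savings when `local_share` (0..1) of spend moves to a
    free local model. Illustrative helper, not the script's interface."""
    saved = monthly_api_cost * local_share
    return {
        "saved_per_month": round(saved, 2),
        "reduction_pct": round(100 * saved / monthly_api_cost, 1),
    }

def break_even_months(hardware_cost: float, saved_per_month: float) -> float:
    """Months until a hardware purchase pays for itself."""
    return round(hardware_cost / saved_per_month, 1)

# Example figures (made up): $100/month API spend, 70% routed locally
result = estimate_savings(100.0, 0.7)
print(result)                          # 70.0 saved, 70.0% reduction
print(break_even_months(1400.0, result["saved_per_month"]))  # 20.0 months
```

The real command derives `local_share` from your logged request mix rather than asking you to guess it.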

Use cases

Hardware tiers

Tier     VRAM       Recommended Models
Tiny     ≤4 GB      Gemma 4 E2B, Phi-3.5 Mini, Qwen2.5-3B
Small    4–8 GB     Gemma 4 E4B, Llama 3.1 8B, Mistral 7B
Medium   8–16 GB    Gemma 4 12B, Llama 3.1 8B Q8, CodeGemma
Large    16–32 GB   Gemma 4 27B, Llama 3.1 70B Q4, Mixtral 8x7B
XL       32 GB+     Gemma 4 27B Q8, Llama 3.1 70B Q8, DeepSeek V2
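The tier boundaries above reduce to a simple lookup. The function below is our own sketch of how detected VRAM might be bucketed; the name and structure are illustrative, not taken from the script.

```python
def vram_tier(vram_gb: float) -> str:
    """Map detected GPU VRAM (in GB) to a hardware tier, using the
    boundaries from the tier table. Illustrative helper."""
    if vram_gb <= 4:
        return "Tiny"
    if vram_gb <= 8:
        return "Small"
    if vram_gb <= 16:
        return "Medium"
    if vram_gb <= 32:
        return "Large"
    return "XL"

print(vram_tier(12))  # Medium
```

In practice the optimizer also weighs system RAM and CPU architecture, so the real mapping has more inputs than this one-dimensional cut.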

Quick start

python3 scripts/local-model-optimizer.py auto

Full auto-setup: detects hardware → recommends models → installs Ollama → pulls the model → configures routing → runs a verification test.
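The pipeline's shape can be expressed as an ordered command plan. This is a hedged sketch: the function name is ours, the model tag is an example, and the install command is Ollama's standard install script rather than anything specific to this skill.

```python
def setup_plan(model: str, ollama_installed: bool) -> list:
    """Return the ordered shell commands an auto-setup run would issue
    (illustrative plan, not the script's internals)."""
    steps = []
    if not ollama_installed:
        # Ollama's official install one-liner for Linux/macOS
        steps.append("curl -fsSL https://ollama.com/install.sh | sh")
    steps.append(f"ollama pull {model}")               # fetch the model
    steps.append(f"ollama run {model} 'Reply with OK'")  # verification test
    return steps

for cmd in setup_plan("gemma3:12b", ollama_installed=False):
    print(cmd)
```

Separating planning from execution like this also makes a dry-run mode trivial: print the plan instead of running it.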

Requirements

How to install

After purchase, you'll receive a download link. Extract the files, copy the local-model-optimizer/ folder into your OpenClaw skills/ directory, then restart OpenClaw.

Ready to get started?

One-time purchase. Instant delivery. Works with OpenClaw 3.28+.

Buy Now — $4.99