
7 Things I Wish Someone Told Me Before I Almost Gave Up on OpenClaw

OpenClaw is incredible in theory, but getting real value from it takes some know-how. This guide covers the practical lessons that turned it from a frustrating token-burner into something genuinely useful.

Last updated: February 17, 2026

📖 Read time: ~12 minutes
🎯 Audience: All skill levels
🧩 Type: Best practices

This guide assumes you already have OpenClaw installed and running. If you're on Windows and haven't set it up yet, start with our WSL setup guide first.

1 Stop running everything through your best model

This is the single biggest mistake people make, and the single biggest reason costs spiral out of control.

By default, OpenClaw sends every single request to whatever model you set as your primary. That includes heartbeats (periodic "are you still alive?" pings that happen every 30 minutes), sub-agents that spin up when your main agent does parallel work, quick lookups like "what's on my calendar?", and complex coding tasks. If your primary model is Claude Opus or GPT-5.2, all of that hits the expensive model, even the simple stuff.

Think of it like this: using Opus for a heartbeat ping is like hiring a lawyer to check your letterbox. It works, but it makes no financial sense.

What to do instead

Set up a tiered model config. Use a cheap, fast model as your primary for everyday tasks, and keep the smart (expensive) model as a fallback for when your agent actually needs it.

Open your OpenClaw config file. This is where all your settings live:

Terminal
nano ~/.openclaw/openclaw.json

If you prefer a different text editor, you can also use vim or code (VS Code) instead of nano. On Windows with WSL, you can also open it in File Explorer by typing explorer.exe . in your Ubuntu terminal and navigating to the .openclaw folder.

Here's an example config that uses a cheap model as the default, with smarter models as fallbacks:

openclaw.json – model routing example
{
  "agents": {
    "defaults": {
      "model": {
        // Cheap model handles most requests
        "primary": "anthropic/claude-haiku-4-5",
        // Smarter models kick in when needed
        "fallbacks": [
          "anthropic/claude-sonnet-4-5",
          "anthropic/claude-opus-4-6"
        ]
      },
      "models": {
        "anthropic/claude-haiku-4-5": { "alias": "Haiku" },
        "anthropic/claude-sonnet-4-5": { "alias": "Sonnet" },
        "anthropic/claude-opus-4-6": { "alias": "Opus" }
      }
    }
  }
}

The alias fields let you switch models quickly during a chat session by typing:

In your OpenClaw chat
/model Opus

That switches your current session to Opus for a complex task. When you're done, switch back to Haiku with /model Haiku. Your wallet will thank you.

Some users have reduced per-request token costs from 20–40k tokens down to 1.5k just by routing smarter. The difference between $300/month and $30/month is often just this one config change.

2 Your agent needs rules (a lot of them)

Out of the box, OpenClaw is a blank slate. It doesn't know how you want it to behave, so it does unpredictable things: it loops on the same answer, repeats itself, forgets what it was doing, makes weird decisions, and chews through tokens doing nothing useful.

This is normal. The agents you see people showing off online (the ones that build apps overnight and manage email like a real assistant) all have heavily customised instruction sets behind them.

How to add rules

OpenClaw uses skills: folders containing a SKILL.md file (a plain text file with instructions) that your agent reads and follows. Think of a skill as a "rulebook" for a specific job.

Skills live in your workspace folder. Here's how to create one:

Terminal
# Create a folder for your skill
mkdir -p ~/.openclaw/workspace/skills/my-rules

# Create the instruction file
nano ~/.openclaw/workspace/skills/my-rules/SKILL.md

Inside that file, write clear rules in plain English. Here's an example of rules that solve common frustrations:

SKILL.md – example guardrails
# Agent Behaviour Rules

## Never Loop
If you've answered a question, do NOT answer it again. If you notice you're repeating yourself, stop immediately and say: "I notice I'm looping. Let me try a different approach."

## Be Concise
Keep responses short and actionable. Don't explain what you're about to do; just do it. Only explain if I ask.

## Memory Management
Before compacting memory, always write a summary of:
- What tasks are in progress
- What decisions have been made
- What the user asked for most recently
Save this summary to a state file before compacting.

## Task Management
Before asking me a question, check if the answer is already in your task list or memory files. Don't ask me things you should already know.

## Error Handling
If something fails, try to fix it yourself first. Only ask me for help after you've tried at least two different approaches, and explain what you tried.

You can create as many skill folders as you want, each with its own SKILL.md. For example, you might have separate skills for email handling, coding standards, calendar management, etc.

Pro tip: You can also ask your agent to help you write better rules. Tell it: "Review your own behaviour over the last 10 messages and suggest rules we should add to prevent the issues that came up." Then save its suggestions as a skill.

3 "Work on this overnight" doesn't work the way you think

One of the most common complaints: "I told my agent to work on something while I sleep, and it just... didn't."

Here's why: when you chat with OpenClaw, that conversation lives in a session. When you close the chat or walk away, the session eventually ends. Your agent doesn't "keep working in the background"; it just stops.

If you want your agent to do things on a schedule or work independently, you need cron jobs.

What's a cron job?

A cron job is just a task that runs automatically on a timer. Like an alarm clock for your agent. You can set it to do something every hour, every morning at 6am, every weeknight at midnight, whatever you want.

The key is setting sessionTarget: "isolated", which tells OpenClaw to spin up a completely fresh, independent agent session for the cron job. It runs on its own and messages you the results when it's done.

Setting up a cron job

The easiest way is to ask your agent to create one for you in a chat session. For example:

Tell your agent
Create a cron job that runs every morning at 7am. It should check my email for anything urgent, summarise the top 3 items, and send me the summary on Telegram. Use sessionTarget: "isolated" so it runs independently.

Your agent will create the cron configuration for you. You can also do this by directly editing your config file:

openclaw.json – cron job example
{
  "cron": [
    {
      "name": "morning-briefing",
      "schedule": "0 7 * * *",
      "message": "Check my email for urgent items. Summarise the top 3 and message me.",
      "sessionTarget": "isolated"
    }
  ]
}

The "0 7 * * *" part is a cron schedule โ€” it means "at minute 0 of hour 7, every day." Here are some common patterns:

Schedule          What it means
0 7 * * *         Every day at 7:00 AM
0 */2 * * *       Every 2 hours
*/30 * * * *      Every 30 minutes
0 9 * * 1-5       Weekdays at 9:00 AM
0 22 * * *        Every night at 10:00 PM

For one-off deferred tasks

Cron jobs repeat on a schedule. If you want a one-off task (like "build this app tonight"), you need a different approach. The simplest method:

Option A: Ask your agent to create a cron job, let it run once, then delete the cron job.

Option B: Set up a task queue. Create a Notion page, a text file, or a simple database, and have a cron job check it every 30 minutes for new tasks. When the agent finds a task, it works on it. When it's done, it marks the task as complete. Several users have built full overnight pipelines this way: breaking a project into phases, adding them to a task list, and letting the agent work through them one at a time.
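If you go the task-queue route, the polling job itself can be an ordinary cron entry. Here's a minimal sketch reusing the cron format shown above; the task file path and message wording are purely illustrative:

openclaw.json – task queue polling (illustrative sketch)
{
  "cron": [
    {
      "name": "task-queue-worker",
      "schedule": "*/30 * * * *",
      "message": "Open ~/.openclaw/workspace/tasks.md. If any task is unchecked, work on the first one. When it's done, mark it [x] and add a one-line note on what you did. If there are no open tasks, do nothing.",
      "sessionTarget": "isolated"
    }
  ]
}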

If your cron jobs aren't firing, test with a simple one first. Tell your agent: "Create a cron job that sends me a test message in 2 minutes." If that doesn't arrive, you have a config issue to debug before building anything more complex.

4 Start with one thing working end-to-end

It's tempting to set up email + calendar + Telegram + web scraping + cron jobs + file management all at once. Don't. Every integration is a separate failure mode, and when three of them break simultaneously, it's nearly impossible to figure out which one caused the problem.

The sanity path

Step 1: Pick one tiny workflow. Something like: "every morning at 8am, my agent checks the weather and sends me a message" (sketched after this list).

Step 2: Get it working perfectly. End to end. No shortcuts, no "I'll fix that later."

Step 3: Once it's reliable, add the next thing. Maybe email checking. Get that working.

Step 4: Keep layering, one integration at a time.
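To make Step 1 concrete, here's roughly what that weather workflow could look like, reusing the cron format from Tip 3. The message wording is just an example:

openclaw.json – one tiny workflow (illustrative sketch)
{
  "cron": [
    {
      "name": "weather-check",
      "schedule": "0 8 * * *",
      "message": "Look up today's weather forecast for my city and send me a two-line summary.",
      "sessionTarget": "isolated"
    }
  ]
}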

Keep explicit logs at every step. When something breaks, you can look at the inputs and outputs to see exactly where it went wrong, instead of guessing.

A useful debugging command: openclaw doctor --fix. It validates your config, applies any needed migrations, and repairs common issues automatically.

5 Save what works (you'll need it again)

OpenClaw uses compaction to manage its memory. As conversations get long, it summarises older messages to stay within the model's context window. This is necessary, but it means things get lost. Configs that were working, decisions that were made, nuances that matter.

Here's how to fight context loss:

Use state files

State files are persistent files your agent can read and write to. Unlike chat history, they survive compaction. Tell your agent to save important information to state files.

Tell your agent
Save the current project status to a state file at ~/.openclaw/workspace/memory/project-status.md. Include: what's been completed, what's in progress, what decisions we've made, and what's next.
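The structure of the file is up to you and your agent. As a rough sketch (every entry here is just an illustrative placeholder):

project-status.md – example state file (illustrative)
# Project Status

## Completed
- Tiered model routing set up (Haiku primary, Sonnet/Opus fallbacks)

## In Progress
- Morning briefing cron job (summary formatting still rough)

## Decisions
- Notifications go to Telegram, not email

## Next
- Add a task queue for overnight work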

Use workspace docs

Your OpenClaw workspace (at ~/.openclaw/workspace/) has special files your agent reads automatically:

File            What it's for
USER.md         Info about you: preferences, timezone, how you like things done
AGENTS.md       Instructions for how agents should behave
TOOLS.md        Notes on which tools to use and how
HEARTBEAT.md    What the agent should check during heartbeat pings

Fill these in. The more context your agent has in these files, the less you have to repeat yourself.
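For example, a minimal USER.md might look like this; every line is just a placeholder for your own preferences:

USER.md – minimal example (illustrative)
# About Me

- Timezone: Europe/London
- Preferred contact channel: Telegram
- Keep summaries under 5 bullet points
- Don't message me between 11pm and 7am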

Push context in, not just save it out

State files and workspace docs help your agent remember things from past conversations. But there's another angle most people miss: feeding your agent new context automatically, without having to copy-paste things into chat.

Think about how you actually work. You spend hours reading docs, researching tools, comparing options, and your agent knows none of it. You'd have to manually summarise what you learned and paste it into a message. Most people don't bother, which means the agent is always working with stale or incomplete context.

🦞 This is why we built Clawfy

Your browser talks to your agent, so you don't have to

Clawfy is a Chrome extension that watches what you're researching and automatically sends relevant context to your OpenClaw agent. When you're reading docs about a new API, comparing deployment options, or debugging a stack trace, Clawfy detects the tech signals, scrapes the page content, and pushes it straight into your agent's memory.

Your agent gets the context it needs without you having to explain what you've been reading. It also surfaces relevant ClawHub skills based on what you're working on, so your agent gets new capabilities, not just information.

Add to Chrome – Free · Pro available
The free tier includes passive context detection. Pro ($12/mo) adds deep page scraping and priority skill suggestions.

6 The model matters more than anything else

Most frustration with OpenClaw comes from using a model that can't handle tool calls reliably.

OpenClaw isn't a chatbot โ€” it's an agent. Your model doesn't just need to write nice text. It needs to correctly call tools (browser, file system, shell commands, APIs) using structured function calls. A model can write beautiful prose and still produce malformed tool calls that crash your entire workflow.
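For context, a structured tool call is typically a small JSON object like the one below. The exact shape varies by provider, and this read_file call is purely illustrative:

What a tool call looks like (illustrative)
{
  "type": "function",
  "function": {
    "name": "read_file",
    "arguments": "{\"path\": \"~/.openclaw/workspace/USER.md\"}"
  }
}

The arguments field has to be valid JSON. A weaker model might emit something like "arguments": "path is USER.md", which can't be parsed, and the whole workflow stalls.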

OpenClaw also needs a model with a large context window; at least 64,000 tokens is recommended. Smaller context windows lead to context overflow errors, which are a common cause of freezing and crashes.

What actually works

Model              Tool calls   Cost   Notes
Claude Opus 4.6    Excellent    $$$    Best quality. Expensive for primary use.
Claude Sonnet 4.5  Excellent    $$     Great all-rounder. Good fallback.
Claude Haiku 4.5   Good         $      Cheap. Great for heartbeats/simple tasks.
GPT-5.2            Good         $$     Solid alternative provider as fallback.
MiniMax M2.1       Good         $      Great value via API. Popular in the community.
Kimi K2 (API)      Good         $      Strong tool calling. Cloud API, not local.
Gemini 3 Flash     Decent       ¢      Very fast (~250 tok/s). Good for sub-agents.
DeepSeek V3.2      Decent       ¢      Cheap. Avoid the Reasoner variant: it produces malformed tool calls.
GPT-5.1 Mini       Weak         ¢      Very cheap, but struggles with agent tasks.

Avoid using GPT-5.1 Mini or similar very small models as your only model. Multiple community members report it's "pretty useless" for agent workflows despite being cheap. You'll spend more time fixing its mistakes than you save on tokens.

Switching models on the fly

You don't have to commit to one model for everything. During a chat session, you can switch models instantly:

In your OpenClaw chat
# Check what model you're currently using
/model

# Switch to a specific model
/model Opus

# Or use the full model path
/model anthropic/claude-sonnet-4-5

Working on something complex? Switch to Opus. Quick lookup? Stay on Haiku. This alone can cut your costs dramatically.

You can also list all your available models from the terminal:

Terminal
openclaw models list

What about local models?

Running a model on your own computer (via Ollama or LM Studio) means zero API costs and full privacy. But there are trade-offs:

You need a decent GPU. For usable results, 16GB+ of VRAM is the minimum. Models like Qwen3 8B will run on less, but quality drops fast. Qwen3 Coder 30B (quantised) is a popular mid-range choice for people with 16–24GB of VRAM.

The honest recommendation from most experienced users: use a local model as your primary for simple daily tasks, and keep a cloud model as a fallback for the hard stuff. You get privacy and zero cost for 80% of requests, and quality for the ones that matter.

OpenClaw makes this easy with models.mode: "merge", which combines your local models and cloud models into one pool that your agent can use:

openclaw.json – local + cloud hybrid
{
  "models": {
    "mode": "merge",
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama-local",
        "api": "openai-completions",
        "models": [
          {
            "id": "qwen3:8b",
            "name": "Qwen3 8B",
            "reasoning": false,
            "input": ["text"],
            "cost": { "input": 0, "output": 0 },
            "contextWindow": 65536,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/qwen3:8b",
        "fallbacks": ["anthropic/claude-sonnet-4-5"]
      }
    }
  }
}

This uses your local Qwen3 model by default (free), and falls back to Claude Sonnet via the cloud only when the local model can't handle the task.

7 You're not bad at this: it's genuinely hard right now

If you've spent two weeks babysitting your agent and feel like you're getting nothing done, welcome to the club. You're not alone, and you're not doing it wrong.

OpenClaw is not a finished product. It's an incredibly powerful framework at a very early stage. The people posting "my agent built a full web app overnight" have spent weeks, sometimes months, tuning their setup, writing custom skills, building task pipelines, and working through exactly the same frustrations you're experiencing.

The gap between the demo and real daily use is real. It's closing fast, but it's still there.

What helps

Think of it as training a new employee, not installing software. It needs onboarding. It needs rules. It needs to be told your preferences. The upfront investment is real, but once you've built that foundation, things compound. Your agent gets more useful the more context and structure you give it.

Start with tasks that have clear success criteria. "Check my email and summarise it" is easier to get right than "manage my entire digital life." Build trust with small wins, then expand.

Save everything that works. When you find a config, a prompt, or a skill that produces good results, save it. Write it down. Put it in a state file. Future-you will be glad you did.

Use the community. The OpenClaw Discord is where most people get help fastest. The subreddit (r/openclaw) is also active. Don't suffer in silence; there's almost always someone who's already solved the exact problem you're hitting.

If you want to skip some of the manual context work, Clawfy automates the browser → agent context bridge so your agent always knows what you're working on. We built it because we were tired of copy-pasting into chat.

Common Issues
My agent keeps looping / repeating itself
This is usually a compaction issue: when memory gets compressed, the agent loses track of what it already said. Add explicit anti-looping rules to a SKILL.md file (see Tip 2). Also try setting compaction mode to "safeguard" in your config, which is more conservative about what gets compacted. Add "compaction": { "mode": "safeguard" } under agents.defaults in your openclaw.json.
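For reference, here's where that setting sits, following the config structure used earlier in this guide:

openclaw.json – compaction safeguard
{
  "agents": {
    "defaults": {
      "compaction": { "mode": "safeguard" }
    }
  }
}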
My costs are way too high
Almost always caused by running an expensive model as your primary (see Tip 1). Heartbeats alone on Opus can cost $30/month for doing basically nothing. Switch to a tiered model config. Also check for runaway sub-agents: set "maxConcurrent": 4 and "subagents": { "maxConcurrent": 8 } to limit parallel work. Monitor costs in your provider dashboard (Anthropic Console, OpenAI Usage, OpenRouter Activity).
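As a sketch, assuming these limits sit under agents.defaults like the other settings in this guide (verify against your version's docs):

openclaw.json – limiting parallel work (placement assumed)
{
  "agents": {
    "defaults": {
      // Assumed location; check your OpenClaw version's docs
      "maxConcurrent": 4,
      "subagents": { "maxConcurrent": 8 }
    }
  }
}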
The gateway keeps crashing
Run openclaw doctor --fix first; it validates your config and repairs common issues automatically. If that doesn't help, check your logs at /tmp/openclaw/openclaw.log (or wherever your logging path is set). Common culprits: a malformed openclaw.json (try pasting it into a JSON validator), expired auth tokens, or a model provider that's down.
My agent "freezes" or stops responding mid-task
This is almost always a context overflow: the conversation got too long for the model's context window. Solutions: use a model with a larger context window (64k+ recommended), enable compaction with "compaction": { "mode": "safeguard" }, or break complex tasks into smaller sub-tasks. If you're using a local model, make sure the context window in your config matches what your model actually supports; setting it too high causes silent failures.
Tool calls keep failing or producing errors
Your model probably doesn't support structured function calling well enough. This is model-dependent, not an OpenClaw bug. Switch to a model known for reliable tool calls: Claude Sonnet/Opus, GPT-5.2, or Kimi K2 via API. Avoid DeepSeek Reasoner specifically; it's great for chain-of-thought reasoning but produces malformed tool calls. If using a local model, make sure you have "api": "openai-responses" or "openai-completions" set correctly for your provider.
I see old command names like "moltbot" or "clawdbot" in tutorials
OpenClaw has been renamed twice: Clawdbot → Moltbot → OpenClaw. Old tutorials may reference the old names. The commands are mostly interchangeable: moltbot, clawdbot, and openclaw all work. If openclaw gives you "command not found," try moltbot instead, or update to the latest version with npm install -g openclaw@latest.