# nanobot: Build AI Agents in 4,000 Lines You Can Actually Read
## Why This Matters
There is a recurring complaint in developer forums about modern AI agent frameworks: you spend more time understanding the framework than building your actual agent. LangGraph's dependency graph, OpenClaw's 430,000+ lines of code, CrewAI's layered abstractions — these are powerful, but they impose a steep learning curve that slows experimentation.
nanobot, released February 2, 2026 by the Data Intelligence Lab at the University of Hong Kong (HKUDS), takes the opposite bet. The entire core agent loop — message routing, LLM calls, memory management, tool execution, cron scheduling — fits in roughly 4,000 lines of Python. You can read it in an afternoon. You can fork it by lunch.
That constraint is not a limitation. It is the design. nanobot implements around 90% of OpenClaw's core capabilities with 99% less code. By April 2026 it had accumulated over 34,000 GitHub stars, making it one of the fastest-growing open-source agent frameworks of the year.
This guide walks through what nanobot actually does, how to get it running, and where it fits compared to heavier alternatives like smolagents and OpenClaw.
## What Is nanobot? Core Architecture
nanobot is a personal AI agent that you deploy as a long-running process. It listens on one or more messaging channels (Telegram, Discord, WhatsApp, Slack, and others), routes incoming messages to an LLM, executes tool calls via MCP or custom skills, and persists memory between sessions.
The architecture is deliberately flat:
```
Incoming message
        ↓
Channel adapter (Telegram / Discord / WhatsApp / ...)
        ↓
Message bus
        ↓
Agent loop
  ├── LLM call (11+ providers supported)
  ├── Tool execution (MCP stdio / HTTP)
  └── Memory read/write (session + Dream)
        ↓
Response dispatch
```
Each of these layers is a small, standalone module. There is no hidden orchestration engine, no graph traversal, no pre-built DAG. The agent loop itself is the kind of code you can step through in a debugger in ten minutes.
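In spirit, the loop is just a message in, an LLM call, optional tool executions, and a reply out. Here is an illustrative toy version — every function and field name below is invented for the sketch, not nanobot's actual API:

```python
# Toy sketch of a flat agent loop: LLM call -> optional tool call -> reply.
# All names here are illustrative, not nanobot's real code.

def agent_loop(message, llm, tools, memory):
    """Run one user turn to completion, mutating `memory` in place."""
    memory.append({"role": "user", "content": message})
    while True:
        reply = llm(memory)                    # one LLM call per iteration
        if reply.get("tool_call") is None:     # plain text: we're done
            memory.append({"role": "assistant", "content": reply["content"]})
            return reply["content"]
        call = reply["tool_call"]              # execute the requested tool
        result = tools[call["name"]](**call["args"])
        memory.append({"role": "tool", "content": str(result)})
```

A real implementation adds streaming, error handling, and provider adapters, but the control flow stays this small — which is what makes it debuggable.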
The project requires Python 3.11 or higher and is licensed under MIT. The latest release at time of writing is v0.1.5.post2 (April 21, 2026), which added Windows and Python 3.14 support, Office document reading, SSE streaming for the OpenAI-compatible API endpoint, and improved session reliability.
## Getting Started: Install in Under 5 Minutes
Three installation paths are available. PyPI is recommended for most users:
```shell
pip install nanobot-ai
```
Or with uv for faster installs:
```shell
uv tool install nanobot-ai
```
To track the latest development branch:
```shell
git clone https://github.com/HKUDS/nanobot
cd nanobot
pip install -e .
```
### Minimal YAML Configuration
nanobot is configured entirely via a YAML (or JSON) file. A minimal setup with Telegram and Claude:
```yaml
llm:
  provider: anthropic
  model: claude-sonnet-4-6
  api_key: YOUR_ANTHROPIC_KEY

channels:
  - type: telegram
    token: YOUR_TELEGRAM_BOT_TOKEN
```
Save this as ~/.nanobot/config.yaml and run:
```shell
nanobot start
```
That is the entire setup. nanobot will start listening on Telegram and responding with Claude Sonnet 4.6. No Docker, no Kubernetes, no environment config beyond the YAML file.
## Supported LLM Providers
nanobot ships with adapters for 11+ providers out of the box:
- Cloud APIs: Anthropic (Claude), OpenAI (GPT), Google Gemini, DeepSeek, Moonshot, Groq, AiHubMix
- Aggregators: OpenRouter, DashScope, Zhipu (智谱)
- Self-hosted: vLLM (for local models on your own GPU)
Switching providers is one line in the config. You can also configure multiple providers and route specific channels to specific models.
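A multi-provider setup with per-channel routing might look roughly like this — note that the `providers` list and the per-channel `llm` key are illustrative assumptions, not nanobot's documented schema; check the project's config reference for the real field names:

```yaml
llm:
  providers:
    - name: anthropic
      model: claude-sonnet-4-6
      api_key: YOUR_ANTHROPIC_KEY
    - name: groq
      model: llama-4-scout
      api_key: YOUR_GROQ_KEY

channels:
  - type: telegram
    token: YOUR_TELEGRAM_BOT_TOKEN
    llm: anthropic        # route this channel to Claude
  - type: discord
    token: YOUR_DISCORD_BOT_TOKEN
    llm: groq             # faster, cheaper model for Discord
```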
## The Dream Memory System
One of the more interesting engineering choices in nanobot is its two-tier memory architecture.
Session history stores the raw conversation turns for each active chat thread in JSON files under sessions/. This is the short-term buffer — what the agent knows right now, in this conversation.
Dream is the long-term consolidation layer. It runs as a background process that periodically reads session history and extracts durable facts, summaries, and user preferences into a MEMORY.md file. Think of it as the agent sleeping on what it learned and writing notes before waking up.
The underlying storage for Dream is git-versioned, which means every memory state is recoverable. You can roll back to any earlier memory checkpoint with a standard git checkout. This is an elegant solution to a real problem: long-running agents accumulate incorrect or stale memories, and having a full audit trail makes debugging far less painful.
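Because the store is plain git, the rollback workflow needs no nanobot-specific tooling. A self-contained demo of the idea (the file layout here is illustrative, not nanobot's exact directory structure):

```shell
# Demonstrate rolling back a git-versioned MEMORY.md to an earlier checkpoint.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email dream@example.com && git config user.name dream

echo "fact: user prefers dark mode" > MEMORY.md
git add MEMORY.md && git commit -qm "dream: checkpoint 1"

echo "fact: user prefers light mode" > MEMORY.md
git add MEMORY.md && git commit -qm "dream: checkpoint 2"

# Restore the earlier memory state without losing history:
git checkout -q HEAD~1 -- MEMORY.md
cat MEMORY.md   # prints the checkpoint-1 fact
```

In a live deployment you would run the same `git checkout <commit> -- MEMORY.md` inside the agent's memory directory.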
Dream's behavior is configured via DreamConfig in your YAML:
```yaml
memory:
  dream:
    enabled: true
    interval_minutes: 60
    max_facts: 200
```
For agents running multi-day workflows — the kind described in guides on Temporal durable execution patterns — the Dream system fills a gap that most lightweight frameworks ignore entirely.
## MCP Integration: External Tools Without the Overhead
nanobot connects to MCP (Model Context Protocol) servers via two transport modes:
- stdio — for local MCP servers running as child processes
- HTTP — for remote servers with optional custom authentication headers
Tools from connected MCP servers are auto-discovered and registered at startup. The agent can call any exposed tool the same way it would call a built-in skill, with no additional plumbing required.
Example YAML configuration for an MCP server:
```yaml
mcp_servers:
  - name: filesystem
    transport: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
  - name: brave-search
    transport: http
    url: https://your-mcp-server.example.com
    headers:
      Authorization: "Bearer YOUR_KEY"
```
This direct MCP support is meaningful in 2026, when the MCP ecosystem has crossed 97 million monthly SDK downloads. You can drop in any of the thousands of public MCP servers — GitHub, Brave Search, databases, home automation — without modifying the agent's core logic.
## Multi-Platform Messaging
nanobot connects to 8+ messaging platforms through channel adapters:
- Telegram — most mature, best supported
- Discord — full slash command support
- WhatsApp — via WhatsApp Business API
- Slack — workspace bot
- Feishu / DingTalk — enterprise Chinese platforms
- Email — IMAP/SMTP polling
- QQ — via Lagrange bridge
Each channel runs as an independent adapter. You can run multiple channels simultaneously and isolate production from testing environments by spinning up separate instances on the same machine.
This breadth matters for teams building internal AI assistants. Your sales team uses Slack, your ops team uses DingTalk, and nanobot can serve both from a single config file.
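That dual-team setup could be as small as one `channels` list. The field names below are illustrative guesses rather than nanobot's documented schema (only the Telegram `token` key appears in the minimal config earlier):

```yaml
channels:
  - type: slack
    bot_token: YOUR_SLACK_BOT_TOKEN       # sales team workspace
  - type: dingtalk
    app_key: YOUR_DINGTALK_APP_KEY        # ops team
    app_secret: YOUR_DINGTALK_APP_SECRET
```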
## Cron Scheduling and Subagents
nanobot includes a cron system built on apscheduler for time-based automation:
```shell
# Schedule a daily briefing at 9am
nanobot cron add --name "morning" --message "Summarize today's GitHub activity" --cron "0 9 * * *"

# Run a deployment status check every hour
nanobot cron add --name "monitor" --message "Check deployment status" --every 3600
```
Cron jobs can also be defined in YAML:
```yaml
cron_jobs:
  - name: daily_digest
    schedule: "0 18 * * *"
    message: "Prepare and send daily digest to team Telegram"
```
Subagents allow the main agent to spin up specialized child agents for scoped tasks. Subagents work in CLI mode and communicate via the internal message bus. A common pattern is a routing agent that delegates research to a subagent, code editing to another, and aggregates their results before responding to the user.
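The routing-and-aggregation pattern is generic enough to sketch without nanobot's actual subagent API — the keyword matching and the `run` callables below are invented for illustration:

```python
# Generic router/subagent delegation pattern. Names are illustrative,
# not nanobot's actual subagent interface.

def route(task, subagents):
    """Delegate to every subagent whose keywords match, then aggregate."""
    results = []
    for agent in subagents:
        if any(kw in task.lower() for kw in agent["keywords"]):
            results.append(agent["run"](task))
    return " | ".join(results) if results else "no subagent matched"

subagents = [
    {"keywords": ["research", "search"], "run": lambda t: "research: done"},
    {"keywords": ["code", "edit"], "run": lambda t: "edit: done"},
]
```

A production router would usually let the LLM choose the delegate instead of keyword matching, but the aggregate-then-respond shape is the same.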
## Skills: Extending nanobot With Pre-Built Behaviors
Beyond MCP tools, nanobot has a skills system. Skills are Markdown files that describe repeatable behaviors — think of them as stored prompts with light tool wiring. The agent loads skills from a skills/ directory and can invoke them by name.
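A loader for Markdown skills can be as simple as mapping file stems to prompt text. This is a sketch of the idea only — nanobot's real loader may parse additional metadata from the files:

```python
# Minimal Markdown skill loader: skill name = file stem, body = prompt text.
# Illustrative only; nanobot's actual skill format may differ.
from pathlib import Path

def load_skills(skills_dir):
    """Return {skill_name: prompt_text} for every .md file in the directory."""
    skills = {}
    for path in sorted(Path(skills_dir).glob("*.md")):
        skills[path.stem] = path.read_text(encoding="utf-8")
    return skills
```

The agent can then inject `skills["weather"]` into its system prompt when a skill is invoked by name.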
nanobot ships with pre-bundled skills for GitHub, weather, system commands, and general task management. Community skills can be discovered and installed through the ClawHub skill registry:
```shell
nanobot skill search web-scraper
nanobot skill install web-scraper
```
Unlike the massive Hermes Agent skill ecosystem (118 bundled skills, self-improving closed learning loop), nanobot keeps the default skill set minimal. That is intentional — you install exactly what you need, and the codebase stays readable.
## nanobot vs The Alternatives
| Feature | nanobot | smolagents | LangGraph | OpenClaw |
|---|---|---|---|---|
| Core codebase | ~4,000 lines | ~1,000 lines | 50,000+ lines | 430,000+ lines |
| License | MIT | Apache 2.0 | MIT | MIT |
| Messaging platforms | 8+ built-in | None built-in | None built-in | 20+ |
| MCP support | Yes (stdio + HTTP) | Yes (limited) | Partial | Yes (full) |
| Long-term memory | Dream (git-versioned) | External only | Checkpoints | Full memory graph |
| Cron scheduling | Built-in (apscheduler) | No | External | Built-in |
| Self-host difficulty | Low | Very low | Medium | Medium |
| Best for | Personal agents, hackable infra | Code-first prototyping | Complex multi-agent graphs | Production multi-platform |
The choice comes down to what you're optimizing for. smolagents is the right tool when you need a minimal code-execution agent fast. LangGraph wins on complex stateful multi-agent graphs with human-in-the-loop requirements. OpenClaw has the widest platform coverage and a large community skills ecosystem.
nanobot's sweet spot is the developer who wants a persistent personal agent — one that lives on a cheap VPS, watches multiple channels, handles cron jobs, and grows with you — without reading 430,000 lines of someone else's code to understand what's happening.
## Common Mistakes
**Running without a persistent process manager.** nanobot is a long-running daemon. Run it with systemd, supervisord, or at minimum nohup. Crashes in Telegram or Discord adapters will disconnect your channels, and you won't notice until someone complains.
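A minimal systemd unit for this might look like the following — the install path, user, and unit name are placeholders you would adapt to your machine:

```ini
# /etc/systemd/system/nanobot.service  (illustrative; adjust paths and user)
[Unit]
Description=nanobot personal AI agent
After=network-online.target

[Service]
User=nanobot
ExecStart=/usr/local/bin/nanobot start
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now nanobot`, and the agent restarts automatically after crashes and reboots.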
**Over-engineering the skill system early.** Skills are for repeatable, well-defined tasks. Using them as a workaround for prompt quality issues just adds indirection. Fix the prompts first.
**Ignoring session history growth.** The sessions/ directory grows indefinitely if Dream consolidation is disabled. A 3-month-old agent on an active Telegram group can accumulate gigabytes of JSON. Configure Dream with a reasonable interval_minutes and cap max_facts.
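If you need a stopgap before enabling Dream, a small pruning script can clear stale sessions. This assumes one JSON file per session under sessions/, which is an assumption about the layout — verify against your installation before deleting anything:

```python
# Delete session JSON files untouched for more than `days` days.
# Assumes one JSON file per session; verify the layout before use.
import time
from pathlib import Path

def prune_sessions(sessions_dir, days=30):
    """Remove *.json files older than the cutoff; return the removed names."""
    cutoff = time.time() - days * 86400
    removed = []
    for path in Path(sessions_dir).glob("*.json"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return removed
```

Run it from cron (nanobot's own or the system one) weekly and the directory stays bounded.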
**Misconfiguring MCP server lifecycles.** Stdio MCP servers are child processes of nanobot. If nanobot crashes, those servers crash with it. HTTP MCP servers survive independently. Structure your architecture accordingly when uptime matters.
**Running multiple instances against the same config directory.** Session and memory files are not designed for concurrent writes. Use separate config directories for each instance.
## FAQ
Q: Does nanobot work with local LLMs?
Yes. Configure vLLM as the provider with a local endpoint URL and model name. Any OpenAI-compatible API server works:
```yaml
llm:
  provider: openai_compatible
  base_url: http://localhost:8000/v1
  model: llama-4-scout
  api_key: none
```
Q: How does nanobot compare to OpenClaw in terms of stability?
OpenClaw is more battle-tested in high-volume production deployments with community skill ecosystems. nanobot is newer (February 2026) but has been moving fast — v0.1.5.post2 added Windows support and SSE streaming in April 2026. For a personal agent or small team, nanobot's stability is adequate. For enterprise scale, OpenClaw or LangGraph are safer bets today.
Q: Can I run nanobot without connecting it to a messaging platform?
Yes. nanobot ships with a CLI interface — run nanobot chat to interact directly in the terminal without configuring any channel adapter. This is useful for testing skills and memory behavior before deploying to Telegram or Discord.
Q: How does the Dream memory consolidation handle incorrect facts?
Dream stores memory as Markdown files under git version control. To remove or correct a fact, edit MEMORY.md directly and commit the change. The agent picks up the updated file on the next session. There is currently no automated conflict resolution — incorrect memories require manual intervention.
Q: What is the minimum VPS spec to run nanobot?
nanobot's own footprint is small: roughly 100MB of RAM for the process itself. A cloud LLM provider adds only network traffic, not local inference cost, and any stdio MCP servers run as additional child processes with their own overhead. A $4/month VPS with 512MB RAM runs nanobot comfortably. For local LLM inference via vLLM, the hardware requirements of the model dominate.
## Key Takeaways
- nanobot is a ~4,000-line MIT-licensed Python agent framework from HKUDS, released February 2, 2026
- Install with pip install nanobot-ai, configure with a single YAML file, and start receiving messages in under five minutes
- Supports 8+ messaging platforms, 11+ LLM providers, MCP (stdio and HTTP), cron scheduling, subagents, and a two-tier Dream memory system
- The entire core codebase is readable in an afternoon — this is a feature, not a constraint
- Best suited for personal agents and developer experiments where understanding and forking the code matters more than enterprise-scale throughput
- For production multi-agent orchestration with complex state, LangGraph or OpenClaw remain stronger choices
nanobot delivers a genuinely useful personal AI agent in a codebase you can actually read, fork, and understand. If the opacity of larger frameworks has been your reason for not deploying an agent yet, nanobot removes that excuse. Ship it to a VPS, wire up Telegram, and extend it from there.