# Features

- [Realtime Mascot](https://tinyhumans.gitbook.io/openhuman/features/mascot.md): The on-screen face of OpenHuman, a desktop mascot that speaks, reacts, joins your meetings, and thinks in the background even when you aren't looking at it.
- [Meeting Agents](https://tinyhumans.gitbook.io/openhuman/features/mascot/meeting-agents.md): The mascot joins meetings as a real participant: listens, takes notes, speaks back into the call, animates its face into the camera grid, and uses tools mid-meeting. More than a notetaker.
- [Obsidian-Style Memory](https://tinyhumans.gitbook.io/openhuman/features/obsidian-wiki.md): Every memory chunk also lives as a Markdown file in an Obsidian-compatible vault you can open and edit. Inspired by Karpathy's obsidian-wiki workflow.
- [Memory Trees](https://tinyhumans.gitbook.io/openhuman/features/obsidian-wiki/memory-tree.md): OpenHuman's local-first knowledge base. Ingest from your tools, canonicalize into Markdown, chunk, score, and fold into hierarchical summary trees.
- [agentmemory backend](https://tinyhumans.gitbook.io/openhuman/features/obsidian-wiki/agentmemory-backend.md): Optional `Memory` trait backend that delegates to a locally-running agentmemory REST server, for users who self-host agentmemory across Claude Code, Cursor, Codex, OpenCode, and OpenHuman.
- [Auto-fetch from Integrations](https://tinyhumans.gitbook.io/openhuman/features/obsidian-wiki/auto-fetch.md): Every twenty minutes, OpenHuman walks every active integration and folds new data into your memory tree. No prompts, no polling loops you have to write.
- [Third-party Integrations (118+)](https://tinyhumans.gitbook.io/openhuman/features/integrations.md): Gmail, Notion, GitHub, Slack, Stripe, Calendar, and more - with one-click OAuth and zero API keys.
- [Triggers](https://tinyhumans.gitbook.io/openhuman/features/integrations/triggers.md): Live events from connected integrations (a new Gmail message, a Notion edit, a Stripe charge) arrive as triggers, get classified by a triage agent, and can fire agent actions automatically.
- [Smart Token Compression](https://tinyhumans.gitbook.io/openhuman/features/token-compression.md): TokenJuice - a rule overlay that compacts verbose tool output before it ever enters LLM context. Sweeping through thousands of emails stays cheap.
- [Automatic Model Routing](https://tinyhumans.gitbook.io/openhuman/features/model-routing.md): One subscription, many models. Tasks pick their model via hint prefixes: reasoning goes to a strong model, fast paths go to a fast one, vision to vision.
- [Local AI (optional)](https://tinyhumans.gitbook.io/openhuman/features/model-routing/local-ai.md): Optional, opt-in local AI via Ollama or LM Studio. Powers memory embeddings, summary-tree building, and background loops on-device. Chat / vision / voice are cloud.
- [Available Tools](https://tinyhumans.gitbook.io/openhuman/features/native-tools.md): The full toolset OpenHuman's agent has out of the box - research, code, control your machine, schedule jobs, talk back to you, and call into 118+ third-party services.
- [Web Search](https://tinyhumans.gitbook.io/openhuman/features/native-tools/web-search.md): A native search tool the agent can call directly - no API key required.
- [Web Scraper](https://tinyhumans.gitbook.io/openhuman/features/native-tools/web-scraper.md): A purpose-built "GET-and-read" tool that returns clean text, not raw HTML.
- [Coder](https://tinyhumans.gitbook.io/openhuman/features/native-tools/coder.md): A complete toolset for working on real codebases - read, write, edit, search, git, lint, test.
- [Browser & Computer Control](https://tinyhumans.gitbook.io/openhuman/features/native-tools/browser-and-computer.md): Open URLs, take screenshots, click, type, and move the mouse - natively.
- [Cron & Scheduling](https://tinyhumans.gitbook.io/openhuman/features/native-tools/cron.md): Recurring jobs, one-off reminders, and scheduled agent runs - first-class.
- [Voice](https://tinyhumans.gitbook.io/openhuman/features/native-tools/voice.md): Native voice - speech-to-text in, text-to-speech out, mascot lip-sync, and a live Google Meet agent that listens and speaks.
- [Memory Tools](https://tinyhumans.gitbook.io/openhuman/features/native-tools/memory-tools.md): How the agent reads, writes, and searches its own long-term memory.
- [Tool-Scoped Memory](https://tinyhumans.gitbook.io/openhuman/features/native-tools/tool-memory.md): Durable, tool-scoped rules for safety-critical guidance and learnings.
- [Third-party Integrations](https://tinyhumans.gitbook.io/openhuman/features/native-tools/integrations.md): The agent's view of the 118+ connected third-party services.
- [Agent Coordination](https://tinyhumans.gitbook.io/openhuman/features/native-tools/agent-coordination.md): Tools the agent uses to plan, delegate, and ask for help.
- [System & Utilities](https://tinyhumans.gitbook.io/openhuman/features/native-tools/system-and-utilities.md): Shell, node, SQL, current time, push notifications - the small tools that round out the toolbelt.
- [Subconscious Loop](https://tinyhumans.gitbook.io/openhuman/features/subconscious.md): Background loop that evaluates user and system tasks against the workspace and decides what to do.
- [Privacy & Security](https://tinyhumans.gitbook.io/openhuman/features/privacy-and-security.md)
- [Platform & Availability](https://tinyhumans.gitbook.io/openhuman/features/platform.md): What OpenHuman ships as (native React + Tauri v2 desktop app with a Rust core), supported platforms, and what's in scope today.
- [Cloud Deploy](https://tinyhumans.gitbook.io/openhuman/features/cloud-deploy.md): Hosting the headless openhuman-core in the cloud - DigitalOcean App Platform or Docker Compose on any VPS.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://tinyhumans.gitbook.io/openhuman/features.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question, along with relevant excerpts and sources from the documentation.
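
For example, the request above can be issued from any HTTP client. A minimal Python sketch, using only the standard library (the sample question is illustrative):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://tinyhumans.gitbook.io/openhuman/features.md"

def build_ask_url(question: str) -> str:
    # Percent-encode the question so spaces and punctuation
    # survive the query string.
    return f"{BASE}?{urlencode({'ask': question})}"

def ask_docs(question: str) -> str:
    # Plain HTTP GET; no auth or API key is required.
    with urlopen(build_ask_url(question)) as resp:
        return resp.read().decode("utf-8")
```

`build_ask_url("What is TokenJuice?")` yields `https://tinyhumans.gitbook.io/openhuman/features.md?ask=What+is+TokenJuice%3F`, which `ask_docs` then fetches.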

Use this mechanism when:

- the answer is not explicitly present in the current page,
- you need clarification or additional context, or
- you want to retrieve related documentation sections.
