
GoatDB

GoatDB reads from memory on every device — 300× faster than SQLite in the browser. Your AI agent thinks at microsecond speed: no round-trips, no waiting.

Queries update live as data changes. Syncs automatically. Every write is cryptographically signed.

GoatDB combines the best of both worlds.

Remote databases

PostgreSQL, MySQL, DynamoDB

Shared across devices with strong consistency. Designed for centralized workloads where all clients connect to one server.

Embedded databases

SQLite, LevelDB, RocksDB

Fast reads, no network. Designed for single-device workloads where data stays local to one process.

AI agents reason locally, run on any device, and collaborate across instances. They need both: local speed and shared state. That's GoatDB.

Local speed. Shared state. One key to protect.

Data lives in memory on every device. Reads run at microsecond speed — no network, no disk. Changes sync through the server automatically. Conflicts resolve with Git-style three-way merge at the field level. Every write is cryptographically signed with Ed25519.

As in Git, every client holds the full commit graph while the server merely coordinates. If the server crashes, any client can restore it. The only secret the operator must protect is the root key. Scale horizontally by adding repositories — each one syncs, persists, and fails independently.
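The field-level merge idea can be sketched as follows. This is a hypothetical illustration of the general three-way merge technique, not GoatDB's actual implementation; the `threeWayMerge` helper and its tie-breaking rule are assumptions made for the sketch.

```typescript
// Illustration of field-level three-way merge (hypothetical helper, not
// GoatDB's internal code): each field is resolved independently against
// the common ancestor, so edits to different fields never conflict.
type Item = Record<string, string | number>;

function threeWayMerge(base: Item, ours: Item, theirs: Item): Item {
  const merged: Item = { ...base };
  const fields = new Set([
    ...Object.keys(base),
    ...Object.keys(ours),
    ...Object.keys(theirs),
  ]);
  for (const field of fields) {
    const b = base[field];
    const o = ours[field];
    const t = theirs[field];
    if (o === b) {
      merged[field] = t; // only "theirs" touched this field
    } else if (t === b) {
      merged[field] = o; // only "ours" touched this field
    } else {
      merged[field] = o; // true conflict: pick deterministically
      // (a real system breaks ties by commit order or timestamps)
    }
  }
  return merged;
}

// Two offline edits to different fields of the same item merge cleanly:
const base = { confidence: 0.5, observation: 'User prefers concise responses' };
const ours = { ...base, confidence: 0.95 };
const theirs = { ...base, observation: 'User prefers bullet points' };
threeWayMerge(base, ours, theirs);
// → { confidence: 0.95, observation: 'User prefers bullet points' }
```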

Remote (PostgreSQL, MySQL): all reads go through the server; devices share one central copy.

Embedded (SQLite, LevelDB): data stays on one device; each device keeps a separate local copy.

GoatDB: every client is a full replica; the server coordinates, and any client can restore it.

Built for the main thread

GoatDB is browser-native. Query scans run as cooperative coroutines that yield every ~20ms — one-third of a 60fps frame. The scheduler picks the shortest-running task first, so UI interactions always win. If a scan becomes irrelevant, cancel it mid-flight.

File I/O runs in a dedicated worker with zero-copy transfer. Query results persist to disk and restore on reopen — no re-scan on page load. On the server, the same queries run as synchronous loops with zero scheduling overhead.
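The cooperative-scan idea can be sketched with a plain async function (an assumed shape for illustration; `cooperativeScan` is not GoatDB's public API): process items until the ~20ms slice is spent, then yield the main thread before continuing.

```typescript
// Sketch of a cooperative scan: filter items in ~20ms slices, yielding
// the main thread between slices so UI events are never blocked.
// (Hypothetical helper for illustration, not GoatDB's internal scheduler.)
async function cooperativeScan<T>(
  items: Iterable<T>,
  predicate: (item: T) => boolean,
  sliceMs = 20,
): Promise<T[]> {
  const results: T[] = [];
  let sliceStart = performance.now();
  for (const item of items) {
    if (predicate(item)) results.push(item);
    if (performance.now() - sliceStart >= sliceMs) {
      // Yield to the event loop; clicks and keystrokes run here.
      await new Promise((resolve) => setTimeout(resolve, 0));
      sliceStart = performance.now();
    }
  }
  return results;
}
```

Mid-flight cancellation falls out naturally from this shape: pass an `AbortSignal` and check `signal.aborted` after each yield to drop a scan that has become irrelevant.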

Traditional (long-running query): the query runs to completion; the UI waits for the full scan.

GoatDB: tasks yield every ~20ms; the UI stays responsive between chunks.

Four steps. That's the whole API.

Define, query, sync, verify. No migrations. No ORM. No glue code.

1. Define & create

TypeScript schemas. No SQL, no migrations, no ORM. The source field lets you track which agent instance wrote each memory.

const AgentMemorySchema = {
  ns: 'agent-memory',
  version: 1,
  fields: {
    observation: { type: 'string', required: true },
    confidence: { type: 'number', default: () => 0.5 },
    source: { type: 'string', required: true },
    recordedAt: { type: 'date', default: () => new Date() }
  }
} as const;

// Create a memory item — source identifies which agent wrote it
const mem = db.create('/data/memories/pref-1', AgentMemorySchema, {
  observation: 'User prefers concise responses',
  confidence: 0.9,
  source: 'preference-agent'
});
2. Query live

After the first scan, queries update incrementally — no re-query, no polling. In React, wrap any query with useQuery() for automatic re-renders on data changes — no useEffect, no subscriptions, no cleanup. Same predicate API on Deno, Node.js, and the browser.

// Warm query runs in microseconds — data is already in memory
const memories = db.query({
  source: '/data/memories',
  schema: AgentMemorySchema,
  predicate: ({ item }) => item.get('confidence') > 0.7,
  sortBy: 'recordedAt'
});

// In React: const memories = useQuery(db, { source: '/data/memories', ... });
// Results auto-update when data changes
for (const mem of memories.results()) {
  console.log(mem.get('observation'));
}
3. It just syncs

Concurrent agents, multiple devices, offline edits — everything merges automatically with Git-style three-way merge. No conflict-resolution code to write.

// Agent A — offline, updates confidence
mem.set('confidence', 0.95);

// Agent B — offline, same item, updates observation
mem.set('observation', 'User prefers bullet points over paragraphs');

// Both reconnect — GoatDB merges automatically.
// Result: confidence = 0.95, observation = 'User prefers bullet points...'
4. Signed by default

Every commit is cryptographically signed with Ed25519. Verify which agent wrote what, build audit trails, or enforce authorization — without trusting the server.

// Every write is signed automatically with Ed25519
const task = db.create('/data/tasks/plan-1', TaskSchema, {
  title: 'Analyze Q4 metrics',
  assignedTo: 'analytics-agent',
  status: 'pending'
});
task.set('status', 'complete');

// Verify which agent wrote what, cryptographically
task.commit.session; // Ed25519 session that signed this commit
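The signature scheme itself is standard Ed25519. A minimal sketch using Node's built-in `node:crypto` one-shot API (the key handling and commit payload here are assumptions for illustration, not GoatDB's session or commit format):

```typescript
import { generateKeyPairSync, sign, verify } from 'node:crypto';

// Each agent session holds an Ed25519 key pair (hypothetical payload shape).
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

const commit = Buffer.from(
  JSON.stringify({ path: '/data/tasks/plan-1', field: 'status', value: 'complete' }),
);

// Sign the commit bytes; for Ed25519 the algorithm argument is null.
const signature = sign(null, commit, privateKey);

// Anyone holding the public key can verify authorship without trusting the server.
verify(null, commit, publicKey, signature); // → true
verify(null, Buffer.from('tampered'), publicKey, signature); // → false
```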

Where GoatDB fits

GoatDB loads the full repository into memory at open time — under a second for 100k items. After that one-time cost, reads complete in ~1µs. Each write is individually signed with Ed25519, so bulk inserts trade throughput for cryptographic attribution — ~16× slower than unsigned SQLite writes. Instead of SQL joins, GoatDB uses predicate-based queries that subscribe to changes and update incrementally.

GoatDB is built for apps that AI tools scaffold in one prompt and agents that run on any device. For SQL analytics, pair it with PostgreSQL. For warehousing, pair it with ClickHouse — GoatDB handles the local-first layer.

See the full benchmarks →

Open source. MIT licensed. Built in public.

GoatDB is built in public. The binary commit format, custom binary codec, and P2P sync protocol are all shipping now. Here's what's next — and where your help matters most.
