Gherk

Native Enterprise AI
Orchestration for Go
Stop importing bloated Python DAG frameworks. Orchestrate your LLMs with a high-performance, 0-dependency framework built natively for Go's deterministic runtime.
No Python.
No Experimental Bloat.
LangChain and LangGraph force you into heavy dependencies, fragile DAG environments, and massive abstraction stacks. Go-Brain flips the paradigm: a single, compiled, 0-dependency framework built on the standard library. Predictable deployments, instant cold starts, and clean architectural security boundaries.
Technical Implementation
import "github.com/gherk-lib/go-brain/router"

// 1. Add States to your Agent
bot.WithRouter("TriageNode").
    AddState("TriageNode", triageHandlerFunc).
    AddState("CheckoutNode", checkoutHandlerFunc)

// 2. Add Graph Edges rigidly
bot.Router().AddTransition("TriageNode", "start_checkout", "CheckoutNode")

// 3. Execute Loop safely
bot.Run(context.Background())
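The router above is a finite-state machine: states map to handlers, and (state, event) pairs map to edges. A minimal, self-contained sketch of the same mechanics — every name here is hypothetical, not the Go-Brain API:

```go
package main

import "fmt"

// Handler runs a state's work and returns the event it emits (assumed shape).
type Handler func() string

// Router is a tiny FSM mirroring the AddState / AddTransition pattern.
type Router struct {
	states      map[string]Handler
	transitions map[[2]string]string // (from, event) -> to
	current     string
}

func NewRouter(start string) *Router {
	return &Router{
		states:      map[string]Handler{},
		transitions: map[[2]string]string{},
		current:     start,
	}
}

func (r *Router) AddState(name string, h Handler) *Router {
	r.states[name] = h
	return r
}

func (r *Router) AddTransition(from, event, to string) *Router {
	r.transitions[[2]string{from, event}] = to
	return r
}

// Step runs the current state's handler and follows the edge its event selects.
func (r *Router) Step() (string, bool) {
	event := r.states[r.current]()
	next, ok := r.transitions[[2]string{r.current, event}]
	if ok {
		r.current = next
	}
	return r.current, ok
}

func main() {
	bot := NewRouter("TriageNode")
	bot.AddState("TriageNode", func() string { return "start_checkout" }).
		AddState("CheckoutNode", func() string { return "done" })
	bot.AddTransition("TriageNode", "start_checkout", "CheckoutNode")

	state, _ := bot.Step()
	fmt.Println(state) // the triage handler's event moves us to CheckoutNode
}
```

Because transitions are declared as explicit edges rather than free-form control flow, an invalid event simply leaves the machine in place — which is what makes the graph "rigid".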
Polymorphic Memory Engine
Swap between the five R.A.P.I.D. persistence layers at runtime through the same interface. Migrate conversation state seamlessly from a cheap Sliding Window buffer to an autonomous LLM Summary Compressor the moment context tokens max out.
Additionally, harness the Workspace Ingestion Engine. Point the bot at your monorepo, and Go-Brain natively crawls it, cleanses noise (`.git`, `node_modules`), and streams your heavy codebases directly into memory while enforcing strict per-file byte limits to prevent OOM (out-of-memory) failures.
Recent
Continuously drops the oldest messages when context token capacity is reached. Ideal for cheap, fast, and casual conversation chains where deep history is irrelevant.
bot.WithMemory(memory.NewWindowBuffer(4000))
Abstract
Triggers an autonomous LLM sub-agent the moment limits are hit. It seamlessly compresses thousands of past tokens into a hyper-dense semantic summary before continuing.
bot.WithMemory(memory.NewSummaryBuffer(llm, 15000))
Profile
Extracts strictly typed JSON key-value properties from the ongoing chat, building a persistent background profile that stays synced across sessions.
bot.WithMemory(memory.NewKVEntity(db, "user_pref"))
Indexed
Embeds massive codebases and document troves locally via Vector Databases, implementing semantic RAG securely inside the agent context.
bot.WithMemory(memory.NewVectorMemory(pgVector, 5))
Database
Dumps raw conversational state seamlessly into PostgreSQL or Redis, allowing absolute conversational resilience and state-resumption across physical server restarts.
bot.WithMemory(memory.NewSQLPersist(pool, "sess_id"))
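The "same interface" claim above boils down to a small contract that every layer satisfies, so backends swap without touching agent code. A minimal sketch with a hypothetical interface and one layer (the "Recent" sliding window) — not the library's actual types:

```go
package main

import "fmt"

// Memory is an assumed contract each R.A.P.I.D. layer would satisfy.
type Memory interface {
	Add(msg string)
	Messages() []string
}

// WindowBuffer keeps only the most recent max messages ("Recent" layer).
type WindowBuffer struct {
	max  int
	msgs []string
}

func NewWindowBuffer(max int) *WindowBuffer { return &WindowBuffer{max: max} }

func (w *WindowBuffer) Add(msg string) {
	w.msgs = append(w.msgs, msg)
	if len(w.msgs) > w.max {
		w.msgs = w.msgs[len(w.msgs)-w.max:] // drop the oldest
	}
}

func (w *WindowBuffer) Messages() []string { return w.msgs }

func main() {
	// Swapping layers means swapping this one constructor call;
	// everything downstream only ever sees the Memory interface.
	var mem Memory = NewWindowBuffer(2)
	for _, m := range []string{"hi", "how are you", "checkout please"} {
		mem.Add(m)
	}
	fmt.Println(mem.Messages()) // only the 2 newest messages survive
}
```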
StructGPT:
The Auto-Healer
Stop building huge custom prompts asking the LLM to return JSON. Define your Go Struct natively.
The Go-Brain extractor recursively parses the output. If the LLM hallucinates an invalid character, the extractor spins up a corrective retry loop, injecting the `Unmarshal` error back into the prompt and demanding a fix. Allow up to 3 retries and lock in guaranteed, schema-valid data.
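The repair loop reads as: unmarshal, and on failure feed the exact parse error back to the model. A self-contained sketch of that pattern with a mock LLM — the `Order` struct, `LLM` type, and `ExtractOrder` helper are illustrative, not the StructGPT API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Order is an example target struct the LLM must populate.
type Order struct {
	Item string `json:"item"`
	Qty  int    `json:"qty"`
}

// LLM is any function that takes a prompt and returns raw model output.
type LLM func(prompt string) string

// ExtractOrder retries up to maxRetries times, injecting the Unmarshal
// error back into the prompt so the model can heal its own output.
func ExtractOrder(llm LLM, prompt string, maxRetries int) (Order, error) {
	var o Order
	var err error
	for i := 0; i <= maxRetries; i++ {
		raw := llm(prompt)
		if err = json.Unmarshal([]byte(raw), &o); err == nil {
			return o, nil
		}
		// Auto-heal: demand a fix, quoting the exact parse error.
		prompt = fmt.Sprintf("Your last reply was invalid JSON (%v). Return only valid JSON.", err)
	}
	return o, err
}

func main() {
	calls := 0
	flaky := func(prompt string) string {
		calls++
		if calls == 1 {
			return `{"item": "book", "qty": }` // hallucinated invalid character
		}
		return `{"item": "book", "qty": 2}`
	}
	o, err := ExtractOrder(flaky, "Extract the order as JSON.", 3)
	fmt.Println(o, err) // healed on the second attempt
}
```

Feeding back the decoder's own error message is the key trick: the model sees precisely which character broke the schema instead of guessing.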
BrainTrace™ Telemetry
Go-Brain ships with an exclusive Zero-Dependency Graphical Dashboard hosted locally. Access http://localhost:9090 and watch your agents deliberate in real-time. FSM transitions, token consumption loops, and tool-calling flows are graphed visually via a live MermaidJS Engine through pure Server-Sent Events.
Public Testing in v2.0
Go-Brain is currently in closed beta, restricted to Gherk internal development. Public package installation is tracked as an open issue and will be unlocked in the upcoming Version 2.
Install Framework
AI Assistants Workflow
Developers today use tools like Cursor, Windsurf, or Copilot. By executing the Brain Rules Injector, the framework automatically bridges all internal SDK context directly into your local IDE workspace's `.cursorrules` or `.windsurfrules` files. Your AI Assistant will natively understand the framework structure.