Gherk

Native Enterprise AI
Orchestration for Go
Stop importing heavily bloated Python DAGs. Orchestrate your LLMs with a high-performance, 0-dependency framework written in pure Go for deterministic execution.
No Python.
No Experimental Bloat.
LangChain and LangGraph force you into heavy dependency trees and fragile DAG environments. go-brain flips the paradigm: a single, compiled, 0-dependency framework built on the standard library. Predictable deployments, instant cold starts, and a minimal attack surface.
AI Orchestration: The Reality
GIL Bottleneck
Python's asyncio runs on a single event loop that any blocking call can stall. Scaling agent swarms demands extra multiprocessing layers and heavy IPC overhead.
True Concurrency
Goroutines are lightweight and scheduled by the Go runtime across every CPU core. Execute millions of agents concurrently without a global interpreter lock.
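For illustration, here is a minimal pure-Go sketch of that fan-out pattern using only the standard library. The `fanOut` helper and the task names are invented for this example, not part of the SDK:

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut runs one lightweight goroutine per task and waits for all of
// them; no interpreter lock serializes the work across cores.
func fanOut(tasks []string, work func(string) string) []string {
	results := make([]string, len(tasks))
	var wg sync.WaitGroup
	for i, t := range tasks {
		wg.Add(1)
		go func(i int, t string) {
			defer wg.Done()
			results[i] = work(t) // each agent task runs concurrently
		}(i, t)
	}
	wg.Wait()
	return results
}

func main() {
	out := fanOut([]string{"plan", "code", "review"}, func(t string) string {
		return t + ":done"
	})
	fmt.Println(out) // [plan:done code:done review:done]
}
```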
Fragile Deployments
The AI environment nightmare: `pip`, `poetry`, OS-level dependency conflicts, and C++ bindings constantly breaking production Docker images.
Single Static Binary
Absolutely 0 dependencies. Compiled natively. Drop a single executable binary into a bare scratch container and it just runs.
Memory Greed
Instantiating a framework like LangChain spins up a full interpreter and its import graph. Idle memory footprints easily exceed hundreds of MB per agent, penalizing serverless cold starts.
< 20MB Footprint
Surgical memory allocation. An idle orchestrator daemon sits at roughly 15MB, enabling near-instant cold starts for massive Kubernetes scaling.
Dynamic Typing
Enforcing LLM payload schemas with `Pydantic` models happens entirely at runtime, adding overhead to every parse and deferring validation failures until after deployment.
Compile-Time Structs
JSON extracted from the LLM lands directly in rigid native Go structs, so schema mismatches surface as ordinary errors instead of null-pointer panics on the parsing side.
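As a standalone sketch of that workflow using only the standard library's `encoding/json` (the `TicketTriage` schema and `parseTriage` helper are invented for illustration, not SDK API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TicketTriage mirrors the JSON schema we ask the LLM to emit.
// Struct tags pin the wire names at compile time.
type TicketTriage struct {
	Severity string   `json:"severity"`
	Labels   []string `json:"labels"`
	Assignee string   `json:"assignee"`
}

// parseTriage decodes an LLM payload into the typed struct, surfacing
// schema violations as a normal Go error instead of a runtime panic.
func parseTriage(raw string) (TicketTriage, error) {
	var t TicketTriage
	err := json.Unmarshal([]byte(raw), &t)
	return t, err
}

func main() {
	out, err := parseTriage(`{"severity":"high","labels":["bug","auth"],"assignee":"kim"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(out.Severity, out.Labels, out.Assignee) // high [bug auth] kim
}
```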
Technical Implementation
import "github.com/gherk-lib/go-brain/router"

// 1. Build the node execution logic
compileHandler := func(ctx context.Context, state *router.State) error {
    res, err := state.Agent.ExecuteTool("run_cmd", "go build")
    if err != nil {
        return state.Transition("ErrorNode")
    }
    state.Memory.InsertEpisodic("Built successfully: " + res)
    return state.Transition("QANode")
}

// 2. Map the Ghost State Machine
bot.WithRouter("CompileNode").
    AddState("CompileNode", compileHandler).
    AddState("ErrorNode", recoveryHandler).
    AddState("QANode", testHandler)

// 3. Run safely: hallucinated transitions cannot block the loop
bot.Run(context.Background())
Polymorphic Memory Engine
Swap between 5 advanced persistence layers at runtime through the same interface. Migrate conversation state seamlessly from a cheap sliding-window buffer into an autonomous LLM summary compressor the moment context tokens max out.
Additionally, harness the Workspace Ingestion Engine. Point the bot at your monorepo and Go-Brain natively crawls it, filters out noise (`.git`, `node_modules`), and streams your heavy codebases directly to memory while enforcing strict per-file byte limits to prevent OOM (out-of-memory) failures.
Recent
Continuously drops the oldest messages when context token capacity is reached. Ideal for cheap, fast, and casual conversation chains where deep history is irrelevant.
bot.WithMemory(memory.NewWindowBuffer(4000))
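Under the hood, a sliding window reduces to a simple eviction loop. A minimal sketch with a crude whitespace token count; the `trimWindow` helper is illustrative (the real buffer would use the model's tokenizer, not `strings.Fields`):

```go
package main

import (
	"fmt"
	"strings"
)

// tokens is a crude whitespace-based token estimate, standing in for
// the model's real tokenizer.
func tokens(s string) int { return len(strings.Fields(s)) }

// trimWindow drops the oldest messages until the history fits the budget.
func trimWindow(history []string, budget int) []string {
	total := 0
	for _, m := range history {
		total += tokens(m)
	}
	for len(history) > 0 && total > budget {
		total -= tokens(history[0])
		history = history[1:] // evict the oldest message first
	}
	return history
}

func main() {
	h := []string{"a b c d", "e f", "g h i"}
	fmt.Println(trimWindow(h, 5)) // oldest message dropped: [e f g h i]
}
```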
Abstract
Triggers an autonomous LLM sub-agent the moment limits are hit. It seamlessly compresses thousands of past tokens into a hyper-dense semantic summary before continuing.
bot.WithMemory(memory.NewSummaryBuffer(llm, 15000))
Profile
Extracts strictly typed JSON key-value properties from the ongoing chat, building a persistent background profile that stays synced across sessions.
bot.WithMemory(memory.NewKVEntity(db, "user_pref"))
Indexed
Embeds massive codebases and document troves locally via a vector database, implementing semantic RAG directly inside the agent context.
bot.WithMemory(memory.NewVectorMemory(pgVector, 5))
Database
Persists raw conversational state to PostgreSQL or Redis, giving you conversational resilience and state resumption across physical server restarts.
bot.WithMemory(memory.NewSQLPersist(pool, "sess_id"))
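What makes the hot-swap possible is that every layer satisfies one shared interface. A pure-Go sketch of the idea; `Memory`, `WindowBuffer`, and `FullLog` are illustrative names, not the SDK's actual types:

```go
package main

import "fmt"

// Memory is the shared contract every persistence layer implements,
// which is what lets the orchestrator swap backends at runtime.
type Memory interface {
	Append(msg string)
	Context() []string
}

// WindowBuffer keeps only the last n messages (the "Recent" strategy).
type WindowBuffer struct {
	n    int
	msgs []string
}

func (w *WindowBuffer) Append(m string) {
	w.msgs = append(w.msgs, m)
	if len(w.msgs) > w.n {
		w.msgs = w.msgs[len(w.msgs)-w.n:]
	}
}
func (w *WindowBuffer) Context() []string { return w.msgs }

// FullLog keeps everything, standing in for a SQL-backed store.
type FullLog struct{ msgs []string }

func (f *FullLog) Append(m string)   { f.msgs = append(f.msgs, m) }
func (f *FullLog) Context() []string { return f.msgs }

// run drives any Memory implementation through the same code path.
func run(mem Memory) []string {
	for _, m := range []string{"hi", "how", "are", "you"} {
		mem.Append(m)
	}
	return mem.Context()
}

func main() {
	fmt.Println(run(&WindowBuffer{n: 2})) // [are you]
	fmt.Println(run(&FullLog{}))          // [hi how are you]
}
```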
StructGPT:
The Auto-Healer
Stop building huge custom prompts asking the LLM to return JSON. Define your Go Struct natively.
The Go-Brain extractor recursively parses the output. If the LLM hallucinates an invalid character, the extractor enters a corrective loop, injecting the `Unmarshal` error back into the prompt and demanding a fix. Give it up to 3 retries and you get validated data or a clean, typed error.
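In plain Go, that heal loop reduces to a retry around `json.Unmarshal` that feeds the decode error back to the model. A self-contained sketch with a fake model; the `extract` and `Generate` names are illustrative, not the SDK's API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Generate asks the model for JSON; feedback carries the previous
// Unmarshal error so the model can self-correct.
type Generate func(feedback string) string

// extract retries up to maxTries, injecting the decode error back into
// the prompt each round, mirroring the auto-heal loop described above.
func extract[T any](gen Generate, maxTries int) (T, error) {
	var out T
	var err error
	feedback := ""
	for i := 0; i < maxTries; i++ {
		raw := gen(feedback)
		if err = json.Unmarshal([]byte(raw), &out); err == nil {
			return out, nil
		}
		feedback = "previous output was invalid: " + err.Error()
	}
	return out, fmt.Errorf("gave up after %d tries: %w", maxTries, err)
}

type Verdict struct {
	Pass bool `json:"pass"`
}

func main() {
	calls := 0
	// A fake model that hallucinates once, then fixes itself.
	gen := func(feedback string) string {
		calls++
		if feedback == "" {
			return `{"pass": tru` // truncated, invalid JSON
		}
		return `{"pass": true}`
	}
	v, err := extract[Verdict](gen, 3)
	fmt.Println(v.Pass, calls, err) // true 2 <nil>
}
```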
Core Architecture
Built purely for production resilience and strict type-safe execution, ensuring that autonomous agents run continuously without breaking.
Ghost State Engine
Unlike unstructured loops that hallucinate endlessly, Go-Brain uses deterministic finite state machines. Agents transition gracefully between pure Go logic nodes, never getting stuck.
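A deterministic FSM of this kind is small enough to sketch in plain Go; the step cap below is what guarantees the machine can never loop forever. The state names mirror the earlier example, but the `Run` helper is ours, not the SDK's:

```go
package main

import "fmt"

// Handler runs a node's logic and names the next state; an empty
// string halts the machine.
type Handler func() string

// Run follows transitions deterministically and enforces a step cap so
// a bad transition can never spin unbounded.
func Run(states map[string]Handler, start string, maxSteps int) ([]string, error) {
	trace := []string{}
	cur := start
	for i := 0; i < maxSteps; i++ {
		h, ok := states[cur]
		if !ok {
			return trace, fmt.Errorf("unknown state %q", cur)
		}
		trace = append(trace, cur)
		next := h()
		if next == "" {
			return trace, nil // terminal state reached
		}
		cur = next
	}
	return trace, fmt.Errorf("exceeded %d steps", maxSteps)
}

func main() {
	built := false
	states := map[string]Handler{
		"Compile": func() string {
			if !built {
				built = true
				return "Error" // first build fails
			}
			return "QA"
		},
		"Error": func() string { return "Compile" }, // recover and retry
		"QA":    func() string { return "" },        // done
	}
	trace, err := Run(states, "Compile", 10)
	fmt.Println(trace, err) // [Compile Error Compile QA] <nil>
}
```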
Zero-Dependency
Runs entirely as a statically compiled Go binary daemon. Say goodbye to fragile Python environments and fragmented Node.js runtimes: native execution speed, no runtime to install.
Local MCP Integration
Equipped natively to interface with Model Context Protocol servers. Allow your orchestrators to read local workspaces, execute bash scripts, and interact with the host OS without network latency overhead.
BrainTrace™ Telemetry
Go-Brain ships with an exclusive zero-dependency graphical dashboard hosted locally. Open http://localhost:9090 and watch your agents deliberate in real time: FSM transitions, token consumption, and tool-calling flows are rendered live as MermaidJS graphs, streamed over pure Server-Sent Events.
Deep dive into the Documentation
Ready to evaluate robust Execution Guarantees, Idempotency Contracts, and System Controls? Read the complete architecture documentation.
Ready for Production
The core Go-Brain framework SDK is a proprietary B2B component. Configure your private Go environment to integrate advanced LLM orchestration locally in minutes.
Install Framework
AI Assistants Workflow
Developers today use tools like Cursor, Windsurf, or Copilot. Run the Brain Rules Injector and the framework automatically bridges its internal SDK context into your local IDE workspace's `.cursorrules` or `.windsurfrules` files, so your AI assistant natively understands the framework structure.