Gherk Brain

Native Enterprise AI Orchestration for Go

Stop importing bloated Python DAG stacks. Orchestrate your LLMs with a high-performance, zero-dependency framework written in pure Go for deterministic execution.

OpenAI · Anthropic Claude · Google Gemini · Ollama (Local) · GitHub Copilot · Alibaba Qwen

No Python.
No Experimental Bloat.

LangChain and LangGraph force you into heavy dependency trees, fragile DAG environments, and sprawling abstraction layers. go-brain flips the paradigm: a single, compiled, zero-dependency framework built on the Go standard library. Predictable deployments, instant cold starts, and clear security boundaries.

AI Orchestration: The Reality

| Legacy: 🐍 Python 3.13 | Go-Brain: Go 1.22+ |
| --- | --- |
| **GIL Bottleneck.** Asyncio suffers from severe event-loop blocking; scaling agent swarms natively demands complex multi-processing layers and heavy IPC overhead. | **True Concurrency.** Goroutines provide lightweight concurrency multiplexed across all CPU cores, letting you run enormous agent swarms in parallel without an interpreter lock. |
| **Fragile Deployments.** The AI environment nightmare: `pip`, `poetry`, OS-level dependency conflicts, and C++ bindings constantly breaking production Docker images. | **Single Static Binary.** Zero dependencies, compiled natively. Drop a single executable into a sterile scratch container and it just runs. |
| **Memory Greed.** Instantiating a framework like LangChain spawns a full interpreter; idle memory footprints easily exceed hundreds of MB per agent, penalizing serverless cold starts. | **< 20 MB Footprint.** Surgical memory allocation: an idle orchestrator daemon sits at roughly 15 MB, enabling near-instant cold starts for large-scale Kubernetes deployments. |
| **Dynamic Typing.** Enforcing LLM payload schemas with `Pydantic` models adds runtime validation overhead to every parsing cycle. | **Compile-Time Structs.** LLM-extracted JSON unmarshals directly into native Go structs, catching schema mismatches at parse time and eliminating nil-pointer surprises downstream. |

Technical Implementation

// Define clear preconditions, execution nodes, and failure transitions
import (
    "context"

    "github.com/gherk-lib/go-brain/router"
)

// 1. Build the Node Execution Logic
compileHandler := func(ctx context.Context, state *router.State) error {
    res, err := state.Agent.ExecuteTool("run_cmd", "go build")
    if err != nil {
        return state.Transition("ErrorNode")
    }
    state.Memory.InsertEpisodic("Built successfully: " + res)
    return state.Transition("QANode")
}

// 2. Map the Ghost State Machine
bot.WithRouter("CompileNode").
    AddState("CompileNode", compileHandler).
    AddState("ErrorNode", recoveryHandler).
    AddState("QANode", testHandler)

// 3. Run safely without prompt hallucinations blocking the loop
bot.Run(context.Background())
Memory & Storage

Polymorphic Memory Engine

Swap between 5 advanced persistence layers in real-time through the same interface. Migrate conversation state seamlessly from a cheap Sliding Window buffer, right into an autonomous LLM Summary Compressor when context tokens max out.

Additionally, harness the Workspace Ingestion Engine. Point the bot at your monorepo, and Go-Brain natively crawls it, cleans out noise (`.git`, `node_modules`), and streams your heavy codebases directly into memory while enforcing strict per-file byte limits to guard against OOM (Out of Memory) failures.

Recent: Sliding Drops

Continuously drops the oldest messages when context token capacity is reached. Ideal for cheap, fast, and casual conversation chains where deep history is irrelevant.

// Go implementation
bot.WithMemory(memory.NewWindowBuffer(4000))
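
The drop-oldest behavior is easy to picture with a toy buffer. The word-count tokenizer here is a stand-in for the SDK's real token counter, purely for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// windowBuffer keeps only the most recent messages under a token budget.
// Counting tokens as whitespace-separated words is a naive approximation.
type windowBuffer struct {
	limit    int
	messages []string
}

// Add appends a message, then evicts from the front until under budget.
func (w *windowBuffer) Add(msg string) {
	w.messages = append(w.messages, msg)
	for w.tokens() > w.limit && len(w.messages) > 1 {
		w.messages = w.messages[1:] // drop the oldest message first
	}
}

func (w *windowBuffer) tokens() int {
	n := 0
	for _, m := range w.messages {
		n += len(strings.Fields(m))
	}
	return n
}

func main() {
	buf := &windowBuffer{limit: 6}
	buf.Add("hello there agent")
	buf.Add("compile the project now")
	buf.Add("report status")
	// The first message has been evicted to stay under the 6-token budget.
	fmt.Println(buf.messages)
}
```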

Abstract: Auto-Compress

Triggers an autonomous LLM sub-agent the moment limits are hit. It seamlessly compresses thousands of past tokens into a hyper-dense semantic summary before continuing.

// Go implementation
bot.WithMemory(memory.NewSummaryBuffer(llm, 15000))

Profile: KV Extract

Extracts strictly typed JSON key-value properties from the ongoing chat, building a persistent background profile that stays synced across sessions.

// Go implementation
bot.WithMemory(memory.NewKVEntity(db, "user_pref"))

Indexed: RAG Local

Embeds massive codebases and document troves locally via Vector Databases. Implements semantic RAG directly inside the agent context securely.

// Go implementation
bot.WithMemory(memory.NewVectorMemory(pgVector, 5))
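
Under the hood, semantic retrieval reduces to nearest-neighbor search over embeddings. A naive in-memory sketch with toy two-dimensional vectors; a real setup would use model-generated embeddings and a vector store such as pgvector:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// doc pairs a text chunk with a precomputed embedding.
// These tiny vectors are illustrative; real embeddings have hundreds of dims.
type doc struct {
	text string
	vec  []float64
}

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// topK returns the k documents most similar to the query vector.
func topK(docs []doc, query []float64, k int) []string {
	sort.Slice(docs, func(i, j int) bool {
		return cosine(docs[i].vec, query) > cosine(docs[j].vec, query)
	})
	if k > len(docs) {
		k = len(docs)
	}
	out := make([]string, k)
	for i := range out {
		out[i] = docs[i].text
	}
	return out
}

func main() {
	docs := []doc{
		{"func main() { ... }", []float64{0.9, 0.1}},
		{"deployment guide", []float64{0.1, 0.9}},
	}
	// A code-flavored query retrieves the code chunk first.
	fmt.Println(topK(docs, []float64{1, 0}, 1))
}
```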

Database: SQL Persist

Dumps raw conversational state into PostgreSQL or Redis, giving you conversational resilience and state resumption across physical server restarts.

// Go implementation
bot.WithMemory(memory.NewSQLPersist(pool, "sess_id"))
Data Extraction

StructGPT:
The Auto-Healer

Stop building huge custom prompts asking the LLM to return JSON. Define your Go Struct natively.

The Go-Brain extractor recursively parses the output. If the LLM hallucinates an invalid character, the extractor enters a corrective retry loop, injecting the `Unmarshal` error back into the prompt and demanding a fix. Give it up to 3 retries and get reliably structured data back.
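
The heal-and-retry loop can be approximated in a few lines of plain Go. The `extractWithRetries` helper and the fake model below are illustrative assumptions, not the StructGPT API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Invoice is an illustrative extraction target.
type Invoice struct {
	Total float64 `json:"total"`
}

// extractWithRetries asks the model for JSON; on unmarshal failure it feeds
// the exact error text back as a repair prompt, up to maxRetries times.
// The llm func stands in for a real completion call.
func extractWithRetries(llm func(prompt string) string, prompt string, maxRetries int, out any) error {
	reply := llm(prompt)
	for attempt := 0; ; attempt++ {
		err := json.Unmarshal([]byte(reply), out)
		if err == nil {
			return nil
		}
		if attempt >= maxRetries {
			return fmt.Errorf("extraction failed after %d retries: %w", maxRetries, err)
		}
		// Inject the Unmarshal error back into the conversation and demand a fix.
		reply = llm("Your JSON was invalid (" + err.Error() + "). Return corrected JSON only.")
	}
}

func main() {
	calls := 0
	// Fake model: hallucinates a string total first, then corrects itself.
	fakeLLM := func(prompt string) string {
		calls++
		if calls == 1 {
			return `{"total":"129.95"}`
		}
		return `{"total":129.95}`
	}
	var inv Invoice
	if err := extractWithRetries(fakeLLM, "Extract the invoice total.", 3, &inv); err != nil {
		panic(err)
	}
	fmt.Println(inv.Total, "after", calls, "call(s)")
}
```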

System Log
[0.1s] Trying Extraction: Invoice struct...
[1.5s] ERR: json: cannot unmarshal string into Go struct field Total of type float64
[1.6s] Retrying (Attempt 1): Firing Auto-Heal prompt with error context...
[3.2s] SUCCESS: Payload parsed and mapped correctly.

Core Architecture

Built purely for production resilience and strict type-safe execution, ensuring that autonomous agents run continuously without breaking.

Ghost State Engine

Unlike unstructured loops that endlessly hallucinate, Go-Brain uses deterministic finite state machines. Agents transition gracefully between pure Go logic nodes and never get stuck.

Zero-Dependency

Runs entirely as a single statically compiled Go binary daemon. Say goodbye to fragile Python environments and fragmented Node.js runtimes.

Local MCP Integration

Equipped natively to interface with Model Context Protocol (MCP) servers. Let your orchestrators read local workspaces, execute bash scripts, and interact with the host OS directly, without network latency overhead.

Observability

BrainTrace™ Telemetry

Go-Brain ships with an exclusive Zero-Dependency Graphical Dashboard hosted locally. Access http://localhost:9090 and watch your agents deliberate in real-time. FSM transitions, token consumption loops, and tool-calling flows are graphed visually via a live MermaidJS Engine through pure Server-Sent Events.
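
The transport itself is plain Server-Sent Events. A minimal sketch of one frame and the shape of an SSE endpoint; the `fsm` event name is an assumption for illustration, not BrainTrace's actual wire format:

```go
package main

import (
	"fmt"
	"net/http"
)

// sseEvent formats one Server-Sent Event frame carrying an FSM transition.
// The "fsm" event name is an assumption, not the dashboard's real schema.
func sseEvent(from, to string) string {
	return fmt.Sprintf("event: fsm\ndata: %s -> %s\n\n", from, to)
}

// streamHandler shows the shape of an SSE endpoint: set the stream
// content type, write a frame, and flush so the browser sees it live.
func streamHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	fmt.Fprint(w, sseEvent("CompileNode", "QANode"))
	if f, ok := w.(http.Flusher); ok {
		f.Flush()
	}
}

func main() {
	// Print one frame instead of starting a server, to keep the sketch inert.
	fmt.Print(sseEvent("CompileNode", "QANode"))
}
```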

Dashboard snapshot: TriageRouter (FSM State) · SummarizeMem (Window) · StructGPT (JSON Healer)
STATUS: AVAILABLE NOW

Ready for Production

The core Go-Brain framework SDK is fully accessible. Configure your private Go environment to integrate advanced LLM orchestration into your systems locally in minutes.

Install Framework

$ go env -w GOPRIVATE="github.com/gherk-lib/*"
$ go get github.com/gherk-lib/go-brain@v1.6.0

AI Assistants Workflow

Developers today use tools like Cursor, Windsurf, or Copilot. Run the Brain Rules Injector and the framework automatically bridges its internal SDK context into your local workspace's `.cursorrules` or `.windsurfrules` files, so your AI assistant natively understands the framework structure.

$ go run github.com/gherk-lib/go-brain/cmd/brain-rules@latest