Sharur

Primitives, not features. Local-first. Extensible.

sharur is a powerful, local-first agentic harness designed for developers who want a flexible and reliable assistant that runs on their own hardware. It prioritizes local LLMs (via Ollama and llama.cpp) but adapts seamlessly to cloud providers like OpenAI, Anthropic, and Google Gemini.

Sharur, smasher of thousands! The weapon of Ninurta, acting as his counselor and scout: it flies ahead, assesses, reports back, then executes.



Core Philosophy

  • Local-First — Built from the ground up to favor local inference for privacy, speed, and cost-efficiency.
  • Aggressively Extensible — Every tool, provider, and behavior is a plugin interface. Supports gRPC extensions, markdown skills, and reusable prompt templates.
  • Session Persistence — Intelligent JSONL-backed session management with project-aware storage, branching, forking, and tree visualization.
  • Flexible Modes — TUI mode, one-shot mode, or a multi-session gRPC service — all powered by a central service-oriented architecture.
  • Security & Safety — Dry-run safety for destructive tools, automatic prompt injection mitigation, and a gRPC extension system for enforcing arbitrary policies.

Getting Started

Prerequisites

  • Go 1.26.2+
  • Nix (optional, recommended) — with flake support enabled

Installation

# Recommended: use Nix for a fully reproducible dev environment
nix develop

# Build binary with Go
go build -o shr ./cmd/shr

# Or install globally
go install ./cmd/shr

Quick Start

# Launch the interactive TUI
shr

# One-shot answer (JSONL output)
shr --mode json "What is the best way to structure a Go project?"

# Resume the most recent session
shr --continue

What’s in This Site

| Section | Audience | Contents |
|---------|----------|----------|
| CLI | All users | Modes, keybindings, slash commands, provider setup, configuration |
| Extensibility | Extension authors | Skills, prompt templates, Go/Python/gRPC extensions |
| SDK | Go library consumers | Embedding, custom tools, events, in-process extensions |
| Internals | Contributors | Architecture, agent loop, session format, build system |
| API Reference | SDK & extension authors | GoDoc for sdk, extensions, internal/tools, internal/agent |

Subsections of Sharur

User Guide

The user guide covers day-to-day use of sharur from the terminal:

  • CLI — runtime modes, flags, keybindings, slash commands, and configuration
  • Extensibility — skills, prompt templates, and Go/Python/gRPC extensions

Subsections of User Guide

CLI

shr is the sharur CLI binary. It supports three runtime modes and a rich flag surface for model selection, session management, tools, and extensions.

Runtime Modes

| Mode | Flag | Description |
|------|------|-------------|
| TUI | --mode tui (default) | Interactive Bubble Tea terminal interface with streaming, tool cards, and session management |
| JSON | --mode json | One-shot query with line-delimited JSON event output — useful for shell pipelines |
| gRPC | --mode grpc | Persistent multi-session gRPC service — any gRPC-capable client can connect |

Quick Start

# Launch the interactive TUI
shr

# One-shot answer (JSONL output)
shr --mode json "What is the best way to structure a Go project?"

# Resume the most recent session
shr --continue

See the sub-pages for full keybinding and slash command references, JSON event schema, gRPC proto overview, provider setup, and the full configuration schema.

Subsections of CLI

Configuration

sharur uses layered JSON configuration. Project-level settings override global defaults.

| Path | Scope |
|------|-------|
| ~/.sharur/config.json | Global defaults — applies to all projects |
| .sharur/config.json | Project-level overrides — applies in this directory |

config.json Schema

{
  "defaultModel": "llama3.2",
  "defaultProvider": "ollama",
  "theme": "dark",
  "thinkingLevel": "medium",
  "ollamaBaseURL": "http://localhost:11434",
  "openAIBaseURL": "https://api.openai.com/v1",
  "openAIApiKey": "",
  "anthropicApiKey": "",
  "anthropicApiVersion": "",
  "googleApiKey": "",
  "llamaCppBaseURL": "http://localhost:8080",
  "compaction": {
    "enabled": true,
    "reserveTokens": 2048,
    "keepRecentTokens": 8192
  }
}

API keys can also be set via environment variables — env vars take priority over config file values.


Context Files

sharur auto-discovers AGENTS.md, CLAUDE.md, GEMINI.md, and .context.md in your project root and parent directories and injects them into the system prompt. Outermost files take precedence (parent directory wins over project root).

Disable with --no-context-files.


CLI Flags

Mode

| Flag | Description |
|------|-------------|
| --mode | Mode: tui (default), json, grpc |
| --grpc-addr | gRPC listen address (default :50051; --mode grpc only) |

Model / Provider

| Flag | Description |
|------|-------------|
| --model / -m | Model to use (e.g. llama3, gpt-4o, anthropic/claude-sonnet-4-6) |
| --provider | Provider: ollama, openai, anthropic, llamacpp, google |
| --api-key | API key override |
| --thinking | Thinking level: off, minimal, low, medium, high, xhigh |
| --models | Comma-separated model list for Ctrl+P cycling |

Session

| Flag | Description |
|------|-------------|
| --continue / -c | Resume the most recent session |
| --resume / -r | Select a session to resume (fuzzy search or ID) |
| --session | Use a specific session file path |
| --session-dir | Directory for session storage and lookup |
| --branch | Branch from a session file or partial UUID into a new child session |
| --no-session | Ephemeral mode: don’t save the session |

System Prompt

| Flag | Description |
|------|-------------|
| --system-prompt | Override the system prompt |
| --append-system-prompt | Append text or file to the system prompt (repeatable) |

Tools

| Flag | Description |
|------|-------------|
| --tools | Comma-separated list of tools to enable: read,bash,edit,write,grep,find,ls |
| --no-tools | Disable all built-in tools |
| --dry-run | Safety mode: destructive tools preview actions instead of running |

Extensions / Skills / Prompts

| Flag | Description |
|------|-------------|
| --extension / -e | Load a gRPC extension binary (repeatable) |
| --no-extensions | Disable extension directory auto-discovery (-e paths still load) |
| --skill | Load a skill file or directory (repeatable) |
| --no-skills | Disable skill auto-discovery |
| --prompt-template | Load a prompt template file or directory (repeatable) |
| --no-prompt-templates | Disable prompt template auto-discovery |

Output / Info

| Flag | Description |
|------|-------------|
| --export | Export current session to an HTML file and exit |
| --list-models | List available models from the configured provider (optional fuzzy filter) |
| --version / -v | Show version number |
| --verbose | Force verbose startup output |
| --offline | Disable startup network operations (model checks, etc.) |

Provider Setup

sharur supports five LLM providers. All configuration lives in config.json files or environment variables; environment variables take priority over config file values.


Model Naming

Models can be specified as provider/model shorthand or with separate flags:

# Shorthand: provider inferred from the slash-prefix
shr --model anthropic/claude-sonnet-4-6

# Explicit: provider and model as separate flags
shr --provider anthropic --model claude-sonnet-4-6

Both forms are equivalent. The shorthand is convenient for one-off overrides; the config file form is better for persistent defaults.


Environment Variables

API keys set via environment variable take priority over values in config.json. The env var names use the SHARUR_ prefix:

| Provider | Environment Variable |
|----------|----------------------|
| Anthropic | SHARUR_ANTHROPIC_API_KEY |
| OpenAI | SHARUR_OPENAI_API_KEY |
| Google | SHARUR_GOOGLE_API_KEY |

Ollama and llama.cpp are local servers and do not use API keys.


Ollama

Ollama runs models locally. It is the default provider.

// ~/.sharur/config.json or .sharur/config.json
{
  "defaultProvider": "ollama",
  "defaultModel": "llama3.2",
  "ollamaBaseURL": "http://localhost:11434"
}
# Pull a model and launch
ollama pull llama3.2
shr

# Use a specific model
shr --model ollama/llama3.2

# Use a remote Ollama server (set ollamaBaseURL in config)
shr --model llama3.2 --provider ollama

Notes:

  • Default base URL is http://localhost:11434. Override with ollamaBaseURL.
  • Ollama models support tools and images (vision models).
  • Use shr --list-models to see all locally available models.
  • Thinking is supported on models that emit <think> tokens (e.g. qwq, deepseek-r1).

llama.cpp

llama.cpp exposes an OpenAI-compatible HTTP server.

{
  "defaultProvider": "llamacpp",
  "llamaCppBaseURL": "http://localhost:8080"
}
# Start the llama.cpp server (example)
./llama-server -m model.gguf --port 8080

# Connect with sharur
shr --provider llamacpp --model my-model

Notes:

  • Default base URL is http://localhost:8080. Override with llamaCppBaseURL.
  • The model name passed to shr is forwarded to the server as-is.
  • Image attachments are not supported.
  • The server’s own context window size is used; sharur queries /v1/models to detect it.

OpenAI

{
  "defaultProvider": "openai",
  "defaultModel": "gpt-4o",
  "openAIApiKey": "",
  "openAIBaseURL": "https://api.openai.com/v1"
}
# Via environment variable (recommended)
export SHARUR_OPENAI_API_KEY=sk-...
shr --model openai/gpt-4o

# One-off key override
shr --provider openai --model gpt-4o --api-key sk-...

OpenAI-compatible endpoints:

Any server that implements the OpenAI chat completions API can be used by pointing openAIBaseURL at it:

{
  "defaultProvider": "openai",
  "openAIBaseURL": "http://localhost:11434/v1",
  "openAIApiKey": "unused"
}

This works with vLLM, LM Studio, and others.

Notes:

  • Reasoning models (o3, o4-mini) emit thinking deltas that appear in the TUI and JSON event stream.
  • Supports tools and vision (images) for compatible models.

Anthropic

{
  "defaultProvider": "anthropic",
  "defaultModel": "claude-sonnet-4-6",
  "anthropicApiKey": "",
  "anthropicApiVersion": ""
}
export SHARUR_ANTHROPIC_API_KEY=sk-ant-...
shr --model anthropic/claude-sonnet-4-6

# Extended thinking (claude-3-7-sonnet and later)
shr --model anthropic/claude-3-7-sonnet-20250219 --thinking high

Notes:

  • Extended thinking is supported for models that enable it (e.g. claude-3-7-sonnet). Use --thinking medium or --thinking high.
  • medium thinking uses a 10,000-token budget; high uses 20,000 tokens. Temperature is automatically set to the required value.
  • anthropicApiVersion overrides the anthropic-version request header; leave empty to use the library default.

Google Gemini

{
  "defaultProvider": "google",
  "defaultModel": "gemini-2.0-flash",
  "googleApiKey": ""
}
export SHARUR_GOOGLE_API_KEY=AIza...
shr --model google/gemini-2.0-flash

Notes:

  • Gemini 1.5 Pro and later have a 1M+ token context window.
  • Supports tools and vision (images).
  • Use shr --list-models to see available Gemini models.

Listing Available Models

All five providers implement model listing. Use --list-models to query the active provider:

# List Ollama models
shr --list-models

# List models from a specific provider
shr --provider anthropic --list-models

# Filter results
shr --provider openai --list-models gpt-4

The output is a plain list of model names, suitable for piping:

shr --list-models | fzf | xargs -I{} shr --model {}

Provider Feature Matrix

| Provider | Tools | Images | Thinking | Model Listing |
|----------|-------|--------|----------|---------------|
| ollama | ✓ | ✓ | model-dependent | ✓ |
| llamacpp | ✓ | ✗ | | ✓ |
| openai | ✓ | ✓ | reasoning models | ✓ |
| anthropic | ✓ | ✓ | ✓ extended | ✓ |
| google | ✓ | ✓ | | ✓ |

TUI

The TUI is a rich, Bubble Tea-powered interface with real-time streaming, tool cards, session management, and a live context usage progress bar in the status footer.


Keybindings

| Key | Action |
|-----|--------|
| Enter | Send message (or Steer the running agent) |
| Shift+Enter | Insert newline |
| Ctrl+Enter | Queue follow-up message (runs after agent finishes) |
| Ctrl+C | Abort the current agent run and clear the input editor |
| Esc | Cancel streaming / Close modal / Abort current turn |
| Ctrl+O | Toggle tool call output expansion |
| Ctrl+P | Open model selection modal (cycling via --models flag) |
| ↑/↓ | Navigate prompt history (if at start/end of editor) / Scroll viewport |
| F1 | Show help modal |

Slash Commands

| Command | Description |
|---------|-------------|
| /new | Start a fresh session |
| /resume <id> | Resume a session by ID or partial UUID (fuzzy search enabled) |
| /branch [idx] | Create a new child session branching from a specific message index (defaults to last) |
| /fork | Duplicate current session into a new independent session (no parent link) |
| /rebase | Interactive rebase: select specific messages to keep in a new session |
| /merge <id> | Merge another session’s history into the current one with a synthesis turn |
| /tree [-g\|-p] | Open session tree modal. Flags: --global (-g) or --project (-p) |
| /import <path> | Import a session from a JSONL file |
| /export <path> | Export the current session to a JSONL file |
| /model <p/m> | Switch model mid-conversation (e.g. /model anthropic/claude-sonnet-4-6) |
| /stats | View session statistics and token usage |
| /config | View and edit active configuration |
| /context | View detailed context window usage |
| /compact | Manually trigger a context compaction |
| /skill:<name> [args] | Invoke a skill |
| /prompt:<name> | Expand a prompt template into the editor |
| /exit | Quit (alias: /quit) |

Session Tree Modal (/tree)

| Key | Action |
|-----|--------|
| ↑/↓ / PgUp/PgDn | Navigate the session list |
| Enter | Resume the selected session (or branch from it if it’s an interior node) |
| B | Create a new branch from the selected session |
| F | Create an independent fork of the selected session |
| R | Start an interactive rebase from the selected session’s history |
| Esc | Close modal |

Bang Commands

Bang commands execute a shell command and inject the output into the conversation:

!ls -la          # Execute shell command, paste output into editor
!!cat README.md  # Execute shell command, send output directly to agent
  • !cmd — pastes stdout into the editor so you can review before sending
  • !!cmd — sends stdout directly to the agent without review

At-File Attachments

Type @ in the input to fuzzy-search and attach file contents to your prompt:

Tell me what this does @src/agent/loop.go

The file content is embedded inline in the message sent to the agent.

JSON Mode

JSON mode runs a single prompt and streams the agent’s events as line-delimited JSON (JSONL) to stdout. It is designed for shell pipelines and tooling integration.

shr --mode json "What is the best way to structure a Go project?"

# Pipe stdin as context
cat main.go | shr --mode json "Refactor this to use interfaces"

# Specify a model
shr --mode json "Summarize the last 10 git commits" --model anthropic/claude-opus-4-5

Event Format

Each line is the protobuf JSON encoding of an AgentEvent. Event types mirror the TUI stream:

  • EVENT_AGENT_START / EVENT_AGENT_END
  • EVENT_TEXT_DELTA — incremental response text
  • EVENT_THINKING_DELTA — incremental thinking text (extended thinking models)
  • EVENT_TOOL_CALL — tool invocation start
  • EVENT_TOOL_DELTA — streaming tool output
  • EVENT_TOOL_OUTPUT — final tool result
  • EVENT_TURN_START / EVENT_TURN_END

Common Patterns

# Capture only the text deltas
shr --mode json "Explain Go interfaces" \
  | jq -r 'select(.type == "EVENT_TEXT_DELTA") | .content'

# Run without saving the session
shr --mode json --no-session "Quick one-off question"

# Dry-run to see what tools would be called
shr --mode json --dry-run "Delete all .tmp files in the current directory"

gRPC Mode

gRPC mode starts a persistent AgentService server. Each connecting client supplies a session_id and gets its own isolated agent. Sessions are saved to disk after each turn and reloaded automatically on reconnect.

# Start on the default port
shr --mode grpc

# Use a custom address
shr --mode grpc --grpc-addr :9090

The server responds to SIGINT/SIGTERM with a graceful shutdown: in-flight turns are allowed to finish (30 s timeout), all sessions are flushed to disk, then the listener closes.


Proto Definition

The service is defined in proto/sharur/v1/agent.proto. Generated Go stubs live in internal/gen/sharur/v1/. Regenerate with mage generate.

Key RPCs:

| RPC | Description |
|-----|-------------|
| Prompt | Send a user message; streams back AgentEvents |
| NewSession | Create a new session |
| GetMessages | Retrieve message history for a session |
| GetState | Get current agent state |
| Steer | Inject a steering message mid-turn |
| FollowUp | Queue a follow-up after the current turn |
| Abort | Cancel the current running turn |
| ForkSession | Fork a session into a new independent copy |
| ConfigureSession | Change model, provider, or thinking level |
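
As an illustration, a minimal Go client for the remote service could look like the sketch below. The pb.NewAgentServiceClient constructor follows the standard protoc-gen-go-grpc naming for AgentService, but the request type and its SessionId field are assumptions; an external client would normally generate its own stubs from proto/sharur/v1/agent.proto and use the exact message types found there.

package main

import (
    "context"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"

    // Hypothetical import path for the generated stubs; external clients
    // generate their own from proto/sharur/v1/agent.proto.
    pb "github.com/goppydae/sharur/internal/gen/sharur/v1"
)

func main() {
    // Connect to a server started with `shr --mode grpc`.
    conn, err := grpc.NewClient("localhost:50051",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    client := pb.NewAgentServiceClient(conn)

    // Each client supplies its own session_id; the request field name is assumed.
    state, err := client.GetState(context.Background(),
        &pb.GetStateRequest{SessionId: "my-session"})
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("agent state: %v", state)
}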

In-Process Transport

For the TUI and JSON modes, all internal communication also goes through this same protobuf boundary using a bufconn in-memory pipe — not a network socket. This means all three modes share identical code paths. See Service Architecture for details.

Extensibility

sharur supports three extension points:

  • Skills — reusable Markdown instructions invoked with /skill:<name>
  • Prompt templates — reusable snippets that expand into the input editor via /prompt:<name>
  • Extensions — in-process Go, out-of-process Python, or gRPC plugins

Subsections of Extensibility

Skills

Skills are Markdown files that provide sharur with specialized, reusable instructions for specific tasks. When a skill is invoked, its content is sent as a user message to the agent along with any arguments you provide.


How Skills Work

When sharur starts, it scans the skill directories and adds a list of available skills to the system prompt. The agent knows which skills exist and their descriptions. You can explicitly invoke a skill with /skill:<name> from the TUI, or the agent may choose to invoke one automatically via the read tool or a specialized skill tool call.

When you invoke a skill via /skill:<name>, it is executed as a skill tool, which loads the content and sends it to the agent:

<skill name="refactor" location="/path/to/refactor/SKILL.md">
References are relative to /path/to/refactor/.

...skill content here...
</skill>

your additional arguments here

Skill Discovery Directories

sharur searches for skills in these locations (in order):

| Path | Scope |
|------|-------|
| ~/.sharur/skills/ | Global — available in all projects |
| .sharur/skills/ (project root) | Project-specific skills |

Skills with the same name in a project directory override global ones.


Skill File Formats

Simple: Single .md file

Create a .md file directly in a skills directory. The filename (without extension) becomes the skill name.

.sharur/skills/refactor.md

Invoke with:

/skill:refactor improve error handling

Structured: Directory with SKILL.md

Create a directory containing a SKILL.md file. The directory name becomes the skill name. This format lets you include supporting files (examples, templates) alongside the skill.

.sharur/skills/
  code-review/
    SKILL.md
    checklist.md
    examples/
      before.go
      after.go

Invoke with:

/skill:code-review

Note: When a SKILL.md is found in a directory, subdirectories are not scanned further. This lets you bundle reference files with your skill.


Frontmatter (Optional)

Both formats support optional YAML frontmatter to provide metadata:

---
name: refactor
description: Refactor Go code to use idiomatic patterns and interfaces
---

You are an expert Go developer. When asked to refactor code:

1. Identify opportunities to use interfaces for testability
2. Replace repetitive code with helper functions
3. Add godoc comments to all exported symbols
4. Ensure error handling follows Go conventions (wrap with %w)

Always explain the reasoning behind each change before making it.

Frontmatter fields:

| Field | Description |
|-------|-------------|
| name | Override the skill name (defaults to filename/directory name) |
| description | A short description shown to the agent in the system prompt |

Practical Examples

Code Review Skill

.sharur/skills/code-review.md

---
name: code-review
description: Perform a thorough code review with actionable feedback
---

Review the provided code and evaluate it against these criteria:

**Correctness**
- Does the logic match the intended behavior?
- Are edge cases handled?
- Are there potential nil pointer dereferences or index out-of-bounds issues?

**Maintainability**
- Is the code readable and self-documenting?
- Are functions focused on a single responsibility?
- Is there appropriate error handling?

**Performance**
- Are there obvious inefficiencies (e.g. unnecessary allocations, N+1 queries)?

Format your response as:
## Summary
<one paragraph>

## Issues
<numbered list of specific issues with file:line references>

## Suggestions
<numbered list of improvements>

Invoke:

/skill:code-review

Or attach a file reference:

/skill:code-review @[internal/agent/loop.go]

Structured Skill with Supporting Files

.sharur/skills/
  db-migration/
    SKILL.md
    schema-example.sql
---
name: db-migration
description: Generate SQL migration files following our project conventions
---

Generate a database migration for the requested schema change.

Our migration file conventions:
- Files are named: `YYYYMMDD_HHMMSS_description.sql`
- Each file has an `-- +migrate Up` and `-- +migrate Down` section
- All tables use `BIGINT` primary keys with `AUTO_INCREMENT`
- Always include `created_at` and `updated_at` TIMESTAMP columns

See the example schema in this skill's directory: `schema-example.sql`

Global Utility Skill

~/.sharur/skills/explain.md

---
name: explain
description: Explain code clearly for a non-expert audience
---

Explain the following code in plain English. Assume the reader is a competent programmer but unfamiliar with this codebase.

Structure your explanation as:
1. **Purpose** — What does this code do in one sentence?
2. **How it works** — Step-by-step walkthrough of the logic
3. **Key concepts** — Any domain-specific terms or patterns used
4. **Gotchas** — Anything surprising or non-obvious

Tips

  • Keep skills focused. One skill = one task type. Compose them with arguments rather than making a single skill do everything.
  • Use relative file references — when your skill body references files, note they resolve relative to the skill’s directory. The agent is told the skill’s location so it can use the read tool on supporting files.
  • Test your skill by invoking it with /skill:<name> in the TUI. The skill’s content and its effect on the conversation will be visible in the tool output cards.
  • Override skills per-project — place a skill with the same name in .sharur/skills/ to override the global version for a specific project.

Prompt Templates

Prompt templates are reusable text snippets that expand directly into the TUI input editor. Unlike skills (which are sent to the agent immediately), prompt templates let you pre-fill the editor so you can review, edit, or complete the text before sending.


How Prompt Templates Work

When you type /prompt:<name> and press Enter, the template content is loaded into the editor input. You can then modify it, add context, attach files with @, and send it normally. This is useful for long, structured prompts you use frequently.


Prompt Template Directories

sharur searches these locations (in order):

| Path | Scope |
|------|-------|
| ~/.sharur/prompts/ | Global — available in all projects |
| .sharur/prompts/ (project root) | Project-specific templates |

Template File Format

A prompt template is any .md file in a prompts directory. The filename (without extension) is the template name.

.sharur/prompts/bug-report.md

Invoke with:

/prompt:bug-report

Minimal Template (no frontmatter)

The entire file content becomes the template text:

Describe the bug you found:

**Steps to reproduce:**
1.
2.
3.

**Expected behavior:**

**Actual behavior:**

**Environment:**
- OS:
- shr version:
- Model:

Template with Frontmatter

Add optional YAML frontmatter for metadata:

---
description: Generate a structured bug report
argument-hint: <component-name>
---

Describe the bug you found in the $1 component:

**Steps to reproduce:**
1.
2.
3.

**Expected behavior:**

**Actual behavior:**

Frontmatter fields:

| Field | Description |
|-------|-------------|
| description | Short description shown in the /prompt: picker |
| argument-hint | Hint shown in autocomplete describing expected arguments |

Argument Substitution

Templates support positional argument placeholders: $1, $2, etc.

When you invoke a template via the slash command handler (not the interactive TUI), arguments after the template name are substituted. To mitigate prompt injection, sharur automatically wraps these arguments in <untrusted_input> tags. In the TUI, the template expands as-is and you fill in the values manually.
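
As a rough illustration (the template name below is hypothetical and the exact expansion format may differ), a template line such as `Summarize recent changes in the $1 component.` invoked through the slash command handler as /prompt:component-summary auth would expand along the lines of:

Summarize recent changes in the <untrusted_input>auth</untrusted_input> component.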


Practical Examples

PR Description Template

.sharur/prompts/pr-description.md

---
description: Generate a pull request description
---

Write a pull request description for the following changes.

**Format:**
## Summary
<What does this PR do? Why?>

## Changes
<Bullet list of specific changes>

## Testing
<How was this tested?>

## Notes
<Anything reviewers should pay attention to>

The diff is:

Invoke:

/prompt:pr-description

Then paste or attach the diff before sending.


Architecture Decision Record

.sharur/prompts/adr.md

---
description: Draft an Architecture Decision Record (ADR)
argument-hint: <decision-title>
---

Draft an Architecture Decision Record (ADR) for: **$1**

Use this structure:

# ADR: $1

## Status
Proposed

## Context
<What is the issue motivating this decision?>

## Decision
<What was decided?>

## Consequences
### Positive
-

### Negative
-

### Neutral
-

## Alternatives Considered
<What other approaches were evaluated and why were they rejected?>

Invoke:

/prompt:adr Use JSONL for session storage

Global Commit Message Template

~/.sharur/prompts/commit.md

---
description: Generate a conventional commit message
---

Generate a commit message following the Conventional Commits specification for the following diff or description of changes.

Format:
` ` `
<type>(<scope>): <short description>

<body: what changed and why, wrapped at 72 chars>

<footer: breaking changes, issue references>
` ` `

Types: feat, fix, docs, style, refactor, perf, test, chore

Changes:

Invoke:

/prompt:commit

Code Explanation for PR Comments

.sharur/prompts/explain-for-review.md

---
description: Explain a code block suitable for a PR comment
---

Explain the following code in a way that's suitable for a GitHub PR review comment. Be concise (2-4 sentences max), assume the reader is a senior engineer, and highlight any non-obvious design decisions.

Code:

Tips

  • Prompt templates are for your input. They expand into the editor, not directly to the agent. This gives you a chance to customize before sending.
  • Use $1, $2 placeholders for dynamic parts you’ll always fill in differently. Leave static boilerplate as literal text.
  • Combine with @ file attachments. Type /prompt:code-review then add @src/myfile.go before pressing Enter to attach a file.
  • Project-specific overrides. A template in .sharur/prompts/ with the same name as a global template takes priority for that project.
  • Organize with subdirectories. Templates are discovered recursively, so you can group them:
    .sharur/prompts/
      code/
        refactor.md
        review.md
      docs/
        readme.md
        adr.md
    Invoke as /prompt:refactor, /prompt:adr, etc. (name is the filename, not the full path).

Go Extensions

Extensions let you add new behaviors to sharur beyond what’s possible with skills and prompt templates. They can observe and modify every stage of the agent loop — from the raw user input through each LLM turn and tool call to compaction and session teardown. Extensions run as separate processes and communicate with sharur via gRPC.


Extension Types

| Type | Language | Use Case |
|------|----------|----------|
| Go binary | Go | High-performance tools, direct filesystem access |
| Python script | Python | Data processing, ML integrations, API calls |
| Any executable | Any | Shell scripts, compiled binaries from any language |

All extension types use the same gRPC protocol. The loader treats .py files specially (runs them with the configured Python interpreter), and everything else is executed directly as a binary.


Extension Discovery

Extensions are loaded from directories listed in your config under extensions:

// .sharur/config.json
{
  "extensions": [".sharur/extensions"]
}

Or globally in ~/.sharur/config.json.

Place your extension binary or script in the configured directory. sharur will automatically discover and launch it on startup.

You can also load a specific extension at runtime with the --extension flag:

shr --extension /path/to/my-extension "Your prompt here"

The Plugin Interface

Every Go extension implements the extensions.Plugin interface from github.com/goppydae/sharur/extensions. Embed extensions.NoopPlugin and override only the hooks you need.

Load-time hooks

| Method | When called | Purpose |
|--------|-------------|---------|
| Name() | On load | Returns the extension’s identifier string |
| Tools() | On load | Returns tool definitions the agent can call |
| ExecuteTool() | On tool call | Executes a tool registered by this extension |

Session lifecycle hooks

| Method | When called | Purpose |
|--------|-------------|---------|
| SessionStart(ctx, sessionID, reason) | Session attached or first prompt | Open connections, initialize per-session state |
| SessionEnd(ctx, sessionID, reason) | Session reset | Flush buffers, close connections |

reason is "new" for a fresh session and "resume" for one loaded from disk.

Agent loop hooks

| Method | When called | Purpose |
|--------|-------------|---------|
| AgentStart(ctx) | User prompt received, loop begins | Per-prompt setup, logging |
| AgentEnd(ctx) | Agent loop completes | Per-prompt teardown, emit metrics |
| TurnStart(ctx) | Start of each LLM request turn | Per-turn timing |
| TurnEnd(ctx) | After each turn’s tool calls finish | Per-turn cleanup |

Transformation hooks

| Method | When called | Can modify | Purpose |
|--------|-------------|------------|---------|
| ModifyInput(ctx, text) | Before user text hits the transcript | Yes — transform or consume | Pre-process input, implement shortcuts |
| ModifySystemPrompt(prompt) | Before each LLM request | Yes — returns new prompt | Inject dynamic context into the system prompt |
| BeforePrompt(ctx, state) | Before each LLM request | Yes — returns new state | Change model, provider, or thinking level |
| ModifyContext(ctx, messagesJSON) | Before each LLM request is built | Yes — returns new JSON | Filter or inject messages sent to the LLM (transcript unchanged) |
| BeforeProviderRequest(ctx, requestJSON) | Just before the request is sent | Yes — returns new JSON | Modify temperature, max tokens, tools list |
| AfterProviderResponse(ctx, content, numToolCalls) | After LLM stream consumed | No | Observe response text and tool call count |
| BeforeToolCall(ctx, call, args) | Before each tool execution | Yes — can intercept | Block or replace tool execution |
| AfterToolCall(ctx, call, result) | After each tool execution | Yes — returns new result | Observe or modify tool results |
| BeforeCompact(ctx, prep) | Before LLM-based summarization | Yes — can skip | Provide a custom compaction summary |
| AfterCompact(ctx, freedTokens) | After compaction completes | No | Observe freed token count |

Key behaviors:

  • ModifyInput returns agent.InputResult. Set Action to "continue" (pass through unchanged), "transform" (use the Text field instead), or "handled" (consume the message entirely — it is not appended to the transcript and the agent does not run).
  • ModifyContext and BeforeProviderRequest work with JSON strings at the gRPC boundary. The GRPCClient marshals/unmarshals the Go structs automatically.
  • BeforeCompact returns "" (empty) to let the default LLM summarization run, or a non-empty summary string to provide your own and skip the LLM call. The prep argument includes the message count, estimated token count, and the previous summary (if any).
  • BeforeToolCall returns (ToolResult, true) to intercept (the tool does not execute), or (ToolResult{}, false) to allow normal execution.

Example: Git Context Injection

// .sharur/extensions/git-context/main.go
package main

import (
    "context"
    "fmt"
    "os/exec"
    "strings"

    "github.com/goppydae/sharur/extensions"
)

type GitContextPlugin struct {
    extensions.NoopPlugin
}

func (p *GitContextPlugin) BeforePrompt(_ context.Context, state extensions.AgentState) extensions.AgentState {
    branch := gitOutput("rev-parse", "--abbrev-ref", "HEAD")
    status := gitOutput("status", "--short")
    log := gitOutput("log", "--oneline", "-5")

    state.SystemPrompt += fmt.Sprintf(
        "\n\n<git_context>\nBranch: %s\n\nRecent commits:\n%s\n\nWorking tree:\n%s\n</git_context>",
        branch, log, status,
    )
    return state
}

func gitOutput(args ...string) string {
    out, err := exec.Command("git", args...).Output()
    if err != nil {
        return "(unavailable)"
    }
    return strings.TrimSpace(string(out))
}

func main() {
    extensions.Serve(&GitContextPlugin{
        NoopPlugin: extensions.NoopPlugin{NameStr: "git-context"},
    })
}

Build and auto-discover:

cd .sharur/extensions/git-context && go build -o ../git-context .

Example: Session Lifecycle Hooks

type AuditPlugin struct {
    extensions.NoopPlugin
    log *os.File
}

func (p *AuditPlugin) SessionStart(_ context.Context, sessionID string, reason agent.SessionStartReason) {
    p.log, _ = os.OpenFile(fmt.Sprintf("/tmp/sharur-%s.log", sessionID[:8]), os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0644)
    fmt.Fprintf(p.log, "session %s (%s)\n", sessionID, reason)
}

func (p *AuditPlugin) SessionEnd(_ context.Context, sessionID string, _ agent.SessionEndReason) {
    if p.log != nil {
        p.log.Close()
    }
}

func (p *AuditPlugin) AfterProviderResponse(_ context.Context, content string, numToolCalls int) {
    fmt.Fprintf(p.log, "response: %d chars, %d tool calls\n", len(content), numToolCalls)
}

Example: Input Transformation

ModifyInput runs before the user text is added to the transcript. Return "handled" to consume shortcuts silently, or "transform" to rewrite the text:

func (p *MyPlugin) ModifyInput(_ context.Context, text string) agent.InputResult {
    if strings.HasPrefix(text, "?quick ") {
        return agent.InputResult{
            Action: agent.InputTransform,
            Text:   "Respond in one sentence: " + text[7:],
        }
    }
    if text == "ping" {
        return agent.InputResult{Action: agent.InputHandled}
    }
    return agent.InputResult{Action: agent.InputContinue}
}

Example: Custom Compaction

Return a non-nil *agent.CompactionResult from BeforeCompact to supply your own summary and bypass the default LLM-based summarization:

func (p *MyPlugin) BeforeCompact(_ context.Context, prep agent.CompactionPrep) *agent.CompactionResult {
    if prep.EstimatedTokens < 50000 {
        return nil
    }
    summary := callCheaperModel(prep.PreviousSummary, prep.MessageCount)
    return &agent.CompactionResult{
        Summary: summary,
    }
}

Example: Extension with Custom Tools

Extensions can contribute tools the agent calls just like built-in tools:

type CounterPlugin struct {
    extensions.NoopPlugin
}

func (p *CounterPlugin) Tools() []extensions.ToolDefinition {
    return []extensions.ToolDefinition{
        {
            Name:        "count_lines",
            Description: "Count lines in a string",
            Schema:      json.RawMessage(`{"type":"object","properties":{"text":{"type":"string"}},"required":["text"]}`),
            IsReadOnly:  true,
        },
    }
}

func (p *CounterPlugin) ExecuteTool(_ context.Context, name string, args json.RawMessage) extensions.ToolResult {
    if name != "count_lines" {
        return extensions.ToolResult{Content: "unknown tool", IsError: true}
    }
    var input struct{ Text string `json:"text"` }
    _ = json.Unmarshal(args, &input)
    n := strings.Count(input.Text, "\n") + 1
    return extensions.ToolResult{Content: fmt.Sprintf("%d lines", n)}
}

Example: Intercepting Tool Calls (Sandbox)

BeforeToolCall lets you block or replace any built-in tool call:

type SandboxPlugin struct {
    extensions.NoopPlugin
    AllowedDir string
}

func (p *SandboxPlugin) BeforeToolCall(_ context.Context, call extensions.ToolCall, args json.RawMessage) (extensions.ToolResult, bool) {
    var input struct{ Path string `json:"path"` }
    _ = json.Unmarshal(args, &input)
    if input.Path != "" && !strings.HasPrefix(input.Path, p.AllowedDir) {
        return extensions.ToolResult{
            Content: fmt.Sprintf("blocked: %s is outside %s", input.Path, p.AllowedDir),
            IsError: true,
        }, true
    }
    return extensions.ToolResult{}, false
}

See examples/sandbox/ for a complete standalone implementation.


Extension Lifecycle

flowchart TD
    Start["shr startup"] --> Scan["Scan extension directories"]
    Scan --> Launch["Launch subprocess
SHARUR_SOCKET_PATH=..."]
    Launch --> Socket["Wait for socket · dial gRPC"]
    Socket --> Init["Name() · Tools()"]

    Init --> SS["SessionStart(sessionID, reason)
on new session or resume"]

    SS --> MI["ModifyInput(text)"]
    MI --> AS["AgentStart()"]

    subgraph turn ["Per LLM turn (repeats until no tool calls)"]
        direction TB
        T1["BeforePrompt() · ModifySystemPrompt()
ModifyContext() · BeforeProviderRequest()"]
        T2[/"LLM streams"/]
        T3["AfterProviderResponse() · TurnStart()"]
        subgraph toolloop ["Per tool call"]
            BTC["BeforeToolCall()"] --> Intercept{"intercept?"}
            Intercept -->|yes| CustomResult["return custom ToolResult"]
            Intercept -->|no| Exec["execTool() · AfterToolCall()"]
        end
        TE["TurnEnd()"]
        T1 --> T2 --> T3 --> toolloop --> TE
    end

    AS --> turn
    turn --> AE["AgentEnd()"]

    subgraph compact ["On compaction (auto or /compact)"]
        direction TB
        BC["BeforeCompact(prep)"] --> CustomSummary{"return non-nil?"}
        CustomSummary -->|yes| SkipLLM["skip LLM summarization"]
        CustomSummary -->|no| LLMSum["LLM summarizes"]
        SkipLLM --> AC["AfterCompact(freedTokens)"]
        LLMSum --> AC
    end

    AE --> SE["SessionEnd(sessionID, reason)
on session reset"]
    SE --> Shutdown["shr shutdown · kill subprocess"]

In-Process Go Extension (Advanced)

If your extension is written in Go and you control the build, you can implement agent.Extension directly via the SDK and register it without the gRPC overhead:

import (
    "github.com/goppydae/sharur/internal/agent"
    "github.com/goppydae/sharur/internal/tools"
)

type MyExtension struct {
    agent.NoopExtension
}

func (e *MyExtension) AgentStart(ctx context.Context) {
    log.Println("agent started")
}

func (e *MyExtension) ModifyInput(ctx context.Context, text string) agent.InputResult {
    if text == "ping" {
        return agent.InputResult{Action: agent.InputHandled}
    }
    return agent.InputResult{Action: agent.InputContinue}
}

func (e *MyExtension) ModifySystemPrompt(prompt string) string {
    return prompt + "\n\nAlways respond in bullet points."
}

func (e *MyExtension) BeforeToolCall(ctx context.Context, call *agent.ToolCall, args json.RawMessage) (*tools.ToolResult, bool) {
    if call.Name == "bash" {
        return &tools.ToolResult{Content: "bash is disabled", IsError: true}, true
    }
    return nil, false
}

Pass the extension via ag.SetExtensions() from the SDK or directly in cmd/shr.


Tips

  • Extensions are isolated processes. A crash in an extension will not crash sharur — the loader catches errors and logs them.
  • Keep BeforePrompt and ModifySystemPrompt fast. They run before every single LLM call. Cache data when possible; avoid blocking network calls.
  • ModifyContext does not affect the stored transcript. Changes to the message slice are only visible to the LLM for that turn.
  • Use skills for static context. If you only need to append static text to the system prompt, a skill is simpler than an extension.
  • Extensions are global. All extensions in the configured directories are loaded for every session. There is no per-project scoping beyond the directory config.
  • Logs go to stderr. Stdout is not read by the host; stderr is passed through for debugging.
  • InputHandled stops all further processing. No agent turn is started, no message is appended to the transcript.
  • BeforeCompact fires before the LLM call. Return nil to let the default summarizer run. Return a *CompactionResult to supply your own summary — useful for using a cheaper model or domain-specific logic.

Python Extensions

Python extensions use the same gRPC protocol as Go extensions. The loader detects .py files and runs them with the configured Python interpreter, passing SHARUR_SOCKET_PATH as an environment variable. The extension is expected to listen on that Unix socket.


Prerequisites

pip install grpcio grpcio-tools

Generate Python Stubs

python -m grpc_tools.protoc \
  -I extensions/proto \
  --python_out=.sharur/extensions \
  --grpc_python_out=.sharur/extensions \
  extensions/proto/extension.proto

This deposits extension_pb2.py and extension_pb2_grpc.py alongside your script.


Implement the Extension

# .sharur/extensions/ticket_context.py
import os
import subprocess
import grpc
from concurrent import futures
import extension_pb2
import extension_pb2_grpc


class TicketContextServicer(extension_pb2_grpc.ExtensionServicer):
    def Name(self, request, context):
        return extension_pb2.NameResponse(name="ticket-context")

    def Tools(self, request, context):
        return extension_pb2.ToolsResponse(tools=[])

    def BeforePrompt(self, request, context):
        branch = subprocess.check_output(
            ["git", "rev-parse", "--abbrev-ref", "HEAD"], text=True
        ).strip()
        state = request.state or extension_pb2.AgentState()
        state.prompt += f"\n\n<branch>Current branch: {branch}</branch>"
        return extension_pb2.BeforePromptResponse(state=state)

    def BeforeToolCall(self, request, context):
        return extension_pb2.BeforeToolCallResponse(intercept=False)

    def AfterToolCall(self, request, context):
        return extension_pb2.AfterToolCallResponse(result=request.result)

    def ModifySystemPrompt(self, request, context):
        return extension_pb2.ModifySystemPromptResponse(
            modified_prompt=request.current_prompt
        )

    def AgentStart(self, request, context):
        return extension_pb2.Empty()

    def AgentEnd(self, request, context):
        return extension_pb2.Empty()

    def ModifyInput(self, request, context):
        return extension_pb2.ModifyInputResponse(action="continue", text=request.text)


def serve():
    socket_path = os.environ["SHARUR_SOCKET_PATH"]
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    extension_pb2_grpc.add_ExtensionServicer_to_server(TicketContextServicer(), server)
    server.add_insecure_port(f"unix:{socket_path}")
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()

Place the script in your extensions directory. sharur runs it as python ticket_context.py on startup.


Available RPC Methods

Implement any subset of the ExtensionServicer methods. Unimplemented methods should return a sensible empty response (see the template above). The full list mirrors the Go plugin interface — see Go Extensions for hook semantics.

| RPC | Purpose |
|-----|---------|
| Name | Return extension identifier |
| Tools | Return tool definitions |
| ExecuteTool | Execute a registered tool |
| SessionStart / SessionEnd | Session lifecycle |
| AgentStart / AgentEnd | Per-prompt lifecycle |
| TurnStart / TurnEnd | Per-LLM-turn lifecycle |
| ModifyInput | Transform or consume user input |
| ModifySystemPrompt | Augment the system prompt |
| BeforePrompt | Mutate model/provider/thinking |
| ModifyContext | Filter or inject LLM-bound messages |
| BeforeProviderRequest | Modify the raw completion request |
| AfterProviderResponse | Observe LLM output |
| BeforeToolCall | Intercept or block tool calls |
| AfterToolCall | Observe or modify tool results |
| BeforeCompact / AfterCompact | Compaction lifecycle |

Tips

  • Logs go to stderr. Python’s print() goes to stdout, which is not read by the host. Use sys.stderr.write() or logging for debugging output.
  • Keep proto stubs in the same directory as your script, or adjust sys.path before importing them.
  • Thread safety: grpc.server with ThreadPoolExecutor handles concurrent RPC calls. If you maintain per-session state, use a lock or session-keyed dict.

gRPC Extensions

gRPC extensions run as separate processes. sharur manages their lifecycle: launching the binary, passing the socket path, waiting for readiness, dialing, and killing on shutdown. The extension communicates entirely over a Unix Domain Socket using the generated proto stubs in extensions/proto/extension.proto.


How It Works

sequenceDiagram
    participant Loader as shr Loader
    participant Ext as Extension process
    participant Client as gRPC client

    Loader->>Ext: exec binary/script
    note over Ext: env: SHARUR_SOCKET_PATH=/tmp/...sock
    Ext->>Ext: net.Listen("unix", socketPath)
    note over Ext: signals readiness by listening
    Loader->>Loader: poll for socket file
    Loader->>Client: dial gRPC over Unix socket
    Client->>Ext: Name()
    Ext-->>Client: "my-extension"
    Client->>Ext: Tools()
    Ext-->>Client: [ToolDefinition, ...]
    note over Loader,Ext: extension registered — hooks active for all sessions

The extension must call net.Listen("unix", os.Getenv("SHARUR_SOCKET_PATH")) and start serving before shr times out.


Writing a Go Extension

Import github.com/goppydae/sharur/extensions — no internal packages needed.

package main

import "github.com/goppydae/sharur/extensions"

type myPlugin struct {
    extensions.NoopPlugin
}

func (p *myPlugin) ModifySystemPrompt(prompt string) string {
    return prompt + "\n\nAlways respond in haiku."
}

func main() {
    extensions.Serve(&myPlugin{
        NoopPlugin: extensions.NoopPlugin{NameStr: "haiku"},
    })
}

extensions.Serve handles the socket path, gRPC server setup, and graceful shutdown. extensions.NoopPlugin provides no-op defaults for every method.

Build and place the binary in a configured extensions directory:

go build -o .sharur/extensions/haiku .

Or load at runtime:

shr --extension .sharur/extensions/haiku

Plugin Interface

All hooks map 1:1 to agent.Extension. See Go Extensions for full hook semantics and examples.

Load-time:

| Method | Called | Purpose |
|--------|--------|---------|
| Name() | Once on connect | Extension identifier |
| Tools() | Once on connect | Contribute tools to the agent |
| ExecuteTool() | Per tool call | Execute a registered tool |

Session lifecycle:

| Method | Called | Purpose |
|--------|--------|---------|
| SessionStart(ctx, sessionID, reason) | New or resumed session | Open connections, init per-session state |
| SessionEnd(ctx, sessionID, reason) | Session reset | Flush, close connections |

reason is "new" or "resume".


Proto Definition

The extension service is defined in extensions/proto/extension.proto. Generated Go stubs are in extensions/gen/. Regenerate with mage generate.

Python stubs can be generated with:

python -m grpc_tools.protoc \
  -I extensions/proto \
  --python_out=.sharur/extensions \
  --grpc_python_out=.sharur/extensions \
  extensions/proto/extension.proto

Tool Read-Only Semantics

Tool definitions returned by Tools() have an IsReadOnly bool field. Set it to true for tools that are safe in dry-run mode. The GRPCClient propagates this to the internal RemoteTool.IsReadOnly() so dry-run and sandbox extensions honour it correctly.


Debugging

  • Logs go to stderr. The host passes the subprocess’s stderr through. Use log.Println or fmt.Fprintln(os.Stderr, ...) for debug output.
  • Crashes are isolated. A panicking extension does not crash shr — the loader catches errors and logs them.
  • Socket timeout. If the extension doesn’t listen within the timeout, the loader logs an error and skips it. Ensure extensions.Serve (or your own net.Listen + grpc.Serve) is called promptly in main().
  • Test in isolation. Set SHARUR_SOCKET_PATH=/tmp/test.sock and run your extension binary directly; then grpcurl the socket to verify RPCs before integrating with shr.

Developer Guide

The developer guide covers two audiences:

  • SDK — embedding sharur as a Go library
  • Internals — architecture, agent loop, session format, build system

Subsections of Developer Guide

Internals

This section describes the high-level architecture of sharur: how its components are organized, how data flows through the system, and how the key abstractions relate to each other.


Directory Structure

sharur/
├── internal/
│   ├── service/        # Central AgentService implementation + in-process client
│   ├── gen/            # Generated Protobuf stubs (pb.AgentServiceClient/Server)
│   ├── agent/          # Core agentic loop, event bus, state machine
│   ├── llm/            # LLM provider adapters (Ollama, OpenAI, Anthropic, llama.cpp, Google)
│   ├── tools/          # Built-in tool implementations + registry
│   ├── session/        # JSONL-backed session persistence, branching, tree
│   ├── modes/
│   │   ├── interactive/ # Bubble Tea TUI (pb client)
│   │   ├── print.go    # One-shot CLI JSONL mode (pb client)
│   │   └── grpc.go     # gRPC server mode (wraps Service)
│   ├── config/         # Config loading (global + project layering)
│   ├── themes/         # TUI colour themes
│   ├── types/          # Shared value types (Message, Session, ThinkingLevel)
│   ├── events/         # Generic publish-subscribe event bus
│   ├── skills/         # Skill discovery (Markdown files → slash commands)
│   ├── prompts/        # Prompt template discovery
│   └── contextfiles/   # Auto-discovered context file injection (AGENTS.md, etc.)
├── cmd/                # Entry points (shr)
├── proto/              # Protobuf definitions (sharur/v1/agent.proto)
├── extensions/         # gRPC extension loader + proto definitions
└── sdk/                # Public Go SDK

Component Diagram

flowchart TD
    CLI["CLI flags & Config"] --> Svc

    subgraph core ["internal/agent"]
        Agent["Agent
Messages · SteerQueue · FollowUpQueue
StateMachine"]
        RunTurn["runTurn
provider.Stream · consumeStream · execTools"]
        EB["EventBus
async · non-blocking · 4096-item buffer"]
        Agent --> RunTurn
        RunTurn -->|publishes| EB
    end

    Svc["internal/service
AgentService"] --> core

    RunTurn --> LLM

    subgraph llm ["internal/llm"]
        LLM["Provider interface
Stream · Info"]
        Adapters["Ollama · OpenAI · Anthropic
llama.cpp · Google"]
        LLM --> Adapters
    end

    EB --> TUI["TUI"]
    EB --> JSON["JSON stdout"]
    EB --> GRPC["gRPC stream"]
    EB --> Session["session saver"]

Data Flow Summary

flowchart TD
    Input["User Input"] --> Mode["TUI · JSON · Remote Client"]
    Mode --> PBClient["pb.AgentServiceClient
bufconn or TCP"]
    PBClient --> Service["internal/service
getOrCreate / loadIfExists"]
    Service --> AP["agent.Prompt(ctx, text)"]
    AP --> MI["ext.ModifyInput()"]
    MI --> SS["ext.SessionStart() · ext.AgentStart()
EventAgentStart"]

    SS --> Loop

    subgraph Loop ["runTurn loop"]
        direction TB
        BP["ext.BeforePrompt() · ModifySystemPrompt()
ModifyContext() · BeforeProviderRequest()"]
        LLMStream["llm.Provider.Stream()
EventTextDelta · EventThinkingDelta · EventToolCall"]
        APR["ext.AfterProviderResponse()
EventTurnStart · ext.TurnStart()"]
        ToolExec["ext.BeforeToolCall() · execTool() · ext.AfterToolCall()
EventToolDelta · EventToolOutput"]
        TE["ext.TurnEnd()"]
        More{"more tool calls?"}
        BP --> LLMStream --> APR --> ToolExec --> TE --> More
        More -->|yes| BP
    end

    More -->|no| AgEnd["EventAgentEnd · ext.AgentEnd()"]
    AgEnd --> Save["service saves session to disk"]
    Save --> Stream["Stream Protobuf Events to client"]
    Stream --> Render["Render: TUI · JSONL stdout · gRPC stream"]

Subsections of Internals

Agent Loop

The agent is driven by an event-bus (internal/events). Every meaningful state transition emits an agent.Event to all subscribers.


EventBus Performance

The EventBus is async and non-blocking. Publish() enqueues to a 4096-item buffered channel per subscriber and returns immediately — it never blocks the agent loop. Each subscriber runs in its own goroutine. Slow subscribers drop events to protect the agent loop from backpressure.


Event Flow

sequenceDiagram
    participant User
    participant Agent
    participant LLM
    participant Tools

    User->>Agent: Prompt(text)
    Agent->>Agent: EventAgentStart
    loop each LLM turn
        Agent->>Agent: EventTurnStart · EventMessageStart
        Agent->>LLM: provider.Stream()
        LLM-->>Agent: EventTextDelta (×n)
        LLM-->>Agent: EventThinkingDelta (×n, if thinking enabled)
        LLM-->>Agent: EventToolCall (×n, if tools requested)
        Agent->>Agent: EventMessageEnd
        loop each tool call
            Agent->>Tools: execTool()
            Tools-->>Agent: EventToolDelta (streaming)
            Agent->>Agent: EventToolOutput
        end
        Agent->>Agent: EventTurnEnd
    end
    Agent->>Agent: EventAgentEnd

State Machine

The agent transitions through explicit states to prevent concurrent modification:

stateDiagram-v2
    [*] --> Idle
    Idle --> Thinking : Prompt()
    Thinking --> Executing : tool calls present
    Thinking --> Idle : no tool calls
    Thinking --> Compacting : token limit reached
    Thinking --> Aborting : Abort() called
    Executing --> Thinking : more turns needed
    Executing --> Idle : done
    Compacting --> Thinking : resume
    Aborting --> Idle
    Thinking --> Error
    Error --> [*]

Prompt Queues

Two queues support non-blocking interaction while the agent is running:

  • SteerQueue — Injected as a user message at the next tool boundary (interrupt-style)
  • FollowUpQueue — Processed as a new turn after the agent goes Idle

Tool System

Tools implement a simple interface:

type Tool interface {
    Name() string
    Description() string
    Schema() json.RawMessage
    Execute(ctx context.Context, args json.RawMessage, update ToolUpdate) (*ToolResult, error)
    IsReadOnly() bool
}

A ToolRegistry holds all registered tools. During a turn, when the LLM emits a tool call, execTool looks up the tool by name, executes it, and streams partial output via EventToolDelta before emitting the final EventToolOutput.

Built-in tools: read, write, edit, bash, grep, ls, find
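
As a sketch of what a custom tool looks like against this interface (assuming the Tool, ToolUpdate, and ToolResult types above are the ones exported by internal/tools; the built-in tools are the authoritative reference), a minimal read-only tool could be:

package mytools

import (
    "context"
    "encoding/json"
    "fmt"
    "strings"

    "github.com/goppydae/sharur/internal/tools"
)

// wordCountTool is an illustrative read-only tool.
type wordCountTool struct{}

func (wordCountTool) Name() string        { return "word_count" }
func (wordCountTool) Description() string { return "Count the words in a block of text" }
func (wordCountTool) IsReadOnly() bool    { return true } // safe to execute under --dry-run

func (wordCountTool) Schema() json.RawMessage {
    return json.RawMessage(`{"type":"object","properties":{"text":{"type":"string"}},"required":["text"]}`)
}

func (wordCountTool) Execute(_ context.Context, args json.RawMessage, _ tools.ToolUpdate) (*tools.ToolResult, error) {
    var in struct {
        Text string `json:"text"`
    }
    if err := json.Unmarshal(args, &in); err != nil {
        return nil, err
    }
    return &tools.ToolResult{Content: fmt.Sprintf("%d words", len(strings.Fields(in.Text)))}, nil
}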

Safety Enforcements

  • Dry-Run Mode: When DryRun is enabled, any tool that is not marked as read-only will bypass execution and return a descriptive preview of what it would have done.
  • Input Sanitization: Prompt template expansion automatically wraps user inputs in <untrusted_input> tags to prevent prompt injection into the base instructions.

Service Architecture

sharur follows a Strict Protobuf Internal Architecture. Instead of UI modes calling Go functions directly, all interfaces are treated as clients of a central AgentService.


Protobuf Boundary

The interface between the UI and the core is defined in proto/sharur/v1/agent.proto. This boundary ensures:

  • Consistency: All modes (TUI, CLI, JSON, Remote gRPC) use the exact same code paths and logic.
  • Decoupling: UI logic is completely isolated from agent state, session persistence, and provider adapters.
  • Interoperability: Any gRPC-capable client can interact with a sharur service.

In-Process Communication

For local CLI usage, sharur uses a specialized In-Process Client (internal/service/client.go). It uses bufconn to implement the pb.AgentServiceClient interface over an in-memory pipe. This provides the safety and structure of gRPC without the latency or configuration complexity of network ports.
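
The general shape of that pattern is shown in the sketch below (independent of sharur's actual wiring in internal/service/client.go; server registration and error handling are reduced to the essentials):

package inprocess

import (
    "context"
    "log"
    "net"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    "google.golang.org/grpc/test/bufconn"
)

// newInProcessConn wires a gRPC server and client together over an in-memory
// bufconn listener instead of a TCP port. The register callback would call
// something like pb.RegisterAgentServiceServer(srv, svc).
func newInProcessConn(register func(*grpc.Server)) (*grpc.ClientConn, error) {
    lis := bufconn.Listen(1 << 20) // 1 MiB in-memory buffer

    srv := grpc.NewServer()
    register(srv)
    go func() {
        if err := srv.Serve(lis); err != nil {
            log.Printf("in-process server stopped: %v", err)
        }
    }()

    // The dialer hands back the in-memory pipe; the target string is only a label.
    return grpc.NewClient("passthrough:///bufnet",
        grpc.WithContextDialer(func(ctx context.Context, _ string) (net.Conn, error) {
            return lis.DialContext(ctx)
        }),
        grpc.WithTransportCredentials(insecure.NewCredentials()),
    )
}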


Backend Service (internal/service)

The Service struct implements pb.AgentServiceServer. It owns the session.Manager and manages the lifecycle of agent.Agent instances. It translates between internal agent events (Go channels) and Protobuf event streams.


Session Loading Strategy

RPCs split into three lookup strategies:

| Strategy | Used by | Behaviour |
|----------|---------|-----------|
| getOrCreate(id) | Prompt, NewSession | Always returns an entry — creates a fresh agent if id is unknown, loading from disk if a matching session file exists |
| loadIfExists(id) | GetState, GetMessages, ConfigureSession, ForkSession, CloneSession | Returns the entry if it is in memory or can be loaded from disk; returns NotFound for completely unknown IDs |
| lookup(id) | Steer, Abort, FollowUp, StreamEvents | In-memory only — these only make sense for a currently-running agent |

This means a /resume <id> command can switch to any session ever saved to disk without a round-trip NewSession call: the first GetMessages or GetState call transparently loads it.

LLM Providers

Provider Interface

type Provider interface {
    Stream(ctx context.Context, req *CompletionRequest) (<-chan *Event, error)
    Info() ProviderInfo
}

All providers return a uniform Stream of Event values — text deltas, thinking deltas, tool calls, and usage. The agent’s consumeStream function normalizes these into the internal Message format, making the agent completely provider-agnostic.


CompletionRequest

type CompletionRequest struct {
    Model       string
    Messages    []types.Message
    Tools       []types.ToolInfo
    System      string
    Thinking    types.ThinkingLevel
    MaxTokens   int
    Temperature float64
    StreamOpts  StreamOptions
}

The BeforeProviderRequest extension hook receives this struct as JSON and can modify any field before it is sent to the provider — useful for overriding temperature, trimming the tool list, or adjusting MaxTokens per request.
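
As a sketch (the Go hook signature and the JSON key names are assumptions; check the extensions package and the request's actual JSON encoding), a gRPC extension might cap the temperature like this:

// Assumed signature: the hook receives and returns the request as a JSON string.
func (p *MyPlugin) BeforeProviderRequest(_ context.Context, requestJSON string) string {
    var req map[string]any
    if err := json.Unmarshal([]byte(requestJSON), &req); err != nil {
        return requestJSON // leave the request untouched if it cannot be parsed
    }
    // "Temperature" is assumed to match CompletionRequest's JSON encoding.
    if t, ok := req["Temperature"].(float64); ok && t > 0.7 {
        req["Temperature"] = 0.7
    }
    out, err := json.Marshal(req)
    if err != nil {
        return requestJSON
    }
    return string(out)
}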


ProviderInfo

type ProviderInfo struct {
    Name          string
    Model         string
    MaxTokens     int
    ContextWindow int  // 0 = unknown
    HasToolCall   bool
    HasImages     bool
}

Info() is called once at startup. The service uses ContextWindow to trigger compaction when the conversation grows too large. HasImages controls whether the TUI offers image attachment UI.


ModelLister

type ModelLister interface {
    ListModels() ([]string, error)
}

All five adapters implement ModelLister. When --list-models is passed, the CLI casts the active provider to ModelLister and prints the result. Each adapter queries the appropriate API:

| Provider | Query mechanism |
|---|---|
| ollama | GET /api/tags |
| llamacpp | GET /v1/models |
| openai | GET /v1/models |
| anthropic | GET /v1/models |
| google | Gemini model list API |
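
A sketch of roughly what that type assertion looks like (variable names illustrative):

if lister, ok := provider.(ModelLister); ok {
    models, err := lister.ListModels()
    if err != nil {
        log.Fatal(err)
    }
    for _, name := range models {
        fmt.Println(name)
    }
}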

Supported Providers

| Provider | Backend |
|---|---|
| ollama | Local Ollama server (HTTP) |
| llamacpp | llama.cpp server (HTTP, OpenAI-compatible) |
| openai | OpenAI API or any OpenAI-compatible endpoint |
| anthropic | Anthropic Messages API |
| google | Google Gemini API |

Each adapter lives in internal/llm/ and translates the provider’s wire format into the uniform Stream abstraction.


Feature Matrix

ProviderToolsImagesThinkingContext Window
ollamamodel-dependent4096 (default)
llamacppfrom server n_ctx
openaireasoning modelsmodel-dependent
anthropic✓ extendedmodel-dependent
google1,000,000+

Per-Provider Notes

Ollama

The Ollama adapter uses the /api/chat endpoint with streaming enabled. Context window defaults to 4096 when not reported by the server. Thinking is supported on models that emit <think> tokens (e.g. qwq, deepseek-r1) — sharur surfaces these as EventThinkingDelta events by detecting the tag boundaries in the stream.

llama.cpp

Uses the OpenAI-compatible /v1/chat/completions endpoint. The context window (n_ctx) is queried from the server at startup. Image attachments are not supported because llama.cpp’s OpenAI endpoint does not accept multipart vision payloads in the standard format.

OpenAI

Uses the standard /v1/chat/completions streaming endpoint. Any server implementing this API — vLLM, LM Studio, Groq, Together AI — can be used by setting openAIBaseURL. Reasoning models (o3, o4-mini) emit reasoning_content deltas that are surfaced as EventThinkingDelta.

Anthropic

Uses the Messages API (/v1/messages) with streaming. Extended thinking is activated when req.Thinking is medium or high:

  • medium — 10,000-token thinking budget
  • high — 20,000-token thinking budget

The API requires temperature: 1.0 when extended thinking is enabled; the adapter sets this automatically and overrides any user-supplied temperature for that request.

Google

Uses the Gemini generateContent API via the google.golang.org/genai client library. Gemini 1.5 Pro and later have context windows of 1M+ tokens; compaction is rarely triggered for typical sessions.


Adding a Provider

Implement the Provider interface in internal/llm/yourprovider.go and register it in internal/config/factory.go. Implement ModelLister to enable --list-models. The adapter receives a fully-formed CompletionRequest; it is responsible for translating Message.ToolCalls and Message.Images into the target API’s format.
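
A skeleton of what a new adapter might look like (names illustrative; the Event construction is left as a comment because its fields are internal):

type MyProvider struct {
    baseURL string
    model   string
}

func (p *MyProvider) Info() ProviderInfo {
    return ProviderInfo{Name: "myprovider", Model: p.model, HasToolCall: true}
}

func (p *MyProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan *Event, error) {
    ch := make(chan *Event)
    go func() {
        defer close(ch)
        // Translate req.System, req.Messages, and req.Tools into the target API's
        // request format, open a streaming call, and convert each chunk into an
        // *Event (text delta, thinking delta, tool call, or usage) sent on ch.
    }()
    return ch, nil
}

// Optional: implement ModelLister to support --list-models.
func (p *MyProvider) ListModels() ([]string, error) {
    return []string{p.model}, nil
}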

Session Management

Sessions are persisted as JSONL files in a project-aware directory:

~/.sharur/sessions/
  --Users-alice-Projects-myapp--/     ← sanitized CWD
    2026-04-23T07-06-54_{uuid}.jsonl  ← timestamped session file
    2026-04-23T09-12-11_{uuid}.jsonl

Session File Format

Each .jsonl file contains one JSON object per line:

  • Line 0 (header): kind=header — session ID, parentId, model, timestamps, system prompt, compaction settings, dryRun flag
  • Subsequent lines: kind=message — individual conversation messages with full payloads (role, content, thinking, tool calls, tool call ID)

Session Tree

Sessions form a linked tree via parentId. The session.Manager.BuildTree() method assembles all sessions from the project directory into a []*TreeNode tree. FlattenTree produces a depth-first flat list with structured layout metadata (gutters, connectors, indentation), which the TUI layer uses to render a clean Unicode box-drawing tree diagram.

flowchart TD
    A["Session A
(root)"] --> B["Session B
(/branch from A)"]
    A --> C["Session C
(/fork of A)"]
    B --> D["Session D
(/branch from B at msg 5)"]
    B --> E["Session E
(/rebase of B)"]
    B --> F["Session F
(/merge into B)"]

    style C stroke-dasharray: 5 5

/fork creates an independent copy (dashed border above) with no parentId link — it does not appear as a child in the tree visualization.


Branching, Rebasing & Merging

flowchart TD
    Q{"What do you need?"}

    Q -->|"Explore an alternate
path from this point"| Branch["/branch [idx]
Child session, same history up to idx"]
    Q -->|"Independent copy
no tree relationship"| Fork["/fork
Detached snapshot"]
    Q -->|"Clean up the conversation
keep only specific messages"| Rebase["/rebase
Interactively select messages
for a new session"]
    Q -->|"Combine two sessions
into one context"| Merge["/merge <id>
LLM-synthesized merge turn
appended to current session"]
CommandCreates parent linkCopies historyInteractive
/branch [idx]up to idx
/forkfull
/rebaseselected messages
/merge <id>appends other sessionLLM turn

The /tree modal (keyboard shortcuts B, F, and R on a selected session) exposes all of these without leaving the TUI.


Compaction & Context Management

To stay within LLM context windows, sharur implements an auto-compaction strategy:

  1. Trigger: When tokens > ContextWindow - reserveTokens, compaction fires.
  2. Summarization: The agent uses the LLM to generate a structured summary (<!-- sharur-summary -->) of the pruned messages.
  3. File Tracking: The summary carries forward lists of files read and modified, so the assistant retains awareness of what it has already seen.
  4. Split Turn Handling: If compaction cuts mid-turn, a “Turn Prefix Summary” is generated to preserve context for the remaining tool calls.
  5. Session Tree Integration: Compaction events are stored as TypeCompaction records in the JSONL file, visible in /stats and preserved across restarts.

Compaction Configuration

// ~/.sharur/config.json or .sharur/config.json
{
  "compaction": {
    "enabled": true,
    "reserveTokens": 2048,
    "keepRecentTokens": 8192
  }
}
| Field | Default | Description |
|---|---|---|
| enabled | true | Whether auto-compaction fires when the token budget is exceeded |
| reserveTokens | 2048 | Tokens to keep free at the top of the context window; compaction triggers when used > window - reserveTokens |
| keepRecentTokens | 8192 | Minimum recent-turn tokens to always retain after compaction, ensuring the current conversation thread survives |
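
For example, with a 32,768-token context window and the defaults above, compaction fires once the estimated context exceeds 32,768 - 2,048 = 30,720 tokens, and at least the most recent 8,192 tokens of conversation are retained intact.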

Trigger compaction manually at any time with /compact in the TUI or by calling the Compact RPC directly.


Export & Import

Sessions can be exported to and imported from JSONL files:

# Export from TUI
/export /path/to/session.jsonl

# Import into TUI (creates a new session from the file)
/import /path/to/session.jsonl

# Export from CLI without entering TUI
shr --export /path/to/session.html   # HTML snapshot

Exported JSONL files are self-contained: they include the session header and all messages. Imported sessions are assigned a new UUID and added to the current project’s session directory.

TUI Internals

The TUI is built with Bubble Tea (v2) and organized into focused files:

| File | Responsibility |
|---|---|
| interactive.go | Run() entry point, gRPC client wiring |
| model.go | model struct definition, newModel() |
| update.go | Update() — key handling, slash commands, picker logic, promptGRPC() |
| events.go | handleAgentEvent() — maps *pb.AgentEvent payloads to TUI history updates |
| view.go | View() — renders chat history, status bar, input |
| modal.go | Stats, Config, and Session Tree modal overlays |
| slash.go | Slash command parsing and handlers (all via gRPC client) |
| picker.go | Fuzzy picker component (sessions, skills, files, prompts) |
| keys.go | Keybinding helpers (Matches, K.Ctrl(...)) |
| types.go | historyEntry, contentItem, toolCallEntry — render data model |
| utils.go | Helper functions (Capitalize) |

Prompt Submission

Prompt submission uses promptGRPC(), which opens a client.Prompt() server-streaming RPC and drains *pb.AgentEvent messages into m.eventCh in a goroutine. The listenForEvent Bubble Tea command feeds that channel back into the update loop one event at a time.
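
The listenForEvent command follows the standard Bubble Tea pattern of bridging a channel into the update loop; a minimal sketch (names follow the file table above):

func listenForEvent(ch <-chan *pb.AgentEvent) tea.Cmd {
    return func() tea.Msg {
        return <-ch // blocks until the streaming goroutine delivers the next event
    }
}

Update() handles the delivered *pb.AgentEvent (via handleAgentEvent) and returns listenForEvent again, so exactly one event is applied per update cycle.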


Prompt History

The TUI maintains a per-session prompt history in m.promptHistory, synced from the service via GetMessages at startup and after session switches. Users navigate previous prompts using Up/Down arrow keys while the editor is focused; the current draft is preserved as m.draftInput.


Render Data Model

The TUI stores conversation history as []historyEntry. Each entry has an ordered []contentItem slice that preserves the exact stream order:

historyEntry {
  role: "assistant"
  items: [
    { kind: contentItemThinking, text: "..." }
    { kind: contentItemText,     text: "..." }
    { kind: contentItemToolCall, tc: { id, name, arg, status, streamingOutput } }
    { kind: contentItemToolOutput, out: { toolCallID, content, isError } }
  ]
}

This mirrors the content[] array model, ensuring correct temporal ordering of thinking, text, and tool calls.


Modals & Pickers

  • Stats — Token counts, session metadata, file/path info
  • Config — Active model, provider, compaction settings
  • Session Tree — Interactive paginated tree with structured branch visualization; supports Resume (Enter) and Branch (B)
  • Rebase Picker — Selection interface for history manipulation
  • Merge Picker — Fuzzy finder for selecting sessions to merge into the current conversation

Build & Release

sharur uses a combination of Mage and GitHub Actions for CI/CD.


Versioning

The project version is maintained in a VERSION file in the repository root. During build, Magefile.go reads this file and injects it into the binary using linker flags (-ldflags "-X main.version=...").
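
A sketch of the two halves, assuming the variable lives in package main of cmd/shr:

// cmd/shr/main.go
var version = "dev" // default when built without Mage

// roughly what the Magefile's Build target runs:
//   go build -ldflags "-X main.version=$(cat VERSION)" -o shr ./cmd/shr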


Mage Targets

| Target | Description |
|---|---|
| Build | Compile shr for the current platform with version injection |
| Test | Run all unit tests with coverage |
| Vet | Static analysis with go vet |
| Lint | Run golangci-lint |
| Vuln | Vulnerability scan with govulncheck |
| All | Run generate, build, test, vet, lint, and vuln in sequence |
| Release | Cross-compile for Linux, macOS, and Windows (AMD64/ARM64), package into dist/ |
| Generate | Run buf to regenerate protobuf stubs |
| Docs | Generate API reference (gomarkdoc) and build the Hugo site |
| DocsServe | Run Hugo dev server at localhost:1313 with live reload |
| PkgSite | Run pkgsite for local full API browsing including internals |

CI/CD Pipelines

Continuous Integration (ci.yml)

Triggered on every push to main and all pull requests. Runs mage all within a Nix environment on both ubuntu-latest and macos-latest, then uploads per-platform binaries as build artifacts. Coverage is collected and summarised via go tool cover.

Automated Release (release.yml)

Triggered by pushing a version tag (e.g., v1.2.3). Runs mage release to build cross-platform assets and uses softprops/action-gh-release to publish them to a new GitHub Release.

Docs Deploy (docs.yml)

Triggered on push to main and on published releases. Runs mage docs (gomarkdoc + Hugo build) and deploys docs/public/ to the gh-pages branch via peaceiris/actions-gh-pages.

SDK

The github.com/goppydae/sharur/sdk package lets you embed a sharur agent in any Go program.

import "github.com/goppydae/sharur/sdk"

See the sub-pages for a quickstart, custom tool implementations, the EventBus API, and in-process extensions.

Subsections of SDK

Quickstart

Import github.com/goppydae/sharur/sdk to embed an agent in any Go program.

import "github.com/goppydae/sharur/sdk"

ag, err := sdk.NewAgent(sdk.Config{
    Provider: "ollama",
    Model:    "llama3.2",
    Tools:    sdk.DefaultTools(),
})
if err != nil {
    panic(err)
}

ag.Subscribe(func(e sdk.Event) {
    if e.Type == sdk.EventTextDelta {
        fmt.Print(e.Content)
    }
})

ag.Prompt(context.Background(), "List the Go files in this directory")
<-ag.Idle()

Config Fields

type Config struct {
    Provider      string            // "ollama", "openai", "anthropic", "llamacpp", "google"
    Model         string            // model name or "provider/model"
    APIKey        string            // optional; env vars take priority
    BaseURL       string            // optional provider endpoint override
    Tools         []sdk.Tool        // sdk.DefaultTools() or custom list
    Extensions    []sdk.Extension
    SystemPrompt  string
    ThinkingLevel sdk.ThinkingLevel
    SessionDir    string            // where to persist sessions
    DryRun        bool
}

Core API

| Call | Description |
|---|---|
| sdk.NewAgent(cfg) | Create and initialize an agent |
| ag.Subscribe(fn) | Register an event handler; called for every emitted event |
| ag.Prompt(ctx, text) | Send a user message and start the agent loop |
| ag.Idle() | Returns a channel that closes when the agent reaches Idle state |
| ag.Steer(ctx, text) | Inject a steering message into the running turn |
| ag.FollowUp(ctx, text) | Queue a message to process after the current turn |
| ag.Abort(ctx) | Cancel the current running turn |
| ag.SetExtensions(exts) | Replace the extension list (takes effect on next prompt) |

Event Types

Subscribe to events by checking e.Type:

| Event type | Payload field | Description |
|---|---|---|
| EventAgentStart | | Agent loop started |
| EventAgentEnd | | Agent loop completed |
| EventTurnStart | | LLM turn started |
| EventTurnEnd | | LLM turn completed |
| EventTextDelta | e.Content | Incremental response text |
| EventThinkingDelta | e.Content | Incremental thinking text |
| EventToolCall | e.ToolCall | Tool invocation started |
| EventToolDelta | e.Content | Streaming tool output |
| EventToolOutput | e.ToolOutput | Final tool result |

Minimal Example (no tools, no session)

ag, _ := sdk.NewAgent(sdk.Config{
    Provider: "anthropic",
    Model:    "claude-sonnet-4-6",
    APIKey:   os.Getenv("ANTHROPIC_API_KEY"),
})

var buf strings.Builder
ag.Subscribe(func(e sdk.Event) {
    if e.Type == sdk.EventTextDelta {
        buf.WriteString(e.Content)
    }
})

ag.Prompt(context.Background(), "What is 2+2?")
<-ag.Idle()
fmt.Println(buf.String())

Custom Tools

Built-in Tools

Pass sdk.DefaultTools() in sdk.Config.Tools to get the full set of built-in tools:

| Tool | Description |
|---|---|
| read | Read file contents with offset/limit support |
| write | Create or overwrite files |
| edit | Search-and-replace edits within files |
| bash | Execute shell commands |
| grep | Search file contents via regex |
| ls | List directory contents |
| find | Locate files using glob patterns |

bash, write, and edit are destructive. In --dry-run mode they preview what they would do without executing.


Tool Interface

Implement sdk.Tool to create a custom tool:

type Tool interface {
    Name() string
    Description() string
    Schema() json.RawMessage       // JSON Schema for the input parameters
    Execute(ctx context.Context, args json.RawMessage, update ToolUpdate) (*ToolResult, error)
    IsReadOnly() bool              // if true, tool is allowed in dry-run mode
}

ToolUpdate is a callback for streaming partial output while the tool runs:

type ToolUpdate func(content string)
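
For example, a hypothetical long-running tool can push progress lines through update; each call is surfaced to subscribers as an EventToolDelta, and the returned ToolResult becomes the final EventToolOutput:

func (t *slowTool) Execute(ctx context.Context, args json.RawMessage, update sdk.ToolUpdate) (*sdk.ToolResult, error) {
    for i := 1; i <= 3; i++ {
        select {
        case <-ctx.Done():
            return nil, ctx.Err() // respect abort/cancellation
        case <-time.After(time.Second):
        }
        update(fmt.Sprintf("step %d/3 done\n", i)) // streamed partial output
    }
    return &sdk.ToolResult{Content: "all steps complete"}, nil
}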

Example: Custom Tool

type CountLinesTool struct{}

func (t *CountLinesTool) Name() string { return "count_lines" }
func (t *CountLinesTool) Description() string {
    return "Count the number of lines in a file"
}
func (t *CountLinesTool) Schema() json.RawMessage {
    return json.RawMessage(`{
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "File path to count lines in"}
        },
        "required": ["path"]
    }`)
}
func (t *CountLinesTool) IsReadOnly() bool { return true }

func (t *CountLinesTool) Execute(ctx context.Context, args json.RawMessage, update sdk.ToolUpdate) (*sdk.ToolResult, error) {
    var input struct {
        Path string `json:"path"`
    }
    if err := json.Unmarshal(args, &input); err != nil {
        return nil, err
    }
    data, err := os.ReadFile(input.Path)
    if err != nil {
        return &sdk.ToolResult{Content: err.Error(), IsError: true}, nil
    }
    n := strings.Count(string(data), "\n") + 1
    return &sdk.ToolResult{Content: fmt.Sprintf("%d lines", n)}, nil
}

Register alongside the built-in tools:

ag, _ := sdk.NewAgent(sdk.Config{
    Provider: "ollama",
    Model:    "llama3.2",
    Tools:    append(sdk.DefaultTools(), &CountLinesTool{}),
})

Selective Tools

Pass only the tools you want rather than the full default set:

tools := sdk.ToolsFor("read", "grep", "ls")   // subset by name

Or build the list manually to include only read-only tools for a sandboxed agent.
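
For example, filtering the defaults down to the read-only subset:

var readOnly []sdk.Tool
for _, t := range sdk.DefaultTools() {
    if t.IsReadOnly() {
        readOnly = append(readOnly, t) // keeps read, grep, ls, find; drops write, edit, bash
    }
}
// then pass readOnly as sdk.Config.Tools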

Events

The agent communicates state transitions via an event bus. Every meaningful action emits an sdk.Event to all registered subscribers.


Subscribing

ag.Subscribe(func(e sdk.Event) {
    switch e.Type {
    case sdk.EventTextDelta:
        fmt.Print(e.Content)
    case sdk.EventToolCall:
        fmt.Printf("[tool: %s]\n", e.ToolCall.Name)
    case sdk.EventAgentEnd:
        fmt.Println("\ndone")
    }
})

Multiple subscribers are allowed. Each runs in its own goroutine. The EventBus is non-blocking — Publish enqueues to a 4096-item buffered channel per subscriber and returns immediately, so slow subscribers drop events rather than stalling the agent loop.
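
Conceptually, the publish path looks like the sketch below (not the actual implementation; type and field names are illustrative):

func (b *bus) publish(e sdk.Event) {
    for _, sub := range b.subscribers { // each subscriber owns a 4096-item buffered channel
        select {
        case sub <- e:
        default: // buffer full: the slow subscriber misses this event instead of blocking the agent loop
        }
    }
}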


Event Reference

| Type constant | Payload | Fired when |
|---|---|---|
| EventAgentStart | | Prompt() called, agent loop begins |
| EventAgentEnd | | Agent loop completes (all turns done) |
| EventTurnStart | | An LLM request turn begins |
| EventTurnEnd | | A turn’s tool calls finish |
| EventMessageStart | | LLM starts streaming a response |
| EventMessageEnd | | LLM response stream complete |
| EventTextDelta | e.Content string | Incremental response text chunk |
| EventThinkingDelta | e.Content string | Incremental extended-thinking chunk |
| EventToolCall | e.ToolCall | Tool invocation requested by LLM |
| EventToolDelta | e.Content string | Streaming partial output from a running tool |
| EventToolOutput | e.ToolOutput | Final tool result (success or error) |

Event Flow Per Prompt

EventAgentStart
  EventTurnStart
    EventMessageStart
      EventTextDelta*
      EventThinkingDelta*
      EventToolCall*
    EventMessageEnd
    [per tool call]
      EventToolDelta*
      EventToolOutput
  EventTurnEnd
  [repeat if tool calls triggered another turn]
EventAgentEnd

Agent State Machine

The agent transitions through explicit states visible via EventAgentStart/EventAgentEnd and the ag.Idle() channel:

Idle → Thinking → Executing → Idle
           ↓
       Compacting → Idle
           ↓
         Aborting → Idle

ag.Idle() returns a channel that closes when the agent returns to Idle. Use it to block until a prompt completes:

ag.Prompt(ctx, "Refactor main.go")
<-ag.Idle()
// agent is idle, safe to call Prompt again

In-Process Extensions

If your extension is written in Go and you control the build, you can implement sdk.Extension (an alias of agent.Extension) directly — no gRPC, no subprocess, no socket. This is the lowest-overhead extension path.


Attaching Extensions

type loggingExt struct {
    sdk.NoopExtension
}

func (e *loggingExt) AgentStart(ctx context.Context) { log.Println("agent started") }
func (e *loggingExt) AgentEnd(ctx context.Context)   { log.Println("agent finished") }
func (e *loggingExt) ModifyInput(ctx context.Context, text string) sdk.InputResult {
    if text == "quit" {
        return sdk.InputResult{Action: sdk.InputHandled}
    }
    return sdk.InputResult{Action: sdk.InputContinue}
}

ag.SetExtensions([]sdk.Extension{
    &loggingExt{NoopExtension: sdk.NoopExtension{NameStr: "logger"}},
})

sdk.NoopExtension provides no-op defaults for every method. Embed it and override only what you need.


Extension Interface

type Extension interface {
    Name() string
    Tools() []Tool

    SessionStart(ctx context.Context, sessionID string, reason SessionStartReason)
    SessionEnd(ctx context.Context, sessionID string, reason SessionEndReason)

    AgentStart(ctx context.Context)
    AgentEnd(ctx context.Context)
    TurnStart(ctx context.Context)
    TurnEnd(ctx context.Context)

    ModifyInput(ctx context.Context, text string) InputResult
    ModifySystemPrompt(prompt string) string
    BeforePrompt(ctx context.Context, state *AgentState) *AgentState
    ModifyContext(ctx context.Context, messages []Message) []Message
    BeforeProviderRequest(ctx context.Context, req *CompletionRequest) *CompletionRequest
    AfterProviderResponse(ctx context.Context, content string, numToolCalls int)
    BeforeToolCall(ctx context.Context, call *ToolCall, args json.RawMessage) (*ToolResult, bool)
    AfterToolCall(ctx context.Context, call *ToolCall, result *ToolResult) *ToolResult
    BeforeCompact(ctx context.Context, prep CompactionPrep) *CompactionResult
    AfterCompact(ctx context.Context, freedTokens int)
}

All types are re-exported from sdk so callers only need to import github.com/goppydae/sharur/sdk.


Key Hook Behaviours

ModifyInput — runs before the user text is added to the transcript. Return an InputResult with:

  • sdk.InputContinue — pass through unchanged
  • sdk.InputTransform — replace with result.Text
  • sdk.InputHandled — consume entirely; no agent turn is started and nothing is appended to the transcript

ModifyContext — receives and returns the message slice that will be sent to the LLM. Changes do not affect the stored session transcript — they are ephemeral per-turn.

BeforeToolCall — return (result, true) to intercept and block the tool; return (nil, false) to allow normal execution.

BeforeCompact — return nil to let the default LLM summarization run, or a *CompactionResult to supply your own summary and skip the LLM call.


Example: System Prompt Injection

type gitContextExt struct {
    sdk.NoopExtension
}

func (e *gitContextExt) ModifySystemPrompt(prompt string) string {
    branch, _ := exec.Command("git", "rev-parse", "--abbrev-ref", "HEAD").Output()
    return prompt + "\n\nCurrent git branch: " + strings.TrimSpace(string(branch))
}

Example: Tool Interception

type sandboxExt struct {
    sdk.NoopExtension
    allowedDir string
}

func (e *sandboxExt) BeforeToolCall(_ context.Context, call *sdk.ToolCall, args json.RawMessage) (*sdk.ToolResult, bool) {
    var input struct{ Path string `json:"path"` }
    _ = json.Unmarshal(args, &input)
    if input.Path != "" && !strings.HasPrefix(input.Path, e.allowedDir) {
        return &sdk.ToolResult{
            Content: fmt.Sprintf("blocked: %s is outside %s", input.Path, e.allowedDir),
            IsError: true,
        }, true
    }
    return nil, false
}

API Reference

This section is generated by gomarkdoc from Go source comments. Run mage docs to regenerate.


Packages

| Package | Import path | Description |
|---|---|---|
| sdk | github.com/goppydae/sharur/sdk | Public embedding API — NewAgent, Subscribe, Prompt, Idle |
| extensions | github.com/goppydae/sharur/extensions | gRPC extension building blocks — Plugin, NoopPlugin, Serve |
| tools | github.com/goppydae/sharur/internal/tools | Tool and ToolResult interfaces used by both SDK and extensions |
| agent | github.com/goppydae/sharur/internal/agent | Extension interface and NoopExtension for in-process extensions |
Note: internal/tools and internal/agent are documented here because they are contract surfaces for in-process extension authors who build inside the same module. Go’s import restrictions prevent external consumers from importing them directly, but the interfaces are stable and intentionally exposed through this reference.

Subsections of API Reference

agent

import "github.com/goppydae/sharur/internal/agent"

Package agent provides the stateful agent with transcript, tools, and events.

Index

Constants

const (
    ThinkingOff    = types.ThinkingOff
    ThinkingLow    = types.ThinkingLow
    ThinkingMedium = types.ThinkingMedium
    ThinkingHigh   = types.ThinkingHigh
)

const SUMMARIZATION_PROMPT = `The messages above are a conversation to summarize. Create a structured context checkpoint summary that another LLM will use to continue the work.

Start your response with the exact string: <!-- sharur-summary -->

Then use this EXACT format:

## Goal
[What is the user trying to accomplish? Can be multiple items if the session covers different tasks.]

## Constraints & Preferences
- [Any constraints, preferences, or requirements mentioned by user]
- [Or "(none)" if none were mentioned]

## Progress
### Done
- [x] [Completed tasks/changes]

### In Progress
- [ ] [Current work]

### Blocked
- [Issues preventing progress, if any]

## Key Decisions
- **[Decision]**: [Brief rationale]

## Next Steps
1. [Ordered list of what should happen next]

## Critical Context
- [Any data, examples, or references needed to continue]
- [Or "(none)" if not applicable]

Keep each section concise. Preserve exact file paths, function names, and error messages.`

const TURN_PREFIX_SUMMARIZATION_PROMPT = `This is the PREFIX of a turn that was too large to keep. The SUFFIX (recent work) is retained.

Summarize the prefix to provide context for the retained suffix:

## Original Request
[What did the user ask for in this turn?]

## Early Progress
- [Key decisions and work done in the prefix]

## Context for Suffix
- [Information needed to understand the retained recent work]

Be concise. Focus on what's needed to understand the kept suffix.`

const UPDATE_SUMMARIZATION_PROMPT = `The messages above are NEW conversation messages to incorporate into the existing summary provided in <previous-summary> tags.

Start your response with the exact string: <!-- sharur-summary -->

Update the existing structured summary with new information. RULES:
- PRESERVE all existing information from the previous summary
- ADD new progress, decisions, and context from the new messages
- UPDATE the Progress section: move items from "In Progress" to "Done" when completed
- UPDATE "Next Steps" based on what was accomplished
- PRESERVE exact file paths, function names, and error messages
- If something is no longer relevant, you may remove it

Use this EXACT format:

## Goal
[Preserve existing goals, add new ones if the task expanded]

## Constraints & Preferences
- [Preserve existing, add new ones discovered]

## Progress
### Done
- [x] [Include previously done items AND newly completed items]

### In Progress
- [ ] [Current work - update based on progress]

### Blocked
- [Current blockers - remove if resolved]

## Key Decisions
- **[Decision]**: [Brief rationale] (preserve all previous, add new)

## Next Steps
1. [Update based on current state]

## Critical Context
- [Preserve important context, add new if needed]

Keep each section concise. Preserve exact file paths, function names, and error messages.`

func EstimateMessageTokens

func EstimateMessageTokens(m Message) int

type Agent

Agent owns the transcript, emits events, and executes tools.

type Agent struct {
    // contains filtered or unexported fields
}

func New

func New(provider llm.Provider, registry *tools.ToolRegistry) *Agent

New creates a new agent with the given provider and tools.

func (*Agent) Abort

func (a *Agent) Abort()

Abort signals the agent to stop the current turn.

func (*Agent) Compact

func (a *Agent) Compact(ctx context.Context, keepRecentTokens int)

Compact trims the transcript to stay within approximate token budgets. It implements a pi-mono style summarization and file tracking strategy.

func (*Agent) Continue

func (a *Agent) Continue(ctx context.Context) error

Continue asks the agent to continue generating.

func (*Agent) EstimateContextTokens

func (a *Agent) EstimateContextTokens() int

EstimateContextTokens returns the estimated total tokens in the current context.

func (*Agent) EventBus

func (a *Agent) EventBus() *events.EventBus

EventBus returns the event bus.

func (*Agent) FollowUp

func (a *Agent) FollowUp(text string, images ...Image)

FollowUp queues a follow-up message to be processed after the agent finishes.

func (*Agent) GetInfo

func (a *Agent) GetInfo() llm.ProviderInfo

GetInfo returns the current model’s provider info.

func (*Agent) GetSession

func (a *Agent) GetSession() *types.Session

GetSession returns a copy of the current types.Session.

func (*Agent) GetStats

func (a *Agent) GetStats() AgentStats

GetStats returns token usage statistics from the agent’s events.

func (*Agent) Idle

func (a *Agent) Idle() <-chan struct{}

Idle returns a channel that closes when the agent is idle.

func (*Agent) InvokeTool

func (a *Agent) InvokeTool(ctx context.Context, name string, args string) error

InvokeTool manually triggers a tool call as if it came from the assistant. It executes the tool, records the result, and then starts the agent loop to allow the LLM to react to the invocation.

func (*Agent) IsRunning

func (a *Agent) IsRunning() bool

IsRunning reports whether the agent is currently processing.

func (*Agent) LifecycleState

func (a *Agent) LifecycleState() string

LifecycleState returns the current lifecycle state as a string.

func (*Agent) Messages

func (a *Agent) Messages() []Message

Messages returns a copy of the conversation messages.

func (*Agent) Prompt

func (a *Agent) Prompt(ctx context.Context, text string, images ...Image) error

Prompt sends a user message and runs the agent loop until idle.

func (*Agent) Reset

func (a *Agent) Reset()

Reset clears the conversation history and queues.

func (*Agent) ResetSession

func (a *Agent) ResetSession(id string)

ResetSession clears messages and queues, and creates a fresh session ID.

func (*Agent) Session

func (a *Agent) Session() *session.Session

Session returns the current session object.

func (*Agent) SetCompactionConfig

func (a *Agent) SetCompactionConfig(enabled bool, reserve, keepRecent int)

SetCompactionConfig updates the compaction settings.

func (*Agent) SetDryRun

func (a *Agent) SetDryRun(dry bool)

SetDryRun sets the agent’s dry-run mode.

func (*Agent) SetExtensions

func (a *Agent) SetExtensions(exts []Extension)

SetExtensions sets the active extensions for the agent.

func (*Agent) SetMaxTokens

func (a *Agent) SetMaxTokens(n int)

SetMaxTokens sets the maximum tokens for LLM responses.

func (*Agent) SetModel

func (a *Agent) SetModel(model string)

SetModel sets the model name and records it in the session if manager is present.

func (*Agent) SetProvider

func (a *Agent) SetProvider(provider llm.Provider)

SetProvider sets the LLM provider and records it in the session if manager is present.

func (*Agent) SetSession

func (a *Agent) SetSession(mgr *session.Manager, sess *session.Session)

SetSession attaches a session manager and session to the agent.

func (*Agent) SetSessionName

func (a *Agent) SetSessionName(name string)

SetSessionName sets a human-readable name for the current session.

func (*Agent) SetSystemPrompt

func (a *Agent) SetSystemPrompt(prompt string)

SetSystemPrompt updates the system prompt.

func (*Agent) SetThinkingLevel

func (a *Agent) SetThinkingLevel(level ThinkingLevel)

SetThinkingLevel sets the thinking level and records it in the session if manager is present.

func (*Agent) State

func (a *Agent) State() *AgentState

State returns a copy of the current agent state.

func (*Agent) Steer

func (a *Agent) Steer(text string, images ...Image)

Steer queues a steering message to be injected as soon as the current tool execution finishes.

func (*Agent) Subscribe

func (a *Agent) Subscribe(fn func(Event)) func()

Subscribe registers an event listener and returns an unsubscribe function.

func (*Agent) ToolRegistry

func (a *Agent) ToolRegistry() *tools.ToolRegistry

ToolRegistry returns the tool registry.

type AgentState

AgentState holds the full state of an agent instance.

type AgentState struct {
    Session       Session       `json:"session"`
    SystemPrompt  string        `json:"systemPrompt"`
    Messages      []Message     `json:"messages"`
    SteerQueue    []Message     `json:"steerQueue,omitempty"`
    FollowUpQueue []Message     `json:"followUpQueue,omitempty"`
    Tools         []ToolInfo    `json:"tools,omitempty"`
    Model         string        `json:"model"`
    Provider      string        `json:"provider"`
    Thinking      ThinkingLevel `json:"thinkingLevel"`
    MaxTokens     int           `json:"maxTokens,omitempty"`
    Temperature   float64       `json:"temperature,omitempty"`
    DryRun        bool          `json:"dryRun,omitempty"`
    Compaction    struct {
        Enabled          bool `json:"enabled"`
        ReserveTokens    int  `json:"reserveTokens"`
        KeepRecentTokens int  `json:"keepRecentTokens"`
    }   `json:"compaction"`
    LatestCompaction *types.CompactionState `json:"latestCompaction,omitempty"`
}

type AgentStats

AgentStats holds session statistics.

type AgentStats struct {
    SessionID      string
    ParentID       string
    SessionFile    string
    Name           string
    CreatedAt      time.Time
    UpdatedAt      time.Time
    Model          string
    Provider       string
    Thinking       string
    UserMessages   int
    AssistantMsgs  int
    ToolCalls      int
    ToolResults    int
    TotalMessages  int
    InputTokens    int
    OutputTokens   int
    CacheRead      int
    CacheWrite     int
    TotalTokens    int
    ContextTokens  int
    ContextWindow  int
    Cost           float64
    QueuedSteer    int
    QueuedFollowUp int
}

type CompactionPrep

CompactionPrep describes the state passed to BeforeCompact.

type CompactionPrep struct {
    MessageCount    int
    EstimatedTokens int
    PreviousSummary string
}

type CompactionResult

CompactionResult can be returned by BeforeCompact to provide a custom summary and skip the default LLM-based summarization.

type CompactionResult struct {
    Summary          string
    FirstKeptEntryID string
}

type Event

Event represents an agent lifecycle event.

type Event struct {
    Type     EventType
    Content  string
    ToolCall *ToolCall
    Usage    *llm.Usage
    Error    error
    // ToolOutput stores the result content of a tool execution.
    // Emitted when type is EventToolOutput.
    ToolOutput *ToolOutput
    // StateChange holds details of a lifecycle state transition.
    // Emitted when type is EventStateChange.
    StateChange *StateTransition
    // Value stores a numeric value (e.g. token count).
    // Emitted when type is EventTokens.
    Value int64
}

type EventType

EventType identifies the kind of agent event.

type EventType string

const (
    EventAgentStart    EventType = "agent_start"
    EventTurnStart     EventType = "turn_start"
    EventMessageStart  EventType = "message_start"
    EventTextDelta     EventType = "text_delta"
    EventThinkingDelta EventType = "thinking_delta"
    EventToolCall      EventType = "tool_call"
    EventToolDelta     EventType = "tool_delta"
    EventToolOutput    EventType = "tool_output"
    EventMessageEnd    EventType = "message_end"
    EventAgentEnd      EventType = "agent_end"
    EventError         EventType = "error"
    EventAbort         EventType = "abort"
    EventQueueUpdate   EventType = "queue_update"
    EventCompactStart  EventType = "compact_start"
    EventCompactEnd    EventType = "compact_end"
    EventStateChange   EventType = "state_change"
    EventTokens        EventType = "tokens"
    EventHeartbeat     EventType = "heartbeat"
)

type Extension

Extension is the unified interface for all extensions (gRPC plugins, Markdown Skills, etc.)

type Extension interface {
    // Name returns the extension's unique identifier.
    Name() string

    // Tools returns additional tools to register with the agent.
    Tools() []tools.Tool

    // BeforePrompt is called before each LLM request.
    // Return a modified state to change the request.
    BeforePrompt(ctx context.Context, state *AgentState) *AgentState

    // BeforeToolCall is called before each tool execution.
    // Return (result, true) to intercept and prevent the tool from running.
    // Return (nil, false) to allow normal execution.
    BeforeToolCall(ctx context.Context, call *ToolCall, args json.RawMessage) (*tools.ToolResult, bool)

    // AfterToolCall is called after each tool call completes.
    // Return a modified result to change the outcome.
    AfterToolCall(ctx context.Context, call *ToolCall, result *tools.ToolResult) *tools.ToolResult

    // ModifySystemPrompt is called to augment the system prompt.
    ModifySystemPrompt(prompt string) string

    // SessionStart is called when a session is attached or the first prompt begins.
    SessionStart(ctx context.Context, sessionID string, reason SessionStartReason)

    // SessionEnd is called when a session is reset or the agent is torn down.
    SessionEnd(ctx context.Context, sessionID string, reason SessionEndReason)

    // AgentStart is called when the agent begins processing a user prompt.
    AgentStart(ctx context.Context)

    // AgentEnd is called when the agent loop finishes (success, error, or abort).
    AgentEnd(ctx context.Context)

    // TurnStart is called at the start of each LLM request turn.
    TurnStart(ctx context.Context)

    // TurnEnd is called after each turn's tool calls have been processed.
    TurnEnd(ctx context.Context)

    // ModifyInput is called with raw user input before it is added to the transcript.
    // Return InputHandled to consume the message without further processing.
    // Return InputTransform to replace the text.
    // Return InputContinue (or zero value) to proceed unchanged.
    ModifyInput(ctx context.Context, text string) InputResult

    // ModifyContext is called with the message slice just before building each LLM
    // request. The returned slice replaces what is sent to the LLM (not the stored
    // transcript). Extensions are chained; each receives the previous result.
    ModifyContext(ctx context.Context, messages []types.Message) []types.Message

    // BeforeProviderRequest is called with the assembled CompletionRequest before
    // it is sent to the LLM provider. Return a modified copy to alter the request.
    BeforeProviderRequest(ctx context.Context, req *llm.CompletionRequest) *llm.CompletionRequest

    // AfterProviderResponse is called after the LLM stream is fully consumed.
    AfterProviderResponse(ctx context.Context, content string, numToolCalls int)

    // BeforeCompact is called before the compaction summarization LLM call.
    // Return a non-nil *CompactionResult to provide a custom summary and skip the
    // default LLM-based summarization entirely.
    BeforeCompact(ctx context.Context, prep CompactionPrep) *CompactionResult

    // AfterCompact is called after compaction completes.
    AfterCompact(ctx context.Context, freedTokens int)
}

type Image

Image is an alias for types.Image.

type Image = types.Image

type InputAction

InputAction controls how ModifyInput’s result is applied.

type InputAction string

const (
    // InputContinue passes the original text through unchanged.
    InputContinue InputAction = "continue"
    // InputTransform replaces the user text with InputResult.Text.
    InputTransform InputAction = "transform"
    // InputHandled marks the input as consumed; the message is not appended to the transcript.
    InputHandled InputAction = "handled"
)

type InputResult

InputResult is returned by ModifyInput to describe how to process the user input.

type InputResult struct {
    Action InputAction
    Text   string
}

type LifecycleState

LifecycleState identifies the current operational state of the agent.

type LifecycleState string

const (
    StateIdle       LifecycleState = "idle"
    StateThinking   LifecycleState = "thinking"
    StateExecuting  LifecycleState = "executing"
    StateCompacting LifecycleState = "compacting"
    StateAborting   LifecycleState = "aborting"
    StateError      LifecycleState = "error"
)

type Message

Message is an alias for types.Message.

type Message = types.Message

type NoopExtension

NoopExtension is an extension that does nothing — useful as a base embed.

type NoopExtension struct {
    NameStr string
}

func (*NoopExtension) AfterCompact

func (n *NoopExtension) AfterCompact(_ context.Context, _ int)

func (*NoopExtension) AfterProviderResponse

func (n *NoopExtension) AfterProviderResponse(_ context.Context, _ string, _ int)

func (*NoopExtension) AfterToolCall

func (n *NoopExtension) AfterToolCall(_ context.Context, _ *ToolCall, result *tools.ToolResult) *tools.ToolResult

func (*NoopExtension) AgentEnd

func (n *NoopExtension) AgentEnd(_ context.Context)

func (*NoopExtension) AgentStart

func (n *NoopExtension) AgentStart(_ context.Context)

func (*NoopExtension) BeforeCompact

func (n *NoopExtension) BeforeCompact(_ context.Context, _ CompactionPrep) *CompactionResult

func (*NoopExtension) BeforePrompt

func (n *NoopExtension) BeforePrompt(_ context.Context, state *AgentState) *AgentState

func (*NoopExtension) BeforeProviderRequest

func (n *NoopExtension) BeforeProviderRequest(_ context.Context, req *llm.CompletionRequest) *llm.CompletionRequest

func (*NoopExtension) BeforeToolCall

func (n *NoopExtension) BeforeToolCall(_ context.Context, _ *ToolCall, _ json.RawMessage) (*tools.ToolResult, bool)

func (*NoopExtension) ModifyContext

func (n *NoopExtension) ModifyContext(_ context.Context, messages []types.Message) []types.Message

func (*NoopExtension) ModifyInput

func (n *NoopExtension) ModifyInput(_ context.Context, _ string) InputResult

func (*NoopExtension) ModifySystemPrompt

func (n *NoopExtension) ModifySystemPrompt(prompt string) string

func (*NoopExtension) Name

func (n *NoopExtension) Name() string

func (*NoopExtension) SessionEnd

func (n *NoopExtension) SessionEnd(_ context.Context, _ string, _ SessionEndReason)

func (*NoopExtension) SessionStart

func (n *NoopExtension) SessionStart(_ context.Context, _ string, _ SessionStartReason)

func (*NoopExtension) Tools

func (n *NoopExtension) Tools() []tools.Tool

func (*NoopExtension) TurnEnd

func (n *NoopExtension) TurnEnd(_ context.Context)

func (*NoopExtension) TurnStart

func (n *NoopExtension) TurnStart(_ context.Context)

type Session

Session is an alias for types.Session.

type Session = types.Session

type SessionEndReason

SessionEndReason identifies why a session is ending.

type SessionEndReason string

const (
    SessionEndReset SessionEndReason = "reset"
)

type SessionStartReason

SessionStartReason identifies why a session is starting.

type SessionStartReason string

const (
    SessionStartNew    SessionStartReason = "new"
    SessionStartResume SessionStartReason = "resume"
)

type StateMachine

StateMachine manages agent states and transitions.

type StateMachine struct {
    // contains filtered or unexported fields
}

func NewStateMachine

func NewStateMachine(initial LifecycleState, onTransition func(StateTransition)) *StateMachine

NewStateMachine creates a new state machine.

func (*StateMachine) Current

func (s *StateMachine) Current() LifecycleState

Current returns the current lifecycle state.

func (*StateMachine) Transition

func (s *StateMachine) Transition(to LifecycleState) error

Transition moves the state machine to a new state.

type StateTransition

StateTransition represents a transition between two states.

type StateTransition struct {
    From LifecycleState
    To   LifecycleState
}

type ThinkingLevel

ThinkingLevel is an alias for types.ThinkingLevel.

type ThinkingLevel = types.ThinkingLevel

type ToolCall

ToolCall is an alias for types.ToolCall.

type ToolCall = types.ToolCall

type ToolInfo

ToolInfo is an alias for types.ToolInfo.

type ToolInfo = types.ToolInfo

type ToolOutput

ToolOutput is an alias for types.ToolOutput.

type ToolOutput = types.ToolOutput

Generated by gomarkdoc

extensions

import "github.com/goppydae/sharur/extensions"

Index

func LoadErrors

func LoadErrors(errs []error) error

LoadErrors joins all errors from a Load call into a single error, or nil if there were none.

func Serve

func Serve(impl Plugin)

Serve starts a gRPC server on the Unix socket path provided via SHARUR_SOCKET_PATH. This is the entry point for extension binaries.

type AgentState

AgentState is the mutable prompt state passed to BeforePrompt.

type AgentState struct {
    SystemPrompt  string
    Model         string
    Provider      string
    ThinkingLevel string
}

type GRPCClient

GRPCClient is an implementation of agent.Extension that talks over RPC. It runs on the host side when a plugin binary is loaded.

If Name() or Tools() fail, the client is marked degraded and all subsequent tool executions return an error rather than silently doing nothing.

type GRPCClient struct {
    // contains filtered or unexported fields
}

func (*GRPCClient) AfterCompact

func (m *GRPCClient) AfterCompact(ctx context.Context, freedTokens int)

func (*GRPCClient) AfterProviderResponse

func (m *GRPCClient) AfterProviderResponse(ctx context.Context, content string, numToolCalls int)

func (*GRPCClient) AfterToolCall

func (m *GRPCClient) AfterToolCall(ctx context.Context, call *agent.ToolCall, result *tools.ToolResult) *tools.ToolResult

func (*GRPCClient) AgentEnd

func (m *GRPCClient) AgentEnd(ctx context.Context)

func (*GRPCClient) AgentStart

func (m *GRPCClient) AgentStart(ctx context.Context)

func (*GRPCClient) BeforeCompact

func (m *GRPCClient) BeforeCompact(ctx context.Context, prep agent.CompactionPrep) *agent.CompactionResult

func (*GRPCClient) BeforePrompt

func (m *GRPCClient) BeforePrompt(ctx context.Context, state *agent.AgentState) *agent.AgentState

func (*GRPCClient) BeforeProviderRequest

func (m *GRPCClient) BeforeProviderRequest(ctx context.Context, req *llm.CompletionRequest) *llm.CompletionRequest

func (*GRPCClient) BeforeToolCall

func (m *GRPCClient) BeforeToolCall(ctx context.Context, call *agent.ToolCall, args json.RawMessage) (*tools.ToolResult, bool)

func (*GRPCClient) Degraded

func (m *GRPCClient) Degraded() (bool, error)

Degraded reports whether the extension failed to initialise. Callers can surface this to the user rather than letting the failure be silent.

func (*GRPCClient) ModifyContext

func (m *GRPCClient) ModifyContext(ctx context.Context, messages []types.Message) []types.Message

func (*GRPCClient) ModifyInput

func (m *GRPCClient) ModifyInput(ctx context.Context, text string) agent.InputResult

func (*GRPCClient) ModifySystemPrompt

func (m *GRPCClient) ModifySystemPrompt(prompt string) string

func (*GRPCClient) Name

func (m *GRPCClient) Name() string

func (*GRPCClient) SessionEnd

func (m *GRPCClient) SessionEnd(ctx context.Context, sessionID string, reason agent.SessionEndReason)

func (*GRPCClient) SessionStart

func (m *GRPCClient) SessionStart(ctx context.Context, sessionID string, reason agent.SessionStartReason)

func (*GRPCClient) Tools

func (m *GRPCClient) Tools() []tools.Tool

Tools queries the extension process for its tool definitions and returns RemoteTool wrappers that execute each tool over the ExecuteTool RPC.

func (*GRPCClient) TurnEnd

func (m *GRPCClient) TurnEnd(ctx context.Context)

func (*GRPCClient) TurnStart

func (m *GRPCClient) TurnStart(ctx context.Context)

type GRPCServer

GRPCServer is the gRPC server that runs inside the plugin binary. It adapts the Plugin interface to the proto service.

type GRPCServer struct {
    proto.UnimplementedExtensionServer
    Impl Plugin
}

func (*GRPCServer) AfterCompact

func (m *GRPCServer) AfterCompact(ctx context.Context, req *proto.AfterCompactRequest) (*proto.Empty, error)

func (*GRPCServer) AfterProviderResponse

func (m *GRPCServer) AfterProviderResponse(ctx context.Context, req *proto.AfterProviderResponseRequest) (*proto.Empty, error)

func (*GRPCServer) AfterToolCall

func (m *GRPCServer) AfterToolCall(ctx context.Context, req *proto.AfterToolCallRequest) (*proto.AfterToolCallResponse, error)

func (*GRPCServer) AgentEnd

func (m *GRPCServer) AgentEnd(ctx context.Context, _ *proto.Empty) (*proto.Empty, error)

func (*GRPCServer) AgentStart

func (m *GRPCServer) AgentStart(ctx context.Context, _ *proto.Empty) (*proto.Empty, error)

func (*GRPCServer) BeforeCompact

func (m *GRPCServer) BeforeCompact(ctx context.Context, req *proto.BeforeCompactRequest) (*proto.BeforeCompactResponse, error)

func (*GRPCServer) BeforePrompt

func (m *GRPCServer) BeforePrompt(ctx context.Context, req *proto.BeforePromptRequest) (*proto.BeforePromptResponse, error)

func (*GRPCServer) BeforeProviderRequest

func (m *GRPCServer) BeforeProviderRequest(ctx context.Context, req *proto.BeforeProviderRequestRequest) (*proto.BeforeProviderRequestResponse, error)

func (*GRPCServer) BeforeToolCall

func (m *GRPCServer) BeforeToolCall(ctx context.Context, req *proto.BeforeToolCallRequest) (*proto.BeforeToolCallResponse, error)

func (*GRPCServer) ExecuteTool

func (m *GRPCServer) ExecuteTool(ctx context.Context, req *proto.ExecuteToolRequest) (*proto.ExecuteToolResponse, error)

func (*GRPCServer) ModifyContext

func (m *GRPCServer) ModifyContext(ctx context.Context, req *proto.ModifyContextRequest) (*proto.ModifyContextResponse, error)

func (*GRPCServer) ModifyInput

func (m *GRPCServer) ModifyInput(ctx context.Context, req *proto.ModifyInputRequest) (*proto.ModifyInputResponse, error)

func (*GRPCServer) ModifySystemPrompt

func (m *GRPCServer) ModifySystemPrompt(ctx context.Context, req *proto.ModifySystemPromptRequest) (*proto.ModifySystemPromptResponse, error)

func (*GRPCServer) Name

func (m *GRPCServer) Name(ctx context.Context, _ *proto.Empty) (*proto.NameResponse, error)

func (*GRPCServer) SessionEnd

func (m *GRPCServer) SessionEnd(ctx context.Context, req *proto.SessionEndRequest) (*proto.Empty, error)

func (*GRPCServer) SessionStart

func (m *GRPCServer) SessionStart(ctx context.Context, req *proto.SessionStartRequest) (*proto.Empty, error)

func (*GRPCServer) Tools

func (m *GRPCServer) Tools(ctx context.Context, _ *proto.Empty) (*proto.ToolsResponse, error)

func (*GRPCServer) TurnEnd

func (m *GRPCServer) TurnEnd(ctx context.Context, _ *proto.Empty) (*proto.Empty, error)

func (*GRPCServer) TurnStart

func (m *GRPCServer) TurnStart(ctx context.Context, _ *proto.Empty) (*proto.Empty, error)

type Loader

Loader discovers and loads extensions (executable binaries and scripts).

type Loader struct {
    Dirs       []string
    PythonPath string
    // contains filtered or unexported fields
}

func NewLoader

func NewLoader(dirs []string, pythonPath string) *Loader

NewLoader creates a new extension loader.

func (*Loader) Cleanup

func (l *Loader) Cleanup()

Cleanup kills all running extension subprocesses and removes their socket files.

func (*Loader) Load

func (l *Loader) Load() ([]agent.Extension, []error)

Load discovers extensions, starts them as subprocesses, and returns gRPC client interfaces. Extensions that fail to load are logged and skipped; the returned error accumulates all failures so callers can distinguish “nothing loaded” from “everything succeeded”.

func (*Loader) LoadOrLog

func (l *Loader) LoadOrLog() []agent.Extension

LoadOrLog calls Load and logs any errors, returning only the successfully loaded extensions.

type NoopPlugin

NoopPlugin is a base Plugin implementation with no-op defaults. Embed it in your Plugin struct and override only what you need.

type NoopPlugin struct {
    NameStr string
}

func (*NoopPlugin) AfterCompact

func (n *NoopPlugin) AfterCompact(_ context.Context, _ int)

func (*NoopPlugin) AfterProviderResponse

func (n *NoopPlugin) AfterProviderResponse(_ context.Context, _ string, _ int)

func (*NoopPlugin) AfterToolCall

func (n *NoopPlugin) AfterToolCall(_ context.Context, _ ToolCall, result ToolResult) ToolResult

func (*NoopPlugin) AgentEnd

func (n *NoopPlugin) AgentEnd(_ context.Context)

func (*NoopPlugin) AgentStart

func (n *NoopPlugin) AgentStart(_ context.Context)

func (*NoopPlugin) BeforeCompact

func (n *NoopPlugin) BeforeCompact(_ context.Context, _ agent.CompactionPrep) *agent.CompactionResult

func (*NoopPlugin) BeforePrompt

func (n *NoopPlugin) BeforePrompt(_ context.Context, state AgentState) AgentState

func (*NoopPlugin) BeforeProviderRequest

func (n *NoopPlugin) BeforeProviderRequest(_ context.Context, requestJSON string) string

func (*NoopPlugin) BeforeToolCall

func (n *NoopPlugin) BeforeToolCall(_ context.Context, _ ToolCall, _ json.RawMessage) (ToolResult, bool)

func (*NoopPlugin) ExecuteTool

func (n *NoopPlugin) ExecuteTool(_ context.Context, name string, _ json.RawMessage) ToolResult

func (*NoopPlugin) ModifyContext

func (n *NoopPlugin) ModifyContext(_ context.Context, messagesJSON string) string

func (*NoopPlugin) ModifyInput

func (n *NoopPlugin) ModifyInput(_ context.Context, _ string) agent.InputResult

func (*NoopPlugin) ModifySystemPrompt

func (n *NoopPlugin) ModifySystemPrompt(prompt string) string

func (*NoopPlugin) Name

func (n *NoopPlugin) Name() string

func (*NoopPlugin) SessionEnd

func (n *NoopPlugin) SessionEnd(_ context.Context, _ string, _ agent.SessionEndReason)

func (*NoopPlugin) SessionStart

func (n *NoopPlugin) SessionStart(_ context.Context, _ string, _ agent.SessionStartReason)

func (*NoopPlugin) Tools

func (n *NoopPlugin) Tools() []ToolDefinition

func (*NoopPlugin) TurnEnd

func (n *NoopPlugin) TurnEnd(_ context.Context)

func (*NoopPlugin) TurnStart

func (n *NoopPlugin) TurnStart(_ context.Context)

type Plugin

Plugin is the interface that standalone gRPC extension binaries implement. Embed NoopPlugin and override only the methods you need.

type Plugin interface {
    Name() string
    Tools() []ToolDefinition
    ExecuteTool(ctx context.Context, name string, args json.RawMessage) ToolResult
    BeforePrompt(ctx context.Context, state AgentState) AgentState
    BeforeToolCall(ctx context.Context, call ToolCall, args json.RawMessage) (ToolResult, bool)
    AfterToolCall(ctx context.Context, call ToolCall, result ToolResult) ToolResult
    ModifySystemPrompt(prompt string) string

    SessionStart(ctx context.Context, sessionID string, reason agent.SessionStartReason)
    SessionEnd(ctx context.Context, sessionID string, reason agent.SessionEndReason)
    AgentStart(ctx context.Context)
    AgentEnd(ctx context.Context)
    TurnStart(ctx context.Context)
    TurnEnd(ctx context.Context)
    ModifyInput(ctx context.Context, text string) agent.InputResult
    ModifyContext(ctx context.Context, messagesJSON string) string
    BeforeProviderRequest(ctx context.Context, requestJSON string) string
    AfterProviderResponse(ctx context.Context, content string, numToolCalls int)
    BeforeCompact(ctx context.Context, prep agent.CompactionPrep) *agent.CompactionResult
    AfterCompact(ctx context.Context, freedTokens int)
}

type RemoteTool

RemoteTool is a tools.Tool that executes over the extension’s ExecuteTool gRPC.

type RemoteTool struct {
    // contains filtered or unexported fields
}

func (*RemoteTool) Description

func (t *RemoteTool) Description() string

func (*RemoteTool) Execute

func (t *RemoteTool) Execute(ctx context.Context, args json.RawMessage, update tools.ToolUpdate) (*tools.ToolResult, error)

func (*RemoteTool) IsReadOnly

func (t *RemoteTool) IsReadOnly() bool

func (*RemoteTool) Name

func (t *RemoteTool) Name() string

func (*RemoteTool) Schema

func (t *RemoteTool) Schema() json.RawMessage

type SkillLoader

SkillLoader discovers and loads Markdown-based skills.

type SkillLoader struct {
    Dirs []string
}

func NewSkillLoader

func NewSkillLoader(dirs []string) *SkillLoader

NewSkillLoader creates a new loader for Markdown skills.

func (*SkillLoader) Load

func (l *SkillLoader) Load() ([]agent.Extension, error)

Load finds all skills and returns a SkillsMetadataExtension.

type SkillTool

SkillTool implements tools.Tool for a single Markdown-based skill.

type SkillTool struct {
    // contains filtered or unexported fields
}

func (*SkillTool) Description

func (s *SkillTool) Description() string

func (*SkillTool) Execute

func (s *SkillTool) Execute(ctx context.Context, args json.RawMessage, update tools.ToolUpdate) (*tools.ToolResult, error)

func (*SkillTool) IsReadOnly

func (s *SkillTool) IsReadOnly() bool

func (*SkillTool) Name

func (s *SkillTool) Name() string

func (*SkillTool) Schema

func (s *SkillTool) Schema() json.RawMessage

type SkillsMetadataExtension

SkillsMetadataExtension lists all available skills in the system prompt.

type SkillsMetadataExtension struct {
    agent.NoopExtension
    // contains filtered or unexported fields
}

func NewSkillsMetadataExtension

func NewSkillsMetadataExtension(allSkills []*skills.Skill) *SkillsMetadataExtension

NewSkillsMetadataExtension creates an extension that adds skill metadata to the prompt.

func (*SkillsMetadataExtension) ModifySystemPrompt

func (s *SkillsMetadataExtension) ModifySystemPrompt(prompt string) string

ModifySystemPrompt injects a brief list of skills into the system prompt. This tells the agent it can call these skills (as tools) when needed.

func (*SkillsMetadataExtension) Tools

func (s *SkillsMetadataExtension) Tools() []tools.Tool

Tools returns a SkillTool for each loaded skill.

type ToolCall

ToolCall describes a tool invocation passed to Plugin hook methods.

type ToolCall struct {
    Name string
    Args json.RawMessage
}

type ToolDefinition

ToolDefinition describes a tool contributed by a Plugin.

type ToolDefinition struct {
    Name        string
    Description string
    Schema      json.RawMessage
    IsReadOnly  bool
}
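A Plugin contributing a single read-only tool might describe it with a literal like the following (the tool itself is hypothetical):

ToolDefinition{
    Name:        "lint",
    Description: "Run the project linter and report findings.",
    Schema:      json.RawMessage(`{"type":"object","properties":{}}`),
    IsReadOnly:  true,
}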

type ToolResult

ToolResult is the outcome of a tool call or an interception.

type ToolResult struct {
    Content string
    IsError bool
}

Generated by gomarkdoc

sdk

import "github.com/goppydae/sharur/sdk"

Package sdk provides the public Go SDK for embedding sharur agents in your own applications.

Example:

ag, err := sdk.NewAgent(sdk.Config{
    Model:    "llama3",
    Provider: "ollama",
    Tools:    sdk.DefaultTools(),
})
if err != nil {
    panic(err)
}

ag.Subscribe(func(e sdk.Event) {
    if e.Type == sdk.EventTextDelta {
        fmt.Print(e.Content)
    }
})

if err := ag.Prompt(context.Background(), "What files are in this directory?"); err != nil {
    panic(err)
}
<-ag.Idle()

Index

Constants

const (
    EventAgentStart   = agent.EventAgentStart
    EventTurnStart    = agent.EventTurnStart
    EventMessageStart = agent.EventMessageStart
    EventTextDelta    = agent.EventTextDelta
    EventToolCall     = agent.EventToolCall
    EventMessageEnd   = agent.EventMessageEnd
    EventAgentEnd     = agent.EventAgentEnd
    EventError        = agent.EventError
    EventAbort        = agent.EventAbort
)

type Agent

Agent is the stateful conversation agent.

type Agent = agent.Agent

func NewAgent

func NewAgent(cfg Config) (*Agent, error)

NewAgent creates a new agent from the given configuration.

type CompactionPrep

CompactionPrep describes the state passed to BeforeCompact.

type CompactionPrep = agent.CompactionPrep

type CompactionResult

CompactionResult can be returned by BeforeCompact to provide a custom summary.

type CompactionResult = agent.CompactionResult

type Config

Config holds the options for creating a new agent.

type Config struct {
    // Provider selects the LLM backend: "ollama" (default), "openai", or "anthropic".
    Provider string

    // Model is the model name to use (e.g. "llama3", "gpt-4o", "claude-sonnet-4-6").
    Model string

    // OllamaURL overrides the Ollama base URL (default: http://localhost:11434).
    OllamaURL string

    // OpenAIURL overrides the OpenAI-compatible base URL.
    OpenAIURL string

    // OpenAIKey is the API key for OpenAI or any compatible provider.
    OpenAIKey string

    // AnthropicKey is the Anthropic API key.
    AnthropicKey string

    // SystemPrompt sets the agent's system prompt.
    SystemPrompt string

    // ThinkingLevel controls reasoning depth.
    ThinkingLevel ThinkingLevel

    // MaxTokens caps the response length (0 = provider default).
    MaxTokens int

    // DryRun mode prevents tools from performing destructive actions.
    DryRun bool

    // Tools registers additional tools beyond the builtins.
    // Pass tools.Read{}, tools.Write{}, tools.Bash{}, etc.
    Tools []Tool

    // Extensions registers active extensions (gRPC plugins or Skills).
    Extensions []Extension
}
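As a hedged example of a hosted-provider setup (the environment variable name and prompt are illustrative):

cfg := sdk.Config{
    Provider:      "anthropic",
    Model:         "claude-sonnet-4-6",
    AnthropicKey:  os.Getenv("ANTHROPIC_API_KEY"), // illustrative key handling
    SystemPrompt:  "You are a concise code-review assistant.",
    ThinkingLevel: sdk.ThinkingMedium,
    MaxTokens:     4096,
    DryRun:        true, // destructive tools refuse to make changes
    Tools:         sdk.DefaultTools(),
}
ag, err := sdk.NewAgent(cfg)
if err != nil {
    log.Fatal(err)
}
ag.Subscribe(func(e sdk.Event) { /* handle events as in the package example above */ })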

type Event

Event is an agent lifecycle event emitted to subscribers.

type Event = agent.Event

type EventType

EventType identifies the kind of event.

type EventType = agent.EventType

type Extension

Extension is the interface for agent extensions (gRPC plugins, skills, etc.).

type Extension = agent.Extension

type InputAction

InputAction controls how ModifyInput’s result is applied.

type InputAction = agent.InputAction

const (
    InputContinue  InputAction = agent.InputContinue
    InputTransform InputAction = agent.InputTransform
    InputHandled   InputAction = agent.InputHandled
)

type InputResult

InputResult is returned by ModifyInput.

type InputResult = agent.InputResult

type SessionEndReason

SessionEndReason identifies why a session is ending.

type SessionEndReason = agent.SessionEndReason

const (
    SessionEndReset SessionEndReason = agent.SessionEndReset
)

type SessionStartReason

SessionStartReason identifies why a session is starting.

type SessionStartReason = agent.SessionStartReason

const (
    SessionStartNew    SessionStartReason = agent.SessionStartNew
    SessionStartResume SessionStartReason = agent.SessionStartResume
)

type ThinkingLevel

ThinkingLevel controls how much reasoning budget the model gets.

type ThinkingLevel = types.ThinkingLevel

const (
    ThinkingOff    ThinkingLevel = types.ThinkingOff
    ThinkingLow    ThinkingLevel = types.ThinkingLow
    ThinkingMedium ThinkingLevel = types.ThinkingMedium
    ThinkingHigh   ThinkingLevel = types.ThinkingHigh
)

type Tool

Tool is the universal tool interface.

type Tool = tools.Tool

func DefaultTools

func DefaultTools() []Tool

DefaultTools returns the full built-in tool set.

type ToolResult

ToolResult is the output of a tool execution.

type ToolResult = tools.ToolResult

Generated by gomarkdoc

tools

import "github.com/goppydae/sharur/internal/tools"

Package tools provides the universal tool interface and registry.

Index

func NormalizePath

func NormalizePath(path string) string

NormalizePath strips a leading ‘@’ from a path if present.
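For example:

NormalizePath("@src/main.go") // "src/main.go"
NormalizePath("src/main.go")  // unchanged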

type Bash

Bash is a tool for executing shell commands.

Security note: commands are executed as-is via `bash -c`. The subprocess runs in an isolated environment containing only an explicit allowlist of variables (PATH, HOME, LANG, TERM, TMPDIR, USER, SHELL). Extra variables can be injected via EnvAllowlist. DenyPatterns blocks commands by substring match before execution.

type Bash struct {
    // Cwd is the working directory for commands.
    Cwd string
    // Timeout for command execution.
    Timeout time.Duration
    // DenyPatterns is an optional list of substrings that, if found in the
    // command, will cause execution to be rejected. Checked case-insensitively.
    // Example: []string{"rm -rf /", "dd if=", "> /dev/sd"}
    DenyPatterns []string
    // EnvAllowlist is an optional list of KEY=VALUE pairs to inject into the
    // subprocess environment in addition to the default allowlist.
    EnvAllowlist []string
}
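A hedged construction example using the fields above; the working directory, timeout, and allowlist entry are illustrative, and the deny patterns echo the field comment rather than a recommended policy.

sh := tools.Bash{
    Cwd:          "/home/me/project",
    Timeout:      30 * time.Second,
    DenyPatterns: []string{"rm -rf /", "dd if=", "> /dev/sd"},
    EnvAllowlist: []string{"GOFLAGS=-mod=readonly"},
}

// sh satisfies the Tool interface and can be registered like any other tool,
// for example by passing it in sdk.Config{Tools: ...}.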

func (Bash) Description

func (Bash) Description() string

func (Bash) Execute

func (t Bash) Execute(ctx context.Context, args json.RawMessage, update ToolUpdate) (*ToolResult, error)

func (Bash) IsReadOnly

func (Bash) IsReadOnly() bool

func (Bash) Name

func (Bash) Name() string

func (Bash) Schema

func (Bash) Schema() json.RawMessage

type Edit

Edit is a tool for performing search-replace edits on files.

type Edit struct{}

func (Edit) Description

func (Edit) Description() string

func (Edit) Execute

func (Edit) Execute(ctx context.Context, args json.RawMessage, update ToolUpdate) (*ToolResult, error)

func (Edit) IsReadOnly

func (Edit) IsReadOnly() bool

func (Edit) Name

func (Edit) Name() string

func (Edit) Schema

func (Edit) Schema() json.RawMessage

type Find

Find is a tool for finding files by glob pattern.

type Find struct{}

func (Find) Description

func (Find) Description() string

func (Find) Execute

func (Find) Execute(ctx context.Context, args json.RawMessage, update ToolUpdate) (*ToolResult, error)

func (Find) IsReadOnly

func (Find) IsReadOnly() bool

func (Find) Name

func (Find) Name() string

func (Find) Schema

func (Find) Schema() json.RawMessage

type Grep

Grep is a tool for searching file contents with regex.

type Grep struct{}

func (Grep) Description

func (Grep) Description() string

func (Grep) Execute

func (Grep) Execute(ctx context.Context, args json.RawMessage, update ToolUpdate) (*ToolResult, error)

func (Grep) IsReadOnly

func (Grep) IsReadOnly() bool

func (Grep) Name

func (Grep) Name() string

func (Grep) Schema

func (Grep) Schema() json.RawMessage

type Ls

Ls is a tool for listing directory contents.

type Ls struct{}

func (Ls) Description

func (Ls) Description() string

func (Ls) Execute

func (Ls) Execute(ctx context.Context, args json.RawMessage, update ToolUpdate) (*ToolResult, error)

func (Ls) IsReadOnly

func (Ls) IsReadOnly() bool

func (Ls) Name

func (Ls) Name() string

func (Ls) Schema

func (Ls) Schema() json.RawMessage

type Read

Read is a tool for reading file contents.

type Read struct{}

func (Read) Description

func (Read) Description() string

func (Read) Execute

func (Read) Execute(ctx context.Context, args json.RawMessage, update ToolUpdate) (*ToolResult, error)

func (Read) IsReadOnly

func (Read) IsReadOnly() bool

func (Read) Name

func (Read) Name() string

func (Read) Schema

func (Read) Schema() json.RawMessage

type Tool

Tool is the universal tool interface — anything the agent can do.

type Tool interface {
    Name() string
    Description() string
    Schema() json.RawMessage // JSON Schema for parameters
    Execute(ctx context.Context, args json.RawMessage, update ToolUpdate) (*ToolResult, error)
    IsReadOnly() bool
}
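A sketch of a custom implementation. The tool itself is hypothetical; only the method set is taken from the interface above, and type names are written unqualified as they appear in this package.

// UptimeTool is a hypothetical read-only tool that reports process uptime.
type UptimeTool struct {
    Start time.Time
}

func (UptimeTool) Name() string        { return "uptime" }
func (UptimeTool) Description() string { return "Report how long the agent process has been running." }
func (UptimeTool) IsReadOnly() bool    { return true }

func (UptimeTool) Schema() json.RawMessage {
    // No parameters: an empty JSON Schema object.
    return json.RawMessage(`{"type":"object","properties":{}}`)
}

func (t UptimeTool) Execute(ctx context.Context, args json.RawMessage, update ToolUpdate) (*ToolResult, error) {
    return &ToolResult{Content: time.Since(t.Start).String()}, nil
}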

type ToolCall

ToolCall represents a tool invocation from the LLM.

type ToolCall struct {
    ID       string          `json:"id"`
    Name     string          `json:"name"`
    Args     json.RawMessage `json:"args"`
    Position int             `json:"position,omitempty"`
}

type ToolRegistry

ToolRegistry manages registered tools.

type ToolRegistry struct {
    // contains filtered or unexported fields
}

func NewToolRegistry

func NewToolRegistry() *ToolRegistry

NewToolRegistry creates an empty tool registry.

func (*ToolRegistry) All

func (r *ToolRegistry) All() []Tool

All returns all registered tools.

func (*ToolRegistry) Get

func (r *ToolRegistry) Get(name string) (Tool, bool)

Get retrieves a tool by name.

func (*ToolRegistry) Has

func (r *ToolRegistry) Has(name string) bool

Has checks if a tool is registered.

func (*ToolRegistry) Register

func (r *ToolRegistry) Register(t Tool)

Register adds a tool to the registry.
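Typical usage, shown with built-in tools; the listed names come from each tool's Name method:

reg := NewToolRegistry()
reg.Register(Read{})
reg.Register(Write{})

for _, t := range reg.All() {
    fmt.Println(t.Name(), "-", t.Description())
}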

type ToolResult

ToolResult represents the output of a tool execution.

type ToolResult struct {
    Content  string         `json:"content"`
    IsError  bool           `json:"isError,omitempty"`
    Metadata map[string]any `json:"metadata,omitempty"`
}

type ToolUpdate

ToolUpdate is a callback for streaming partial results.

type ToolUpdate func(partial *ToolResult)
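Inside a tool's Execute method, progress might be streamed like this. This is a sketch only; whether the callback can be nil is an assumption here, so it is checked defensively.

if update != nil {
    update(&ToolResult{Content: "scanned 120 of 400 files..."})
}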

type Write

Write is a tool for creating or overwriting files.

type Write struct{}

func (Write) Description

func (Write) Description() string

func (Write) Execute

func (Write) Execute(ctx context.Context, args json.RawMessage, update ToolUpdate) (*ToolResult, error)

func (Write) IsReadOnly

func (Write) IsReadOnly() bool

func (Write) Name

func (Write) Name() string

func (Write) Schema

func (Write) Schema() json.RawMessage

Generated by gomarkdoc