const SUMMARIZATION_PROMPT = `The messages above are a conversation to summarize. Create a structured context checkpoint summary that another LLM will use to continue the work.
Start your response with the exact string: <!-- sharur-summary -->
Then use this EXACT format:
## Goal
[What is the user trying to accomplish? Can be multiple items if the session covers different tasks.]
## Constraints & Preferences
- [Any constraints, preferences, or requirements mentioned by user]
- [Or "(none)" if none were mentioned]
## Progress
### Done
- [x] [Completed tasks/changes]
### In Progress
- [ ] [Current work]
### Blocked
- [Issues preventing progress, if any]
## Key Decisions
- **[Decision]**: [Brief rationale]
## Next Steps
1. [Ordered list of what should happen next]
## Critical Context
- [Any data, examples, or references needed to continue]
- [Or "(none)" if not applicable]
Keep each section concise. Preserve exact file paths, function names, and error messages.`
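The exact `<!-- sharur-summary -->` marker the prompt demands makes compacted summaries machine-detectable in a transcript. As a minimal sketch of how consuming code might validate a model response against it (`isCheckpointSummary` is a hypothetical helper, not part of this package):

```go
package main

import (
	"fmt"
	"strings"
)

// summaryMarker is the sentinel string the summarization prompt asks the
// model to emit first.
const summaryMarker = "<!-- sharur-summary -->"

// isCheckpointSummary reports whether an LLM response looks like a
// well-formed checkpoint summary: it must begin with the exact marker.
func isCheckpointSummary(response string) bool {
	return strings.HasPrefix(strings.TrimSpace(response), summaryMarker)
}

func main() {
	fmt.Println(isCheckpointSummary(summaryMarker + "\n## Goal\nFix the build")) // true
	fmt.Println(isCheckpointSummary("## Goal\nFix the build"))                   // false
}
```

Checking a fixed prefix rather than parsing the whole summary keeps detection cheap and robust to formatting drift in the sections that follow.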
const TURN_PREFIX_SUMMARIZATION_PROMPT = `This is the PREFIX of a turn that was too large to keep. The SUFFIX (recent work) is retained.
Summarize the prefix to provide context for the retained suffix:
## Original Request
[What did the user ask for in this turn?]
## Early Progress
- [Key decisions and work done in the prefix]
## Context for Suffix
- [Information needed to understand the retained recent work]
Be concise. Focus on what's needed to understand the kept suffix.`
const UPDATE_SUMMARIZATION_PROMPT = `The messages above are NEW conversation messages to incorporate into the existing summary provided in <previous-summary> tags.
Start your response with the exact string: <!-- sharur-summary -->
Update the existing structured summary with new information. RULES:
- PRESERVE all existing information from the previous summary
- ADD new progress, decisions, and context from the new messages
- UPDATE the Progress section: move items from "In Progress" to "Done" when completed
- UPDATE "Next Steps" based on what was accomplished
- PRESERVE exact file paths, function names, and error messages
- If something is no longer relevant, you may remove it
Use this EXACT format:
## Goal
[Preserve existing goals, add new ones if the task expanded]
## Constraints & Preferences
- [Preserve existing, add new ones discovered]
## Progress
### Done
- [x] [Include previously done items AND newly completed items]
### In Progress
- [ ] [Current work - update based on progress]
### Blocked
- [Current blockers - remove if resolved]
## Key Decisions
- **[Decision]**: [Brief rationale] (preserve all previous, add new)
## Next Steps
1. [Update based on current state]
## Critical Context
- [Preserve important context, add new if needed]
Keep each section concise. Preserve exact file paths, function names, and error messages.`
func EstimateMessageTokens
func EstimateMessageTokens(m Message) int
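The package does not document how the estimate is computed. A common approach when no tokenizer is available is a characters-per-token heuristic; the sketch below assumes that style (the stand-in `Message` type and the constants are illustrative, not the package's actual implementation):

```go
package main

import "fmt"

// Message is a minimal stand-in for the package's Message type
// (assumption: the real type carries at least a role and content).
type Message struct {
	Role    string
	Content string
}

// estimateMessageTokens sketches a rough chars/4 heuristic: about four
// characters per token for English text, plus a small per-message
// overhead for role and framing tokens.
func estimateMessageTokens(m Message) int {
	const charsPerToken = 4
	const perMessageOverhead = 3
	return len(m.Content)/charsPerToken + perMessageOverhead
}

func main() {
	m := Message{Role: "user", Content: "Summarize the conversation above."}
	fmt.Println(estimateMessageTokens(m)) // 33 chars / 4 + 3 = 11
}
```

An estimate like this only needs to be accurate enough to decide when a transcript approaches the context budget and compaction should run.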
type Agent
Agent owns the transcript, emits events, and executes tools.
type Agent struct {
	// contains filtered or unexported fields
}
func (*Agent) InvokeTool
InvokeTool manually triggers a tool call as if it came from the assistant. It executes the tool, records the result, and then starts the agent loop to allow the LLM to react to the invocation.
func (*Agent) IsRunning
func (a *Agent) IsRunning() bool
IsRunning reports whether the agent is currently processing.
func (*Agent) LifecycleState
func (a *Agent) LifecycleState() string
LifecycleState returns the current lifecycle state as a string.
func (*Agent) Messages
func (a *Agent) Messages() []Message
Messages returns a copy of the conversation messages.
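Returning a copy prevents callers from mutating the agent's internal transcript through the returned slice. A reduced sketch of the pattern (the real `Agent` is unexported-field-only and likely also guards the slice with a mutex):

```go
package main

import "fmt"

type Message struct {
	Role, Content string
}

// Agent is a reduced stand-in for the package's Agent type.
type Agent struct {
	msgs []Message
}

// Messages returns a copy of the conversation so that mutating the
// returned slice cannot corrupt the agent's internal transcript.
func (a *Agent) Messages() []Message {
	out := make([]Message, len(a.msgs))
	copy(out, a.msgs)
	return out
}

func main() {
	a := &Agent{msgs: []Message{{Role: "user", Content: "hi"}}}
	snapshot := a.Messages()
	snapshot[0].Content = "mutated"
	fmt.Println(a.msgs[0].Content) // internal transcript is unchanged: hi
}
```

Because `Message` is a value type, `copy` duplicates each element, so edits to the snapshot never alias the agent's own storage.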
type Event
type Event struct {
	Type     EventType
	Content  string
	ToolCall *ToolCall
	Usage    *llm.Usage
	Error    error
	// ToolOutput stores the result content of a tool execution.
	// Emitted when type is EventToolOutput.
	ToolOutput *ToolOutput
	// StateChange holds details of a lifecycle state transition.
	// Emitted when type is EventStateChange.
	StateChange *StateTransition
	// Value stores a numeric value (e.g. token count).
	// Emitted when type is EventTokens.
	Value int64
}
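Since `Event` is a tagged union keyed on `Type`, consumers typically switch on the type and read only the fields that event kind populates. A sketch with reduced stand-in types (the `EventType` constant names here are assumptions based on the field comments, not the package's exported identifiers):

```go
package main

import "fmt"

// Reduced stand-ins for the package's event types (assumptions).
type EventType int

const (
	EventContent EventType = iota
	EventToolOutput
	EventTokens
)

type Event struct {
	Type    EventType
	Content string
	Value   int64
}

// handleEvent shows the switch-on-Type pattern a UI or logging loop
// would use: each case reads only the fields valid for that event kind.
func handleEvent(ev Event) string {
	switch ev.Type {
	case EventContent:
		return "text: " + ev.Content
	case EventTokens:
		return fmt.Sprintf("tokens: %d", ev.Value)
	default:
		return "other"
	}
}

func main() {
	fmt.Println(handleEvent(Event{Type: EventContent, Content: "hello"})) // text: hello
	fmt.Println(handleEvent(Event{Type: EventTokens, Value: 128}))        // tokens: 128
}
```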
type Extension
Extension is the unified interface for all extensions (gRPC plugins, Markdown Skills, etc.)
type Extension interface {
	// Name returns the extension's unique identifier.
	Name() string
	// Tools returns additional tools to register with the agent.
	Tools() []tools.Tool
	// BeforePrompt is called before each LLM request.
	// Return a modified state to change the request.
	BeforePrompt(ctx context.Context, state *AgentState) *AgentState
	// BeforeToolCall is called before each tool execution.
	// Return (result, true) to intercept and prevent the tool from running.
	// Return (nil, false) to allow normal execution.
	BeforeToolCall(ctx context.Context, call *ToolCall, args json.RawMessage) (*tools.ToolResult, bool)
	// AfterToolCall is called after each tool call completes.
	// Return a modified result to change the outcome.
	AfterToolCall(ctx context.Context, call *ToolCall, result *tools.ToolResult) *tools.ToolResult
	// ModifySystemPrompt is called to augment the system prompt.
	ModifySystemPrompt(prompt string) string
	// SessionStart is called when a session is attached or the first prompt begins.
	SessionStart(ctx context.Context, sessionID string, reason SessionStartReason)
	// SessionEnd is called when a session is reset or the agent is torn down.
	SessionEnd(ctx context.Context, sessionID string, reason SessionEndReason)
	// AgentStart is called when the agent begins processing a user prompt.
	AgentStart(ctx context.Context)
	// AgentEnd is called when the agent loop finishes (success, error, or abort).
	AgentEnd(ctx context.Context)
	// TurnStart is called at the start of each LLM request turn.
	TurnStart(ctx context.Context)
	// TurnEnd is called after each turn's tool calls have been processed.
	TurnEnd(ctx context.Context)
	// ModifyInput is called with raw user input before it is added to the transcript.
	// Return InputHandled to consume the message without further processing.
	// Return InputTransform to replace the text.
	// Return InputContinue (or zero value) to proceed unchanged.
	ModifyInput(ctx context.Context, text string) InputResult
	// ModifyContext is called with the message slice just before building each LLM
	// request. The returned slice replaces what is sent to the LLM (not the stored
	// transcript). Extensions are chained; each receives the previous result.
	ModifyContext(ctx context.Context, messages []types.Message) []types.Message
	// BeforeProviderRequest is called with the assembled CompletionRequest before
	// it is sent to the LLM provider. Return a modified copy to alter the request.
	BeforeProviderRequest(ctx context.Context, req *llm.CompletionRequest) *llm.CompletionRequest
	// AfterProviderResponse is called after the LLM stream is fully consumed.
	AfterProviderResponse(ctx context.Context, content string, numToolCalls int)
	// BeforeCompact is called before the compaction summarization LLM call.
	// Return a non-nil *CompactionResult to provide a custom summary and skip the
	// default LLM-based summarization entirely.
	BeforeCompact(ctx context.Context, prep CompactionPrep) *CompactionResult
	// AfterCompact is called after compaction completes.
	AfterCompact(ctx context.Context, freedTokens int)
}
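With this many hooks, a common Go pattern is an embeddable no-op base type so a concrete extension only overrides the hooks it cares about. The sketch below uses a reduced three-method slice of the interface to stay self-contained; `NoopExtension` and `personaExt` are illustrative names, not part of this package:

```go
package main

import (
	"context"
	"fmt"
)

// A reduced slice of the Extension interface, for illustration only;
// the real interface has many more hooks.
type Extension interface {
	Name() string
	ModifySystemPrompt(prompt string) string
	AgentStart(ctx context.Context)
}

// NoopExtension provides do-nothing defaults. Embedding it lets a
// concrete extension override only the hooks it needs.
type NoopExtension struct{}

func (NoopExtension) Name() string                       { return "noop" }
func (NoopExtension) ModifySystemPrompt(p string) string { return p }
func (NoopExtension) AgentStart(ctx context.Context)     {}

// personaExt overrides two hooks and inherits the rest from the base.
type personaExt struct{ NoopExtension }

func (personaExt) Name() string { return "persona" }

func (personaExt) ModifySystemPrompt(p string) string {
	return p + "\nAlways answer tersely."
}

func main() {
	var ext Extension = personaExt{}
	fmt.Println(ext.Name())
	fmt.Println(ext.ModifySystemPrompt("You are a coding agent."))
}
```

The embedding pattern also keeps extensions forward-compatible if the base type is updated when new hooks are added to the interface.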
type Image
Image is an alias for types.Image.
type Image = types.Image
type InputAction
InputAction controls how ModifyInput’s result is applied.
type InputAction string
const (
	// InputContinue passes the original text through unchanged.
	InputContinue InputAction = "continue"
	// InputTransform replaces the user text with InputResult.Text.
	InputTransform InputAction = "transform"
	// InputHandled marks the input as consumed; the message is not appended to the transcript.
	InputHandled InputAction = "handled"
)
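The three actions map naturally onto a dispatch in the agent's input path. A sketch of how the agent might interpret a `ModifyInput` result (the `InputResult` field names `Action` and `Text` are assumptions based on the constant docs):

```go
package main

import "fmt"

type InputAction string

const (
	InputContinue  InputAction = "continue"
	InputTransform InputAction = "transform"
	InputHandled   InputAction = "handled"
)

// InputResult mirrors the documented shape: an action plus optional
// replacement text (assumption: field names may differ in the package).
type InputResult struct {
	Action InputAction
	Text   string
}

// applyInput shows how an agent could interpret a ModifyInput result.
// It returns the text to append and whether to append it at all.
func applyInput(original string, r InputResult) (string, bool) {
	switch r.Action {
	case InputHandled:
		return "", false // consumed: never reaches the transcript
	case InputTransform:
		return r.Text, true // replacement text from the extension
	default: // InputContinue or the zero value
		return original, true
	}
}

func main() {
	text, keep := applyInput("/help", InputResult{Action: InputHandled})
	fmt.Printf("%q %v\n", text, keep)
	text, keep = applyInput("hi", InputResult{Action: InputTransform, Text: "hello"})
	fmt.Printf("%q %v\n", text, keep)
}
```

Treating the zero value as `InputContinue` means extensions that return an empty `InputResult` safely fall through to default behavior, matching the "(or zero value)" wording in the interface docs.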
type InputResult
InputResult is returned by ModifyInput to describe how to process the user input.