## Availability
| Edition | Deployment Type |
|---|---|
| Community & Enterprise | Self-Managed, Hybrid |
> **Experimental Feature:** Agent plugins are currently experimental. The API and behavior may change in future releases.

AI Studio Agent plugins enable conversational AI experiences in the Chat Interface using the Unified Plugin SDK. Build custom agents that wrap LLMs, add specialized logic, integrate external services, and create sophisticated multi-turn conversations with streaming responses. Agent plugins use the same `pkg/plugin_sdk` as other plugin types, automatically detecting the Studio runtime and providing access to Studio Services (LLM calls, tool execution, datasource queries).
## Architecture Overview
Agent plugins follow a three-tier binding model:

### Key Concepts
| Component | Purpose |
|---|---|
| Plugin | Long-running gRPC plugin implementing AgentPlugin interface |
| Agent Object | Configuration binding a plugin to an App, with access controls |
| App Object | Resource container providing LLMs, tools, datasources, and credentials |
- Plugins are reusable: A single plugin can power multiple Agent Objects
- Apps provide resources: The App determines which LLMs the agent can call
- Access via Groups: Users access agents based on group membership
- Budget enforcement: LLM calls are routed through the proxy, enforcing budgets
- Portal integration: Active agents appear in the Chat section of the AI Portal alongside managed chats
## Overview
Agent plugins enable you to:

- Stream Responses: Real-time server-streaming for interactive conversations
- Call LLMs: Access managed LLMs via Context Services or direct SDK calls
- Execute Tools: Run registered tools and integrate external services
- Query Datasources: Access configured datasources for RAG and context
- Maintain Context: Access full conversation history
- Custom Configuration: Per-agent config with JSON schema validation
- Universal Services: KV storage and logging via Context.Services
## Unified SDK Integration
Agent plugins use the Unified Plugin SDK (`pkg/plugin_sdk`), just like all other plugin types. Key patterns:
### Import and Structure
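The original snippet is collapsed in this export; the sketch below shows only the general shape — the import path and struct contents are assumptions, with `pkg/plugin_sdk` being the one name this page confirms.

```go
package main

import (
	// Illustrative path -- only pkg/plugin_sdk is named by this page.
	plugin_sdk "example.com/ai-studio/pkg/plugin_sdk"
)

// MyAgent holds the plugin's long-lived state
// (parsed config, clients, caches).
type MyAgent struct{}

// MyAgent implements the lifecycle and AgentPlugin methods described
// below, and is handed to the SDK (plugin_sdk) when serving.
```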
### Lifecycle Methods
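The collapsed example is not reproduced in this export; as a hedged sketch only — the method names here are hypothetical, not confirmed by this page:

```go
// Hypothetical lifecycle hooks; check the SDK for the real signatures.
func (a *MyAgent) Init(configJson string) error {
	// Parse ConfigJson, set defaults, warm any caches.
	return nil
}

func (a *MyAgent) Shutdown() error {
	// Close clients and flush state before the process exits.
	return nil
}
```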
### Agent Capability
Implement the `AgentPlugin` capability:
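Since the original snippet is collapsed, the following self-contained sketch illustrates the shape of the capability. The request, chunk, and stream types here are local stand-ins for the SDK's generated gRPC types; their field names are assumptions.

```go
package main

import "fmt"

// Local stand-ins for the SDK's generated gRPC types (illustrative only;
// the real definitions come from pkg/plugin_sdk and the Studio protos).
type AgentMessageRequest struct {
	SessionId string
	Message   string
}

type AgentStreamChunk struct {
	Type    string // CONTENT, THINKING, TOOL_CALL, ERROR, DONE
	Content string
	IsFinal bool
}

// AgentStream mimics the server-streaming side of HandleAgentMessage.
type AgentStream interface {
	Send(*AgentStreamChunk) error
}

// MyAgent implements the AgentPlugin capability.
type MyAgent struct{}

// HandleAgentMessage streams a reply for one user message.
func (a *MyAgent) HandleAgentMessage(req *AgentMessageRequest, stream AgentStream) error {
	// Stream the answer, then close with a final DONE chunk.
	if err := stream.Send(&AgentStreamChunk{Type: "CONTENT", Content: "You said: " + req.Message}); err != nil {
		return err
	}
	return stream.Send(&AgentStreamChunk{Type: "DONE", IsFinal: true})
}

// collectStream is a tiny in-memory stream used for demonstration.
type collectStream struct{ chunks []*AgentStreamChunk }

func (c *collectStream) Send(ch *AgentStreamChunk) error {
	c.chunks = append(c.chunks, ch)
	return nil
}

func main() {
	s := &collectStream{}
	_ = (&MyAgent{}).HandleAgentMessage(&AgentMessageRequest{SessionId: "s1", Message: "hi"}, s)
	for _, ch := range s.chunks {
		fmt.Println(ch.Type, ch.Content)
	}
}
```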
### Serving the Plugin
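The serving snippet is likewise collapsed; a sketch, where the `Serve` helper name is an assumption:

```go
func main() {
	// Hand the implementation to the SDK, which serves it as a
	// long-running gRPC plugin and auto-detects the Studio runtime.
	plugin_sdk.Serve(&MyAgent{}) // hypothetical entry point
}
```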
### Service API Access
Agent plugins can call LLMs, execute tools, and query datasources via the `ai_studio_sdk` helper functions (requires a broker ID):
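Only the package name `ai_studio_sdk`, the `CallLLM()` helper, and the broker-ID requirement are given on this page, so the call below is a sketch with an assumed signature:

```go
// The broker ID comes in with the request; pass it on every Service API call.
func (a *MyAgent) callModel(ctx context.Context, brokerID, prompt string) (string, error) {
	// Assumed signature -- consult the SDK for the real one.
	return ai_studio_sdk.CallLLM(ctx, brokerID, "default-llm", prompt)
}
```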
## Quick Start
### 1. Project Structure
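The layout below is a plausible minimal layout inferred from the file names used later in this guide (`server/plugin.manifest.json`, `server/main.go`, `config.schema.json`):

```text
my-agent/
├── server/
│   ├── main.go               # plugin implementation (step 3)
│   └── plugin.manifest.json  # plugin manifest (step 2)
└── config.schema.json        # optional per-agent config schema
```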
### 2. Create Manifest
`server/plugin.manifest.json`:
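The manifest body is collapsed in this export. The sketch below uses only values named elsewhere on this page (`hook_type: agent` from the prerequisites, `plugin_type: "agent"` and the `llms.proxy` permission from Troubleshooting); the overall key layout is an assumption:

```json
{
  "name": "my-agent",
  "plugin_type": "agent",
  "hook_type": "agent",
  "permissions": ["llms.proxy"]
}
```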
### 3. Implement Agent Plugin
`server/main.go`:
### 4. Build and Deploy
### 5. Create Agent Object
Create an Agent Object that binds the plugin to an App.

Prerequisites:

- An active plugin with `hook_type: agent`
- An App with at least one LLM assigned, an active credential, and optionally tools and datasources
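A request-body sketch for creating the Agent Object (values are illustrative; the `POST /api/v1/agents` endpoint appears in the API Reference, and the fields are documented in the table that follows):

```json
{
  "name": "Echo Agent",
  "description": "Wraps LLM responses with custom formatting",
  "plugin_id": 42,
  "app_id": 7,
  "config": { "greeting": "Hello" },
  "group_ids": [1],
  "is_active": true
}
```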
| Field | Type | Description |
|---|---|---|
| `name` | string | Display name for the agent |
| `description` | string | Description shown to users |
| `plugin_id` | uint | ID of the agent plugin to use |
| `app_id` | uint | ID of the App providing resources |
| `config` | object | Plugin-specific configuration (passed as `ConfigJson`) |
| `group_ids` | []uint | Groups that can access this agent (empty = public) |
| `is_active` | bool | Whether the agent is available to users |
| `namespace` | string | Optional namespace for multi-tenant deployments |
### 6. Use in Chat Interface
Active agents appear in the Chat section of the AI Portal alongside managed chats. Users see the agents they have access to based on group membership.

SSE Communication Flow:

1. Establish the SSE connection.
2. Receive the session info (first message).
3. Send messages via POST.
4. Receive the streaming response via SSE.
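The per-step payloads are not reproduced in this export. As a client-side illustration, the self-contained sketch below parses `data:` lines from an SSE stream into chunk events; the event JSON shape (`type`, `content`, `is_final`) is an assumption, not something this page specifies.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// ChunkEvent is an assumed shape for the streamed agent chunks.
type ChunkEvent struct {
	Type    string `json:"type"`
	Content string `json:"content"`
	IsFinal bool   `json:"is_final"`
}

// parseSSE extracts JSON payloads from "data: ..." lines of an SSE stream.
func parseSSE(stream string) ([]ChunkEvent, error) {
	var events []ChunkEvent
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		line := sc.Text()
		if !strings.HasPrefix(line, "data: ") {
			continue // ignore event names, comments, and blank keep-alives
		}
		var ev ChunkEvent
		if err := json.Unmarshal([]byte(strings.TrimPrefix(line, "data: ")), &ev); err != nil {
			return nil, err
		}
		events = append(events, ev)
	}
	return events, sc.Err()
}

func main() {
	raw := "data: {\"type\":\"CONTENT\",\"content\":\"Hello\"}\n\n" +
		"data: {\"type\":\"DONE\",\"is_final\":true}\n"
	events, _ := parseSSE(raw)
	for _, ev := range events {
		fmt.Println(ev.Type, ev.Content, ev.IsFinal)
	}
}
```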
## Agent Message Request
The `AgentMessageRequest` provides rich context for your agent:
### Available Tools

### Available Datasources

### Available LLMs

### Configuration
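Per-agent configuration reaches the plugin as a JSON string (`ConfigJson` in the Agent Object table above); parsing might look like this self-contained sketch, where the config keys are invented for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AgentConfig holds the per-agent configuration; the keys here are
// illustrative -- real agents define their own schema.
type AgentConfig struct {
	Greeting  string `json:"greeting"`
	MaxTokens int    `json:"max_tokens"`
}

// parseConfig decodes the ConfigJson payload, falling back to defaults
// for keys the admin did not set.
func parseConfig(configJson string) (AgentConfig, error) {
	cfg := AgentConfig{Greeting: "Hello", MaxTokens: 1024} // defaults
	if configJson == "" {
		return cfg, nil
	}
	err := json.Unmarshal([]byte(configJson), &cfg)
	return cfg, err
}

func main() {
	cfg, _ := parseConfig(`{"greeting":"Hi there"}`)
	fmt.Println(cfg.Greeting, cfg.MaxTokens)
}
```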
Custom configuration is passed to the plugin as a JSON string and should be parsed with standard JSON decoding.

### Conversation History
## Streaming Responses
Agent plugins use server-streaming gRPC to send real-time responses.

### Chunk Types
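The helper sections below are collapsed in this export; the self-contained sketch bundles all five chunk kinds with local stand-in types (the names mirror this page's terminology, not the SDK's generated identifiers):

```go
package main

import "fmt"

// Stand-ins for the SDK's chunk enum and streaming types.
const (
	ChunkContent  = "CONTENT"
	ChunkThinking = "THINKING"
	ChunkToolCall = "TOOL_CALL"
	ChunkError    = "ERROR"
	ChunkDone     = "DONE"
)

type Chunk struct {
	Type    string
	Content string
	IsFinal bool
}

type Stream interface{ Send(*Chunk) error }

// Helpers matching the Send sections below.
func sendContent(s Stream, text string) error  { return s.Send(&Chunk{Type: ChunkContent, Content: text}) }
func sendThinking(s Stream, note string) error { return s.Send(&Chunk{Type: ChunkThinking, Content: note}) }
func sendToolCall(s Stream, call string) error { return s.Send(&Chunk{Type: ChunkToolCall, Content: call}) }

// Errors are terminal: ERROR chunks set IsFinal, per the best practices below.
func sendError(s Stream, msg string) error { return s.Send(&Chunk{Type: ChunkError, Content: msg, IsFinal: true}) }
func sendDone(s Stream) error              { return s.Send(&Chunk{Type: ChunkDone, IsFinal: true}) }

// memStream collects chunks in memory for demonstration.
type memStream struct{ chunks []*Chunk }

func (m *memStream) Send(c *Chunk) error { m.chunks = append(m.chunks, c); return nil }

func main() {
	s := &memStream{}
	_ = sendThinking(s, "looking things up")
	_ = sendContent(s, "Here is the answer.")
	_ = sendDone(s)
	for _, c := range s.chunks {
		fmt.Println(c.Type, c.IsFinal)
	}
}
```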
#### Send Content

#### Send Thinking

#### Send Tool Call

#### Send Error

#### Send Done
## Complete Example: Echo Agent
A simple agent that wraps LLM responses with custom formatting:
### Configuration Schema
`config.schema.json`:
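The schema body is collapsed in this export; a minimal JSON Schema sketch with an invented `greeting` key:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "greeting": {
      "type": "string",
      "description": "Prefix added to every response"
    }
  },
  "additionalProperties": false
}
```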
## Advanced Patterns
### Multi-Step Reasoning
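The original expandable example is unavailable; the self-contained sketch below shows the general pattern — emit THINKING chunks between steps and CONTENT at the end — with a stub standing in for the real `ai_studio_sdk.CallLLM()` helper.

```go
package main

import "fmt"

type Chunk struct {
	Type    string
	Content string
	IsFinal bool
}

type Stream interface{ Send(*Chunk) error }

// callLLM is a stub standing in for the real ai_studio_sdk.CallLLM() helper.
func callLLM(prompt string) string { return "step result for: " + prompt }

// multiStep runs a fixed plan, surfacing each step as a THINKING chunk
// so the user sees progress, then streams the final answer.
func multiStep(s Stream, question string) error {
	steps := []string{"analyze the question", "gather context", "draft the answer"}
	var scratch string
	for _, step := range steps {
		if err := s.Send(&Chunk{Type: "THINKING", Content: step}); err != nil {
			return err
		}
		scratch = callLLM(step + ": " + question + " | " + scratch)
	}
	if err := s.Send(&Chunk{Type: "CONTENT", Content: scratch}); err != nil {
		return err
	}
	return s.Send(&Chunk{Type: "DONE", IsFinal: true})
}

type memStream struct{ chunks []*Chunk }

func (m *memStream) Send(c *Chunk) error { m.chunks = append(m.chunks, c); return nil }

func main() {
	s := &memStream{}
	_ = multiStep(s, "why is the sky blue?")
	for _, c := range s.chunks {
		fmt.Println(c.Type)
	}
}
```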
### Tool Integration
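Also unavailable in the original; since tool execution must be implemented by the agent (see Limitations), here is a sketch of name-based dispatch — the tool registry and its entries are invented for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

type Chunk struct {
	Type    string
	Content string
	IsFinal bool
}

type Stream interface{ Send(*Chunk) error }

// Tool is a locally implemented tool; a real agent would wrap the tools
// advertised in the AgentMessageRequest instead.
type Tool func(args string) (string, error)

var tools = map[string]Tool{
	"upper": func(args string) (string, error) { return strings.ToUpper(args), nil },
}

// runTool announces the call with a TOOL_CALL chunk, executes it,
// and streams the result (or a final ERROR chunk on failure).
func runTool(s Stream, name, args string) error {
	tool, ok := tools[name]
	if !ok {
		return s.Send(&Chunk{Type: "ERROR", Content: "unknown tool: " + name, IsFinal: true})
	}
	if err := s.Send(&Chunk{Type: "TOOL_CALL", Content: name + "(" + args + ")"}); err != nil {
		return err
	}
	out, err := tool(args)
	if err != nil {
		return s.Send(&Chunk{Type: "ERROR", Content: err.Error(), IsFinal: true})
	}
	return s.Send(&Chunk{Type: "CONTENT", Content: out})
}

type memStream struct{ chunks []*Chunk }

func (m *memStream) Send(c *Chunk) error { m.chunks = append(m.chunks, c); return nil }

func main() {
	s := &memStream{}
	_ = runTool(s, "upper", "hello")
	for _, c := range s.chunks {
		fmt.Println(c.Type, c.Content)
	}
}
```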
## Best Practices
### Performance
- Use non-blocking I/O for external calls
- Stream responses as they arrive
- Set appropriate timeouts
- Cache frequently accessed data
- Minimize LLM calls where possible
### Error Handling
- Always send an ERROR chunk on failures
- Set `IsFinal=true` for ERROR chunks
- Provide descriptive error messages
- Log errors with context (session ID, plugin ID)
- Implement graceful fallbacks
### User Experience
- Send THINKING chunks for long operations
- Stream content as it’s generated
- Provide progress updates
- Use metadata for rich responses
- Always send DONE chunk
### Security
- Validate all inputs
- Sanitize user messages
- Don’t expose sensitive data in responses
- Use permission scopes appropriately
- Log security-relevant events
## API Reference
### Agent Management (Admin Only)
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/agents` | GET | List all accessible agents |
| `/api/v1/agents` | POST | Create new agent config |
| `/api/v1/agents/{id}` | GET | Get agent details |
| `/api/v1/agents/{id}` | PUT | Update agent config |
| `/api/v1/agents/{id}` | DELETE | Delete agent config |
| `/api/v1/agents/{id}/activate` | POST | Activate agent |
| `/api/v1/agents/{id}/deactivate` | POST | Deactivate agent |
### Agent Communication (Users)
| Endpoint | Method | Description |
|---|---|---|
| `/api/agents/{id}/stream` | GET | Establish SSE connection |
| `/api/agents/{id}/message` | POST | Send message to active session |
## SessionAware Pattern (Recommended)
Agent plugins should implement the `SessionAware` interface to warm up the Service API connection:
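The `SessionAware` interface and `OnSessionReady` are named by this page; everything else in this self-contained sketch (the interface shape, the warm-up work) is an assumption.

```go
package main

import "fmt"

// SessionAware is sketched locally; the real interface lives in the SDK.
// OnSessionReady fires once per session, before the first message.
type SessionAware interface {
	OnSessionReady(sessionID string) error
}

type MyAgent struct {
	warmed map[string]bool // sessions whose Service API connection is warm
}

// OnSessionReady warms up the Service API connection so the first
// HandleAgentMessage call does not pay the setup cost.
func (a *MyAgent) OnSessionReady(sessionID string) error {
	if a.warmed == nil {
		a.warmed = make(map[string]bool)
	}
	// In a real plugin: dial the broker / prime ai_studio_sdk here.
	a.warmed[sessionID] = true
	return nil
}

func main() {
	var sa SessionAware = &MyAgent{}
	_ = sa.OnSessionReady("session-1")
	fmt.Println("warmed")
}
```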
### Service Broker ID
For LLM calls via the Service API, always set the service broker ID from the request.

## Troubleshooting
### Agent Not Appearing in Chat

- Check that `plugin_type` is `"agent"`
- Verify the agent configuration is created for an app
- Ensure the plugin is active (`is_active: true`)
- Check logs for initialization errors
### LLM Calls Failing

- Verify the `llms.proxy` permission in the manifest
- Check that the SDK is initialized
- Ensure LLMs are available in the app configuration
- Review LLM provider credentials
### Streaming Not Working

- Ensure you're sending chunks sequentially
- Always set `IsFinal=true` for the final chunk
- Send a DONE chunk at the end
- Check for errors in `stream.Send()`
### Context Not Available

- Verify the plugin context is passed correctly
- Check that the broker ID is set for Service API calls
- Ensure a session ID is provided
- Review plugin initialization
## Working Example
See the Echo Agent example in the repository at `examples/plugins/studio/echo-agent/`.
The Echo Agent demonstrates:

- Basic `HandleAgentMessage` implementation
- LLM selection from available LLMs
- Calling LLMs via `ai_studio_sdk.CallLLM()`
- Streaming responses back to users
- Configuration handling via JSON schema
- Session warmup with `OnSessionReady`
## Limitations (Experimental)
- No built-in conversation persistence (agents must manage their own state if needed)
- Tool execution must be implemented by the agent
- No automatic retry on LLM failures
- Single concurrent conversation per session