Availability
| Edition | Deployment Type |
|---|---|
| Community & Enterprise | Self-Managed, Hybrid |
AI Studio is the central management hub of the Tyk AI platform. It is the brain of the system — where administrators configure LLM providers, manage users, monitor usage, and extend the platform with plugins. When deployed in a hub-and-spoke topology, it also acts as the control plane that governs all connected Edge Gateways.
High-Level Architecture
AI Studio runs as a single binary that starts multiple servers:
| Server | Port | Purpose |
|---|---|---|
| REST API + Admin UI | 8080 | Web interface and programmatic management |
| Embedded Gateway | 9090 | Proxies LLM requests directly (standalone mode) |
| gRPC Control Server | 50051 | Hub-and-spoke control plane (only in control mode) |
The gRPC Control Server starts only when `GATEWAY_MODE=control` is set. In standalone mode (the default), AI Studio handles everything locally without Edge Gateways.
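The mode switch can be pictured as a small gate on server startup. The sketch below is illustrative only (the function name and return shape are hypothetical), but the ports match the table above:

```python
def servers_to_start(env: dict) -> dict:
    """Return the servers this process should launch, keyed by name.

    Hypothetical sketch of the startup decision; ports match the table above.
    """
    servers = {"REST API + Admin UI": 8080, "Embedded Gateway": 9090}
    # The control-plane server is only added when GATEWAY_MODE=control.
    if env.get("GATEWAY_MODE") == "control":
        servers["gRPC Control Server"] = 50051
    return servers

servers_to_start({})                           # standalone: no gRPC server
servers_to_start({"GATEWAY_MODE": "control"})  # control: adds port 50051
```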
Core Features
AI Studio provides the following capabilities out of the box:
| Feature | Description |
|---|---|
| LLM Management | Configure and manage connections to LLM providers |
| Application Management | Create apps with credentials, budgets, and LLM access |
| User Management & RBAC | Users, groups, roles, and access control |
| Analytics & Monitoring | Token usage, cost tracking, and dashboards |
| Plugin System | Extend AI Studio with UI, Agent, and Gateway plugins |
| Secrets Management | Secure storage and reference for API keys |
| Embedded Gateway | Built-in LLM proxy (standalone mode) |
| Edge Gateway Management | Register, monitor, and reload Edge Gateways (control mode) |
| Plugin Marketplace | Discover and install community plugins |
| Documentation Server | Built-in docs site served at port 8989 |
Configuration Management
Configuration Management is the heart of AI Studio. It is where administrators define what LLMs are available, how they are accessed, and what rules govern their use.
LLM Provider Configuration
AI Studio supports multiple LLM vendors through a unified configuration model:
Supported Vendors:
| Vendor | Key | Notes |
|---|---|---|
| OpenAI | openai | GPT-4, GPT-3.5, etc. |
| Anthropic | anthropic | Claude models |
| Google Vertex AI | vertex | Gemini via gcloud |
| Google AI | google_ai | Gemini via API key |
| Hugging Face | huggingface | Open-source models |
| Ollama | ollama | Self-hosted models |
Model Pricing
To enable cost tracking, administrators define per-token prices for each model.
The Analytics Engine uses these prices to calculate the cost of every LLM interaction automatically.
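The calculation itself is straightforward: input and output tokens are priced separately. This sketch assumes a hypothetical per-1M-token price structure; the actual price fields in AI Studio may differ:

```python
def interaction_cost(prompt_tokens: int, response_tokens: int, price: dict) -> float:
    """Cost of one LLM interaction from per-token prices.

    `price` holds hypothetical per-1M-token rates for input and output.
    """
    return (prompt_tokens * price["input_per_1m"]
            + response_tokens * price["output_per_1m"]) / 1_000_000

# Example: a model priced at $10/1M input tokens and $30/1M output tokens.
price = {"input_per_1m": 10.0, "output_per_1m": 30.0}
interaction_cost(1200, 400, price)  # → 0.024
```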
Application (App) Management
Applications are the access credentials that developers and systems use to interact with LLMs through the proxy.
Secrets Management
API keys and sensitive values can be stored securely and referenced by name:
For example, reference `$SECRET/MyOpenAIKey` in an LLM configuration instead of the raw key.
This prevents sensitive credentials from being exposed in configuration exports or logs.
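The resolution step can be sketched as a simple lookup. This is an illustrative model of the reference syntax, not the actual implementation (the real store is encrypted at rest):

```python
SECRET_PREFIX = "$SECRET/"

def resolve_secret_ref(config_value: str, secret_store: dict) -> str:
    """Replace a $SECRET/<name> reference with the stored value.

    Hypothetical sketch: plain values pass through unchanged.
    """
    if config_value.startswith(SECRET_PREFIX):
        name = config_value[len(SECRET_PREFIX):]
        return secret_store[name]
    return config_value

store = {"MyOpenAIKey": "sk-example-key"}
resolve_secret_ref("$SECRET/MyOpenAIKey", store)  # → "sk-example-key"
resolve_secret_ref("plain-value", store)          # → "plain-value"
```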
Content Filters
Filters are rules attached to LLMs that can block or modify requests/responses. They are implemented as plugins with the pre_auth, auth, or post_auth hook types and are associated with specific LLM configurations.
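A filter's logic can be pictured as a plain function attached to one of those hooks. The hook wiring, function signature, and return shape below are all hypothetical, not the plugin SDK's actual interface:

```python
# Illustrative post_auth content filter: block requests containing disallowed terms.
BLOCKED_TERMS = {"ssn", "credit card"}

def post_auth_filter(request_body: str) -> dict:
    """Return an allow/deny decision for a request body (sketch only)."""
    lowered = request_body.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return {"allowed": False, "reason": f"blocked term: {term}"}
    return {"allowed": True, "reason": None}

post_auth_filter("please summarize this memo")["allowed"]   # → True
post_auth_filter("here is my credit card number")["allowed"]  # → False
```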
User Management & RBAC
AI Studio uses a group-based access control model. Access to resources is granted through group membership, not individual user permissions.
| Role | IsAdmin | ShowPortal | Capabilities |
|---|---|---|---|
| Super Admin | ✅ (ID=1) | ✅ | Full access, manages admins, SSO config, audit logs |
| Admin | ✅ | ✅ | Manages users, groups, LLMs, plugins |
| Developer | ❌ | ✅ | Portal access, creates apps, uses tools |
| Chat User | ❌ | ❌ | Chat interface access only |
Analytics & Monitoring
AI Studio automatically collects and stores analytics data for every LLM interaction that flows through the system.
Data Collection Flow
What Gets Recorded
Every LLM interaction records:
| Field | Description |
|---|---|
| timestamp | When the request occurred |
| user_id | Which user made the request |
| app_id | Which application was used |
| llm_id | Which LLM configuration was targeted |
| vendor | LLM provider (openai, anthropic, etc.) |
| model_name | Specific model used (e.g. gpt-4-turbo) |
| prompt_tokens | Input token count |
| response_tokens | Output token count |
| total_tokens | Combined token count |
| cost | Calculated cost (using model pricing) |
| latency_ms | Request duration in milliseconds |
| interaction_type | chat or proxy |
| cache_write_tokens | Tokens written to cache (Anthropic) |
| cache_read_tokens | Tokens read from cache (Anthropic) |
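One record per interaction can be modeled as a simple structure. This dataclass is an illustration of the fields above (omitting the Anthropic cache fields), not AI Studio's actual storage schema:

```python
import time
from dataclasses import dataclass

@dataclass
class AnalyticsRecord:
    """One row per LLM interaction, mirroring the field table (illustrative)."""
    timestamp: float
    user_id: int
    app_id: int
    llm_id: int
    vendor: str
    model_name: str
    prompt_tokens: int
    response_tokens: int
    total_tokens: int
    cost: float
    latency_ms: int
    interaction_type: str  # "chat" or "proxy"

record = AnalyticsRecord(time.time(), 1, 7, 3, "openai", "gpt-4-turbo",
                         1200, 400, 1600, 0.024, 850, "proxy")
```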
Plugin System
The Plugin System is AI Studio’s extensibility layer. Plugins run as isolated processes communicating over gRPC, providing security and fault tolerance. All plugins use a Unified Plugin SDK that works in both AI Studio and Edge Gateway contexts.
Plugin Distribution
Plugins can be distributed in three ways:
| Method | Description | Example |
|---|---|---|
| Local Binary | Path to executable on disk | /usr/local/bin/my-plugin |
| Remote Binary | URL to download | https://example.com/plugin |
| OCI Artifact | Container registry reference | oci://ghcr.io/org/plugin:v1.0.0 |
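The three distribution methods are distinguishable from the reference string itself. The following heuristic is an illustrative sketch, not AI Studio's actual resolver:

```python
def classify_plugin_source(ref: str) -> str:
    """Classify a plugin distribution reference (illustrative heuristic)."""
    if ref.startswith("oci://"):
        return "oci"            # container registry artifact
    if ref.startswith(("http://", "https://")):
        return "remote"         # binary downloaded from a URL
    return "local"              # path to an executable on disk

classify_plugin_source("oci://ghcr.io/org/plugin:v1.0.0")  # → "oci"
classify_plugin_source("https://example.com/plugin")       # → "remote"
classify_plugin_source("/usr/local/bin/my-plugin")         # → "local"
```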
Plugin Marketplace
AI Studio includes a built-in marketplace for discovering and installing community plugins:
CE vs Enterprise: Community Edition supports one official Tyk marketplace. Enterprise Edition supports multiple custom marketplace sources with full management UI.
How Configuration Synchronization Works
Tyk AI Studio uses a checksum-based system to track configuration synchronization between the control plane and edge gateways.
How It Works
- Checksum Generation: When configuration changes occur on the control plane, a SHA-256 checksum is computed from the serialized configuration snapshot
- Heartbeat Reporting: Edge gateways report their loaded configuration checksum in each heartbeat
- Status Comparison: The control plane compares reported checksums to determine sync status
- UI Notifications: The admin UI displays sync status and notifies administrators when edges are out of sync
- On configuration change, an admin pushes a reload signal. This can target all gateways or a specific namespace. Each gateway then pulls the latest snapshot.
- Namespaces control what gets loaded onto each gateway. LLMs, Apps, Filters, and Plugins can all be namespaced.
- If the hub is unreachable, gateways continue operating from their last-known snapshot stored in a local database (SQLite or PostgreSQL).
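The checksum step can be sketched with standard hashing: hash a canonical serialization of the snapshot so that identical configurations always produce identical checksums, regardless of key order. The snapshot shape below is a placeholder, not the real schema:

```python
import hashlib
import json

def snapshot_checksum(snapshot: dict) -> str:
    """SHA-256 over a canonically serialized config snapshot (sketch).

    Keys are sorted so the same configuration always hashes identically.
    """
    payload = json.dumps(snapshot, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

control_plane = {"llms": [{"id": 1, "vendor": "openai"}], "filters": []}
# An edge reporting the same config (different key order) produces the same checksum.
edge_reported = snapshot_checksum({"filters": [], "llms": [{"id": 1, "vendor": "openai"}]})

snapshot_checksum(control_plane) == edge_reported  # → True: edge is in sync
```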
What Gets Synced to Gateways
| Synced (part of config snapshot) | NOT synced (Studio-only) |
|---|---|
| LLM Configurations | Tools |
| Apps | Data Sources |
| Filters | Chat configurations |
| Plugins | User management |
| Model Prices | |
| Model Routers (Enterprise) | |
Note: Apps are included in the sync but are not part of the checksum calculation because they change frequently. Credentials are not pulled until a gateway actually needs them — this is a pull-on-miss caching strategy that ensures the admin retains ongoing control over access tokens.
Sync Status Values
| Status | Description | UI Indicator |
|---|---|---|
| In Sync | Edge has the current configuration | Green chip |
| Pending | Edge needs a configuration update | Yellow chip |
| Stale | Edge has been out of sync for >15 minutes | Orange chip |
| Unknown | Edge hasn’t reported a checksum yet | Gray chip |
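The status a gateway receives follows directly from the checksum comparison and how long it has been out of sync. This function is an illustrative mapping onto the table above, not the control plane's actual code:

```python
STALE_AFTER_SECONDS = 15 * 60  # out of sync for >15 minutes → Stale

def sync_status(expected: str, reported, out_of_sync_for: float = 0.0) -> str:
    """Map a reported checksum to one of the sync statuses above (sketch)."""
    if reported is None:
        return "Unknown"        # edge hasn't reported a checksum yet
    if reported == expected:
        return "In Sync"
    if out_of_sync_for > STALE_AFTER_SECONDS:
        return "Stale"
    return "Pending"

sync_status("abc123", "abc123")            # → "In Sync"
sync_status("abc123", None)                # → "Unknown"
sync_status("abc123", "old999", 20 * 60)   # → "Stale"
sync_status("abc123", "old999", 60)        # → "Pending"
```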
Pushing Configuration
Configuration changes are pushed to edge gateways on-demand (not automatically) to ensure administrators maintain control over when changes are deployed.
Push Configuration Modal
Click the Push Configuration button to open the push modal. You can choose to:
- Push to All Namespaces: Sends configuration to all connected edge gateways
- Push to Specific Namespace: Sends configuration only to edges in a selected namespace (Enterprise)
Push Process
When you push configuration:
- The control plane generates a new configuration snapshot for the target namespace(s)
- Edge gateways receive a reload signal via gRPC
- Each edge fetches the new configuration and applies it
- Edges report the new checksum in their next heartbeat
- The sync status updates to reflect the new state
Configuration Reference
For detailed documentation on all environment variables, see the Configuration Reference.