
Availability

| Edition | Deployment Type |
|---|---|
| Community & Enterprise | Self-Managed, Hybrid |
AI Studio is the central management hub of the Tyk AI platform. It is the brain of the system — where administrators configure LLM providers, manage users, monitor usage, and extend the platform with plugins. When deployed in a hub-and-spoke topology, it also acts as the control plane that governs all connected Edge Gateways.

High-Level Architecture

AI Studio runs as a single binary that starts multiple servers:
| Server | Port | Purpose |
|---|---|---|
| REST API + Admin UI | 8080 | Web interface and programmatic management |
| Embedded Gateway | 9090 | Proxies LLM requests directly (standalone mode) |
| gRPC Control Server | 50051 | Hub-and-spoke control plane (control mode only) |
The gRPC Control Server only starts when GATEWAY_MODE=control is set. In standalone mode (the default), AI Studio handles everything locally without Edge Gateways.
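The mode-dependent startup can be sketched as follows. This is a minimal illustration, not AI Studio's actual code: the function and server names are hypothetical, while the ports and the GATEWAY_MODE=control behavior come from the documentation above.

```python
import os

def servers_to_start(env: dict) -> list[str]:
    """Return the servers AI Studio would start for a given environment."""
    servers = ["rest_api_admin_ui:8080", "embedded_gateway:9090"]
    # The gRPC control server only starts in control mode;
    # standalone is the default.
    if env.get("GATEWAY_MODE", "standalone") == "control":
        servers.append("grpc_control:50051")
    return servers

# Standalone (default): no control server
standalone = servers_to_start({})
# Control mode: all three servers
control = servers_to_start({"GATEWAY_MODE": "control"})
```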

Core Features

AI Studio provides the following capabilities out of the box:
| Feature | Description |
|---|---|
| LLM Management | Configure and manage connections to LLM providers |
| Application Management | Create apps with credentials, budgets, and LLM access |
| User Management & RBAC | Users, groups, roles, and access control |
| Analytics & Monitoring | Token usage, cost tracking, and dashboards |
| Plugin System | Extend AI Studio with UI, Agent, and Gateway plugins |
| Secrets Management | Secure storage and reference for API keys |
| Embedded Gateway | Built-in LLM proxy (standalone mode) |
| Edge Gateway Management | Register, monitor, and reload Edge Gateways (control mode) |
| Plugin Marketplace | Discover and install community plugins |
| Documentation Server | Built-in docs site served at port 8989 |

Configuration Management

Configuration Management is the heart of AI Studio. It is where administrators define what LLMs are available, how they are accessed, and what rules govern their use.

LLM Provider Configuration

AI Studio supports multiple LLM vendors through a unified configuration model:
Supported Vendors:
| Vendor | Key | Notes |
|---|---|---|
| OpenAI | openai | GPT-4, GPT-3.5, etc. |
| Anthropic | anthropic | Claude models |
| Google Vertex AI | vertex | Gemini via gcloud |
| Google AI | google_ai | Gemini via API key |
| Hugging Face | huggingface | Open-source models |
| Ollama | ollama | Self-hosted models |
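A unified provider configuration might look like the sketch below. The field names are illustrative assumptions; only the vendor keys and the secret-reference syntax appear in this documentation.

```python
# Hypothetical shape of a unified LLM provider configuration.
# Vendor keys come from the table above; all other field names
# are illustrative, not AI Studio's actual schema.
SUPPORTED_VENDORS = {"openai", "anthropic", "vertex",
                     "google_ai", "huggingface", "ollama"}

llm_config = {
    "name": "production-gpt4",         # assumed field
    "vendor": "openai",                # key from the vendor table
    "model": "gpt-4-turbo",            # assumed field
    "api_key": "$SECRET/MyOpenAIKey",  # secret reference, not a raw key
}

assert llm_config["vendor"] in SUPPORTED_VENDORS
```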

Model Pricing

To enable cost tracking, administrators define per-token prices for each model. The Analytics Engine uses these prices to automatically calculate the cost of every LLM interaction.
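The calculation can be sketched as follows. The per-1,000-token pricing unit and separate input/output rates are assumptions (the prices shown are made up), but the inputs match the token counts the Analytics Engine records.

```python
def interaction_cost(prompt_tokens: int, response_tokens: int,
                     input_price_per_1k: float,
                     output_price_per_1k: float) -> float:
    """Cost of one LLM interaction, assuming per-1,000-token pricing."""
    return ((prompt_tokens / 1000) * input_price_per_1k
            + (response_tokens / 1000) * output_price_per_1k)

# 1,200 input tokens at $0.01/1k plus 300 output tokens at $0.03/1k
cost = interaction_cost(1200, 300,
                        input_price_per_1k=0.01,
                        output_price_per_1k=0.03)
```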

Application (App) Management

Applications are the access credentials that developers and systems use to interact with LLMs through the proxy.

Secrets Management

API keys and sensitive values can be stored securely and referenced by name:
$SECRET/MyOpenAIKey   ← Reference in LLM config instead of raw key
This prevents sensitive credentials from being exposed in configuration exports or logs.
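Resolution of such a reference at request time can be sketched like this. The $SECRET/<name> syntax is from the documentation; the store lookup itself is a hypothetical illustration.

```python
def resolve(value: str, store: dict) -> str:
    """Replace a $SECRET/<name> reference with the stored secret;
    pass any other value through unchanged."""
    prefix = "$SECRET/"
    if value.startswith(prefix):
        return store[value[len(prefix):]]
    return value

# Illustrative store; the key material here is fake.
store = {"MyOpenAIKey": "sk-example-not-a-real-key"}
resolved = resolve("$SECRET/MyOpenAIKey", store)
```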

Content Filters

Filters are rules attached to LLMs that can block or modify requests/responses. They are implemented as plugins with the pre_auth, auth, or post_auth hook types and are associated with specific LLM configurations.

User Management & RBAC

AI Studio uses a group-based access control model. Access to resources is granted through group membership, not individual user permissions.
| Role | IsAdmin | ShowPortal | Capabilities |
|---|---|---|---|
| Super Admin | ✅ (ID=1) | | Full access, manages admins, SSO config, audit logs |
| Admin | | | Manages users, groups, LLMs, plugins |
| Developer | | | Portal access, creates apps, uses tools |
| Chat User | | | Chat interface access only |

Analytics & Monitoring

AI Studio automatically collects and stores analytics data for every LLM interaction that flows through the system.

Data Collection Flow

What Gets Recorded

Every LLM interaction records:
| Field | Description |
|---|---|
| timestamp | When the request occurred |
| user_id | Which user made the request |
| app_id | Which application was used |
| llm_id | Which LLM configuration was targeted |
| vendor | LLM provider (openai, anthropic, etc.) |
| model_name | Specific model used (e.g. gpt-4-turbo) |
| prompt_tokens | Input token count |
| response_tokens | Output token count |
| total_tokens | Combined token count |
| cost | Calculated cost (using model pricing) |
| latency_ms | Request duration in milliseconds |
| interaction_type | chat or proxy |
| cache_write_tokens | Tokens written to cache (Anthropic) |
| cache_read_tokens | Tokens read from cache (Anthropic) |
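An example record using these fields might look like the following. The field names match the table; all values are made up for illustration.

```python
# Illustrative analytics record; values are invented.
record = {
    "timestamp": "2024-06-01T12:00:00Z",
    "user_id": 42,
    "app_id": 7,
    "llm_id": 3,
    "vendor": "openai",
    "model_name": "gpt-4-turbo",
    "prompt_tokens": 1200,
    "response_tokens": 300,
    "total_tokens": 1500,
    "cost": 0.021,              # derived from model pricing
    "latency_ms": 850,
    "interaction_type": "chat",  # "chat" or "proxy"
    "cache_write_tokens": 0,     # Anthropic only
    "cache_read_tokens": 0,      # Anthropic only
}

# total_tokens is the sum of input and output tokens
assert record["total_tokens"] == record["prompt_tokens"] + record["response_tokens"]
```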

Plugin System

The Plugin System is AI Studio’s extensibility layer. Plugins run as isolated processes communicating over gRPC, providing security and fault tolerance. All plugins use a Unified Plugin SDK that works in both AI Studio and Edge Gateway contexts.

Plugin Distribution

Plugins can be distributed in three ways:
| Method | Description | Example |
|---|---|---|
| Local Binary | Path to executable on disk | /usr/local/bin/my-plugin |
| Remote Binary | URL to download | https://example.com/plugin |
| OCI Artifact | Container registry reference | oci://ghcr.io/org/plugin:v1.0.0 |
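Telling the three reference styles apart can be sketched as below; the detection rules are an assumption based on the example formats in the table.

```python
def plugin_source_kind(ref: str) -> str:
    """Classify a plugin distribution reference into one of the
    three documented methods."""
    if ref.startswith("oci://"):
        return "oci_artifact"          # container registry reference
    if ref.startswith(("http://", "https://")):
        return "remote_binary"         # URL to download
    return "local_binary"              # path on disk

kind = plugin_source_kind("oci://ghcr.io/org/plugin:v1.0.0")
```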

Plugin Marketplace

AI Studio includes a built-in marketplace for discovering and installing community plugins:
CE vs Enterprise: Community Edition supports one official Tyk marketplace. Enterprise Edition supports multiple custom marketplace sources with full management UI.

How Configuration Synchronization Works

Tyk AI Studio uses a checksum-based system to track configuration synchronization between the control plane and edge gateways.

How It Works

  1. Checksum Generation: When configuration changes occur on the control plane, a SHA-256 checksum is computed from the serialized configuration snapshot
  2. Heartbeat Reporting: Edge gateways report their loaded configuration checksum in each heartbeat
  3. Status Comparison: The control plane compares reported checksums to determine sync status
  4. UI Notifications: The admin UI displays sync status and notifies administrators when edges are out of sync
  5. Reload Signal: On configuration change, an admin pushes a reload signal, targeting either all gateways or a specific namespace; each gateway then pulls the latest snapshot
  6. Namespacing: Namespaces control what gets loaded onto each gateway; LLMs, Apps, Filters, and Plugins can all be namespaced
  7. Offline Resilience: If the hub is unreachable, gateways continue operating from their last-known snapshot stored in a local database (SQLite or PostgreSQL)
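The checksum comparison in steps 1-3 can be sketched as follows. SHA-256 over a serialized snapshot is documented; the serialization format used here (canonical JSON) is an assumption.

```python
import hashlib
import json

def snapshot_checksum(snapshot: dict) -> str:
    """SHA-256 checksum of a serialized configuration snapshot.
    Canonical JSON (sorted keys) keeps the checksum deterministic."""
    serialized = json.dumps(snapshot, sort_keys=True).encode()
    return hashlib.sha256(serialized).hexdigest()

control_plane = snapshot_checksum({"llms": ["gpt-4"], "filters": []})
edge_reported = snapshot_checksum({"llms": ["gpt-4"], "filters": []})
stale_edge    = snapshot_checksum({"llms": [], "filters": []})

# Matching checksum means the edge is in sync; a mismatch
# means it needs a reload.
assert control_plane == edge_reported
assert control_plane != stale_edge
```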

What Gets Synced to Gateways

| Synced (part of config snapshot) | NOT synced (Studio-only) |
|---|---|
| LLM Configurations | Tools |
| Apps | Data Sources |
| Filters | Chat configurations |
| Plugins | User management |
| Model Prices | |
| Model Routers (Enterprise) | |
Note: Apps are included in the sync but are not part of the checksum calculation because they change frequently. Credentials are not pulled until a gateway actually needs them — this is a pull-on-miss caching strategy that ensures the admin retains ongoing control over access tokens.

Sync Status Values

| Status | Description | UI Indicator |
|---|---|---|
| In Sync | Edge has the current configuration | Green chip |
| Pending | Edge needs a configuration update | Yellow chip |
| Stale | Edge has been out of sync for >15 minutes | Orange chip |
| Unknown | Edge hasn't reported a checksum yet | Gray chip |

Pushing Configuration

Configuration changes are pushed to edge gateways on-demand (not automatically) to ensure administrators maintain control over when changes are deployed.

Push Configuration Modal

Click the Push Configuration button to open the push modal. You can choose to:
  1. Push to All Namespaces: Sends configuration to all connected edge gateways
  2. Push to Specific Namespace: Sends configuration only to edges in a selected namespace (Enterprise)

Push Process

When you push configuration:
  1. The control plane generates a new configuration snapshot for the target namespace(s)
  2. Edge gateways receive a reload signal via gRPC
  3. Each edge fetches the new configuration and applies it
  4. Edges report the new checksum in their next heartbeat
  5. The sync status updates to reflect the new state

Configuration Reference

To learn more about configuring AI Studio, see the Configuration Reference for detailed documentation of all environment variables.