Core Concepts

Welcome to Tyk AI Studio! Before you dive into specific features, take a moment to review these core concepts; they will help you navigate the platform and its capabilities.

Key Components & Philosophy

Tyk AI Studio is designed as a secure, observable, and extensible gateway for interacting with Large Language Models (LLMs) and other AI services. Key architectural pillars include:

  • AI Gateway: The central gateway managing all interactions between your applications and various LLM providers. It enforces policies, logs activity, and handles vendor abstraction.
  • AI Portal: Empowers developers with a curated catalog of AI tools and services for faster innovation.
  • Chat: Provides a secure and interactive environment for users to engage with LLMs, leveraging integrated tools and data sources.
  • User Management & RBAC: Securely manages users, groups, and permissions. Access to resources like LLMs, Tools, and Data Sources is controlled via group memberships.
  • Extensibility (Tools & Data Sources): Allows integrating external APIs (Tools) and vector databases (Data Sources) into LLM workflows (e.g., for Retrieval-Augmented Generation - RAG).
  • Policy Enforcement (Filters): Intercepts and modifies LLM requests/responses using custom scripts to enforce specific rules or data transformations.
  • Configuration over Code: Many aspects like LLM parameters, Filters, and Budgets are configured through the UI/API rather than requiring code changes.
  • Security First: Features like Secrets Management, SSO integration, and fine-grained access control are integral to the platform.
  • Observability: Includes systems for Analytics & Monitoring and Notifications to track usage, costs, and system events.
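To make the gateway pillar concrete: an application never talks to an LLM vendor directly; it sends its request to the AI Gateway, which enforces policy, logs the call, and handles vendor abstraction. The Python sketch below assembles such a request. The endpoint path, header scheme, and payload shape are illustrative assumptions, not Tyk AI Studio's documented API; consult the API reference for the real contract.

```python
import json

def build_gateway_request(base_url, api_key, model, messages):
    """Assemble an HTTP request for a hypothetical gateway chat endpoint.

    The application holds no vendor credentials: it authenticates to the
    gateway with its own API Key, and the gateway applies policies,
    logging, and vendor abstraction on its behalf.
    """
    url = f"{base_url.rstrip('/')}/v1/chat/completions"  # assumed path
    headers = {
        "Authorization": f"Bearer {api_key}",  # assumed header scheme
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body

# Hypothetical deployment URL and API Key:
url, headers, body = build_gateway_request(
    "https://ai-studio.example.com", "MY_API_KEY",
    "gpt-4", [{"role": "user", "content": "Hello"}],
)
```

Note that swapping the underlying vendor is then a configuration change on the gateway side; the application's request format stays the same.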

Core Entities

Understanding these entities is crucial:

  • User: An individual interacting with Tyk AI Studio, managed within the User Management system.
  • Group: A collection of users; groups are the primary mechanism for assigning access rights to resources via RBAC.
  • API Key: A credential generated by a User so that applications or scripts can access Tyk AI Studio APIs (such as the Proxy) programmatically; it inherits the User’s permissions.
  • LLM Configuration: A specific LLM provider and model setup (e.g., OpenAI GPT-4, Anthropic Claude 3), including parameters and, optionally, associated pricing and budgets.
  • Tool: A definition of an external API (via an OpenAPI spec) that LLMs can invoke during chat sessions to perform actions or retrieve external data.
  • Data Source: A connection to a vector database or other data repository used for Retrieval-Augmented Generation (RAG) within chat sessions.
  • Catalogue (Tools / Data Sources): A collection that groups related Tools or Data Sources for easier management and for assignment to Groups for access control.
  • Secret: A securely stored credential (API key, token) referenced indirectly (e.g., $SECRET/MY_KEY) in configurations such as LLMs, Tools, or Data Sources.
  • Filter: Custom logic (a Tengo script) attached to specific execution points (e.g., pre/post LLM request) to intercept and modify requests/responses.
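To illustrate the Secret indirection, the sketch below models how a `$SECRET/NAME` reference in a configuration value could be resolved against a secret store at load time, so that plaintext credentials never appear in the configuration itself. This is an illustrative Python model of the pattern, not Tyk AI Studio's actual resolution code.

```python
import re

# Matches references of the form $SECRET/NAME inside configuration values.
SECRET_REF = re.compile(r"\$SECRET/([A-Za-z0-9_]+)")

def resolve_secrets(config_value, secret_store):
    """Replace each $SECRET/NAME reference with the stored secret value.

    Configurations (LLMs, Tools, Data Sources) persist only the reference;
    the secret itself lives in the store and is injected at resolution time.
    """
    def lookup(match):
        name = match.group(1)
        if name not in secret_store:
            raise KeyError(f"unknown secret: {name}")
        return secret_store[name]

    return SECRET_REF.sub(lookup, config_value)

# Hypothetical secret store and configuration field:
store = {"MY_KEY": "sk-live-abc123"}
header = resolve_secrets("Bearer $SECRET/MY_KEY", store)
# header is now "Bearer sk-live-abc123"
```

A missing secret raises an error at resolution time rather than silently passing the literal reference to a provider, which is generally the safer failure mode.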

This page provides a high-level overview. Use the sidebar to navigate to the detailed documentation for each feature.