1. Multi-Data-Center Bridge: Introduction
Tyk Multi Data Center Bridge (MDCB, also known as Tyk Sink) acts as a broker between Tyk Gateway instances that are isolated from one another and typically each have their own Redis DB.
To manage physically separate Tyk Gateway clusters from a centralised location, Tyk MDCB provides a remote “back-end” for token and configuration queries.
How Tyk MDCB Works
Tyk MDCB creates a bridge between a configuration source (MongoDB and a centralised Redis DB) and multiple Tyk Gateway instances. This bridge provides API definitions, policy definitions and organisation (tenant) rate limits, and also acts as a data sink for all analytics data gathered by slaved Tyk gateways.
Communication between instances runs through a compressed RPC TCP tunnel between the gateway and MDCB; it is extremely fast and can handle tens of thousands of transactions per second.
2. MDCB Logical Architecture
The Tyk MDCB Logical architecture consists of:
- A master Tyk cluster. This can be active or inactive; what is important is that the master Tyk installation is not tagged, sharded or zoned in any way, and that it stores configurations for all valid APIs (this is to facilitate key creation).
- MDCB instances to handle the RPC connections
- The Tyk slave clusters, each consisting of Tyk nodes and an isolated Redis DB
1. The master nodes
Tyk instances connected to MDCB are slaved, and so only ever hold a locally cached set of key and policy data. To set up slave clusters you must therefore first have a master. The master can be an existing Tyk Gateway setup; it does not need to be created separately, but bear in mind that its key store will hold a copy of ALL tokens across ALL zones.
The Master nodes need to consist of:
- A Dashboard instance
- One or more master Tyk Gateway instances (these load and are aware of all configurations; it is important to ensure they are not public facing)
- A master Redis DB
- A MongoDB replica set for the dashboard and MDCB
- One or more MDCB instances, load balanced with port 9090 open for TCP connections
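To make the MDCB side of this concrete, an MDCB instance is driven by its own configuration file (commonly `tyk_sink.conf`), which points it at the central Redis and the MongoDB replica set. The sketch below is illustrative only: the hostnames are placeholders, and the field names (`listen_port`, `storage`, `analytics.mongo_url`) should be verified against the MDCB documentation for your version; `listen_port` is shown as 9090 to match the load-balanced TCP port described above.

```json
{
  "listen_port": 9090,
  "storage": {
    "type": "redis",
    "host": "redis-master.internal",
    "port": 6379
  },
  "analytics": {
    "mongo_url": "mongodb://mongo-rs0.internal:27017/tyk_analytics"
  }
}
```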
2. The slave cluster(s)
The slave clusters are essentially local caches that run all validation and rate-limiting operations locally, rather than against a remote master, which would introduce latency.
When a request comes into a slaved node, the following set of actions occur:
- Request arrives
- Auth header and API identified
- The local cache is checked for the token; if it is not found there, the gateway attempts to copy the token from the RPC master node (MDCB)
- If the token is found in the master, it is copied to the local cache and used
- If the token is found in the local cache, no remote call is made, and rate limiting and validation run on the local copy
(Note: cached versions are not synchronised back to the master, so setting a short TTL is important to ensure local copies expire and are refreshed regularly)
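The lookup flow above can be sketched as a simple cache-aside pattern. This is an illustrative model, not Tyk's actual implementation: the class and function names are invented for the example, `fetch_from_mdcb` stands in for the RPC call to the master, and the TTL plays the role described in the note above.

```python
import time


class SlaveTokenCache:
    """Illustrative sketch of a slaved gateway's token lookup (not Tyk's real code)."""

    def __init__(self, fetch_from_mdcb, ttl_seconds=60):
        # fetch_from_mdcb stands in for the RPC call to the MDCB master
        self.fetch_from_mdcb = fetch_from_mdcb
        self.ttl = ttl_seconds
        self.cache = {}  # token -> (session_data, expiry_timestamp)

    def lookup(self, token):
        now = time.time()
        entry = self.cache.get(token)
        # Local cache hit: no remote call; validation and rate limiting
        # run against the local copy
        if entry is not None and entry[1] > now:
            return entry[0]
        # Cache miss (or expired TTL): attempt to copy the token from MDCB
        session = self.fetch_from_mdcb(token)
        if session is not None:
            # Local copies are never synced back to the master, so a short
            # TTL ensures stale tokens are re-fetched regularly
            self.cache[token] = (session, now + self.ttl)
        return session
```

A second lookup for the same token within the TTL is served entirely locally, which is what keeps per-request latency independent of the distance to the master.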
A slave cluster consists of the following configuration:
- One or more Tyk Gateway instances specially configured as slaves
- A Redis DB
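A gateway is placed into slave mode via the `slave_options` section of its `tyk.conf`. The fragment below shows the general shape; the values are placeholders (your organisation ID, a Dashboard API key, and the address of your load-balanced MDCB endpoint), and the exact set of fields should be checked against the Tyk Gateway documentation for your version.

```json
{
  "slave_options": {
    "use_rpc": true,
    "rpc_key": "<YOUR-ORG-ID>",
    "api_key": "<DASHBOARD-USER-API-KEY>",
    "connection_string": "mdcb.example.com:9090",
    "enable_rpc_cache": true,
    "group_id": "slave-cluster-1"
  }
}
```

The `group_id` lets several slaved gateways in the same data centre share analytics and cache state as one logical cluster, while `connection_string` points at the MDCB TCP port described in the master-node section above.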