Scaling API management with GitOps: Multi-tenancy, Argo CD and Tyk Operator in action

This blog post summarizes a session from LEAP 2.0: The API governance conference, featuring key takeaways and insights. Explore the full on-demand videos, slides, and more here.

Scaling API management with GitOps puts powerful automation at your fingertips. It’s a topic we’ve examined previously, from GitOps-enabled API management in Kubernetes to what happens when APIOps meets GitOps. 

The LEAP 2.0 API Governance Conference gave us a chance to look at GitOps afresh, so we called on Tamara Evans, Account Director at Tyk, and Alexander Troppmann, Lead Cloud-native Architect for Platform Integration at Zeiss, to share their expertise. Given the increasing complexity of API governance, and the ever-expanding efficiency potential of automation, Tamara and Alex used the LEAP 2.0 Conference to showcase GitOps in action. The duo delivered a live demonstration of how to get a fully functional Tyk data plane up and running in under 20 minutes. 

Read on to discover: 

  • How Zeiss has taken GitOps beyond its infrastructure and applied it to API governance
  • How to bootstrap Argo CD from scratch and implement a multi-tenancy strategy in it
  • How to provision Tyk APIs using Tyk Operator
  • GitOps patterns and practical insights into GitOps-driven API management
  • A real-world example of automated API provisioning in a cloud-native environment

API management for enterprises 

Zeiss is demonstrating the value of efficient, scalable API management at the enterprise level. The global business has in excess of 38,000 employees across its market segments, which include semiconductor manufacturing technology, industrial quality and research, medical technology and consumer markets. 

Right now, one area of focus for Zeiss is software architectures that leverage event-driven microservices running on Kubernetes. It was some of this work that led to the live demonstration on Azure Kubernetes Service at the Tyk LEAP 2.0 Conference. 

Argo CD and Tyk Operator in action 

With a Kubernetes cluster already set up, the demo began with an initial GitOps repository for the bootstrapping process of the cluster:

[Image: GitOps architecture used for the demo setup]

The image above outlines the basic concept of the GitOps architecture used for the demo. The process begins with an Argo CD application called Bootstrap, which is able to bootstrap itself. 

The bootstrapper then pulls in projects, creating app projects as shown in the above diagram, with restrictions for specific tenants.
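
To make the pattern concrete, here's a minimal sketch of what such a bootstrap application could look like as an Argo CD Application; the repository URL, path and project name are placeholders rather than the values used in the demo:

```yaml
# Hypothetical bootstrap application ("app of apps"). The repo URL, path and
# project name are placeholders, not the values used in the demo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bootstrap
  namespace: argocd
spec:
  project: bootstrap
  source:
    repoURL: https://git.example.com/platform/bootstrap-gitops.git
    targetRevision: main
    path: bootstrap        # self-contained folder holding Argo CD and the child apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```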

[Image: the central capabilities app project in Argo CD]

The above shows the central capabilities application project. Argo CD includes some helpful basic security options. For example, you can restrict access to the GitOps repository to one specific team. You can also restrict deployments to specific namespaces. This means you can use the Argo CD app project as a tool to apply basic security restrictions for tenants and teams. 
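
As a rough illustration, an app project carrying those kinds of restrictions might look something like this (the repository URL, namespaces and project name are invented for the example):

```yaml
# Illustrative Argo CD AppProject: one tenant, one allowed Git repository,
# deployments limited to two namespaces. All values are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  description: Tenant project for team A
  sourceRepos:
    - https://git.example.com/team-a/gitops.git   # only this repo may be used as a source
  destinations:
    - server: https://kubernetes.default.svc
      namespace: team-a-dev
    - server: https://kubernetes.default.svc
      namespace: team-a-prod
```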

Note that a platform engineering team is actually a tenant itself (a very big tenant, of course!). In a GitOps architecture, the platform team uses the same principles and approaches as all tenants. 

It’s also worth noting that every step used in this demo forms part of a CI/CD pipeline later on. You just run the pipeline once at the beginning of the setup of a new cluster. 

The first step involves adding the Helm charts:

[Image: adding the Helm chart repositories]

After a repo update, it's time to deploy the custom resource definitions (CRDs) for Argo CD. By deploying the CRDs in isolation, you can redeploy Argo CD more easily later on, as the CRDs aren't part of the Helm chart deployment.
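
If you're using the official argo-cd chart, one way to keep that separation is to switch CRD installation off in the umbrella chart's values and manage the CRD manifests as their own deployment. The exact toggle depends on your chart version, so treat this as a sketch:

```yaml
# Illustrative umbrella-chart values for the official argo-cd chart.
# Assumes the chart exposes a crds.install toggle; check your chart version.
argo-cd:
  crds:
    install: false   # CRDs live in their own deployment, so chart redeploys never touch them
```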

[Image: deploying the Argo CD CRDs in isolation]

This same approach is used for all the central capabilities being installed on the Kubernetes cluster. The Tyk data plane, for example, is also installed using Helm, with the CRDs installed in an isolated way. 

Next, it's time to create the initial namespace for Argo CD and add the credential needed to access the first bootstrap GitOps repository, tagging it so Argo CD can find it. 
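
Declaratively, that credential is simply a Kubernetes Secret carrying the label Argo CD looks for when discovering repositories; a sketch with placeholder values:

```yaml
# Illustrative repository credential for Argo CD; all values are placeholders.
# The label is what lets Argo CD discover the Secret as a repository.
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-gitops-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://git.example.com/platform/bootstrap-gitops.git
  username: git
  password: <access-token>   # placeholder; in practice, inject via a sealed/external secret
```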

[Image: creating the Argo CD namespace and repository credentials]

With Argo CD installed using the Helm chart, you can use Lens to see what's going on in the cluster. 

[Image: the Argo CD pods starting up in Lens]

It will take three to four minutes for all the pods to show up, providing plenty of time to start the bootstrap process. 

The bootstrapper application is the initial application in the GitOps architecture. It lives in a self-contained folder containing the Argo CD application, which is then used to pick up the Argo CD Helm chart deployment.

[Image: the bootstrapper application folder]

This puts the official Argo CD Helm chart into an umbrella chart. Using the Helm chart like this makes it easy to have a self-managed Argo CD instance later on.
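
In practice, the umbrella chart can be as small as a Chart.yaml that declares the official chart as a dependency; the version below is a placeholder:

```yaml
# Illustrative Chart.yaml for an umbrella chart wrapping the official argo-cd chart.
# The dependency version is a placeholder; pin the chart version you have tested.
apiVersion: v2
name: argocd-umbrella
version: 0.1.0
dependencies:
  - name: argo-cd
    version: 7.x.x
    repository: https://argoproj.github.io/argo-helm
```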

When you start the bootstrapper, it pulls in the Argo CD application and reconciles everything the Helm charts have already put in place.

The bootstrapper also has a project called central capabilities, which is quite interesting. One of the GitOps patterns in use for the GitOps architecture here is the application pointer pattern, which supports a linking strategy to pull in another GitOps repository: the central capabilities. Within the central capabilities is the deployment specification for all cluster-wide applications (for example, the Tyk data plane). 
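
A minimal sketch of such an application pointer (repository URL and project name are placeholders):

```yaml
# Illustrative application pointer: an Argo CD Application whose only job is to
# link another GitOps repository into the cluster. Values are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: central-capabilities
  namespace: argocd
spec:
  project: central-capabilities
  source:
    repoURL: https://git.example.com/platform/central-capabilities.git
    targetRevision: main
    path: .                  # root application of the linked repository
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```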

[Image: the central capabilities application pointer]

There is also a projects application, which pulls in details from the projects folder, defining security constraints for the application and removing the ability to use the default app project (for security reasons). It also defines the app project for all the Argo CD bootstrapping components. 

To add a new tenant, all you need to do is create a project for the tenant itself, providing the tenant namespaces and the tenant GitOps repository location. 

[Image: the tenant app project definition]

You also need to add an application pointer to the root application of the tenant-specific GitOps repository.

[Image: the application pointer to the tenant's root application]

Checking back in Lens, if everything is ready, it's time to connect to the cluster. You'll need the password for the Argo CD instance, then you can set up the port forward and connect. 

[Image: connecting to Argo CD via port forward]

Now you can start the bootstrapper to deploy some applications. Bootstrapping the projects is the last step in the full bootstrap process: hit the enter key, then sit back and watch the magic happen. 

[Image: bootstrapping the projects]

You should now be able to see lots of applications deployed on the cluster: Argo CD itself, the initial bootstrap application, the central capabilities, cert-manager and much more. 

[Image: the applications deployed on the cluster]

As everything on Kubernetes starts at the same time, you’ll need to wait a moment for everything to be working as expected. 

In terms of using GitOps to deploy Tyk Operator and the other Tyk components, you can head to the platform engineering tenant repository. 

[Image: the platform engineering tenant repository]

Using the folder-per-environment pattern, it's possible to have different Tyk versions running on different clusters. That way, it's easy to test new versions to check the impact of any major changes on your configurations, modify specific settings and so on. 

You can use a classic Redis deployment for an on-premises data plane (rather than a managed Redis instance in Azure) to keep latency low. 

There are three different approaches available in Argo CD to deploy a Helm chart. Using value files makes it possible to have the same Redis deployment across all environments.
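
A sketch of the value-files approach for that Redis deployment, with an invented repository URL, chart path and file names:

```yaml
# Illustrative value-files approach: shared defaults plus a per-environment
# override file. Repository URL, chart path and file names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tyk-redis
  namespace: argocd
spec:
  project: platform-engineering
  source:
    repoURL: https://git.example.com/platform/central-capabilities.git
    targetRevision: main
    path: charts/redis
    helm:
      valueFiles:
        - values.yaml              # shared across all environments
        - values-production.yaml   # environment-specific overrides
  destination:
    server: https://kubernetes.default.svc
    namespace: tyk
```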

To deploy the Tyk data plane, pre-render the Helm chart YAML manifest, then apply changes using patches. 
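
The session doesn't prescribe specific tooling for this, but one common way to realize pre-render-and-patch is to commit the output of helm template to the repository and layer Kustomize patches on top; a sketch with placeholder names:

```yaml
# Illustrative kustomization.yaml layering a patch over a pre-rendered Tyk
# data plane manifest. File and resource names are placeholders.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: tyk
resources:
  - tyk-data-plane.rendered.yaml   # output of `helm template`, committed to Git
patches:
  - target:
      kind: Deployment
      name: tyk-gateway
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3
```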

[Image: patching the pre-rendered Tyk data plane manifests]

Argo CD then deploys the final rendered, customized manifests. 

For the Tyk Operator deployment, a classic Helm chart deployment will serve. 
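
For illustration, that could be an Argo CD application sourcing the chart directly from Tyk's public Helm repository; the chart version and target namespace below are placeholders to check against the Tyk docs:

```yaml
# Illustrative Argo CD Application pulling the tyk-operator chart from Tyk's
# Helm repository. Chart version and namespace are placeholders to verify.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tyk-operator
  namespace: argocd
spec:
  project: platform-engineering
  source:
    repoURL: https://helm.tyk.io/public/helm/charts
    chart: tyk-operator
    targetRevision: 1.2.0        # placeholder version
  destination:
    server: https://kubernetes.default.svc
    namespace: tyk-operator-system
  syncPolicy:
    automated:
      selfHeal: true
```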

Once everything is up and running in your cluster, click on the Tyk hybrid gateway. You'll be able to check the logs, see the connection to the control plane and confirm that applications have been provisioned.

Next, head over to your Tyk Dashboard instance. 

[Image: the Tyk Dashboard showing the demo APIs]

The example above shows a pet store demo deployed, with a Tyk classic API example and a Universal Data Graph (UDG) example. 

To deploy these APIs, go to the tenant repository. You'll see various patterns. The UDG, for instance, doesn't consume the upstream backend APIs directly; instead, it uses an OAS wrapper, making it possible to apply specific changes to API requests and responses. For example, you could modify the JSON response from a REST API so it's much easier to map into your GraphQL API in Tyk's UDG.

[Image: the UDG API definition]

For the OAS part, you can use a ConfigMap containing the API definition with the Tyk API gateway extension included, so everything related to Tyk can be specified there. 

[Image: the ConfigMap containing the Tyk OAS API definition]

You can then pull the Tyk OAS API definition from the ConfigMap, and inside the UDG all you need to do is consume it. 
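
As a rough sketch of that pairing, assuming Tyk Operator's TykOasApiDefinition kind and its configmapRef field, with placeholder names, namespace and OAS content:

```yaml
# Illustrative ConfigMap holding a Tyk OAS API definition, plus the Operator
# resource consuming it. Names, namespace and the OAS content are placeholders;
# the x-tyk-api-gateway block is where the Tyk-specific settings live.
apiVersion: v1
kind: ConfigMap
metadata:
  name: petstore-oas
  namespace: team-a-dev
data:
  petstore.json: |
    {
      "openapi": "3.0.3",
      "info": { "title": "Petstore", "version": "1.0.0" },
      "paths": {},
      "x-tyk-api-gateway": {
        "info": { "name": "Petstore", "state": { "active": true } },
        "server": { "listenPath": { "value": "/petstore/", "strip": true } },
        "upstream": { "url": "https://petstore.example.com" }
      }
    }
---
apiVersion: tyk.tyk.io/v1alpha1
kind: TykOasApiDefinition
metadata:
  name: petstore
  namespace: team-a-dev
spec:
  tykOAS:
    configmapRef:
      name: petstore-oas
      keyName: petstore.json
```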

It's the same for the classic API. You can use a transformer to customize it in Argo CD, giving you the opportunity to make changes for each of the different environments you have. And you can manage the full API definition in the base layer of your Argo CD setup.
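
For reference, here is a minimal classic ApiDefinition of the kind that could live in that base layer; an environment overlay would then patch fields such as the upstream target URL (names and URLs here are placeholders):

```yaml
# Illustrative classic ApiDefinition as it might sit in the Argo CD base layer;
# an environment overlay could patch spec.proxy.target_url per cluster.
# Names and URLs are placeholders.
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: httpbin
  namespace: team-a-dev
spec:
  name: httpbin
  use_keyless: true
  protocol: http
  active: true
  proxy:
    target_url: http://httpbin.default.svc:8000   # overridden per environment
    listen_path: /httpbin
    strip_listen_path: true
```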

And that's it! Scaling API management with GitOps really is that easy when you use Argo CD and Tyk Operator. Why not learn more about Tyk Operator or speak to the Tyk team about how our platform can best help your enterprise excel?