A practical guide using Tyk Operator, ArgoCD, and Kustomize

If you’re like me, managing Kubernetes deployments can sometimes feel like trying to juggle flaming torches. Kubernetes is powerful, but when it comes to handling API configurations alongside your applications, things can get a bit… chaotic.

That’s where GitOps comes into play. By using Git as the single source of truth, you can streamline your infrastructure and application configurations. And when you throw **Tyk Operator** into the mix, managing your APIs becomes a whole lot easier.

In this post, we’ll walk through a practical setup that brings together Tyk Operator, ArgoCD, and Kustomize. We’ll start from scratch and build a GitOps workflow that automates the deployment of both your applications and API configurations. By the end, you’ll have a solid understanding of how to manage deployments in an organization using GitOps principles.

 

Here’s what we’ll cover:

  • Setting up the environment with Tyk Stack, Tyk Operator, and ArgoCD.
  • Organizing your repository for easy management.
  • Deploying applications and APIs simultaneously across environments.
  • Customizing configurations using Kustomize overlays.

 

Prerequisites:

  • Familiarity with Kubernetes and Kustomize (we’ll build on that knowledge).
  • Basic understanding of Git.
  • Access to two Kubernetes clusters.
  • Our example Git repository: tyk-operator-demo

Why GitOps and Tyk Operator?

Before we dive in, let’s chat about why we’re doing this. Managing APIs in a Kubernetes environment can get complex, especially when scaling across different environments like staging and production. GitOps helps by treating your configurations as code, which means you get version control, peer reviews, and a clear audit trail.

But what about API management? That’s where Tyk Operator comes in. It’s a Kubernetes operator that lets you manage your API definitions and security policies as Kubernetes resources. So, you can handle your APIs just like any other Kubernetes object. Pretty cool, right?
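For instance, an API definition becomes just another Kubernetes manifest. Here is a minimal sketch of what one might look like (the field values are illustrative, not taken from the demo repository):

```yaml
# Hypothetical ApiDefinition: exposes an upstream service through the
# Tyk Gateway at /httpbin, managed like any other Kubernetes resource.
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: httpbin
spec:
  name: httpbin-api
  protocol: http
  active: true
  proxy:
    target_url: http://httpbin.default.svc:8000
    listen_path: /httpbin
    strip_listen_path: true
```

Because it is a regular custom resource, it can be versioned, reviewed, and deployed exactly like your application manifests.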

Setting up the environment

Let’s get our tools in place. We’ll set up two local clusters, install the Tyk Stack with Tyk Operator on each, and then add ArgoCD.

1. Set up staging environment

1.1 Start Minikube and enable Ingress

In this example, we’ll run everything locally using Minikube, but feel free to deploy on AWS, GCP, Azure, or any other cloud provider.

To get Minikube up and running:

 

```bash
minikube start -p staging

minikube addons enable ingress -p staging
```

 

This starts a Minikube cluster (named `staging`) and enables the ingress controller, which we’ll need later.

 

1.2. Install Tyk Stack using `tyk-k8s-demo`

We’ll use the `tyk-k8s-demo` repository to install Tyk Stack, which includes all dependencies and gets us up and running quickly.

1.2.1 Clone the repository

 

```bash
git clone https://github.com/TykTechnologies/tyk-k8s-demo.git

cd tyk-k8s-demo

cp .env.example .env
```

 

1.2.2 Obtain a license

We need a license to run Tyk Dashboard.

Visit the Tyk sign-up page and choose ‘Get in touch’ to receive a guided evaluation and a temporary license. Then add your license key to the `.env` file `LICENSE` field.

 

1.2.3 Install Tyk Stack with Tyk Operator

Now, let’s run the installation script:

 

```bash
./up.sh --deployments operator tyk-stack
```

 

If you want to deploy on Cloud, check out [Kubernetes Tyk Demo](https://tyk.io/docs/getting-started/quick-start/tyk-k8s-demo/#clusters) for deployment configurations for [AWS](https://github.com/TykTechnologies/tyk-k8s-demo/tree/main/src/clouds/aws/.env.example), [GCP](https://github.com/TykTechnologies/tyk-k8s-demo/tree/main/src/clouds/gcp/.env.example), and [Azure](https://github.com/TykTechnologies/tyk-k8s-demo/tree/main/src/clouds/azure/.env.example).

 

The script will take a few minutes to complete. It sets up everything we need, including Tyk API Gateway, Dashboard, and Tyk Operator.

 

1.3 Install ArgoCD

ArgoCD will handle the continuous deployment aspect of our GitOps setup.

 

1.3.1 Create ArgoCD namespace

 

```bash
kubectl create namespace argocd
```

 

1.3.2 Install ArgoCD

 

```bash
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

 

Wait for the pods to be in the `Running` state. You can use kubectl port-forwarding to connect to the ArgoCD API server without exposing the service:

 

```bash
kubectl port-forward svc/argocd-server -n argocd 8000:443 &
```

 

1.3.3 Log in to ArgoCD

- **Username:** `admin`
- **Password:** Retrieve using:

 

```bash
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d
```
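The `base64 -d` step is needed because Kubernetes stores Secret values base64-encoded, and the jsonpath query returns the raw encoded string. A quick illustration with a made-up value:

```shell
# Secret values come back base64-encoded; decode them before use.
# "aHVudGVyMg==" is a made-up example value, not a real password.
echo "aHVudGVyMg==" | base64 -d   # prints: hunter2
```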

 

Now, you can log in to the ArgoCD admin UI at http://localhost:8000.

 

2. Setup production environment

Create another cluster as the production environment and repeat the steps there.

2.1 Start Minikube for production

 

```bash
minikube start -p production

minikube addons enable ingress -p production
```

 

2.2 Switch to production cluster

To switch between the clusters:

 

```bash
# switch to staging
minikube profile staging

# switch to production
minikube profile production
```

 

If you’re not using Minikube, make sure `kubectl` is configured to use the correct cluster:

 

```bash
# List all contexts
kubectl config get-contexts

# Switch to another context
kubectl config use-context [name]
```

 

2.3 Install Tyk Stack with port offset

Install `tyk-stack` with the `--port-offset 1` parameter, which exposes Tyk services on a new set of ports (all incremented by 1):

 

```bash
./up.sh --deployments operator --port-offset 1 tyk-stack
```

 

2.4 Install ArgoCD on production cluster

You’ll also need to install ArgoCD:

 

```bash
kubectl create namespace argocd

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

 

Expose ArgoCD service on port `8001`:

 

```bash
kubectl port-forward svc/argocd-server -n argocd 8001:443 &
```

 

Retrieve the ArgoCD admin password:

 

```bash
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d
```

 

Now, you can log in to the production ArgoCD admin UI at http://localhost:8001.

> [!NOTE]
> You could also run ArgoCD in a hub-and-spoke model, where you host ArgoCD in a centralized hub cluster.

You have now set up two clusters. Let’s continue to set up your continuous deployment (CD) repository.

Starting with the demo repository

To make things easier, we’ll start by forking the demo repository. This way, you have the freedom to commit changes to your Git repository without affecting the original repository.

1. Fork the repository

1.1 Go to the repository

Visit the tyk-operator-demo repository and click the “Fork” button in the top-right corner to create your own copy.

1.2 Clone your fork locally

Go to your working directory and then clone your fork:

 

```bash
git clone https://github.com/your-username/tyk-operator-demo.git

cd tyk-operator-demo
```

 

Replace `your-username` with your GitHub username.

Now, you’re all set to make changes and commit to your own repository.

2. Examine the repository

The repository is organized into three main directories:

1. **`apps/`**: Contains all application manifests.
2. **`policies/`**: Stores security policy manifests.
3. **`argocd/`**: Holds ArgoCD application manifests.

Let’s break down each of these directories in detail.

 

2.1 `apps/` Directory

This directory contains Kubernetes manifests for deploying applications. In our example, we’re focusing on the `httpbin` application.

Structure:

The `httpbin` app directory is organized for Kustomize. The `base/` directory holds the common configuration for `httpbin`, such as container image and port settings. The `overlays/` directories (`prod` and `staging`) customize the base manifests for each environment.

 

```
apps/
└── httpbin/
    ├── base/
    │   ├── apidefinition.yaml
    │   ├── deployment.yaml
    │   ├── kustomization.yaml
    │   └── service.yaml
    └── overlays/
        ├── prod/
        │   ├── api_auth.yaml
        │   ├── api_config.yaml
        │   └── kustomization.yaml
        └── staging/
            ├── api_auth.yaml
            ├── api_config.yaml
            └── kustomization.yaml
```

 

- **`httpbin/`**: Directory for the `httpbin` application.
  - **`base/`**: Contains the base Kubernetes manifests that are common across all environments.
    - **`apidefinition.yaml`**: Defines the `ApiDefinition` resource for `httpbin`.
    - **`deployment.yaml`**: Defines the `Deployment` resource for `httpbin`.
    - **`service.yaml`**: Defines the `Service` resource to expose `httpbin`.
    - **`kustomization.yaml`**: Kustomize file that aggregates the base resources.
  - **`overlays/`**: Contains environment-specific customizations using Kustomize overlays.
    - **`prod/`**: Customizations for the production environment.
      - **`api_auth.yaml`**: Overrides the API authentication for production.
      - **`api_config.yaml`**: Overrides the API configurations for production.
      - **`kustomization.yaml`**: References the base and applies patches.
    - **`staging/`**: Customizations for the staging environment.
      - **`api_auth.yaml`**: Overrides the API authentication for staging.
      - **`api_config.yaml`**: Overrides the API configurations for staging.
      - **`kustomization.yaml`**: References the base and applies patches.
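To see how an overlay ties these pieces together, here is a sketch of what `overlays/staging/kustomization.yaml` might look like (the patch file names match the tree above, but the exact contents in the demo repository may differ):

```yaml
# Hypothetical staging overlay: reuse the base manifests and
# patch them with environment-specific settings.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: api_config.yaml   # environment-specific API settings
  - path: api_auth.yaml     # environment-specific authentication
```

The production overlay follows the same pattern, referencing the same base with its own patches, so environment differences stay small and explicit.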

 

2.2 `policies/` Directory

This directory stores security policy manifests, which govern security options and access rights that can be applied to an API key. Using a security policy to govern access gives you the flexibility to decouple API definitions from access and security policy settings, which may be set by different teams—for example, API definitions by API developers and security policies by API product managers.

Structure:

The policies directory is organized by environments. It allows different environments to have tailored security settings.

 

```
policies/
├── prod/
│   ├── standard-policy.yaml
│   └── trial-policy.yaml
└── staging/
    ├── standard-policy.yaml
    └── trial-policy.yaml
```

 

- **`policies/`**: Central location for security policies.
  - **`prod/`**: Policies for the production environment.
    - **`standard-policy.yaml`**: Defines security policies like access control, rate limits, and quotas for standard users.
    - **`trial-policy.yaml`**: Defines security policies for trial users.
  - **`staging/`**: Policies for the staging environment.
    - **`standard-policy.yaml`**: Similar to production but may have different settings.
    - **`trial-policy.yaml`**: Similar to production but may have different settings.
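A security policy manifest references API definitions by resource name and namespace, which is exactly what keeps policies decoupled from the API definitions themselves. A minimal sketch (names and limits here are illustrative, not the demo repository's actual values):

```yaml
# Hypothetical SecurityPolicy: grants access to one API and
# applies a rate limit to keys created under this policy.
apiVersion: tyk.tyk.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: example-policy
spec:
  name: Example policy
  state: active
  active: true
  access_rights_array:
    - name: httpbin          # ApiDefinition resource name
      namespace: default     # and its namespace
      versions:
        - Default
  rate: 100                  # 100 requests...
  per: 60                    # ...per 60 seconds
```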

 

2.3 `argocd/` Directory

This directory contains ArgoCD application manifests that automate the deployment process.

Structure:

The argocd directory is organized by environment. Each ArgoCD application points to the corresponding path in the Git repository where its deployment manifests can be found. The path can point to a directory of Kubernetes manifest files, a Kustomize directory, or a Helm chart.

 

```
argocd/
├── prod/
│   ├── httpbin.yaml
│   └── policies.yaml
├── staging/
│   ├── httpbin.yaml
│   └── policies.yaml
├── prod-apps.yaml
└── staging-apps.yaml
```

 

- **`argocd/`**: Central location for ArgoCD application manifests.
  - **`prod/`**: ArgoCD applications for the production environment.
    - **`httpbin.yaml`**: ArgoCD application manifest for deploying the `httpbin` application; its `source` points to the Git repository path `apps/httpbin/overlays/prod`.
    - **`policies.yaml`**: ArgoCD application manifest for deploying security policies; its `source` points to the Git repository path `policies/prod`.
  - **`staging/`**: ArgoCD applications for the staging environment.
    - **`httpbin.yaml`**: ArgoCD application manifest for deploying the `httpbin` application; its `source` points to the Git repository path `apps/httpbin/overlays/staging`.
    - **`policies.yaml`**: ArgoCD application manifest for deploying security policies; its `source` points to the Git repository path `policies/staging`.
  - **`prod-apps.yaml`**: ArgoCD "app of apps" that points to the source directory `argocd/prod`.
  - **`staging-apps.yaml`**: ArgoCD "app of apps" that points to the source directory `argocd/staging`.
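As an illustration, `argocd/staging/httpbin.yaml` looks roughly like this (the repository URL, namespaces, and sync policy are placeholders; the demo repository's actual manifest may differ):

```yaml
# Hypothetical ArgoCD Application: deploys the staging httpbin overlay
# from Git into the cluster ArgoCD runs in.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: httpbin-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://github.com/your-username/tyk-operator-demo'
    targetRevision: main
    path: apps/httpbin/overlays/staging   # Kustomize overlay to deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: httpbin-staging
  syncPolicy:
    automated: {}   # sync automatically on Git changes
```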

 

Deploy your application with ArgoCD

1. Update source repository

Modify all ArgoCD apps under `argocd` so that they point to your GitHub repository.

For example, in `argocd/staging/httpbin.yaml`:

 

```yaml
spec:
  source:
    repoURL: 'https://github.com/your-username/tyk-operator-demo'
```

 

Replace `your-username` with your GitHub username.

Make sure the changes are committed:

 

```bash
git add .

git commit -m "Update repoURL in ArgoCD applications"

git push origin main
```

 

2. Deploy your application with ArgoCD

First, switch to the **staging** cluster and deploy the staging ArgoCD apps.

 

```bash
kubectl config use-context staging

kubectl apply -f argocd/staging-apps.yaml
```

 

After that, switch to the **production** cluster and deploy the production ArgoCD apps.

 

 

```bash
kubectl config use-context production

kubectl apply -f argocd/prod-apps.yaml
```

 

3. Test the deployment

**For Staging:**

This should respond with a **401 Unauthorized** response:

 

```bash
curl http://localhost:8080/httpbin/get
```

 

The API requires an auth key as specified in `api_auth.yaml`. You can go to the Staging Dashboard at [https://localhost:3000](https://localhost:3000/) to create an API key, and include the key in the request header:

 

```bash
curl -H "Authorization: Bearer [API_KEY]" http://localhost:8080/httpbin/get
```

 

With a valid API key, you should now get a successful **200** response.

**For production:**

Similarly, this should respond with a **401 Unauthorized** response:

 

```bash
curl http://localhost:8081/httpbin/get
```

 

You can go to the production dashboard at [https://localhost:3001](https://localhost:3001/) to create an API key and include it in the request header to get a successful 200 response:

 

```bash
curl -H "Authorization: Bearer [API_KEY]" http://localhost:8081/httpbin/get
```

 

4. Make some changes

Now that your applications and policies are deployed, let’s see GitOps in action.

 

4.1 Modify the ApiDefinition or SecurityPolicy Manifests

Example 1: Update the API definition

Let’s enable CORS (cross-origin resource sharing) in the API definition for the staging environment.

1. Open `apps/httpbin/overlays/staging/api_config.yaml` and add or modify the `cors` settings:

 

```yaml
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: not-important
spec:
  tags:
  - staging
  detailed_tracing: true
  CORS:
    enable: true
    allowed_origins:
    - "*"
    allowed_methods:
    - GET
    - POST
    - OPTIONS
    allowed_headers:
    - Authorization
    - Content-Type
    exposed_headers:
    - X-Custom-Header
    allow_credentials: true
```

 

2. Save the file.

Example 2: Modify the security policy

Alternatively, you can modify the trial policy and limit access only to the `/get` endpoint.

1. Open `policies/staging/trial-policy.yaml` and add `allowed_urls`:

 

```yaml
apiVersion: tyk.tyk.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: trial-policy
  namespace: policies-staging
spec:
  access_rights_array:
    - name: httpbin-api
      namespace: httpbin-staging
      versions:
      - Default
      allowed_urls:             # Add path-based permissions
        - url: /get
          methods:
            - GET
  active: true
  name: Trial policy (Staging)
  state: active
  rate: -1
  per: -1
  throttle_interval: -1
  throttle_retry_limit: -1
  quota_max: -1
  quota_renewal_rate: 60
```

 

2. Save the file.

 

4.2 Commit and push the changes

 

```bash
git add .

git commit -m "Enable CORS in staging API and add path-based permissions for trial users"

git push origin main
```

 

4.3 Watch ArgoCD sync the changes

Go back to the ArgoCD UI for the staging environment at [https://localhost:8000](https://localhost:8000/). You should see that the `httpbin-staging` and `policies-staging` applications have detected changes and are syncing automatically. Once the sync is complete, the new configurations are applied to your cluster.

Changes to the `ApiDefinition` and `SecurityPolicy` custom resources are detected by **Tyk Operator**, which reconciles them with Tyk accordingly.

 

4.4 Verify the changes

Test the API endpoint:

First, create an API key for the “Trial policy (Staging)” policy and save it in `API_KEY`.

 

```bash
export API_KEY=[Copy Key ID here]
```

 

Then use it to call the API:

 

```bash
curl -I -H "Authorization: Bearer $API_KEY" http://localhost:8080/httpbin/get
```

 

You should see the `access-control-allow-origin: *` header, indicating that CORS is enabled.

Check the path-based permissions:

Call the API again on another endpoint:

 

```bash
curl -H "Authorization: Bearer $API_KEY" http://localhost:8080/httpbin/anything
```

 

You should see that it now returns **403 Forbidden**, indicating that this endpoint has been disallowed.

By simply committing changes to your Git repository, you’ve updated your live configurations in Tyk without manually applying any changes to the cluster. This demonstrates the power of GitOps in automating and streamlining deployments.

Wrap up

By following this guide, you’ve set up a comprehensive GitOps workflow that automates the deployment of both your applications and API configurations across multiple environments. You’ve utilized Tyk Operator to manage your API definitions and security policies as Kubernetes resources, ArgoCD to automate continuous deployments, and Kustomize to handle environment-specific configurations without duplicating code.

Your teams can collaborate effectively using Git as the single source of truth for application deployments as well as API and security policy configurations. The API platform team can easily exercise governance and ensure consistency across API configurations using Kustomize.

With this setup, you can now manage deployments in your organization more efficiently, ensuring consistency, reducing manual errors, and speeding up your development workflow.