GitOps applies software development best practices to DevOps processes. The popular methodology uses version control, collaboration, and declarative configuration to automate updates to your infrastructure as well as the apps that run within it.
Successful GitOps workflows often use Kubernetes to orchestrate container deployment and scaling operations. But how do you actually get code into Kubernetes from your source repositories?
In this article, you’ll use Kubernetes, Argo CD, and Tyk to set up a functioning GitOps implementation that you can use to deploy your apps and APIs. Argo CD is a declarative continuous delivery tool that brings GitOps to Kubernetes clusters, while Tyk is an API management platform that works across clouds, containers, and on-premise environments.
How to implement a GitOps workflow for managing a Kubernetes cluster
To follow along with this tutorial, you need GitHub and Amazon Web Services (AWS) accounts. Additionally, you should have Docker, kubectl, Helm, and the AWS CLI already installed on your system.
You’ll create a new Kubernetes cluster on Amazon Elastic Kubernetes Service (Amazon EKS), but you can skip that step if you’d prefer to use an existing cluster.
You’ll be using Argo CD to set up GitOps deployments. Argo CD is purpose-built to automate continuous delivery to Kubernetes clusters. It runs an agent inside your cluster that monitors your repositories and automatically applies changes as they’re detected. This pull-based model is simpler and more secure than alternative push-based options, where a third-party server must be granted access to your cluster.
Your workflow’s final component is Tyk. Tyk provides an API management layer that sits in front of your Kubernetes deployments. It is designed to slot into existing workflows and offers GitOps support via the Tyk Operator for Kubernetes:
Rough architecture diagram courtesy of James Walker
All the code for this tutorial can be found in this GitHub repo.
Create your API
Begin by preparing your API application. You can fork this article’s GitHub repository to get going quickly with a sample app. The container image is publicly available on Docker Hub as well.
The sample app is a simple Node.js project that uses Express to serve a single HTTP endpoint (i.e., `/time`) that provides the current server time. The repository also contains a set of Kubernetes YAML manifests that allow the app to be deployed.
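Those manifests include a ClusterIP Service that fronts the app’s pods inside the cluster (you’ll rely on it later when Tyk proxies requests to the API). The snippet below is a rough sketch only; the port numbers are assumptions for illustration, so check the repository’s `kubernetes/` directory for the real values:

```yaml
# Illustrative sketch of a ClusterIP Service for the sample API.
# The port values are assumptions; the actual manifests live in the repo's kubernetes/ directory.
apiVersion: v1
kind: Service
metadata:
  name: api-demo
  namespace: api-demo
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: api-demo   # must match the Deployment's pod labels
  ports:
    - port: 80          # port other workloads use inside the cluster
      targetPort: 3000  # assumed port the Express app listens on
```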
Start a remote Kubernetes cluster
Once you’ve forked the repository, you’re ready to get started with Kubernetes. It automates the deployment, scaling, and operation of containers in production environments with features such as incremental rollouts, rollbacks, self-healing, and service discovery.
You can run Kubernetes on your own system using an all-in-one distribution, such as minikube, or you can provision a cluster from a managed cloud service like Amazon EKS.
Here, you create a new EKS cluster so you can deploy your API straight to the cloud, ready for production use. Note that this accrues charges to your AWS account.
Create IAM Roles
To begin with, log into your AWS account and head to the IAM Dashboard. You can find it using the console’s global search bar:
Screenshot of searching for IAM roles in the AWS console
Before you can use Amazon EKS, you must set up IAM roles that allow the service to create other resources in your AWS account on your behalf.
Click the Roles link that appears in the left sidebar of the IAM interface. Then press the blue Create role button to define a new role:
Screenshot of the Roles page in AWS IAM
On the next screen, keep the AWS service selected as the role’s Trusted entity type:
Screenshot of selecting a role’s trusted entity type in AWS IAM
Scroll down to the Use cases for other AWS services section and use the drop-down menu to select the EKS use case. Then change the use case type to EKS – Cluster:
Screenshot of selecting the use case for an AWS IAM role
Press the blue Next button at the bottom of the screen. Then click Next again on the following screen to reach the final Name, review, and create stage and give your role a name:
Screenshot of setting a new role’s name in AWS IAM
Complete the process by scrolling down the page and pressing Create role.
After adding the first role, repeat the earlier steps to create an additional role. Use the same procedure but apply the following changes:
- Select EC2 as the role’s trusted entity type.
- Use the Permissions policy table to add the `AmazonEKSWorkerNodePolicy`, `AmazonEC2ContainerRegistryReadOnly`, `AmazonEBSCSIDriverPolicy`, and `AmazonEKS_CNI_Policy` permissions to your role.
- Give your role a name to complete the process.
Create your cluster
Next, use the console’s search bar to switch to the Amazon EKS dashboard and begin creating your Kubernetes cluster. Press the Add cluster button on the landing page. Then select Create from the menu:
Screenshot of the Amazon EKS landing page
On the following screen, give your cluster a name and check that a role is shown in the Cluster service role drop-down. The drop-down should be prefilled with the first role you created earlier:
Screenshot of creating an Amazon EKS cluster
You can leave the other settings at their defaults. Keep pressing the Next button to progress through the following steps and create your cluster.
Add nodes to your cluster
After confirming your cluster’s creation, you are taken to the cluster dashboard screen. Wait a couple of minutes. Then refresh the screen and check that the cluster’s status shows as Active:
Screenshot of the Amazon EKS cluster dashboard screen
Now you can begin adding nodes to your cluster. Switch to the Compute tab on the dashboard, scroll down to the Node groups section, and press the Add node group button:
Screenshot of the Node groups section of the Amazon EKS cluster dashboard
A node group is a set of Amazon Elastic Compute Cloud (Amazon EC2) instances that supply compute capacity to your clusters. The nodes within a node group are created using the same EC2 instance type, but you can add multiple node groups to a single cluster.
Give your node group a name and check that the second IAM role created previously is shown in the Node IAM role drop-down:
Screenshot of creating a node group in Amazon EKS
Scroll down and press the Next button to begin configuring your node group’s compute settings. This is where you define the hardware resources available to your nodes. The defaults are sufficient for this tutorial—you have two `t3.medium` nodes, each of which provides two vCPUs and 4 GB of memory:
Screenshot of selecting instance options for an Amazon EKS node group
Use the Next button to click through the next few steps and create your node group. You are taken to the group’s overview page; wait a few minutes until the Status displays as Active:
Screenshot of viewing a node group in Amazon EKS
Finally, switch back to the cluster dashboard and click the Add-ons option in its tab strip. Afterward, press the yellow Get more add-ons button:
Screenshot of the Amazon EKS cluster Add-ons screen
On the next screen, enable the Amazon EBS CSI Driver add-on:
Screenshot of enabling the Amazon EBS CSI Driver add-on for an Amazon EKS cluster
On the following screen, leave the default settings. Complete the installation of the add-on to finish the cluster configuration process. This add-on is required to allow the use of persistent storage volumes in your cluster:
Screenshot of the Amazon EBS CSI Driver add-on settings in Amazon EKS
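If you’d prefer to define this setup as code rather than clicking through the console, the same cluster and node group can be approximated with an eksctl configuration file. eksctl isn’t used in this tutorial, so treat the following as an illustrative sketch; the cluster name, region, and node group name are placeholders:

```yaml
# Rough eksctl equivalent of the console steps above (illustrative only).
# Cluster name, region, and node group name are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: gitops-demo
  region: eu-west-2
managedNodeGroups:
  - name: demo-nodes
    instanceType: t3.medium
    desiredCapacity: 2
addons:
  - name: aws-ebs-csi-driver
```

Applying a file like this with `eksctl create cluster -f cluster.yaml` also creates the necessary IAM roles for you.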
Connect to your cluster using kubectl
Once you’ve finished the cluster configuration process, you can connect your local kubectl client to your new Amazon EKS cluster. It’s easiest to use the AWS CLI utility to automatically generate a kubeconfig entry:
```
$ aws eks update-kubeconfig --name <your-cluster-name>
```
Now you should be able to successfully run kubectl commands against your cluster, such as this one, to check the status of your nodes:
```
$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
ip-172-31-30-124.eu-west-2.compute.internal   Ready    <none>   2m48s   v1.27.3-eks-a55165ad
```
Use Argo CD for GitOps-powered CI/CD
Once you’ve connected your cluster using kubectl, you can add Argo CD to your cluster. This installs the agent that connects to your Git repositories, detects changes, and applies them to your cluster.
Install Argo CD
To install Argo CD, start by creating a Kubernetes namespace to hold Argo CD’s components:
```
$ kubectl create namespace argocd
namespace/argocd created
```
Next, apply Argo CD’s official YAML manifest to complete the installation in your cluster:
```
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
Use the `kubectl get deployments` command to check that Argo CD's ready—wait until all six deployments show as available before you continue:
```
$ kubectl get deployments -n argocd
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
argocd-applicationset-controller   1/1     1            1           37s
argocd-dex-server                  1/1     1            1           36s
argocd-notifications-controller    1/1     1            1           36s
argocd-redis                       1/1     1            1           36s
argocd-repo-server                 1/1     1            1           36s
argocd-server                      1/1     1            1           36s
```
Set up the Argo CD CLI
You need the Argo CD CLI installed on your machine in order to manage your installation and create application deployments. The following sequence of commands works on Linux to download the CLI binary and deposit it into your path. You can run the binary with the `argocd` console command:
```
$ wget https://github.com/argoproj/argo-cd/releases/download/v2.8.0/argocd-linux-amd64
$ chmod +x argocd-linux-amd64
$ mv argocd-linux-amd64 /usr/bin/argocd
```
Check GitHub to find the latest version number; substitute it into the previous command instead of `2.8.0`.
Next, use the CLI to discover the password that the Argo CD installation process generated for the default `admin` user account:
```
$ argocd admin initial-password -n argocd
```
To preserve security, delete the Kubernetes secret that contains the password—you won’t be able to retrieve it again after running this command:
```
$ kubectl delete secret argocd-initial-admin-secret -n argocd
secret "argocd-initial-admin-secret" deleted
```
Connect to Argo CD
Argo CD’s API server isn’t exposed automatically. You must manually open a route to it before you can use the CLI or access the web UI.
kubectl port forwarding is the quickest way to get started for experimentation purposes. However, this method should not be used in production—you can follow the steps in the documentation to permanently expose Argo CD with a TLS-secured Ingress route.
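If you do want longer-lived access, the pattern described in the Argo CD documentation looks roughly like the following sketch. It assumes the NGINX ingress controller (with SSL passthrough enabled) and uses a placeholder hostname, so consult the docs for the exact options:

```yaml
# Rough sketch of exposing argocd-server via an NGINX Ingress (see the Argo CD docs for details).
# Assumes the NGINX ingress controller with SSL passthrough enabled; the hostname is a placeholder.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: argocd.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 443
```

For this tutorial, though, port forwarding is sufficient.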
Open a new terminal window. Then run the following command to start a new port forwarding session. It binds your local port 8080 to the Argo CD instance running in your cluster:
```
$ kubectl port-forward svc/argocd-server -n argocd 8080:443
```
Switch back to your first terminal window to log into the Argo CD CLI, specifying the server to connect to:
```
$ argocd login localhost:8080
```
Because the server is using a self-signed certificate, you need to acknowledge the certificate warning:
```
WARNING: server certificate had error: tls: failed to verify certificate: x509: certificate signed by unknown authority. Proceed insecurely (y/n)?
```
Argo CD then prompts for your user credentials. Use `admin` as the username and enter the password you retrieved previously:
```
'admin:login' logged in successfully
Context 'localhost:8080' updated
```
Deploy your application
Now you’re ready to use Argo CD to deploy your app into your cluster!
Run the following command to create your app:
```
$ argocd app create api-demo \
  --repo https://github.com/<username>/<repo>.git \
  --path kubernetes/ \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace api-demo
application 'api-demo' created
```
This command instructs Argo CD to register the given repository URL as a new application. Argo CD monitors the Kubernetes manifests within the repository’s `kubernetes/` directory; when they change, it applies the updates to your cluster on the next sync.
The `--dest-namespace` flag defines the Kubernetes namespace that your app will be deployed to (it should match the `metadata.namespace` field set in your Kubernetes manifests). `--dest-server` tells Argo CD which Kubernetes cluster to target, while `https://kubernetes.default.svc` resolves to the cluster that Argo CD is running within.
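Because Argo CD itself is configured declaratively, the same registration can also live in Git as an `Application` resource instead of being created with the CLI. A minimal sketch equivalent to the command above (the repository URL is a placeholder) looks like this:

```yaml
# Declarative sketch of the same app registration performed by `argocd app create` above.
# The repository URL is a placeholder; this resource belongs in the argocd namespace.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-demo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<username>/<repo>.git
    path: kubernetes/
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: api-demo
```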
You can check your app’s status by running the `argocd app list` command:
```
$ argocd app list
NAME             CLUSTER                         NAMESPACE  PROJECT  STATUS     HEALTH   SYNCPOLICY  CONDITIONS  REPO                                            PATH         TARGET
argocd/api-demo  https://kubernetes.default.svc  api-demo   default  OutOfSync  Missing  <none>      <none>      https://github.com/ilmiont/tyk-gitops-demo.git  kubernetes/
```
The app shows as `Missing` and `OutOfSync`. Although the app’s been created, Argo CD hasn’t automatically synced it into the cluster.
A sync is the Argo CD operation that transitions the cluster’s state into the desired state expressed in your repository. Syncs can be requested on demand or scheduled to run automatically.
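In this tutorial you’ll trigger syncs manually, but if you’d rather have Argo CD apply changes as soon as they’re detected, you can enable an automated sync policy, either with `argocd app set api-demo --sync-policy automated` or, sketched below, as part of the `Application` resource shown earlier:

```yaml
# Fragment of an Argo CD Application spec enabling automated syncs (optional sketch).
spec:
  syncPolicy:
    automated:
      prune: true     # delete cluster resources that were removed from Git
      selfHeal: true  # revert changes made directly in the cluster
```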
Run your first sync to deploy your app:
```
$ argocd app sync api-demo
...
GROUP  KIND        NAMESPACE  NAME      STATUS   HEALTH       HOOK  MESSAGE
       Namespace   api-demo   api-demo  Running  Synced             namespace/api-demo created
       Service     api-demo   api-demo  Synced   Progressing        service/api-demo created
apps   Deployment  api-demo   api-demo  Synced   Progressing        deployment.apps/api-demo created
       Namespace              api-demo  Synced
```
The app should now be healthy and displaying the `Synced` status:
```
$ argocd app list
NAME             CLUSTER                         NAMESPACE  PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                            PATH         TARGET
argocd/api-demo  https://kubernetes.default.svc  api-demo   default  Synced  Healthy  <none>      <none>      https://github.com/ilmiont/tyk-gitops-demo.git  kubernetes/
```
The sample repository defaults to running three replicas of the application using a Kubernetes deployment object. Use kubectl to check that the deployment is ready and has the expected replica count:
```
$ kubectl get deployment -n api-demo
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
api-demo   3/3     3            3           66s
```
As you can see, Argo CD has successfully deployed the application!
Use GitOps to apply changes
At this point, your GitOps workflow is ready to use. You can apply changes to your deployed application by committing to your repository and then running a new Argo CD sync operation. This pulls the repository’s files, compares them to what’s running in your cluster, and automatically applies any changes. The state of your infrastructure is driven by the content of your Git repository. Let’s see this in action.
Open up the `kubernetes/deployment.yml` file in the sample app’s repository. Find the `spec.replicas` field and change its value from `3` to `5` to scale the app’s deployment up:
```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-demo
  namespace: api-demo
  labels:
    app.kubernetes.io/name: api-demo
spec:
  replicas: 5
  ...
```
Commit your changes and push them to GitHub:
```
$ git commit -am "Increase replica count to 5"
$ git push
```
Next, repeat the `argocd app sync` command to sync the change into your Kubernetes cluster:
```
$ argocd app sync api-demo
GROUP  KIND        NAMESPACE  NAME      STATUS   HEALTH       HOOK  MESSAGE
       Namespace   api-demo   api-demo  Running  Synced             namespace/api-demo unchanged
       Service     api-demo   api-demo  Synced   Healthy            service/api-demo unchanged
apps   Deployment  api-demo   api-demo  Synced   Progressing        deployment.apps/api-demo configured
       Namespace              api-demo  Synced
```
The deployment should now be running five replicas:
```
$ kubectl get deployment -n api-demo
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
api-demo   5/5     5            5           8m14s
```
As you can see, you’ve used GitOps to scale the app without directly interacting with Kubernetes.
Monitor with the Argo CD Web UI
Using your port forwarding session, you can access Argo CD’s web UI by visiting `localhost:8080` in your browser:
Screenshot of the Argo CD web UI
The Applications dashboard shows the status of all the apps that Argo CD has deployed. You can configure app options, start a sync, refresh all your apps, and change Argo CD settings. It’s a convenient way to monitor running apps without using the CLI.
Add Tyk API gateway to secure and monitor your API
The demo application is a simple API that provides the current time. It’s not yet publicly accessible, as the repository configures a ClusterIP service that can only be reached from within the cluster.
You could use a Kubernetes LoadBalancer service or an Ingress to expose your API, but there are problems with these methods: directly exposing the API renders it accessible to everyone, and you have no way of monitoring usage. Additionally, building these features yourself would require a substantial development investment.
That’s where Tyk can help. It’s an API gateway that’s as simple and reliable as your GitOps workflow. Running Tyk in your cluster allows you to benefit from automatic API management without any additional work from your developers.
Tyk is available in several different flavors, including open source, self-managed, and cloud-hosted options. For this article, you’ll run Tyk Gateway (the open source product) inside Kubernetes. This is supported by the Tyk Operator, which allows declarative configuration of your APIs using Kubernetes CRDs.
Install Tyk
The Tyk Kubernetes demo project is the easiest way to deploy a complete Tyk stack into your cluster. Start off by cloning the project’s repository into a new directory on your machine:
```
$ git clone https://github.com/TykTechnologies/tyk-k8s-demo.git
$ cd tyk-k8s-demo
```
Then run the following commands to deploy the open source Tyk Gateway with Tyk Operator:
```
$ cp .env.example .env
$ ./up.sh --deployments operator tyk-gateway
```
The installation process takes several minutes to complete. The components are installed with all their dependencies, including a Redis instance that stores the Tyk data.
Create a Tyk API for your application
With the Tyk Operator installed, you can declaratively define the APIs to be managed by the Tyk Gateway. Tyk Operator provides an `ApiDefinition` Kubernetes CRD for this purpose. Copy the following YAML manifest and save it to `kubernetes/tyk-api.yml` inside your repository:
```yaml
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: api-demo
spec:
  name: api-demo
  use_keyless: true
  protocol: http
  active: true
  proxy:
    target_url: http://api-demo.api-demo.svc.cluster.local
    listen_path: /clock
    strip_listen_path: true
```
This API definition instructs Tyk to proxy requests that start with `/clock` to `http://api-demo.api-demo.svc.cluster.local`. Kubernetes assigns services DNS names using `service.namespace.svc.cluster.local` syntax, so the proxy’s target URL resolves to the service that exposes your API.
Commit the file to your repository and then run another Argo CD sync. The Tyk Operator detects the new `ApiDefinition` object and automatically registers your API:
```
$ git add kubernetes/tyk-api.yml
$ git commit -m "Add Tyk API definition"
$ git push
$ argocd app sync api-demo
```
Use kubectl to check that the `ApiDefinition` has been registered:
```
$ kubectl get apidefinition -n api-demo
NAME       DOMAIN   LISTENPATH   PROXY.TARGETURL                              ENABLED   STATUS
api-demo            /clock       http://api-demo.api-demo.svc.cluster.local   true      Successful
```
The Tyk Kubernetes demo installation script automatically exposes the Tyk Gateway service on port 8080. You should now be able to access your API by visiting `localhost:8080/clock/time`—Tyk proxies the request to your API’s service as `/time`:
```
$ curl http://localhost:8080/clock/time
"2023-08-09T10:28:41.574Z"
```
You’ve successfully exposed your first API using Tyk!
Explore Tyk features
Now that your API is running, you can extend your `ApiDefinition` object with additional options to improve security, performance, and observability. Tyk has many different features you can use to manage and extend your APIs without having to edit your app’s source code. Here are three popular next steps:
Use authentication
Tyk Gateway supports all major authentication standards, including Basic Auth, JSON Web Token, OAuth, and OpenID Connect (OIDC). API keys are also available to use as static bearer tokens presented in an `Authorization` HTTP header.
By using Tyk, you can secure your APIs without writing a single line of code. Because all external requests terminate at the API gateway, your services are protected from unauthenticated access attempts.
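As an example, switching the demo API from keyless access to API key authentication is roughly a matter of amending the `ApiDefinition`. The sketch below follows Tyk’s auth token scheme; verify the exact fields against the Tyk Operator documentation for your version:

```yaml
# Sketch: require API keys for the demo API instead of keyless access.
# Field names follow Tyk Operator's ApiDefinition CRD; verify against your installed version.
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: api-demo
spec:
  name: api-demo
  use_keyless: false        # disable open access
  use_standard_auth: true   # require a static auth token (API key)
  auth_configs:
    authToken:
      auth_header_name: Authorization  # clients present the key in this header
  protocol: http
  active: true
  proxy:
    target_url: http://api-demo.api-demo.svc.cluster.local
    listen_path: /clock
    strip_listen_path: true
```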
Use IP whitelisting
Many APIs, such as those for internal use, should only be accessed by specific clients. This behavior can be enforced using Tyk’s IP address whitelisting option. Set the `enable_ip_whitelisting` property in your `ApiDefinition` object and then populate the `allowed_ips` field with the IP addresses that are allowed to connect:
```yaml
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: api-demo
spec:
  name: api-demo
  use_keyless: true
  protocol: http
  active: true
  enable_ip_whitelisting: true
  allowed_ips:
    - 127.0.0.1
  proxy:
    target_url: http://api-demo.api-demo.svc.cluster.local
    listen_path: /clock
    strip_listen_path: true
```
Set up rate limiting
Rate limiting is an easy way to prevent API abuse, but it’s often challenging to implement by hand. Any API that’s protected by the Tyk Gateway can benefit from global rate limiting that’s enabled using simple configuration values:
```yaml
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: api-demo
spec:
  name: api-demo
  use_keyless: true
  protocol: http
  active: true
  global_rate_limit:
    rate: 10
    per: 60
  proxy:
    target_url: http://api-demo.api-demo.svc.cluster.local
    listen_path: /clock
    strip_listen_path: true
```
This example prevents clients from making more than ten requests to your API in a single sixty-second period. Setting appropriate rate limits for your APIs can drastically reduce the impact of denial-of-service (DoS) attack attempts.
Conclusion
In this article, you learned how to configure a powerful GitOps workflow for APIs using Kubernetes, Argo CD, and Tyk. Pairing a remote Kubernetes cluster from AWS, Google Kubernetes Engine (GKE), or your regular provider with Argo CD gives you the tools to easily deploy applications in the cloud without having to manage your own infrastructure.
Kubernetes and GitOps simplify deployment, but this model doesn’t solve all the challenges of API operation. Deploying the Tyk API Gateway in front of your application allows you to consistently enforce access controls, monitor usage, and maintain security. Try it out for free in your own cluster today!