Integration testing for API management

Integration testing is essential to the long-term success of API management solutions. From a consumer perspective, it delivers vital quality assurance by ensuring that the system performs as intended.

It also complements other testing approaches by providing a different perspective and broadening the test scope.

Here, we explain how integration testing works and how to implement it using practical recommendations designed to enhance API management solutions’ efficiency and reliability. 

If you’re short on time, here’s a summary of the key points:


  • Integration testing is a good choice for API management as integrations are a fundamental element of most implementations.
  • Integration testing complements other forms of testing, such as unit testing, by broadening the testing scope to find errors that may not otherwise be identified.
  • Testing should focus primarily on the API gateway as it’s the central component and contains the most configurations. However, testing can also be extended to other features, such as management dashboards, databases and APIs.
  • Most API integration tests should be written from the perspective of the API client and based on the request/response lifecycle. This aligns the tests with the usage patterns of API clients, who are the API’s primary consumers and fulfil the majority of use cases.
  • Command line-based test automation is essential for efficient integration testing. It enhances contributor productivity and is fundamental to implementing automated CI/CD workflows.
  • The importance of integration testing increases with the size and complexity of the integrated systems being tested.

Otherwise, let’s get started. 

What is API management integration testing?

Integration testing is a specific software testing type used to validate solutions composed of multiple connected software components. Unlike unit testing, which is generally confined to validating code within a single application, integration testing focuses on testing the parts of solutions integrated with others. This makes it particularly valuable in API management, where integrations are intrinsic to any solution.

The primary focus of integration testing in API management is validating API gateway functionality. This covers the actions performed by the gateway as it processes traffic between the client and server, such as authentication, rate limiting and data transformation.

The simplest way to conduct these tests is to send a request to the gateway and validate the response received. The exact implementation of the test depends on what the API gateway has been configured to do. For example, if it’s been configured to authenticate requests, the test should check that unauthorised requests receive an appropriate response informing the client of the authorisation failure.

Use case – The Tyk Demo project 

The use case for this article covers how integration testing was introduced to the Tyk Demo project. For context, Tyk Demo is a pre-configured Docker-based Tyk deployment that contains various practical examples, accompanying explanations and configurations. It’s designed to be quick and easy to set up, so it’s a great way to learn about Tyk.

Tyk Demo includes several Postman collections. These are libraries of pre-prepared requests that introduce the user to Tyk’s features and functionality. They make it easy for users to interact with the Tyk Demo deployment, as they can browse the library and send requests whilst reading the supporting documentation.

Quality assurance is the primary reason for adding integration testing to the Tyk Demo project. The integrity of the project must be maintained as new versions are released. The example library must continue to work as expected, with any issues detected so that they can be rectified.

Implementing integration testing for API management

This section contains seven recommendations for implementing integration testing and maximising impact in API management:

1. Start simple

The best way to start integration testing is to set up some basic tests. The status code is one of the easiest things to test for HTTP responses, but any part of the standard HTTP metadata would also work.

For status code tests, a result of 200 is usually desired. But some scenarios require a different response, such as security testing, where unauthenticated requests should receive a 401, indicating to the client that their request is unauthorised.

These tests can be implemented efficiently using Postman’s test editor, which provides a simple way to validate the response status code. For example, in Tyk Demo, this test checks that the response status code is equal to 200:

pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});


Since these tests are so simple, they can be added to all requests. They help catch general server-side errors and misconfiguration.

2. Be specific

Once some basic tests have been added, it’s time to add more specific ones. These will give the depth of testing needed to accurately validate responses, which is especially important for endpoints that return data. For these endpoints, the metadata is only part of the response; the response data must be validated too.

The most common approach is to validate the response body data. Validations vary depending on each API and endpoint but commonly involve object ids, status values and general message text.

For example, this Tyk Demo test validates the Tyk health check endpoint response. The endpoint returns JSON containing the health state of various components within the deployment. The test validates that the value of the status field has the value pass, meaning that the gateway and the systems it depends on are working correctly.

pm.test("Gateway status is pass", function () {
    var jsonData = pm.response.json();
    pm.expect(jsonData.status).to.eql("pass");
});

The Postman pm object is crucial to the process, as it provides the data and functionality needed to validate JSON responses. It also supports XML, CSV and HTML, but these are best handled using the built-in functions to convert the data to JSON first. For other data types, a general string-matching function can be used.

There are many more examples of specific tests in the Tyk Demo Postman collection. Here are some examples:


  • Rate limits, quotas and throttling: Send multiple requests to trigger these features and ensure the correct HTTP response code is returned.
  • Load balancing: Check that the host server’s name alternates between responses.
  • Request size limiting: Send requests below and above the size limiter and validate that only those over the limit are blocked.
  • Schema validation: Send data with incorrect schemas and check that the response contains the correct contextual error message.
  • Versioning: Check that different versions of the same API can be accessed and that expired versions cannot.
  • GraphQL: Send a request containing an invalid query and check that the response includes the correct contextual error message.

  • Transformation: Send a request in one data format and validate that the response shows the upstream server received the data in a different format.

3. Create abstractions

Patterns are likely to emerge from test scripts after a while. These patterns should be encapsulated into a library of helper functions that reduce code clutter in the tests and promote code reuse.

In Postman, these are best added to the pre-request script section in the root element of the collection. This is because this script runs before all other scripts in the collection, making the functions defined therein available to all other scripts. However, these functions don’t have access to the context data from the requests, so any data needed by the functions must be passed into them as parameters.

Tyk Demo implements a function library to wrap Tyk’s product APIs. For example, this function covers an endpoint from the Tyk Dashboard Admin API that gets an organisation:

get: function (organisationId, callback, pm) {
    pm.sendRequest(
        {
            // "tyk-dashboard.host" variable name assumed
            url: "http://" + pm.variables.get("tyk-dashboard.host") + "/admin/organisations/" + organisationId,
            method: "GET",
            header: "admin-auth: " + pm.variables.get("tyk-dashboard.admin-api-key")
        },
        callback
    );
}

The function uses Postman’s sendRequest function to scaffold a request using the correct URL, method and headers. The result is that developers who call this function only need to provide a minimal amount of information:

  • organisationId: The id of the organisation object to retrieve.
  • callback: Optional callback function that’s called after the request is completed.
  • pm: Postman object that contains the context of the test being run and the Postman function library.

Many Tyk Demo tests use this function. For example, this test validates that an organisation can be deleted by attempting to get a deleted organisation and expecting a 404 status code:

tyk.dashboardAdminApi.organisations.get(
    pm.variables.get("organisation-id"),
    (error, response) => {
        pm.test("Organisation no longer exists", function () {
            pm.expect(response.code).to.eql(404);
        });
    },
    pm
);

When creating a helper library, namespacing can help organise the code, making it easier to understand. For Tyk Demo, the logical approach was to create separate namespaces for each product API, then again by endpoint group. This is illustrated in the previous example, where the namespaced function call tyk.dashboardAdminApi.organisations.get is used:

  • tyk: The root namespace object.
  • dashboardAdminApi: The product API, in this case, the dashboard admin API.
  • organisations: The endpoint group, in this case the endpoints related to organisations.
  • get: The action performed, in this case, getting a single organisation object.
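As a rough illustration of this namespacing, here is a hypothetical, self-contained sketch. The injected `send` function stands in for Postman's pm.sendRequest, and the id value is invented, so the structure can be exercised outside Postman:

```javascript
// Hypothetical sketch of the namespace structure; `send` is injected as a
// stand-in for Postman's pm.sendRequest so the code runs outside Postman.
var tyk = {
  dashboardAdminApi: {          // product API
    organisations: {            // endpoint group
      // action: build a GET request for a single organisation
      get: function (organisationId, send) {
        return send({
          url: "/admin/organisations/" + organisationId,
          method: "GET",
        });
      },
    },
  },
};

// Exercise the namespace with a fake transport that simply echoes the request.
var captured = tyk.dashboardAdminApi.organisations.get("abc123", (req) => req);
console.log(captured.url);    // "/admin/organisations/abc123"
console.log(captured.method); // "GET"
```

The nested object keeps related helpers together, so call sites read like a description of the operation being performed.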

4. Preserve system state

Tests must be idempotent, meaning that the same result is achieved no matter how often a test is run. This is necessary for tests to be considered fit for purpose.  

Idempotent tests are essential to Tyk Demo because, even though it’s only used for knowledge sharing and proof of concepts, the user experience is improved by consistency in the tests and underlying system state. To help achieve this, tests in Tyk Demo that alter system data also revert those changes after the test is complete. This helps prevent users from finding that their dashboard suddenly becomes polluted with test data.

Achieving this requires that tests be self-contained, such that there aren’t dependencies between them and that they don’t affect each other. It also means that the overall system state should, within reason, be the same after the test as it was before. It’s “within reason” because cleaning up all data after a test may not be strictly necessary. This will, of course, differ from system to system. But in the case of the Tyk Demo, items such as application logs and analytics records can accumulate without harming the system from either user or test perspectives.

Implementing this involves performing some basic data manipulation before and after each test. This isn’t necessary for all tests, especially those that only read data already in the system, but it’s essential for those which create, update or delete data. Postman’s pre-request script is a good location to create any necessary data, and the end of the tests section is the right place to perform any subsequent deletions after the tests have run. 
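The create-test-revert pattern can be sketched in plain JavaScript. This is an illustrative model only, using an in-memory Map as a stand-in for the system under test; the point is that the state after the test matches the state before it:

```javascript
// Illustrative model only: an in-memory Map stands in for the system under
// test. The test creates its own data, validates it, then reverts the change.
const store = new Map([["existing-org", { owner_name: "Acme" }]]);

function runIdempotentTest() {
  const sizeBefore = store.size;

  // Pre-request: create the temporary data the test needs.
  const tempId = "temp-org";
  store.set(tempId, { owner_name: "Test Org" });

  // Test: read the temporary record back and validate it.
  const passed = store.get(tempId).owner_name === "Test Org";

  // Post-request: delete the temporary data, preserving system state.
  store.delete(tempId);

  return passed && store.size === sizeBefore;
}

// Repeat runs give the same result against the same state.
console.log(runIdempotentTest()); // true
console.log(runIdempotentTest()); // true
```

Because the test owns its data from creation to deletion, it has no dependencies on other tests and leaves no residue behind.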

The following examples show the Postman implementation for the Get an Organisation test. It tests that a single organisation can be retrieved from the Dashboard Admin API’s organisation endpoint using an id. There are three parts: pre-request, request and post-request tests.



Pre-request script

The process starts by using the pre-request script to create the temporary data needed by both the request and post-request tests:

var organisationName = pm.variables.replaceIn("{{$randomCompanyName}}");

// "create" helper name assumed
tyk.dashboardAdminApi.organisations.create(
    JSON.stringify({
        owner_name: organisationName
    }),
    (error, response) => {
        pm.test("Organisation is created", function () {
            pm.expect(response.code).to.eql(200);
        });
        pm.variables.set("organisation-id", response.json().Meta);
        pm.variables.set("organisation-name", organisationName);
    },
    pm
);

The temporary organisation is created by calling the custom Tyk helper library, into which three parameters are passed:

1. A basic JSON string that contains a randomly generated organisation name. Note that owner_name is the only required field for an organisation.

2. A callback function which performs three tasks:

  • Check for successful data insertion.
  • Extract and store the id for the newly created organisation.
  • Store the randomly generated organisation name.

3. The Postman context, pm, which is needed by the custom Tyk helper library.

Note that the organisation-id and organisation-name are stored for use in subsequent steps.


Request

Once the pre-request script has completed, the actual request runs. It uses Postman’s double bracket syntax to inject the stored organisation id at the end of the URL path:

.../admin/organisations/{{organisation-id}}

When Postman runs this request, the end user will see the temporary organisation data returned in the response.


Post-request tests

Once the request has finished, the test script runs:

pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("Response contains correct Organisation name", function () {
    var jsonData = pm.response.json();
    // "owner_name" field name assumed
    pm.expect(jsonData.owner_name).to.eql(pm.variables.get("organisation-name"));
});

// "delete" helper name assumed
tyk.dashboardAdminApi.organisations.delete(
    pm.variables.get("organisation-id"),
    (error, response) => {
        pm.test("Organisation is deleted", function () {
            pm.expect(response.code).to.eql(200);
        });
    },
    pm
);

It does three things:

  1. Tests that the request returns a 200 status code.
  2. Tests that the request returns an organisation whose name matches the name stored when the organisation was created in the pre-request script.
  3. Deletes the organisation created by the pre-request script and verifies that it was deleted.


Variable storage

Variables are a crucial part of implementing this process in Postman. As seen in the previous example, where the organisation’s id and name are stored as variables, these can be referenced in subsequent parts of the workflow.

In Postman, variables can be set using the pm.variables.set() function, which takes a key-value pair as parameters. Once a variable is set, the value can be accessed in both the request URL and test script:

  • In the request URL, the variable value is accessed through double bracket syntax e.g. {{organisation-id}} is replaced by the value of the organisation-id variable when the request is run.

  • In the test script, the variable value is accessed through the pm object e.g. calling pm.variables.get("organisation-name") returns the value of the organisation-name variable.
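A simplified model of this substitution behaviour might look like the following. This is not Postman's actual implementation, and the variable value is invented; it just shows how a {{name}} token in a template resolves to a stored value:

```javascript
// Simplified model of double bracket substitution; not Postman's actual
// implementation, and the variable value here is invented.
const variables = new Map();

function set(key, value) {
  variables.set(key, value);
}

// Replace each {{name}} in the template with its stored value;
// unknown names are left untouched.
function replaceIn(template) {
  return template.replace(/\{\{([^}]+)\}\}/g, (match, key) =>
    variables.has(key) ? variables.get(key) : match
  );
}

set("organisation-id", "abc123");
console.log(replaceIn("/admin/organisations/{{organisation-id}}"));
// "/admin/organisations/abc123"
```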

5. Support multiple environments

All relevant environments need to be supported to get the most impact from testing. This means supporting local development and subsequent environments such as testing and production. If testing is limited to only the local development environment, then it’s not possible to perform testing during the CI/CD workflow, which risks introducing issues into the codebase. 

Conversely, if testing is only performed in CI/CD environments, then the developer debugging life cycle becomes unwieldy, removing their ability to test locally before pushing code back to the repository.

Fortunately, Postman supports environment variables. They operate in largely the same way as the previously mentioned variables, making it possible to reference them in URLs and scripts.

Tyk Demo uses environment variables to hold the values of hostnames and ports. They’re applied to all request URLs in the Postman collection and are also used throughout the Tyk helper library. For example, the Basic Test Request URL uses an environment variable to inject the correct API gateway host.

The variable’s default value is tyk-gateway.localhost:8080. This resolves to a loopback address, which is designed to work in Docker-based local development environments. For test environments, the value is tyk-gateway:8080, which is the gateway hostname within the Docker deployment. Defaulting to the local development environment provides the best developer experience, as it means that the Postman collection is ready for use immediately, rather than needing manual adjustment by the user.

A different set of values is needed for test environments. Tyk Demo stores these in a Postman environment file, which is then imported at runtime when running test CLI commands by specifying the file path as a parameter. See the following sections for more information about this.
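The defaulting behaviour can be sketched as a simple lookup, using the two host values mentioned above. The function and object names here are invented for illustration:

```javascript
// Sketch of per-environment host selection using the two values mentioned
// above; the function and object names are invented for illustration.
const hostByEnvironment = {
  local: "tyk-gateway.localhost:8080", // loopback, for local Docker development
  test: "tyk-gateway:8080",            // container hostname within the deployment
};

// Fall back to the local development value so no manual adjustment is needed.
function gatewayHost(environment) {
  return hostByEnvironment[environment] || hostByEnvironment.local;
}

console.log(gatewayHost("test")); // "tyk-gateway:8080"
console.log(gatewayHost());       // "tyk-gateway.localhost:8080"
```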

6. Automate testing

Test automation is essential for optimising workflows as it helps developers quickly determine whether their code negatively impacts the broader solution. Delays in this process can soon accumulate, as debugging cycles can go for multiple iterations, each needing testing. If issues go undetected, they can propagate throughout the code repository, causing widespread problems. But these issues can be mitigated by applying test automation to the local development environment and CI/CD test environment.

Postman provides a command line test runner called Newman. It takes a Postman collection and runs all the tests it contains. If the tests pass, the Newman process exits with a 0, making it easy to embed into automation scripts and tools. Tyk Demo uses Newman to automate testing in the local development environment and as part of the CI/CD solution.


Local development environment

Tyk Demo supports testing in a local development environment by providing a test script that executes Newman in the context of the local Tyk Demo deployment. This allows the developer to run tests to validate their changes quickly.

The test script uses this command to execute Newman:

docker run -t --rm \
    --network tyk-demo_tyk \
    -v $(pwd)/deployments/tyk/tyk_demo_tyk.postman_collection.json:/etc/postman/tyk_demo.postman_collection.json \
    -v $(pwd)/test.postman_environment.json:/etc/postman/test.postman_environment.json \
    postman/newman:alpine \
    run "/etc/postman/tyk_demo.postman_collection.json" \
    --environment /etc/postman/test.postman_environment.json \
    --insecure

The important parts of this command are:

  • The network parameter joins the container to the Docker network used by the other Tyk Demo containers, tyk-demo_tyk, allowing Newman to connect to the Tyk Demo containers and applications contained within.
  • The two volume parameters (-v) map the Postman collection and the environment variables to the container. These make the necessary data available in the container so that Newman can run the tests and use the correct configuration.
  • The run command uses a parameter to set the path to the mapped collection.
  • The environment parameter sets the path to the environment variable configuration file.
  • The insecure flag enables the requests generated by Newman to work with the self-signed certificates generated by Tyk Demo.

CI/CD solution

Tyk Demo uses GitHub Actions to automate testing in a CI/CD environment. The implementation relies on the same test process used for local development, but rather than being manually triggered, it runs automatically when code is committed to the repository.

When GitHub detects a new commit, it runs the test script and uses the exit code to determine whether the tests ran successfully. If no errors are detected, the commit can be merged back into the main branch as part of a pull request. As part of this process, the GitHub action job is configured to store artefacts that can be used to diagnose issues in case of test failure:

- name: Store Bootstrap Log
  if: success() || failure()
  uses: actions/upload-artifact@v3
  with:
    name: bootstrap-log
    path: bootstrap.log
- name: Store Test Log
  if: success() || failure()
  uses: actions/upload-artifact@v3
  with:
    name: test-log
    path: test.log

GitHub’s upload-artifact action stores both the bootstrap.log and test.log files as artefacts for both success and failure scenarios.

7. Localise data sources

Unreliable networks can cause integration tests to fail. API management integration tests are fundamentally network-based and particularly susceptible to this issue. Rerunning tests once the network has stabilised will likely see them pass again, but these types of false negatives generate uncertainty that can undermine the integrity of the whole test process.

To mitigate this issue, deploy the data sources as closely as possible to the APIM and test systems. In the case of Tyk Demo, the entire solution, including the APIM, data sources and test suite, are all deployed within the same Docker host. This provides virtually perfect network reliability and also has the benefit of enabling developers to run this entire environment offline.

However, this approach may not be feasible for every project. The data sources may be too large to deploy with the other components, or they may already exist in a separate deployment elsewhere. In this situation, the best option is to make the data sources available over a local network. The goal is to reduce reliance on the internet and third-party infrastructure.

Alternatively, mocking offers another solution to this problem. If the data sources can’t be moved closer, a local mock version of the data source endpoints can provide the necessary responses using dummy data that follows the correct data schema.

However, mock endpoints are relatively simplistic. They resemble the real thing, but they’re incapable of performing logic or storing state, so they’re mostly limited to read operations.

To support a broader range of operations and functionality, a smarter type of mock is needed, such as Tyk virtual endpoints. These are powered by a JavaScript virtual machine, allowing them to generate responses using code.
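To illustrate the difference, here’s a minimal, hypothetical code-backed mock (plain JavaScript, not a Tyk virtual endpoint) that holds state, so it can serve back a record it was previously sent:

```javascript
// Hypothetical code-backed mock (plain JavaScript, not a Tyk virtual
// endpoint): unlike a static mock, it holds state, so it supports writes.
function createMockEndpoint() {
  const records = {};
  return function handle(request) {
    if (request.method === "POST") {
      records[request.body.id] = request.body; // store the submitted record
      return { code: 201, body: request.body };
    }
    const found = records[request.id]; // read a previously stored record
    return found ? { code: 200, body: found } : { code: 404, body: null };
  };
}

const endpoint = createMockEndpoint();
console.log(endpoint({ method: "GET", id: "1" }).code);            // 404
console.log(endpoint({ method: "POST", body: { id: "1" } }).code); // 201
console.log(endpoint({ method: "GET", id: "1" }).code);            // 200
```

Because the mock can respond differently depending on what it has already seen, tests can exercise create-then-read flows that a static mock could never support.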

Creating stable and efficient API management solutions 

Ultimately, embracing integration testing as an integral part of the API development and management process can help organisations identify and resolve potential issues earlier in the development cycle, as well as enhance their APIs’ reliability, performance, and security.

By investing in robust testing strategies, organisations can gain a competitive edge, build user trust, and drive business success in today’s API-driven digital landscape.

If you have any questions about integration testing, get in touch with one of our expert engineers today.