The Strangler Fig Pattern

The Strangler Fig pattern is a long-established approach for incrementally replacing legacy systems. How can it be applied to API Management?

Key Points

  • The Strangler Fig pattern is a good fit for API Management scenarios:
    • API requests are naturally interceptable, fulfilling the pattern’s event interception requirement.
    • API Gateways are an excellent choice for implementing the pattern as they are purpose-built for intercepting requests, and they possess the tools necessary to execute the pattern’s request siphoning strategy.
    • The pattern can be implemented by deploying an API Gateway to intercept and siphon requests from the client.
    • For Tyk-based implementations, the URL Rewrite plugin is suitable for performing the task of siphoning requests due to its wide range of options for inspecting and redirecting requests.
  • General comments about the pattern:
    • Can only be applied in situations where it’s possible to intercept messages and take ownership of any related assets.
    • Suitable for larger, complex systems, due to the associated risk and cost of replacing such systems.
    • Unsuitable for smaller, simpler systems which can easily be replaced.
  • Benefits of adopting the pattern:
    • Reduce risk through an incremental release strategy, which keeps the legacy system available throughout the process.
    • Demonstrate value sooner using shorter release cycles.
    • Avoid unnecessary rework by omitting obsolete functionality from future developments.
    • Optimise future development by replacing a legacy system with a more flexible and replaceable system.


The Strangler Fig pattern is an incremental approach to replacing legacy systems. It’s a long-standing concept that was established prior to Martin Fowler giving it the Strangler Fig moniker back in 2004. The Strangler Fig name is well-suited, as the pattern’s approach is analogous to how a Strangler Fig plant grows on a host tree, gradually overwhelming it. The pattern aims to do the same with systems, gradually replacing functionality until the legacy system is entirely superseded.

Fowler states the problem of replacing legacy systems as:

always much more complex than they seem, and overflowing with risk. The big cut-over date looms, the pressure is on. While new features (there are always new features) are liked, old stuff has to remain. Even old bugs often need to be added to the rewritten system.

His proposal for solving this problem is to:

Gradually create a new system around the edges of the old, letting it grow slowly over several years until the old system is strangled.

During the application of the pattern, as additional pieces of functionality are added to the new system, it handles a greater number of events and assets. This may eventually end with the legacy system being entirely replaced by the new system, which is usually the overall goal, but this does not necessarily have to be the case. Depending on the goals of the transformation process, some elements of the legacy application can be kept.


Fowler sees these benefits to taking a gradual approach to transforming a legacy system:

  • Reduce risk: The incremental nature of this approach allows for gradual releases, enabling the transformation to occur over multiple manageable steps.
  • Demonstrate value: The agile approach delivers completed functionality in stages, so value can be seen sooner rather than later.
  • Avoid unnecessary rework: By focusing on what users actually need, obsolete functionality can be omitted from new development.
  • Optimise future development: Using the mindset of this pattern during the development process can lead to a more flexible and replaceable system in the future.

A benefit of the incremental nature of this pattern is that it gives many opportunities to evaluate progress, demonstrate value and plan further development, whilst avoiding “big bang” style deployments.


The pattern has two fundamental requirements, without which it cannot succeed:

  • Event Interception: Capability for events/requests sent to the legacy system to be intercepted and siphoned to the new system, enabling asset capture.
  • Asset Capture: Ability for the assets and data managed by the legacy system to become managed by the new system, and for there to be mechanisms for migrating data between the two systems.

Systems which don’t comply with these requirements could be refactored prior to the pattern being applied, but the cost of this could be prohibitive.

Application in Common API Management Scenarios

API-based systems make ideal candidates for applying the pattern. Their network-based operations fulfil the event interception requirement by providing the opportunity to intercept client requests and redirect them to a new system, which in turn enables the other requirement of asset capture.

When put in an API management context, API Gateways are in a perfect position to handle the interception of requests. From an architectural perspective, they are located between the API clients and servers, and are responsible for routing traffic between the two. Their ability to analyse and redirect requests makes them ideally suited to this task.

During the implementation of the pattern, as functionality is incrementally added to the new system, the Gateway’s configuration can be incrementally updated to match, gradually adding more rules to increase the number of requests siphoned away from the legacy system.

The pattern is best applied when migrating systems to new architectures, such as the Monolith to Microservices scenario, where a single, self-contained system is replaced by one which is composed of many smaller, discrete services. This scenario is common across a wide variety of software verticals, including API Management, where monolithic APIs are being replaced by microservice-based successors.

The following section covers how the pattern can be applied in the context of API management, and how specific Tyk features and functionality can be used to implement them.

Monolith to Microservices

The Monolith to Microservices scenario has become popular in recent years, as organisations look to migrate their systems to architectures which offer better compatibility with modern practices such as loose coupling, scaling, and higher autonomy for software development teams.

The pattern is applied using a six-step process:

  1. Introduce a proxy into the infrastructure
  2. Identify functionality to replace
  3. Build a new replacement microservice
  4. Siphon traffic to the new microservice
  5. Repeat until all functionality is migrated
  6. Decommission the monolith

Step 1: Introduce a Proxy Into the Infrastructure

Introduce a proxy into the infrastructure and configure the network to route all monolith traffic via it. This enables the event interception requirement of the pattern, so that monolith traffic can be processed and siphoned as needed.

The proxy will initially allow all traffic to pass through unmodified to the monolith. Its configuration will be gradually updated as new microservices are developed, routing specific requests to the newly created microservices.

Step 2: Identify Functionality To Replace

Identify a piece of functionality within the monolith to replace with a microservice. The best candidates for replacement are:

  • Small: Microservices should be focussed, so the scope of the replaced functionality should be small too.
  • Stateless: Without a data dependency, the microservice does not need to worry about maintaining data in the monolith’s database.
  • Shallow: Without dependencies on other functionality within the monolith, the monolith does not need to be refactored to expose the necessary functionality.

Starting with a small piece of functionality which is already relatively decoupled allows the development team to familiarise themselves with the process, as well as reducing the risk, time and effort to deliver.

Step 3: Build a New Replacement Microservice

Build the new microservice to replace the monolith functionality. If a dependency exists between the functionality and the monolith then additional integration work may be needed to ensure the system continues to operate correctly.

Step 4: Siphon Traffic to the New Microservice

Begin siphoning traffic to the microservice by configuring the proxy to inspect requests and determine whether each request should be siphoned to the microservice or allowed to continue to the monolith. For example, if the microservice handles orders, then all requests to the /orders endpoint should be directed to it.

Step 5: Repeat Until All Functionality Is Migrated

Repeat the process from step 2, until all monolith functionality is being siphoned to the microservices.

Step 6: Decommission the Monolith

The monolith can be decommissioned once all functionality is handled by microservices.

Alternative Scenarios

The Monolith to Microservices scenario is just one of several scenarios to which the Strangler Fig pattern can be applied. It’s a good fit for scenarios where events can be intercepted and a gradual migration is desired. Such scenarios may include:

  • Replatforming across hosting providers, or migrating between cloud and on-premises deployments
  • Rearchitecting or rewriting an existing application
  • Replacing a legacy application with a new application

There are many individual use cases within these scenarios. Paul Hammant provides several examples on his blog, ranging from rewriting a legacy airline booking system to consolidating a consumer goods magazine website. When considering whether the pattern can be applied to a particular use case, check that the pattern’s fundamental requirements of event interception and asset capture are supported.

Progressive Delivery

Progressive Delivery introduces a few additional concepts which can complement a Strangler Fig pattern implementation:

  • Parallel Run: Instead of siphoning requests, the proxy forwards requests to both the legacy system and the new system. This is beneficial for testing purposes, as it enables the responses from the two systems to be compared, to verify that they match.
  • Dark Launch: Introduces the new system without the knowledge of end users. The pattern is a good fit for this, as requests to the legacy system are gradually siphoned to the new system. However, if the new system uses different protocols or content types then the proxy would need to transform these so that the end user experience is consistent with the legacy system.
  • Canary Release: Instead of siphoning all requests for a particular endpoint, the proxy siphons for only a subset of users. The users are usually selected based on some metadata related to the authorisation token provided or the account data connected to it.
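In Tyk, for example, a Canary Release can be sketched using the URL Rewrite plugin’s advanced triggers, which can match on session metadata attached to the authorisation token. The metadata field and hostnames below are illustrative assumptions, not part of the original example:

```json
{
  "path": "/orders",
  "method": "GET",
  "match_pattern": "/orders",
  "rewrite_to": "http://monolith.internal/orders",
  "triggers": [
    {
      "on": "all",
      "options": {
        "session_meta_matches": {
          "beta_tester": { "match_rx": "true" }
        }
      },
      "rewrite_to": "http://orders-service.internal/orders"
    }
  ]
}
```

Requests from tokens whose metadata matches the trigger are siphoned to the new system, while all other requests continue to the legacy system.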

Decorator Pattern

Similar to the Parallel Run concept of Progressive Delivery, the proxy in the Decorator pattern forwards requests to both the legacy system and the new system. However, this approach is not intended to replace the legacy system, but to supplement (or decorate) it such that new functionality can be introduced without having to update the legacy system.

Another thing which differentiates this from the Parallel Run is that the decoration can occur on both the request and response, and the decoration can occur at any point during proxy processing, enabling it to modify the request or response prior to the proxy forwarding it on.

Alternative Event Interception Methods

The Monolith to Microservices scenario uses a path-based method to identify which requests to siphon, using the requested path to determine whether the request should be redirected to a microservice or allowed to continue to the monolith. This is a practical approach, as the path typically references the type of resource being requested, which is well aligned with the typical microservice strategy of building microservices around business capabilities.

As useful as the path-based method is, there are alternatives which may be more relevant in some situations. These alternatives are based on the other data contained within the HTTP request, of which the path is just one element. HTTP requests contain a wide variety of data, and API Gateways provide many methods for reading and processing that data.

For example, given this HTTP request:

GET /resource-path?foo=bar HTTP/1.1
Host: example.com
Authorization: my-key

{ "hello": "world" }

The request can be broken down into its constituent parts:

  • Method: GET
  • Path: /resource-path
  • Query: foo=bar
  • Host: example.com
  • Header: Authorization: my-key
  • Body: { "hello": "world" }

API Gateways can use any of this data when handling the request. But there are other sources of data which can also be used: stored data and context data.

Stored data is information which the gateway can retrieve using data from the request as a reference. For example, the Authorization header provides a key, my-key, which is a reference to key data stored in a database. The gateway can read the key data associated with my-key from the database, gaining access to information such as rate limits and access rights.

Context data includes information such as the client IP address, which can be inferred from the request.

When these various types of information are combined together, it provides a rich source of data for the Gateway to act upon.
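In Tyk, for instance, the URL Rewrite plugin’s advanced triggers can combine several of these data sources in a single rule. The following sketch, based on the example request above, routes to a new service only when both the header and query values match (the hostnames are assumptions):

```json
{
  "path": "/resource-path",
  "method": "GET",
  "match_pattern": "/resource-path",
  "rewrite_to": "http://legacy.internal/resource-path",
  "triggers": [
    {
      "on": "all",
      "options": {
        "header_matches": {
          "Authorization": { "match_rx": "^my-key$" }
        },
        "query_val_matches": {
          "foo": { "match_rx": "^bar$" }
        }
      },
      "rewrite_to": "http://new-service.internal/resource-path"
    }
  ]
}
```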

Implementing with Tyk

Implementing the Strangler Fig pattern with Tyk uses the Tyk API Gateway as the proxy to intercept and siphon traffic. To demonstrate this, let’s take the scenario and steps from the previous section and introduce an example monolith to which the pattern will be applied. For this example, the Monolith is an API which handles three different types of object: clients, orders and products.

Note: This is a basic example, designed to show how the API Gateway can operate as a proxy to implement the Strangler Fig pattern. It does not consider or discuss the wider Tyk deployment, dependencies or functionality.

Prior to the pattern being applied, the API Client accesses the Monolith directly.

Strangler fig pattern - API Client to Monolith.

Step 1: Introduce a Proxy Into the Infrastructure

Introduce an API Gateway into the infrastructure. Give it an initial basic configuration using a single API Definition that is bound to the Monolith domain and targets the Monolith host.

Update the DNS records for the Monolith to point to the API Gateway host, so that traffic sent by API Clients to the Monolith now goes via the API Gateway, which will proxy to the Monolith without performing any modification.
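A minimal sketch of such a pass-through configuration, abridged to the relevant fields of a Tyk classic API definition (the hostnames are illustrative assumptions):

```json
{
  "name": "Monolith",
  "proxy": {
    "listen_path": "/",
    "target_url": "http://monolith.internal:3000/",
    "strip_listen_path": false
  }
}
```

With no path-level plugins configured, every request received on the listen path is proxied to the Monolith unmodified.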

The API Gateway now provides the capability to fulfil the event interception requirement of the pattern.

Step 2: Identify Functionality To Replace

Identify a discrete piece of functionality suitable to become a microservice. For this example, the Monolith has three such pieces of functionality: clients, orders and products.

Let’s say the orders functionality is identified for replacement, and is currently accessible through the Monolith’s /orders endpoint. This will be the first piece of functionality to be replaced.

Step 3: Build a New Replacement Microservice

Develop an Orders Microservice to replace the Monolith’s orders functionality.

Deploy the Orders Microservice to the infrastructure. The microservice does not yet receive any traffic, but it’s now in a position to do so.

Strangler fig pattern microservices example.

Step 4: Siphon Traffic to the New Microservice

Update the API Definition configuration to instruct the API Gateway to begin siphoning requests for the orders endpoint to the Orders Microservice. This is achieved by using the API Gateway URL Rewrite plugin.

The URL Rewrite plugin can inspect many different aspects of an incoming request; for this example, only the requested path is needed. Configure the plugin to activate when a request is received on the /orders path, then rewrite the URL so that it targets the Orders Microservice rather than the Monolith.
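As a sketch, the corresponding rule might sit in the url_rewrites section of the API Definition’s extended_paths (the microservice hostname and port are assumptions, and a separate rule may be needed for each HTTP method):

```json
{
  "extended_paths": {
    "url_rewrites": [
      {
        "path": "/orders",
        "method": "GET",
        "match_pattern": "/orders(.*)",
        "rewrite_to": "http://orders-service.internal:8080/orders$1"
      }
    ]
  }
}
```

The capture group in match_pattern carries any sub-path, such as /orders/123, through to the rewritten URL.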

Step 5: Repeat Until All Functionality Is Migrated

Repeat the process from step 2, until all remaining functionality (clients and products) is migrated from the Monolith to the microservices.


Develop a Clients Microservice to replace the Monolith’s clients functionality, and deploy it to the infrastructure. Then configure the URL Rewrite plugin to rewrite requests for the /clients path to the Clients Microservice.


Develop a Products Microservice to replace the Monolith’s products functionality, and deploy it to the infrastructure. Then configure the URL Rewrite plugin to rewrite requests for the /products path to the Products Microservice.

Since this is the last piece of functionality contained within the monolith, the URL Rewrite rules now capture all parts of the monolith’s functionality, meaning that no traffic now reaches the monolith.
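By this stage, the rewrite rules might look something like the following sketch, with one entry per migrated capability (hostnames are assumptions; rules for additional HTTP methods are omitted for brevity):

```json
{
  "extended_paths": {
    "url_rewrites": [
      {
        "path": "/orders",
        "method": "GET",
        "match_pattern": "/orders(.*)",
        "rewrite_to": "http://orders-service.internal/orders$1"
      },
      {
        "path": "/clients",
        "method": "GET",
        "match_pattern": "/clients(.*)",
        "rewrite_to": "http://clients-service.internal/clients$1"
      },
      {
        "path": "/products",
        "method": "GET",
        "match_pattern": "/products(.*)",
        "rewrite_to": "http://products-service.internal/products$1"
      }
    ]
  }
}
```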

Step 6: Decommission the Monolith

Decommission and remove the monolith from the infrastructure.

This step completes this example scenario. The original monolith has now been replaced by three new microservices, with the API Gateway intercepting requests and siphoning them to the microservices based on the requested path.