
“Pump overloaded”

Description

The Tyk Pump cannot keep up with the amount of analytics data generated by the Gateway, which means the Pump is unable to process all the analytics data within the purge period.

Cause

If there is excessive analytics data, the Pump may become overwhelmed and unable to move the data from Redis to the target data store.

Solution

There are several ways to approach this problem.

Scale the Pump

Scale the Pump by either increasing the CPU capacity of the Pump host or by adding more Pump instances.

Adding more instances spreads the load of processing analytics records across multiple hosts, which increases processing capacity.

Disable detailed analytics recording

Set analytics_config.enable_detailed_recording to false in the Gateway configuration file tyk.conf. Detailed analytics records contain much more data and are more expensive to process; with detailed recording disabled, the Pump will be able to process higher volumes of data.
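
For illustration, a minimal tyk.conf fragment with this setting might look like the following (all other Gateway settings are omitted here; your own tyk.conf will contain many more fields):

    {
      "analytics_config": {
        "enable_detailed_recording": false
      }
    }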

Reduce the Pump purge delay

Set purge_delay to a low value, e.g. 1, in the Pump configuration file pump.conf. This value is the number of seconds the Pump waits between checks for analytics data. Setting it to a low value prevents the analytics data set from growing too large, as the Pump will purge the records more frequently.
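
As a sketch, the relevant fragment of pump.conf would be (other Pump settings omitted):

    {
      "purge_delay": 1
    }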

Reduce analytics record expiry time

Set analytics_config.storage_expiration_time to a low value, e.g. 5, in the Gateway configuration file tyk.conf. This value is the number of seconds after which analytics records expire and are removed from Redis if they have not been processed. The value must be higher than the purge_delay set for the Pump. This allows analytics records to be discarded if the system becomes overwhelmed. Note that this results in the loss of some analytics records, but helps prevent degraded system performance.
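
A minimal tyk.conf sketch showing only this field (again, all other settings omitted), using the example value of 5 seconds:

    {
      "analytics_config": {
        "storage_expiration_time": 5
      }
    }

Keep this value higher than the Pump's purge_delay, otherwise records may expire before the Pump has had a chance to process them.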