Tyk Pump Environment Variables
You can use environment variables to override the config file for the Tyk Pump. Environment variables are created from the dot notation versions of the JSON objects contained within the config file. To understand how the environment variables notation works, see Environment Variables.
All the Pump environment variables have the prefix TYK_PMP_. The environment variables take precedence over the values in the configuration file.
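For instance, the purge_delay option documented below maps onto its environment variable like this (the value shown is purely illustrative):

```shell
# Override the top-level purge_delay option via its environment variable.
# The value 10 is illustrative, not a recommended setting.
export TYK_PMP_PURGEDELAY=10
```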
purge_delay
EV: TYK_PMP_PURGEDELAY
Type: int
The number of seconds the Pump waits between checking for analytics data and purging it from Redis.
purge_chunk
EV: TYK_PMP_PURGECHUNK
Type: int64
The maximum number of records to pull from Redis at a time. If it's unset or 0, all the analytics records in Redis are pulled. If it's set, storage_expiration_time is used to reset the analytics record TTL.
storage_expiration_time
EV: TYK_PMP_STORAGEEXPIRATIONTIME
Type: int64
The number of seconds for the analytics records TTL. It only works if purge_chunk is enabled. Defaults to 60 seconds.
dont_purge_uptime_data
EV: TYK_PMP_DONTPURGEUPTIMEDATA
Type: bool
Setting this to false will create a pump that pushes uptime data to the Uptime Pump, so the Dashboard can read it. Disable by setting to true.
Mongo Uptime Pump
In uptime_pump_config you can configure a Mongo uptime pump. By default, the uptime pump is of the mongo type, so it's not necessary to specify it here. The minimum required configurations for uptime pumps are:
collection_name - Determines the uptime collection name in Mongo. Defaults to tyk_uptime_analytics.
mongo_url - The uptime pump Mongo connection URL. It is usually something like “mongodb://username:password@{hostname:port},{hostname:port}/{db_name}”.
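Using the environment variables documented below, the same minimum Mongo uptime pump configuration could be sketched as follows (hostnames, credentials and the database name are placeholders):

```shell
# Minimum Mongo uptime pump settings via environment variables.
# The connection string values are placeholders, not real hosts.
export TYK_PMP_UPTIMEPUMPCONFIG_COLLECTIONNAME="tyk_uptime_analytics"
export TYK_PMP_UPTIMEPUMPCONFIG_MONGOURL="mongodb://username:password@host1:27017,host2:27017/tyk_analytics"
```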
uptime_pump_config.mongo_url
EV: TYK_PMP_UPTIMEPUMPCONFIG_MONGOURL
Type: string
uptime_pump_config.mongo_use_ssl
EV: TYK_PMP_UPTIMEPUMPCONFIG_MONGOUSESSL
Type: bool
Set to true to enable Mongo SSL connection.
uptime_pump_config.mongo_ssl_insecure_skip_verify
EV: TYK_PMP_UPTIMEPUMPCONFIG_MONGOSSLINSECURESKIPVERIFY
Type: bool
Allows the use of self-signed certificates when connecting to an encrypted MongoDB database.
uptime_pump_config.mongo_ssl_ca_file
EV: TYK_PMP_UPTIMEPUMPCONFIG_MONGOSSLCAFILE
Type: string
Path to the PEM file with trusted root certificates.
uptime_pump_config.collection_name
EV: TYK_PMP_UPTIMEPUMPCONFIG_COLLECTIONNAME
Type: string
Specifies the mongo collection name.
uptime_pump_config.collection_cap_enable
EV: TYK_PMP_UPTIMEPUMPCONFIG_COLLECTIONCAPENABLE
Type: bool
Enable collection capping. It’s used to set a maximum size of the collection.
SQL Uptime Pump
Supported in Tyk Pump v1.5.0+
In uptime_pump_config you can configure a SQL uptime pump. To do that, you need to add the field uptime_type with the value sql. You can also use different types of SQL uptime pumps, like postgres or sqlite, using the type field.
An example of a SQL Postgres uptime pump would be:
"uptime_pump_config": {
"uptime_type": "sql",
"type": "postgres",
"connection_string": "host=sql_host port=sql_port user=sql_usr dbname=dbname password=sql_pw",
"table_sharding": false
},
Take into account that you can also set the log_level field in the uptime_pump_config to debug, info or warning. By default, the SQL logger verbosity is silent.
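As a sketch, the same SQL Postgres uptime pump example could be expressed with the environment variables documented below (the connection string values are placeholders):

```shell
# SQL (Postgres) uptime pump via environment variables.
# Host, user, dbname and password are placeholders.
export TYK_PMP_UPTIMEPUMPCONFIG_UPTIMETYPE="sql"
export TYK_PMP_UPTIMEPUMPCONFIG_TYPE="postgres"
export TYK_PMP_UPTIMEPUMPCONFIG_CONNECTIONSTRING="host=sql_host port=5432 user=sql_usr dbname=dbname password=sql_pw"
```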
uptime_pump_config.type
EV: TYK_PMP_UPTIMEPUMPCONFIG_TYPE
Type: string
The supported and tested types are sqlite and postgres.
uptime_pump_config.connection_string
EV: TYK_PMP_UPTIMEPUMPCONFIG_CONNECTIONSTRING
Type: string
Specifies the connection string to the database.
uptime_pump_config.postgres
EV: TYK_PMP_UPTIMEPUMPCONFIG_POSTGRES
Type: PostgresConfig
Postgres configurations.
uptime_pump_config.postgres.prefer_simple_protocol
EV: TYK_PMP_UPTIMEPUMPCONFIG_POSTGRES_PREFERSIMPLEPROTOCOL
Type: bool
Disables implicit prepared statement usage.
uptime_pump_config.mysql
EV: TYK_PMP_UPTIMEPUMPCONFIG_MYSQL
Type: MysqlConfig
MySQL configurations.
uptime_pump_config.mysql.default_string_size
EV: TYK_PMP_UPTIMEPUMPCONFIG_MYSQL_DEFAULTSTRINGSIZE
Type: uint
Default size for string fields. Defaults to 256.
uptime_pump_config.mysql.disable_datetime_precision
EV: TYK_PMP_UPTIMEPUMPCONFIG_MYSQL_DISABLEDATETIMEPRECISION
Type: bool
Disable datetime precision, which is not supported before MySQL 5.6.
uptime_pump_config.mysql.dont_support_rename_index
EV: TYK_PMP_UPTIMEPUMPCONFIG_MYSQL_DONTSUPPORTRENAMEINDEX
Type: bool
Drop and create the index when renaming it, as renaming an index is not supported before MySQL 5.7 and in MariaDB.
uptime_pump_config.mysql.dont_support_rename_column
EV: TYK_PMP_UPTIMEPUMPCONFIG_MYSQL_DONTSUPPORTRENAMECOLUMN
Type: bool
Use change when renaming a column, as renaming a column is not supported before MySQL 8 and in MariaDB.
uptime_pump_config.mysql.skip_initialize_with_version
EV: TYK_PMP_UPTIMEPUMPCONFIG_MYSQL_SKIPINITIALIZEWITHVERSION
Type: bool
Auto configure based on the current MySQL version.
uptime_pump_config.uptime_type
EV: TYK_PMP_UPTIMEPUMPCONFIG_UPTIMETYPE
Type: string
Determines the uptime type. Options are mongo and sql. Defaults to mongo.
The default environment variable prefix for each pump follows this format: TYK_PMP_PUMPS_{PUMP-NAME}_, for example TYK_PMP_PUMPS_KAFKA_.
You can also set custom names for each pump by specifying the pump type. For example, if you want a Kafka pump called PROD, you need to create TYK_PMP_PUMPS_PROD_TYPE=kafka and configure it using the TYK_PMP_PUMPS_PROD_ prefix.
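Following that rule, a minimal sketch of a Kafka pump named PROD could look like this (TYK_PMP_PUMPS_PROD_TIMEOUT is inferred from the documented prefix rule, and the timeout value is illustrative):

```shell
# Declare a Kafka pump under the custom name PROD.
export TYK_PMP_PUMPS_PROD_TYPE=kafka
# Any further settings for this pump reuse the TYK_PMP_PUMPS_PROD_ prefix,
# e.g. the per-pump timeout documented below (value is illustrative).
export TYK_PMP_PUMPS_PROD_TIMEOUT=5
```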
pumps.csv.name
EV: TYK_PMP_PUMPS_CSV_NAME
Type: string
Deprecated.
pumps.csv.type
EV: TYK_PMP_PUMPS_CSV_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type name.
pumps.csv.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (the APIs and orgs whose analytics records we want to send), and the fields skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list ones.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
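Assuming list values are passed as comma-separated strings (an assumption about how the environment variable parser handles []string fields), the same org filter could be sketched as:

```shell
# Allow only analytics records from org1 and org2 for the CSV pump.
# The comma-separated list syntax is an assumption about the env var parser.
export TYK_PMP_PUMPS_CSV_FILTERS_ORGSIDS="org1,org2"
```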
pumps.csv.filters.org_ids
EV: TYK_PMP_PUMPS_CSV_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.csv.filters.api_ids
EV: TYK_PMP_PUMPS_CSV_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.csv.filters.response_codes
EV: TYK_PMP_PUMPS_CSV_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.csv.filters.skip_org_ids
EV: TYK_PMP_PUMPS_CSV_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.csv.filters.skip_api_ids
EV: TYK_PMP_PUMPS_CSV_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.csv.filters.skip_response_codes
EV: TYK_PMP_PUMPS_CSV_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.csv.timeout
EV: TYK_PMP_PUMPS_CSV_TIMEOUT
Type: int
You can configure a different timeout for each pump with the configuration option timeout. Its default value is 0 seconds, which means that the pump will wait for the write operation indefinitely.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump doesn't have a configured timeout, and it takes longer to write than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If you do have a configured timeout, but writing still takes longer than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.csv.omit_detailed_recording
EV: TYK_PMP_PUMPS_CSV_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.csv.max_record_size
EV: TYK_PMP_PUMPS_CSV_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. This value defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.csv.meta.csv_dir
EV: TYK_PMP_PUMPS_CSV_META_CSVDIR
Type: string
The directory and the filename where the CSV data will be stored.
pumps.dogstatsd.name
EV: TYK_PMP_PUMPS_DOGSTATSD_NAME
Type: string
Deprecated.
pumps.dogstatsd.type
EV: TYK_PMP_PUMPS_DOGSTATSD_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type name.
pumps.dogstatsd.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (the APIs and orgs whose analytics records we want to send), and the fields skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list ones.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.dogstatsd.filters.org_ids
EV: TYK_PMP_PUMPS_DOGSTATSD_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.dogstatsd.filters.api_ids
EV: TYK_PMP_PUMPS_DOGSTATSD_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.dogstatsd.filters.response_codes
EV: TYK_PMP_PUMPS_DOGSTATSD_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.dogstatsd.filters.skip_org_ids
EV: TYK_PMP_PUMPS_DOGSTATSD_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.dogstatsd.filters.skip_api_ids
EV: TYK_PMP_PUMPS_DOGSTATSD_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.dogstatsd.filters.skip_response_codes
EV: TYK_PMP_PUMPS_DOGSTATSD_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.dogstatsd.timeout
EV: TYK_PMP_PUMPS_DOGSTATSD_TIMEOUT
Type: int
You can configure a different timeout for each pump with the configuration option timeout. Its default value is 0 seconds, which means that the pump will wait for the write operation indefinitely.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump doesn't have a configured timeout, and it takes longer to write than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If you do have a configured timeout, but writing still takes longer than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.dogstatsd.omit_detailed_recording
EV: TYK_PMP_PUMPS_DOGSTATSD_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.dogstatsd.max_record_size
EV: TYK_PMP_PUMPS_DOGSTATSD_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. This value defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.dogstatsd.meta.namespace
EV: TYK_PMP_PUMPS_DOGSTATSD_META_NAMESPACE
Type: string
Prefix for your metrics sent to Datadog.
pumps.dogstatsd.meta.address
EV: TYK_PMP_PUMPS_DOGSTATSD_META_ADDRESS
Type: string
Address of the Datadog agent, including host and port.
pumps.dogstatsd.meta.sample_rate
EV: TYK_PMP_PUMPS_DOGSTATSD_META_SAMPLERATE
Type: float64
Defaults to 1, which equates to 100% of requests. To sample at 50%, set to 0.5.
pumps.dogstatsd.meta.async_uds
EV: TYK_PMP_PUMPS_DOGSTATSD_META_ASYNCUDS
Type: bool
Enable async UDS over UDP (see https://github.com/Datadog/datadog-go#unix-domain-sockets-client).
pumps.dogstatsd.meta.async_uds_write_timeout_seconds
EV: TYK_PMP_PUMPS_DOGSTATSD_META_ASYNCUDSWRITETIMEOUT
Type: int
Integer write timeout in seconds if async_uds: true.
pumps.dogstatsd.meta.buffered
EV: TYK_PMP_PUMPS_DOGSTATSD_META_BUFFERED
Type: bool
Enable buffering of messages.
pumps.dogstatsd.meta.buffered_max_messages
EV: TYK_PMP_PUMPS_DOGSTATSD_META_BUFFEREDMAXMESSAGES
Type: int
Max messages in a single datagram if buffered: true. Defaults to 16.
pumps.dogstatsd.meta.tags
EV: TYK_PMP_PUMPS_DOGSTATSD_META_TAGS
Type: []string
List of tags to be added to the metric. The possible options are listed in the below example.
If no tag is specified, the fallback behavior is to use the below tags:
path
method
response_code
api_version
api_name
api_id
org_id
tracked
oauth_id
Note that this configuration can generate significant charges due to the unbounded nature of the path tag.
"dogstatsd": {
"type": "dogstatsd",
"meta": {
"address": "localhost:8125",
"namespace": "pump",
"async_uds": true,
"async_uds_write_timeout_seconds": 2,
"buffered": true,
"buffered_max_messages": 32,
"sample_rate": 0.5,
"tags": [
"method",
"response_code",
"api_version",
"api_name",
"api_id",
"org_id",
"tracked",
"path",
"oauth_id"
]
}
},
On startup, you should see the loaded configs when initializing the dogstatsd pump:
[May 10 15:23:44] INFO dogstatsd: initializing pump
[May 10 15:23:44] INFO dogstatsd: namespace: pump.
[May 10 15:23:44] INFO dogstatsd: sample_rate: 50%
[May 10 15:23:44] INFO dogstatsd: buffered: true, max_messages: 32
[May 10 15:23:44] INFO dogstatsd: async_uds: true, write_timeout: 2s
pumps.elasticsearch.name
EV: TYK_PMP_PUMPS_ELASTICSEARCH_NAME
Type: string
Deprecated.
pumps.elasticsearch.type
EV: TYK_PMP_PUMPS_ELASTICSEARCH_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type name.
pumps.elasticsearch.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (the APIs and orgs whose analytics records we want to send), and the fields skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list ones.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.elasticsearch.filters.org_ids
EV: TYK_PMP_PUMPS_ELASTICSEARCH_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.elasticsearch.filters.api_ids
EV: TYK_PMP_PUMPS_ELASTICSEARCH_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.elasticsearch.filters.response_codes
EV: TYK_PMP_PUMPS_ELASTICSEARCH_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.elasticsearch.filters.skip_org_ids
EV: TYK_PMP_PUMPS_ELASTICSEARCH_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.elasticsearch.filters.skip_api_ids
EV: TYK_PMP_PUMPS_ELASTICSEARCH_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.elasticsearch.filters.skip_response_codes
EV: TYK_PMP_PUMPS_ELASTICSEARCH_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.elasticsearch.timeout
EV: TYK_PMP_PUMPS_ELASTICSEARCH_TIMEOUT
Type: int
You can configure a different timeout for each pump with the configuration option timeout. Its default value is 0 seconds, which means that the pump will wait for the write operation indefinitely.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump doesn't have a configured timeout, and it takes longer to write than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If you do have a configured timeout, but writing still takes longer than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.elasticsearch.omit_detailed_recording
EV: TYK_PMP_PUMPS_ELASTICSEARCH_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.elasticsearch.max_record_size
EV: TYK_PMP_PUMPS_ELASTICSEARCH_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. This value defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.elasticsearch.meta.index_name
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_INDEXNAME
Type: string
The name of the index that all the analytics data will be placed in. Defaults to “tyk_analytics”.
pumps.elasticsearch.meta.elasticsearch_url
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_ELASTICSEARCHURL
Type: string
If sniffing is disabled, the URL that all data will be sent to. Defaults to “http://localhost:9200”.
pumps.elasticsearch.meta.use_sniffing
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_ENABLESNIFFING
Type: bool
If sniffing is enabled, the “elasticsearch_url” will be used to make a request to get a list of all the nodes in the cluster; the returned addresses will then be used. Defaults to false.
pumps.elasticsearch.meta.document_type
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_DOCUMENTTYPE
Type: string
The type of the document that is created in ES. Defaults to “tyk_analytics”.
pumps.elasticsearch.meta.rolling_index
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_ROLLINGINDEX
Type: bool
Appends the date to the end of the index name, so each day's data is split into a different index name, e.g. tyk_analytics-2016.02.28. Defaults to false.
pumps.elasticsearch.meta.extended_stats
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_EXTENDEDSTATISTICS
Type: bool
If set to true, it will include the following additional fields: Raw Request, Raw Response and User Agent.
pumps.elasticsearch.meta.generate_id
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_GENERATEID
Type: bool
When enabled, generates an _id for outgoing records. This prevents duplicate records when retrying ES.
pumps.elasticsearch.meta.decode_base64
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_DECODEBASE64
Type: bool
Allows the base64 bits to be decoded before being passed to ES.
pumps.elasticsearch.meta.version
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_VERSION
Type: string
Specifies the ES version. Use “3” for ES 3.X, “5” for ES 5.X, “6” for ES 6.X, “7” for ES 7.X . Defaults to “3”.
pumps.elasticsearch.meta.disable_bulk
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_DISABLEBULK
Type: bool
Disable batch writing. Defaults to false.
pumps.elasticsearch.meta.bulk_config
Batch writing trigger configuration. Each option is OR'd with the others:
pumps.elasticsearch.meta.bulk_config.workers
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_BULKCONFIG_WORKERS
Type: int
Number of workers. Defaults to 1.
pumps.elasticsearch.meta.bulk_config.flush_interval
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_BULKCONFIG_FLUSHINTERVAL
Type: int
Specifies the time in seconds to flush the data and send it to ES. Disabled by default.
pumps.elasticsearch.meta.bulk_config.bulk_actions
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_BULKCONFIG_BULKACTIONS
Type: int
Specifies the number of requests needed to flush the data and send it to ES. Defaults to 1000 requests. If needed, it can be disabled with -1.
pumps.elasticsearch.meta.bulk_config.bulk_size
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_BULKCONFIG_BULKSIZE
Type: int
Specifies the size (in bytes) needed to flush the data and send it to ES. Defaults to 5MB. If needed, it can be disabled with -1.
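As a sketch, a bulk configuration that flushes every 10 seconds or every 500 records, whichever trigger fires first, could be set with the environment variables above (all values are illustrative):

```shell
# Elasticsearch bulk writing triggers; values are illustrative.
export TYK_PMP_PUMPS_ELASTICSEARCH_META_BULKCONFIG_WORKERS=2
export TYK_PMP_PUMPS_ELASTICSEARCH_META_BULKCONFIG_FLUSHINTERVAL=10
export TYK_PMP_PUMPS_ELASTICSEARCH_META_BULKCONFIG_BULKACTIONS=500
```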
pumps.elasticsearch.meta.auth_api_key_id
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_AUTHAPIKEYID
Type: string
API Key ID used for APIKey auth in ES. It's sent to ES in the Authorization header as ApiKey base64(auth_api_key_id:auth_api_key).
pumps.elasticsearch.meta.auth_api_key
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_AUTHAPIKEY
Type: string
API Key used for APIKey auth in ES. It's sent to ES in the Authorization header as ApiKey base64(auth_api_key_id:auth_api_key).
pumps.elasticsearch.meta.auth_basic_username
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_USERNAME
Type: string
Basic auth username. It's sent to ES in the Authorization header as username:password encoded in base64.
pumps.elasticsearch.meta.auth_basic_password
EV: TYK_PMP_PUMPS_ELASTICSEARCH_META_PASSWORD
Type: string
Basic auth password. It's sent to ES in the Authorization header as username:password encoded in base64.
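For example, Elasticsearch API key authentication could be sketched as follows (the key ID and secret are placeholders; Pump builds the ApiKey Authorization header from them):

```shell
# Elasticsearch API key auth; both values are placeholders.
export TYK_PMP_PUMPS_ELASTICSEARCH_META_AUTHAPIKEYID="my_key_id"
export TYK_PMP_PUMPS_ELASTICSEARCH_META_AUTHAPIKEY="my_key_secret"
```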
pumps.graylog.name
EV: TYK_PMP_PUMPS_GRAYLOG_NAME
Type: string
Deprecated.
pumps.graylog.type
EV: TYK_PMP_PUMPS_GRAYLOG_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type name.
pumps.graylog.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (the APIs and orgs whose analytics records we want to send), and the fields skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list ones.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.graylog.filters.org_ids
EV: TYK_PMP_PUMPS_GRAYLOG_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.graylog.filters.api_ids
EV: TYK_PMP_PUMPS_GRAYLOG_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.graylog.filters.response_codes
EV: TYK_PMP_PUMPS_GRAYLOG_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.graylog.filters.skip_org_ids
EV: TYK_PMP_PUMPS_GRAYLOG_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.graylog.filters.skip_api_ids
EV: TYK_PMP_PUMPS_GRAYLOG_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.graylog.filters.skip_response_codes
EV: TYK_PMP_PUMPS_GRAYLOG_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.graylog.timeout
EV: TYK_PMP_PUMPS_GRAYLOG_TIMEOUT
Type: int
You can configure a different timeout for each pump with the configuration option timeout. Its default value is 0 seconds, which means that the pump will wait for the write operation indefinitely.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump doesn't have a configured timeout, and it takes longer to write than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If you do have a configured timeout, but writing still takes longer than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.graylog.omit_detailed_recording
EV: TYK_PMP_PUMPS_GRAYLOG_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.graylog.max_record_size
EV: TYK_PMP_PUMPS_GRAYLOG_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. This value defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.graylog.meta.host
EV: TYK_PMP_PUMPS_GRAYLOG_META_GRAYLOGHOST
Type: string
Graylog host.
pumps.graylog.meta.port
EV: TYK_PMP_PUMPS_GRAYLOG_META_GRAYLOGPORT
Type: int
Graylog port.
pumps.graylog.meta.tags
EV: TYK_PMP_PUMPS_GRAYLOG_META_TAGS
Type: []string
List of tags to be added to the metric. The possible options are listed in the below example.
If no tag is specified, the fallback behavior is to not send anything. The possible values are:
path
method
response_code
api_version
api_name
api_id
org_id
tracked
oauth_id
raw_request
raw_response
request_time
ip_address
pumps.influx.name
EV: TYK_PMP_PUMPS_INFLUX_NAME
Type: string
Deprecated.
pumps.influx.type
EV: TYK_PMP_PUMPS_INFLUX_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type name.
pumps.influx.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (the APIs and orgs whose analytics records we want to send), and the fields skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list ones.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.influx.filters.org_ids
EV: TYK_PMP_PUMPS_INFLUX_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.influx.filters.api_ids
EV: TYK_PMP_PUMPS_INFLUX_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.influx.filters.response_codes
EV: TYK_PMP_PUMPS_INFLUX_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.influx.filters.skip_org_ids
EV: TYK_PMP_PUMPS_INFLUX_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.influx.filters.skip_api_ids
EV: TYK_PMP_PUMPS_INFLUX_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.influx.filters.skip_response_codes
EV: TYK_PMP_PUMPS_INFLUX_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.influx.timeout
EV: TYK_PMP_PUMPS_INFLUX_TIMEOUT
Type: int
You can configure a different timeout for each pump with the configuration option timeout. Its default value is 0 seconds, which means that the pump will wait for the write operation indefinitely.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump doesn't have a configured timeout, and it takes longer to write than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If you do have a configured timeout, but writing still takes longer than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.influx.omit_detailed_recording
EV: TYK_PMP_PUMPS_INFLUX_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.influx.max_record_size
EV: TYK_PMP_PUMPS_INFLUX_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. This value defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.influx.meta.database_name
EV: TYK_PMP_PUMPS_INFLUX_META_DATABASENAME
Type: string
InfluxDB pump database name.
pumps.influx.meta.address
EV: TYK_PMP_PUMPS_INFLUX_META_ADDR
Type: string
InfluxDB pump host.
pumps.influx.meta.username
EV: TYK_PMP_PUMPS_INFLUX_META_USERNAME
Type: string
InfluxDB pump database username.
pumps.influx.meta.password
EV: TYK_PMP_PUMPS_INFLUX_META_PASSWORD
Type: string
InfluxDB pump database password.
pumps.influx.meta.fields
EV: TYK_PMP_PUMPS_INFLUX_META_FIELDS
Type: []string
Define which Analytics fields should be sent to InfluxDB. Check the available fields in the example below. The default value is ["method", "path", "response_code", "api_key", "time_stamp", "api_version", "api_name", "api_id", "org_id", "oauth_id", "raw_request", "request_time", "raw_response", "ip_address"].
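Assuming list values are passed as comma-separated strings (an assumption about how the environment variable parser handles []string fields), sending only a subset of fields could be sketched as:

```shell
# Send only a subset of analytics fields to InfluxDB.
# The comma-separated list syntax is an assumption about the env var parser.
export TYK_PMP_PUMPS_INFLUX_META_FIELDS="method,path,response_code,request_time"
```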
pumps.kafka.name
EV: TYK_PMP_PUMPS_KAFKA_NAME
Type: string
Deprecated.
pumps.kafka.type
EV: TYK_PMP_PUMPS_KAFKA_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type name.
pumps.kafka.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (the APIs and orgs whose analytics records we want to send), and the fields skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list ones.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.kafka.filters.org_ids
EV: TYK_PMP_PUMPS_KAFKA_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.kafka.filters.api_ids
EV: TYK_PMP_PUMPS_KAFKA_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.kafka.filters.response_codes
EV: TYK_PMP_PUMPS_KAFKA_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.kafka.filters.skip_org_ids
EV: TYK_PMP_PUMPS_KAFKA_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.kafka.filters.skip_api_ids
EV: TYK_PMP_PUMPS_KAFKA_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.kafka.filters.skip_response_codes
EV: TYK_PMP_PUMPS_KAFKA_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.kafka.timeout
EV: TYK_PMP_PUMPS_KAFKA_TIMEOUT
Type: int
You can configure a different timeout for each pump with the timeout configuration option. Its default value is 0 seconds, which means that the pump will wait indefinitely for the write operation to complete.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump does not have a configured timeout and writing takes longer than the purge loop interval set in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a pump has a configured timeout but writing still takes longer than the purge loop interval set in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.kafka.omit_detailed_recording
EV: TYK_PMP_PUMPS_KAFKA_OMITDETAILEDRECORDING
Type: bool
Setting this to true will prevent the raw_request and raw_response fields from being written for each request. Defaults to false.
pumps.kafka.max_record_size
EV: TYK_PMP_PUMPS_KAFKA_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. Defaults to 0: if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.kafka.meta.broker
EV: TYK_PMP_PUMPS_KAFKA_META_BROKER
Type: []string
The list of brokers used to discover the partitions available on the kafka cluster. E.g. “localhost:9092”.
pumps.kafka.meta.client_id
EV: TYK_PMP_PUMPS_KAFKA_META_CLIENTID
Type: string
Unique identifier for client connections established with Kafka.
pumps.kafka.meta.topic
EV: TYK_PMP_PUMPS_KAFKA_META_TOPIC
Type: string
The topic that the writer will produce messages to.
pumps.kafka.meta.timeout
EV: TYK_PMP_PUMPS_KAFKA_META_TIMEOUT
Type: time.Duration
The maximum amount of time the writer will wait for a connect or write to complete.
pumps.kafka.meta.compressed
EV: TYK_PMP_PUMPS_KAFKA_META_COMPRESSED
Type: bool
Enables the “github.com/golang/snappy” codec for compressing Kafka messages. Defaults to false.
pumps.kafka.meta.meta_data
EV: TYK_PMP_PUMPS_KAFKA_META_METADATA
Type: map[string]string
Can be used to set custom metadata inside the kafka message.
pumps.kafka.meta.use_ssl
EV: TYK_PMP_PUMPS_KAFKA_META_USESSL
Type: bool
Enables SSL connection.
pumps.kafka.meta.ssl_insecure_skip_verify
EV: TYK_PMP_PUMPS_KAFKA_META_SSLINSECURESKIPVERIFY
Type: bool
Controls whether the pump client verifies the kafka server’s certificate chain and host name.
pumps.kafka.meta.ssl_cert_file
EV: TYK_PMP_PUMPS_KAFKA_META_SSLCERTFILE
Type: string
Can be used to set a custom certificate file for authentication with kafka.
pumps.kafka.meta.ssl_key_file
EV: TYK_PMP_PUMPS_KAFKA_META_SSLKEYFILE
Type: string
Can be used to set a custom key file for authentication with kafka.
pumps.kafka.meta.sasl_mechanism
EV: TYK_PMP_PUMPS_KAFKA_META_SASLMECHANISM
Type: string
SASL mechanism configuration. Only “plain” and “scram” are supported.
pumps.kafka.meta.sasl_username
EV: TYK_PMP_PUMPS_KAFKA_META_USERNAME
Type: string
SASL username.
pumps.kafka.meta.sasl_password
EV: TYK_PMP_PUMPS_KAFKA_META_PASSWORD
Type: string
SASL password.
pumps.kafka.meta.sasl_algorithm
EV: TYK_PMP_PUMPS_KAFKA_META_ALGORITHM
Type: string
SASL algorithm: the algorithm used by the scram mechanism. It can be sha-512 or sha-256. Defaults to “sha-256”.
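Putting the broker, TLS and SASL options above together, a kafka pump configuration could look like the sketch below. The broker address, topic, client id and credentials are placeholders, not defaults:

```json
"kafka": {
    "type": "kafka",
    "meta": {
        "broker": ["localhost:9092"],
        "client_id": "tyk-pump",
        "topic": "tyk-analytics",
        "use_ssl": true,
        "sasl_mechanism": "scram",
        "sasl_username": "user",
        "sasl_password": "password",
        "sasl_algorithm": "sha-256",
        "meta_data": {"env": "production"}
    }
}
```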
pumps.logzio.name
EV: TYK_PMP_PUMPS_LOGZIO_NAME
Type: string
Deprecated.
pumps.logzio.type
EV: TYK_PMP_PUMPS_LOGZIO_TYPE
Type: string
Sets the pump type. This is required when the pump key name does not match the pump type.
pumps.logzio.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (the APIs, orgs and response codes whose analytics records we want to send), while skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list configurations.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.logzio.filters.org_ids
EV: TYK_PMP_PUMPS_LOGZIO_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.logzio.filters.api_ids
EV: TYK_PMP_PUMPS_LOGZIO_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.logzio.filters.response_codes
EV: TYK_PMP_PUMPS_LOGZIO_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.logzio.filters.skip_org_ids
EV: TYK_PMP_PUMPS_LOGZIO_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.logzio.filters.skip_api_ids
EV: TYK_PMP_PUMPS_LOGZIO_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.logzio.filters.skip_response_codes
EV: TYK_PMP_PUMPS_LOGZIO_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.logzio.timeout
EV: TYK_PMP_PUMPS_LOGZIO_TIMEOUT
Type: int
You can configure a different timeout for each pump with the timeout configuration option. Its default value is 0 seconds, which means that the pump will wait indefinitely for the write operation to complete.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump does not have a configured timeout and writing takes longer than the purge loop interval set in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a pump has a configured timeout but writing still takes longer than the purge loop interval set in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.logzio.omit_detailed_recording
EV: TYK_PMP_PUMPS_LOGZIO_OMITDETAILEDRECORDING
Type: bool
Setting this to true will prevent the raw_request and raw_response fields from being written for each request. Defaults to false.
pumps.logzio.max_record_size
EV: TYK_PMP_PUMPS_LOGZIO_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. Defaults to 0: if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.logzio.meta.check_disk_space
EV: TYK_PMP_PUMPS_LOGZIO_META_CHECKDISKSPACE
Type: bool
Set the sender to check whether it crosses the maximum allowed disk usage. Default value is true.
pumps.logzio.meta.disk_threshold
EV: TYK_PMP_PUMPS_LOGZIO_META_DISKTHRESHOLD
Type: int
Set the disk queue threshold; once the threshold is crossed, the sender will not enqueue the received logs. Default value is 98 (percentage of disk).
pumps.logzio.meta.drain_duration
EV: TYK_PMP_PUMPS_LOGZIO_META_DRAINDURATION
Type: string
Set the drain duration (flush logs on disk). Default value is 3s.
pumps.logzio.meta.queue_dir
EV: TYK_PMP_PUMPS_LOGZIO_META_QUEUEDIR
Type: string
The directory for the queue.
pumps.logzio.meta.token
EV: TYK_PMP_PUMPS_LOGZIO_META_TOKEN
Type: string
Token for sending data to your Logz.io account.
pumps.logzio.meta.url
EV: TYK_PMP_PUMPS_LOGZIO_META_URL
Type: string
Use this if you do not want to use the default Logz.io URL, e.g. when using a proxy. Default is https://listener.logz.io:8071.
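Combining the options above, a logzio pump configuration might look like the following sketch; the token and queue directory are placeholders:

```json
"logzio": {
    "type": "logzio",
    "meta": {
        "token": "<your-logzio-token>",
        "url": "https://listener.logz.io:8071",
        "check_disk_space": true,
        "disk_threshold": 98,
        "drain_duration": "3s",
        "queue_dir": "/tmp/logzio-queue"
    }
}
```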
pumps.moesif.name
EV: TYK_PMP_PUMPS_MOESIF_NAME
Type: string
Deprecated.
pumps.moesif.type
EV: TYK_PMP_PUMPS_MOESIF_TYPE
Type: string
Sets the pump type. This is required when the pump key name does not match the pump type.
pumps.moesif.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (the APIs, orgs and response codes whose analytics records we want to send), while skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list configurations.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.moesif.filters.org_ids
EV: TYK_PMP_PUMPS_MOESIF_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.moesif.filters.api_ids
EV: TYK_PMP_PUMPS_MOESIF_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.moesif.filters.response_codes
EV: TYK_PMP_PUMPS_MOESIF_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.moesif.filters.skip_org_ids
EV: TYK_PMP_PUMPS_MOESIF_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.moesif.filters.skip_api_ids
EV: TYK_PMP_PUMPS_MOESIF_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.moesif.filters.skip_response_codes
EV: TYK_PMP_PUMPS_MOESIF_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.moesif.timeout
EV: TYK_PMP_PUMPS_MOESIF_TIMEOUT
Type: int
You can configure a different timeout for each pump with the timeout configuration option. Its default value is 0 seconds, which means that the pump will wait indefinitely for the write operation to complete.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump does not have a configured timeout and writing takes longer than the purge loop interval set in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a pump has a configured timeout but writing still takes longer than the purge loop interval set in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.moesif.omit_detailed_recording
EV: TYK_PMP_PUMPS_MOESIF_OMITDETAILEDRECORDING
Type: bool
Setting this to true will prevent the raw_request and raw_response fields from being written for each request. Defaults to false.
pumps.moesif.max_record_size
EV: TYK_PMP_PUMPS_MOESIF_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. Defaults to 0: if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.moesif.meta.application_id
EV: TYK_PMP_PUMPS_MOESIF_META_APPLICATIONID
Type: string
Moesif Application Id. You can find your Moesif Application Id in the Moesif Dashboard -> Top Right Menu -> API Keys. Moesif recommends creating separate Application Ids for each environment, such as Production, Staging, and Development, to keep data isolated.
pumps.moesif.meta.request_header_masks
EV: TYK_PMP_PUMPS_MOESIF_META_REQUESTHEADERMASKS
Type: []string
An option to mask a specific request header field.
pumps.moesif.meta.response_header_masks
EV: TYK_PMP_PUMPS_MOESIF_META_RESPONSEHEADERMASKS
Type: []string
An option to mask a specific response header field.
pumps.moesif.meta.request_body_masks
EV: TYK_PMP_PUMPS_MOESIF_META_REQUESTBODYMASKS
Type: []string
An option to mask a specific request body field.
pumps.moesif.meta.response_body_masks
EV: TYK_PMP_PUMPS_MOESIF_META_RESPONSEBODYMASKS
Type: []string
An option to mask a specific response body field.
pumps.moesif.meta.disable_capture_request_body
EV: TYK_PMP_PUMPS_MOESIF_META_DISABLECAPTUREREQUESTBODY
Type: bool
An option to disable logging of the request body. Default value is false.
pumps.moesif.meta.disable_capture_response_body
EV: TYK_PMP_PUMPS_MOESIF_META_DISABLECAPTURERESPONSEBODY
Type: bool
An option to disable logging of the response body. Default value is false.
pumps.moesif.meta.user_id_header
EV: TYK_PMP_PUMPS_MOESIF_META_USERIDHEADER
Type: string
An optional field name to identify User from a request or response header.
pumps.moesif.meta.company_id_header
EV: TYK_PMP_PUMPS_MOESIF_META_COMPANYIDHEADER
Type: string
An optional field name to identify Company (Account) from a request or response header.
pumps.moesif.meta.enable_bulk
EV: TYK_PMP_PUMPS_MOESIF_META_ENABLEBULK
Type: bool
Set this to true to enable bulk_config.
pumps.moesif.meta.bulk_config
EV: TYK_PMP_PUMPS_MOESIF_META_BULKCONFIG
Type: map[string]interface{}
Batch writing trigger configuration.
"event_queue_size"
- (optional) An optional field name which specify the maximum number of events to hold in queue before sending to Moesif. In case of network issues when not able to connect/send event to Moesif, skips adding new events to the queue to prevent memory overflow. Type: int. Default value is10000
."batch_size"
- (optional) An optional field name which specify the maximum batch size when sending to Moesif. Type: int. Default value is200
."timer_wake_up_seconds"
- (optional) An optional field which specifies a time (every n seconds) how often background thread runs to send events to moesif. Type: int. Default value is2
seconds.
pumps.moesif.meta.authorization_header_name
EV: TYK_PMP_PUMPS_MOESIF_META_AUTHORIZATIONHEADERNAME
Type: string
An optional request header field name used to identify the User in Moesif. Default value is authorization.
pumps.moesif.meta.authorization_user_id_field
EV: TYK_PMP_PUMPS_MOESIF_META_AUTHORIZATIONUSERIDFIELD
Type: string
An optional field name used to parse the User from the authorization header in Moesif. Default value is sub.
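As an illustration of the Moesif options above, a possible configuration sketch; the Application Id and header names are placeholders, not defaults:

```json
"moesif": {
    "type": "moesif",
    "meta": {
        "application_id": "<your-moesif-application-id>",
        "request_header_masks": ["Authorization"],
        "user_id_header": "X-User-Id",
        "enable_bulk": true,
        "bulk_config": {
            "event_queue_size": 10000,
            "batch_size": 200,
            "timer_wake_up_seconds": 2
        }
    }
}
```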
pumps.mongo.name
EV: TYK_PMP_PUMPS_MONGO_NAME
Type: string
Deprecated.
pumps.mongo.type
EV: TYK_PMP_PUMPS_MONGO_TYPE
Type: string
Sets the pump type. This is required when the pump key name does not match the pump type.
pumps.mongo.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (the APIs, orgs and response codes whose analytics records we want to send), while skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list configurations.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.mongo.filters.org_ids
EV: TYK_PMP_PUMPS_MONGO_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.mongo.filters.api_ids
EV: TYK_PMP_PUMPS_MONGO_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.mongo.filters.response_codes
EV: TYK_PMP_PUMPS_MONGO_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.mongo.filters.skip_org_ids
EV: TYK_PMP_PUMPS_MONGO_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.mongo.filters.skip_api_ids
EV: TYK_PMP_PUMPS_MONGO_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.mongo.filters.skip_response_codes
EV: TYK_PMP_PUMPS_MONGO_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.mongo.timeout
EV: TYK_PMP_PUMPS_MONGO_TIMEOUT
Type: int
You can configure a different timeout for each pump with the timeout configuration option. Its default value is 0 seconds, which means that the pump will wait indefinitely for the write operation to complete.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump does not have a configured timeout and writing takes longer than the purge loop interval set in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a pump has a configured timeout but writing still takes longer than the purge loop interval set in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.mongo.omit_detailed_recording
EV: TYK_PMP_PUMPS_MONGO_OMITDETAILEDRECORDING
Type: bool
Setting this to true will prevent the raw_request and raw_response fields from being written for each request. Defaults to false.
pumps.mongo.max_record_size
EV: TYK_PMP_PUMPS_MONGO_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. Defaults to 0: if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.mongo.meta.mongo_use_ssl
EV: TYK_PMP_PUMPS_MONGO_META_MONGOUSESSL
Type: bool
Set to true to enable Mongo SSL connection.
pumps.mongo.meta.mongo_ssl_insecure_skip_verify
EV: TYK_PMP_PUMPS_MONGO_META_MONGOSSLINSECURESKIPVERIFY
Type: bool
Allows the use of self-signed certificates when connecting to an encrypted MongoDB database.
pumps.mongo.meta.mongo_ssl_ca_file
EV: TYK_PMP_PUMPS_MONGO_META_MONGOSSLCAFILE
Type: string
Path to the PEM file with trusted root certificates.
pumps.mongo.meta.collection_name
EV: TYK_PMP_PUMPS_MONGO_META_COLLECTIONNAME
Type: string
Specifies the mongo collection name.
pumps.mongo.meta.max_insert_batch_size_bytes
EV: TYK_PMP_PUMPS_MONGO_META_MAXINSERTBATCHSIZEBYTES
Type: int
Maximum insert batch size for the mongo pump. If the batch being written surpasses this value, it will be sent in multiple batches. Defaults to 10MB.
pumps.mongo.meta.max_document_size_bytes
EV: TYK_PMP_PUMPS_MONGO_META_MAXDOCUMENTSIZEBYTES
Type: int
Maximum document size. If a document exceeds this value, it will be skipped. Defaults to 10MB.
pumps.mongo.meta.collection_cap_max_size_bytes
EV: TYK_PMP_PUMPS_MONGO_META_COLLECTIONCAPMAXSIZEBYTES
Type: int
Size in bytes of the capped collection on 64-bit architectures. Defaults to 5GB.
pumps.mongo.meta.collection_cap_enable
EV: TYK_PMP_PUMPS_MONGO_META_COLLECTIONCAPENABLE
Type: bool
Enables collection capping. This is used to set a maximum size for the collection.
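Combining the SSL and capping options above, a mongo pump configuration could look like this sketch; the connection string, CA file path and cap size are placeholders:

```json
"mongo": {
    "type": "mongo",
    "meta": {
        "collection_name": "tyk_analytics",
        "mongo_url": "mongodb://username:password@{hostname:port}/{db_name}",
        "mongo_use_ssl": true,
        "mongo_ssl_ca_file": "/path/to/ca.pem",
        "collection_cap_enable": true,
        "collection_cap_max_size_bytes": 1073741824
    }
}
```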
pumps.mongoaggregate.name
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_NAME
Type: string
Deprecated.
pumps.mongoaggregate.type
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_TYPE
Type: string
Sets the pump type. This is required when the pump key name does not match the pump type.
pumps.mongoaggregate.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (the APIs, orgs and response codes whose analytics records we want to send), while skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list configurations.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.mongoaggregate.filters.org_ids
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.mongoaggregate.filters.api_ids
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.mongoaggregate.filters.response_codes
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.mongoaggregate.filters.skip_org_ids
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.mongoaggregate.filters.skip_api_ids
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.mongoaggregate.filters.skip_response_codes
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.mongoaggregate.timeout
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_TIMEOUT
Type: int
You can configure a different timeout for each pump with the timeout configuration option. Its default value is 0 seconds, which means that the pump will wait indefinitely for the write operation to complete.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump does not have a configured timeout and writing takes longer than the purge loop interval set in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a pump has a configured timeout but writing still takes longer than the purge loop interval set in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.mongoaggregate.omit_detailed_recording
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_OMITDETAILEDRECORDING
Type: bool
Setting this to true will prevent the raw_request and raw_response fields from being written for each request. Defaults to false.
pumps.mongoaggregate.max_record_size
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. Defaults to 0: if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.mongoaggregate.meta.mongo_use_ssl
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_MONGOUSESSL
Type: bool
Set to true to enable Mongo SSL connection.
pumps.mongoaggregate.meta.mongo_ssl_insecure_skip_verify
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_MONGOSSLINSECURESKIPVERIFY
Type: bool
Allows the use of self-signed certificates when connecting to an encrypted MongoDB database.
pumps.mongoaggregate.meta.mongo_ssl_ca_file
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_MONGOSSLCAFILE
Type: string
Path to the PEM file with trusted root certificates.
pumps.mongoaggregate.meta.use_mixed_collection
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_USEMIXEDCOLLECTION
Type: bool
If set to true, your pump will store analytics in both your organisation-defined
collections (z_tyk_analyticz_aggregate_{ORG ID}) and your org-less tyk_analytics_aggregates
collection. When set to false, your pump will only store analytics in your org-defined
collection.
pumps.mongoaggregate.meta.track_all_paths
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_TRACKALLPATHS
Type: bool
Specifies whether aggregated data should be stored for all the endpoints. Defaults to false, which means aggregated data is only stored for tracked endpoints.
pumps.mongoaggregate.meta.ignore_tag_prefix_list
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_IGNORETAGPREFIXLIST
Type: []string
Specifies prefixes of tags that should be ignored.
pumps.mongoaggregate.meta.threshold_len_tag_list
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_THRESHOLDLENTAGLIST
Type: int
Determines the threshold for the number of tags in an aggregation. If the number of tags exceeds the threshold, an alert is printed. Defaults to 1000.
pumps.mongoaggregate.meta.store_analytics_per_minute
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_STOREANALYTICSPERMINUTE
Type: bool
Determines if the aggregations should be made per minute instead of per hour.
pumps.mongoaggregate.meta.ignore_aggregations
EV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_IGNOREAGGREGATIONSLIST
Type: []string
This list determines which aggregations are dropped and not stored in the collection. Possible values are: “APIID”, “errors”, “versions”, “apikeys”, “oauthids”, “geo”, “tags”, “endpoints”, “keyendpoints”, “oauthendpoints”, and “apiendpoints”.
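A sketch combining the aggregate options above. The mongo-pump-aggregate pump key used here is the conventional type name; verify it against your Tyk Pump version:

```json
"mongo-pump-aggregate": {
    "type": "mongo-pump-aggregate",
    "meta": {
        "mongo_url": "mongodb://username:password@{hostname:port}/{db_name}",
        "use_mixed_collection": true,
        "track_all_paths": false,
        "store_analytics_per_minute": false,
        "ignore_aggregations": ["geo", "tags"]
    }
}
```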
pumps.mongoselective.name
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_NAME
Type: string
Deprecated.
pumps.mongoselective.type
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_TYPE
Type: string
Sets the pump type. This is required when the pump key name does not match the pump type.
pumps.mongoselective.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (the APIs, orgs and response codes whose analytics records we want to send), while skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list configurations.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.mongoselective.filters.org_ids
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.mongoselective.filters.api_ids
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.mongoselective.filters.response_codes
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.mongoselective.filters.skip_org_ids
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.mongoselective.filters.skip_api_ids
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.mongoselective.filters.skip_response_codes
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.mongoselective.timeout
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_TIMEOUT
Type: int
You can configure a different timeout for each pump with the timeout configuration option. Its default value is 0 seconds, which means that the pump will wait indefinitely for the write operation to complete.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump does not have a configured timeout and writing takes longer than the purge loop interval set in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a pump has a configured timeout but writing still takes longer than the purge loop interval set in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.mongoselective.omit_detailed_recording
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_OMITDETAILEDRECORDING
Type: bool
Setting this to true will prevent the raw_request and raw_response fields from being written for each request. Defaults to false.
pumps.mongoselective.max_record_size
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. Defaults to 0: if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.mongoselective.meta.mongo_use_ssl
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MONGOUSESSL
Type: bool
Set to true to enable Mongo SSL connection.
pumps.mongoselective.meta.mongo_ssl_insecure_skip_verify
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MONGOSSLINSECURESKIPVERIFY
Type: bool
Allows the use of self-signed certificates when connecting to an encrypted MongoDB database.
pumps.mongoselective.meta.mongo_ssl_ca_file
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MONGOSSLCAFILE
Type: string
Path to the PEM file with trusted root certificates.
pumps.mongoselective.meta.max_insert_batch_size_bytes
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MAXINSERTBATCHSIZEBYTES
Type: int
Maximum insert batch size for the mongo selective pump. If the batch being written surpasses this value, it will be sent in multiple batches. Defaults to 10MB.
pumps.mongoselective.meta.max_document_size_bytes
EV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MAXDOCUMENTSIZEBYTES
Type: int
Maximum document size. If a document exceeds this value, it will be skipped. Defaults to 10MB.
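A sketch combining the selective pump options above. The mongo-pump-selective key used here is the conventional type name; verify it against your Tyk Pump version:

```json
"mongo-pump-selective": {
    "type": "mongo-pump-selective",
    "meta": {
        "mongo_url": "mongodb://username:password@{hostname:port}/{db_name}",
        "mongo_use_ssl": true,
        "max_insert_batch_size_bytes": 10485760,
        "max_document_size_bytes": 10485760
    }
}
```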
pumps.prometheus.name
EV: TYK_PMP_PUMPS_PROMETHEUS_NAME
Type: string
Deprecated.
pumps.prometheus.type
EV: TYK_PMP_PUMPS_PROMETHEUS_TYPE
Type: string
Sets the pump type. This is required when the pump key name does not match the pump type.
pumps.prometheus.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (the APIs, orgs and response codes whose analytics records we want to send), while skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list configurations.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.prometheus.filters.org_ids
EV: TYK_PMP_PUMPS_PROMETHEUS_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.prometheus.filters.api_ids
EV: TYK_PMP_PUMPS_PROMETHEUS_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.prometheus.filters.response_codes
EV: TYK_PMP_PUMPS_PROMETHEUS_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.prometheus.filters.skip_org_ids
EV: TYK_PMP_PUMPS_PROMETHEUS_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.prometheus.filters.skip_api_ids
EV: TYK_PMP_PUMPS_PROMETHEUS_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.prometheus.filters.skip_response_codes
EV: TYK_PMP_PUMPS_PROMETHEUS_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.prometheus.timeout
EV: TYK_PMP_PUMPS_PROMETHEUS_TIMEOUT
Type: int
You can configure a different timeout for each pump with the configuration option timeout. Its default value is 0 seconds, which means that the pump will wait for the writing operation indefinitely.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump doesn't have a configured timeout and it takes longer to write than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If you have configured a timeout but the write still takes longer than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.prometheus.omit_detailed_recording
EV: TYK_PMP_PUMPS_PROMETHEUS_OMITDETAILEDRECORDING
Type: bool
Setting this to true will skip writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.prometheus.max_record_size
EV: TYK_PMP_PUMPS_PROMETHEUS_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for Raw Request and Raw Response logs. It defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.prometheus.meta.listen_address
EV: TYK_PMP_PUMPS_PROMETHEUS_META_ADDR
Type: string
The full URL to your Prometheus instance, {HOST}:{PORT}. For example, localhost:9090.
pumps.prometheus.meta.path
EV: TYK_PMP_PUMPS_PROMETHEUS_META_PATH
Type: string
The path to the Prometheus collection. For example, /metrics.
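Combining the two meta options, a minimal Prometheus pump entry could look like the sketch below. The host and port are placeholders for your own deployment:

```json
"prometheus": {
  "type": "prometheus",
  "meta": {
    "listen_address": "localhost:9090",
    "path": "/metrics"
  }
}
```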
pumps.splunk.name
EV: TYK_PMP_PUMPS_SPLUNK_NAME
Type: string
Deprecated.
pumps.splunk.type
EV: TYK_PMP_PUMPS_SPLUNK_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type name.
pumps.splunk.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (APIs and orgs for which we want to send the analytics records), and the fields skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list configurations.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.splunk.filters.org_ids
EV: TYK_PMP_PUMPS_SPLUNK_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.splunk.filters.api_ids
EV: TYK_PMP_PUMPS_SPLUNK_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.splunk.filters.response_codes
EV: TYK_PMP_PUMPS_SPLUNK_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.splunk.filters.skip_org_ids
EV: TYK_PMP_PUMPS_SPLUNK_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.splunk.filters.skip_api_ids
EV: TYK_PMP_PUMPS_SPLUNK_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.splunk.filters.skip_response_codes
EV: TYK_PMP_PUMPS_SPLUNK_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.splunk.timeout
EV: TYK_PMP_PUMPS_SPLUNK_TIMEOUT
Type: int
You can configure a different timeout for each pump with the configuration option timeout. Its default value is 0 seconds, which means that the pump will wait for the writing operation indefinitely.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump doesn't have a configured timeout and it takes longer to write than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If you have configured a timeout but the write still takes longer than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.splunk.omit_detailed_recording
EV: TYK_PMP_PUMPS_SPLUNK_OMITDETAILEDRECORDING
Type: bool
Setting this to true will skip writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.splunk.max_record_size
EV: TYK_PMP_PUMPS_SPLUNK_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for Raw Request and Raw Response logs. It defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.splunk.meta.collector_token
EV: TYK_PMP_PUMPS_SPLUNK_META_COLLECTORTOKEN
Type: string
The token for the Splunk HTTP Event Collector.
pumps.splunk.meta.collector_url
EV: TYK_PMP_PUMPS_SPLUNK_META_COLLECTORURL
Type: string
Endpoint the Pump will send analytics to. It should look something like: https://splunk:8088/services/collector/event.
pumps.splunk.meta.ssl_insecure_skip_verify
EV: TYK_PMP_PUMPS_SPLUNK_META_SSLINSECURESKIPVERIFY
Type: bool
Controls whether the pump client verifies the Splunk server’s certificate chain and host name.
pumps.splunk.meta.ssl_cert_file
EV: TYK_PMP_PUMPS_SPLUNK_META_SSLCERTFILE
Type: string
SSL cert file location.
pumps.splunk.meta.ssl_key_file
EV: TYK_PMP_PUMPS_SPLUNK_META_SSLKEYFILE
Type: string
SSL cert key location.
pumps.splunk.meta.ssl_server_name
EV: TYK_PMP_PUMPS_SPLUNK_META_SSLSERVERNAME
Type: string
SSL Server name used in the TLS connection.
pumps.splunk.meta.obfuscate_api_keys
EV: TYK_PMP_PUMPS_SPLUNK_META_OBFUSCATEAPIKEYS
Type: bool
Controls whether the pump client should hide the API key. If you still need a substring of the value, check the next option. Default value is false.
pumps.splunk.meta.obfuscate_api_keys_length
EV: TYK_PMP_PUMPS_SPLUNK_META_OBFUSCATEAPIKEYSLENGTH
Type: int
Defines the number of characters to keep from the end of the API key. obfuscate_api_keys should be set to true. Default value is 0.
pumps.splunk.meta.fields
EV: TYK_PMP_PUMPS_SPLUNK_META_FIELDS
Type: []string
Define which Analytics fields should participate in the Splunk event. Check the available fields in the example below. Default value is ["method", "path", "response_code", "api_key", "time_stamp", "api_version", "api_name", "api_id", "org_id", "oauth_id", "raw_request", "request_time", "raw_response", "ip_address"].
pumps.splunk.meta.ignore_tag_prefix_list
EV: TYK_PMP_PUMPS_SPLUNK_META_IGNORETAGPREFIXLIST
Type: []string
Chooses which tags should be ignored by the Splunk Pump. Keep in mind that the tag name and value are hyphenated. Default value is [].
pumps.splunk.meta.enable_batch
EV: TYK_PMP_PUMPS_SPLUNK_META_ENABLEBATCH
Type: bool
If this is set to true, the pump will send the analytics records to Splunk in batches. Default value is false.
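A Splunk pump entry combining the meta options above might look like this sketch. The token and URL are placeholders for your own HTTP Event Collector endpoint:

```json
"splunk": {
  "type": "splunk",
  "meta": {
    "collector_token": "<your-collector-token>",
    "collector_url": "https://splunk:8088/services/collector/event",
    "ssl_insecure_skip_verify": false,
    "obfuscate_api_keys": true,
    "obfuscate_api_keys_length": 4,
    "enable_batch": true
  }
}
```

With this configuration the pump batches records and stores only the last 4 characters of each API key.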
pumps.sql.name
EV: TYK_PMP_PUMPS_SQL_NAME
Type: string
Deprecated.
pumps.sql.type
EV: TYK_PMP_PUMPS_SQL_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type name.
pumps.sql.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (APIs and orgs for which we want to send the analytics records), and the fields skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list configurations.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.sql.filters.org_ids
EV: TYK_PMP_PUMPS_SQL_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.sql.filters.api_ids
EV: TYK_PMP_PUMPS_SQL_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.sql.filters.response_codes
EV: TYK_PMP_PUMPS_SQL_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.sql.filters.skip_org_ids
EV: TYK_PMP_PUMPS_SQL_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.sql.filters.skip_api_ids
EV: TYK_PMP_PUMPS_SQL_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.sql.filters.skip_response_codes
EV: TYK_PMP_PUMPS_SQL_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.sql.timeout
EV: TYK_PMP_PUMPS_SQL_TIMEOUT
Type: int
You can configure a different timeout for each pump with the configuration option timeout. Its default value is 0 seconds, which means that the pump will wait for the writing operation indefinitely.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump doesn't have a configured timeout and it takes longer to write than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If you have configured a timeout but the write still takes longer than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.sql.omit_detailed_recording
EV: TYK_PMP_PUMPS_SQL_OMITDETAILEDRECORDING
Type: bool
Setting this to true will skip writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.sql.max_record_size
EV: TYK_PMP_PUMPS_SQL_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for Raw Request and Raw Response logs. It defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.sql.meta.type
EV: TYK_PMP_PUMPS_SQL_META_TYPE
Type: string
The supported and tested types are sqlite and postgres.
pumps.sql.meta.connection_string
EV: TYK_PMP_PUMPS_SQL_META_CONNECTIONSTRING
Type: string
Specifies the connection string to the database.
pumps.sql.meta.postgres
Postgres configurations.
pumps.sql.meta.postgres.prefer_simple_protocol
EV: TYK_PMP_PUMPS_SQL_META_POSTGRES_PREFERSIMPLEPROTOCOL
Type: bool
Disables implicit prepared statement usage.
pumps.sql.meta.mysql
Mysql configurations.
pumps.sql.meta.mysql.default_string_size
EV: TYK_PMP_PUMPS_SQL_META_MYSQL_DEFAULTSTRINGSIZE
Type: uint
Default size for string fields. Defaults to 256.
pumps.sql.meta.mysql.disable_datetime_precision
EV: TYK_PMP_PUMPS_SQL_META_MYSQL_DISABLEDATETIMEPRECISION
Type: bool
Disables datetime precision, which is not supported before MySQL 5.6.
pumps.sql.meta.mysql.dont_support_rename_index
EV: TYK_PMP_PUMPS_SQL_META_MYSQL_DONTSUPPORTRENAMEINDEX
Type: bool
Drops and re-creates the index when renaming it, since renaming an index is not supported before MySQL 5.7 and in MariaDB.
pumps.sql.meta.mysql.dont_support_rename_column
EV: TYK_PMP_PUMPS_SQL_META_MYSQL_DONTSUPPORTRENAMECOLUMN
Type: bool
Uses change when renaming a column, since renaming a column is not supported before MySQL 8 and in MariaDB.
pumps.sql.meta.mysql.skip_initialize_with_version
EV: TYK_PMP_PUMPS_SQL_META_MYSQL_SKIPINITIALIZEWITHVERSION
Type: bool
Auto-configures based on the current MySQL version.
pumps.sql.meta.table_sharding
EV: TYK_PMP_PUMPS_SQL_META_TABLESHARDING
Type: bool
Specifies whether all the analytics records are stored in one table or in multiple tables (one per day). By default, false. If false, all the records are stored in the tyk_analytics table. If true, the records for each day are stored in a tyk_analytics_YYYYMMDD table, where YYYYMMDD changes depending on the date.
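As with every option in this reference, the sharding flag can also be toggled through the environment variable shown in its EV entry, rather than by editing the config file. A minimal shell sketch:

```shell
# Override table sharding for the SQL pump via its documented
# environment variable; this takes precedence over the config file.
export TYK_PMP_PUMPS_SQL_META_TABLESHARDING=true

# Any tyk-pump process started from this shell inherits the override.
echo "$TYK_PMP_PUMPS_SQL_META_TABLESHARDING"
```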
pumps.sql.meta.log_level
EV: TYK_PMP_PUMPS_SQL_META_LOGLEVEL
Type: string
Specifies the SQL log verbosity. The possible values are: info, error and warning. By default, the value is silent, which means that it won't log any SQL query.
pumps.sql.meta.batch_size
EV: TYK_PMP_PUMPS_SQL_META_BATCHSIZE
Type: int
Specifies the number of records to be written per batch. By default, it writes a maximum of 1000 records per batch.
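Putting the SQL meta options together, a sketch of a Postgres-backed SQL pump entry. The connection string values are placeholders for your own database:

```json
"sql": {
  "type": "sql",
  "meta": {
    "type": "postgres",
    "connection_string": "host=localhost port=5432 user=postgres dbname=tyk_analytics password=<password>",
    "table_sharding": true,
    "log_level": "error",
    "batch_size": 1000
  }
}
```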
pumps.sqlaggregate.name
EV: TYK_PMP_PUMPS_SQLAGGREGATE_NAME
Type: string
Deprecated.
pumps.sqlaggregate.type
EV: TYK_PMP_PUMPS_SQLAGGREGATE_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type name.
pumps.sqlaggregate.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (APIs and orgs for which we want to send the analytics records), and the fields skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list configurations.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.sqlaggregate.filters.org_ids
EV: TYK_PMP_PUMPS_SQLAGGREGATE_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.sqlaggregate.filters.api_ids
EV: TYK_PMP_PUMPS_SQLAGGREGATE_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.sqlaggregate.filters.response_codes
EV: TYK_PMP_PUMPS_SQLAGGREGATE_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.sqlaggregate.filters.skip_org_ids
EV: TYK_PMP_PUMPS_SQLAGGREGATE_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.sqlaggregate.filters.skip_api_ids
EV: TYK_PMP_PUMPS_SQLAGGREGATE_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.sqlaggregate.filters.skip_response_codes
EV: TYK_PMP_PUMPS_SQLAGGREGATE_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.sqlaggregate.timeout
EV: TYK_PMP_PUMPS_SQLAGGREGATE_TIMEOUT
Type: int
You can configure a different timeout for each pump with the configuration option timeout. Its default value is 0 seconds, which means that the pump will wait for the writing operation indefinitely.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump doesn't have a configured timeout and it takes longer to write than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If you have configured a timeout but the write still takes longer than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.sqlaggregate.omit_detailed_recording
EV: TYK_PMP_PUMPS_SQLAGGREGATE_OMITDETAILEDRECORDING
Type: bool
Setting this to true will skip writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.sqlaggregate.max_record_size
EV: TYK_PMP_PUMPS_SQLAGGREGATE_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for Raw Request and Raw Response logs. It defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.sqlaggregate.meta.type
EV: TYK_PMP_PUMPS_SQLAGGREGATE_META_TYPE
Type: string
The supported and tested types are sqlite and postgres.
pumps.sqlaggregate.meta.connection_string
EV: TYK_PMP_PUMPS_SQLAGGREGATE_META_CONNECTIONSTRING
Type: string
Specifies the connection string to the database.
pumps.sqlaggregate.meta.postgres
EV: TYK_PMP_PUMPS_SQLAGGREGATE_META_POSTGRES
Type: PostgresConfig
Postgres configurations.
pumps.sqlaggregate.meta.postgres.prefer_simple_protocol
EV: TYK_PMP_PUMPS_SQLAGGREGATE_META_POSTGRES_PREFERSIMPLEPROTOCOL
Type: bool
Disables implicit prepared statement usage.
pumps.sqlaggregate.meta.mysql
EV: TYK_PMP_PUMPS_SQLAGGREGATE_META_MYSQL
Type: MysqlConfig
Mysql configurations.
pumps.sqlaggregate.meta.mysql.default_string_size
EV: TYK_PMP_PUMPS_SQLAGGREGATE_META_MYSQL_DEFAULTSTRINGSIZE
Type: uint
Default size for string fields. Defaults to 256.
pumps.sqlaggregate.meta.mysql.disable_datetime_precision
EV: TYK_PMP_PUMPS_SQLAGGREGATE_META_MYSQL_DISABLEDATETIMEPRECISION
Type: bool
Disables datetime precision, which is not supported before MySQL 5.6.
pumps.sqlaggregate.meta.mysql.dont_support_rename_index
EV: TYK_PMP_PUMPS_SQLAGGREGATE_META_MYSQL_DONTSUPPORTRENAMEINDEX
Type: bool
Drops and re-creates the index when renaming it, since renaming an index is not supported before MySQL 5.7 and in MariaDB.
pumps.sqlaggregate.meta.mysql.dont_support_rename_column
EV: TYK_PMP_PUMPS_SQLAGGREGATE_META_MYSQL_DONTSUPPORTRENAMECOLUMN
Type: bool
Uses change when renaming a column, since renaming a column is not supported before MySQL 8 and in MariaDB.
pumps.sqlaggregate.meta.mysql.skip_initialize_with_version
EV: TYK_PMP_PUMPS_SQLAGGREGATE_META_MYSQL_SKIPINITIALIZEWITHVERSION
Type: bool
Auto-configures based on the current MySQL version.
pumps.sqlaggregate.meta.track_all_paths
EV: TYK_PMP_PUMPS_SQLAGGREGATE_META_TRACKALLPATHS
Type: bool
Specifies whether it should store aggregated data for all the endpoints. By default, false, which means that it only stores aggregated data for tracked endpoints.
pumps.sqlaggregate.meta.ignore_tag_prefix_list
EV: TYK_PMP_PUMPS_SQLAGGREGATE_META_IGNORETAGPREFIXLIST
Type: []string
Specifies prefixes of tags that should be ignored.
pumps.sqlaggregate.meta.store_analytics_per_minute
EV: TYK_PMP_PUMPS_SQLAGGREGATE_META_STOREANALYTICSPERMINUTE
Type: bool
Determines if the aggregations should be made per minute instead of per hour.
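A sketch of a sqlaggregate pump entry using the meta options above. The type string and connection values are illustrative placeholders:

```json
"sqlaggregate": {
  "type": "sql_aggregate",
  "meta": {
    "type": "postgres",
    "connection_string": "host=localhost port=5432 user=postgres dbname=tyk_analytics password=<password>",
    "track_all_paths": true,
    "ignore_tag_prefix_list": ["internal-"],
    "store_analytics_per_minute": false
  }
}
```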
pumps.statsd.name
EV: TYK_PMP_PUMPS_STATSD_NAME
Type: string
Deprecated.
pumps.statsd.type
EV: TYK_PMP_PUMPS_STATSD_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type name.
pumps.statsd.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (APIs and orgs for which we want to send the analytics records), and the fields skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list configurations.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.statsd.filters.org_ids
EV: TYK_PMP_PUMPS_STATSD_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.statsd.filters.api_ids
EV: TYK_PMP_PUMPS_STATSD_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.statsd.filters.response_codes
EV: TYK_PMP_PUMPS_STATSD_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.statsd.filters.skip_org_ids
EV: TYK_PMP_PUMPS_STATSD_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.statsd.filters.skip_api_ids
EV: TYK_PMP_PUMPS_STATSD_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.statsd.filters.skip_response_codes
EV: TYK_PMP_PUMPS_STATSD_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.statsd.timeout
EV: TYK_PMP_PUMPS_STATSD_TIMEOUT
Type: int
You can configure a different timeout for each pump with the configuration option timeout. Its default value is 0 seconds, which means that the pump will wait for the writing operation indefinitely.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump doesn't have a configured timeout and it takes longer to write than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If you have configured a timeout but the write still takes longer than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.statsd.omit_detailed_recording
EV: TYK_PMP_PUMPS_STATSD_OMITDETAILEDRECORDING
Type: bool
Setting this to true will skip writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.statsd.max_record_size
EV: TYK_PMP_PUMPS_STATSD_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for Raw Request and Raw Response logs. It defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.statsd.meta.address
EV: TYK_PMP_PUMPS_STATSD_META_ADDRESS
Type: string
Address of statsd including host & port.
pumps.statsd.meta.fields
EV: TYK_PMP_PUMPS_STATSD_META_FIELDS
Type: []string
Defines which Analytics fields should have their own metric calculation.
pumps.statsd.meta.tags
EV: TYK_PMP_PUMPS_STATSD_META_TAGS
Type: []string
List of tags to be added to the metric.
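A statsd pump entry using the meta options above might be sketched as follows. The address and the chosen fields and tags are illustrative:

```json
"statsd": {
  "type": "statsd",
  "meta": {
    "address": "localhost:8125",
    "fields": ["request_time"],
    "tags": ["method", "response_code", "api_name"]
  }
}
```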
pumps.stdout.name
EV: TYK_PMP_PUMPS_STDOUT_NAME
Type: string
Deprecated.
pumps.stdout.type
EV: TYK_PMP_PUMPS_STDOUT_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type name.
pumps.stdout.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (APIs and orgs for which we want to send the analytics records), and the fields skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list configurations.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.stdout.filters.org_ids
EV: TYK_PMP_PUMPS_STDOUT_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.stdout.filters.api_ids
EV: TYK_PMP_PUMPS_STDOUT_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.stdout.filters.response_codes
EV: TYK_PMP_PUMPS_STDOUT_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.stdout.filters.skip_org_ids
EV: TYK_PMP_PUMPS_STDOUT_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.stdout.filters.skip_api_ids
EV: TYK_PMP_PUMPS_STDOUT_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.stdout.filters.skip_response_codes
EV: TYK_PMP_PUMPS_STDOUT_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.stdout.timeout
EV: TYK_PMP_PUMPS_STDOUT_TIMEOUT
Type: int
You can configure a different timeout for each pump with the configuration option timeout. Its default value is 0 seconds, which means that the pump will wait for the writing operation indefinitely.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump doesn't have a configured timeout and it takes longer to write than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If you have configured a timeout but the write still takes longer than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.stdout.omit_detailed_recording
EV: TYK_PMP_PUMPS_STDOUT_OMITDETAILEDRECORDING
Type: bool
Setting this to true will skip writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.stdout.max_record_size
EV: TYK_PMP_PUMPS_STDOUT_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for Raw Request and Raw Response logs. It defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.stdout.meta.format
EV: TYK_PMP_PUMPS_STDOUT_META_FORMAT
Type: string
Format of the analytics logs. The default is text if json is not explicitly specified. When JSON logging is used, all pump logs to stdout will be JSON.
pumps.stdout.meta.log_field_name
EV: TYK_PMP_PUMPS_STDOUT_META_LOGFIELDNAME
Type: string
Root name of the JSON object the analytics record is nested in.
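Together, the two meta options give a stdout pump sketch like this. The root field name shown is an illustrative choice:

```json
"stdout": {
  "type": "stdout",
  "meta": {
    "format": "json",
    "log_field_name": "tyk-analytics-record"
  }
}
```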
pumps.syslog.name
EV: TYK_PMP_PUMPS_SYSLOG_NAME
Type: string
Deprecated.
pumps.syslog.type
EV: TYK_PMP_PUMPS_SYSLOG_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type name.
pumps.syslog.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
"filters":{
"api_ids":[],
"org_ids":[],
"response_codes":[],
"skip_api_ids":[],
"skip_org_ids":[],
"skip_response_codes":[]
}
The fields api_ids, org_ids and response_codes work as an allow list (APIs and orgs for which we want to send the analytics records), and the fields skip_api_ids, skip_org_ids and skip_response_codes work as a block list.
Block list configurations always take priority over allow list configurations.
An example of configuration would be:
"csv": {
"type": "csv",
"filters": {
"org_ids": ["org1","org2"]
},
"meta": {
"csv_dir": "./bar"
}
}
pumps.syslog.filters.org_ids
EV: TYK_PMP_PUMPS_SYSLOG_FILTERS_ORGSIDS
Type: []string
Filters pump data by the whitelisted org_ids.
pumps.syslog.filters.api_ids
EV: TYK_PMP_PUMPS_SYSLOG_FILTERS_APIIDS
Type: []string
Filters pump data by the whitelisted api_ids.
pumps.syslog.filters.response_codes
EV: TYK_PMP_PUMPS_SYSLOG_FILTERS_RESPONSECODES
Type: []int
Filters pump data by the whitelisted response_codes.
pumps.syslog.filters.skip_org_ids
EV: TYK_PMP_PUMPS_SYSLOG_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by the blacklisted org_ids.
pumps.syslog.filters.skip_api_ids
EV: TYK_PMP_PUMPS_SYSLOG_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by the blacklisted api_ids.
pumps.syslog.filters.skip_response_codes
EV: TYK_PMP_PUMPS_SYSLOG_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by the blacklisted response_codes.
pumps.syslog.timeout
EV: TYK_PMP_PUMPS_SYSLOG_TIMEOUT
Type: int
You can configure a different timeout for each pump with the configuration option timeout. Its default value is 0 seconds, which means that the pump will wait for the writing operation indefinitely.
An example of this configuration would be:
"mongo": {
"type": "mongo",
"timeout":5,
"meta": {
"collection_name": "tyk_analytics",
"mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
}
}
If a pump doesn't have a configured timeout and it takes longer to write than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If you have configured a timeout but the write still takes longer than the value configured for the purge loop in the purge_delay config option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.syslog.omit_detailed_recording
EV: TYK_PMP_PUMPS_SYSLOG_OMITDETAILEDRECORDING
Type: bool
Setting this to true will skip writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.syslog.max_record_size
EV: TYK_PMP_PUMPS_SYSLOG_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for Raw Request and Raw Response logs. It defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
"csv": {
"type": "csv",
"max_record_size":1000,
"meta": {
"csv_dir": "./"
}
}
pumps.syslog.meta.transport
EV: TYK_PMP_PUMPS_SYSLOG_META_TRANSPORT
Type: string
Possible values are udp, tcp and tls, in string form.
pumps.syslog.meta.network_addr
EV: TYK_PMP_PUMPS_SYSLOG_META_NETWORKADDR
Type: string
Host and port combination of your syslog daemon, e.g. "localhost:5140".
pumps.syslog.meta.log_level
EV: TYK_PMP_PUMPS_SYSLOG_META_LOGLEVEL
Type: int
The severity level, an integer from 0-7, based on the standard syslog severity levels.
pumps.syslog.meta.tag
EV: TYK_PMP_PUMPS_SYSLOG_META_TAG
Type: string
Prefix tag
When working with FluentD, you should provide a FluentD Parser based on the OS you are using so that FluentD can correctly read the logs.
"syslog": {
  "name": "syslog",
  "meta": {
    "transport": "udp",
    "network_addr": "localhost:5140",
    "log_level": 6,
    "tag": "syslog-pump"
  }
}
analytics_storage_type
EV: TYK_PMP_ANALYTICSSTORAGETYPE
Type: string
Sets the analytics storage type, i.e. where the pump will fetch data from. Currently, only the redis option is supported.
analytics_storage_config
Example Redis storage configuration:
"analytics_storage_config": {
  "type": "redis",
  "host": "localhost",
  "port": 6379,
  "hosts": null,
  "username": "",
  "password": "",
  "database": 0,
  "optimisation_max_idle": 100,
  "optimisation_max_active": 0,
  "enable_cluster": false,
  "redis_use_ssl": false,
  "redis_ssl_insecure_skip_verify": false
}
analytics_storage_config.type
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_TYPE
Type: string
Deprecated.
analytics_storage_config.host
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_HOST
Type: string
Redis host value.
analytics_storage_config.port
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_PORT
Type: int
Redis port value.
analytics_storage_config.hosts
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_HOSTS
Type: map[string]string
Deprecated. Use Addrs instead.
analytics_storage_config.addrs
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_ADDRS
Type: []string
Use this instead of the host value if you're running a Redis cluster with multiple instances.
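As a sketch, a cluster-backed storage configuration might look like the following; the node addresses are placeholders for your own cluster members:

```json
"analytics_storage_config": {
  "type": "redis",
  "addrs": [
    "redis-node-1:6379",
    "redis-node-2:6379",
    "redis-node-3:6379"
  ],
  "enable_cluster": true
}
```

Note that addrs replaces the single host/port pair, and enable_cluster must be set to true for cluster mode.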
analytics_storage_config.master_name
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_MASTERNAME
Type: string
Sentinel redis master name.
analytics_storage_config.sentinel_password
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_SENTINELPASSWORD
Type: string
Sentinel redis password.
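Putting the Sentinel-related options together, a hedged sketch of a Sentinel-backed storage configuration could look like this (the sentinel addresses, master name and password are placeholder values, not defaults):

```json
"analytics_storage_config": {
  "type": "redis",
  "addrs": [
    "sentinel-1:26379",
    "sentinel-2:26379",
    "sentinel-3:26379"
  ],
  "master_name": "mymaster",
  "sentinel_password": "sentinel-secret"
}
```

Here addrs points at the Sentinel instances rather than the Redis servers themselves; the master is discovered via master_name.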
analytics_storage_config.username
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_USERNAME
Type: string
Redis username.
analytics_storage_config.password
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_PASSWORD
Type: string
Redis password.
analytics_storage_config.database
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_DATABASE
Type: int
Redis database.
analytics_storage_config.timeout
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_TIMEOUT
Type: int
How long to allow for new connections to be established (in milliseconds). Defaults to 5 seconds.
analytics_storage_config.optimisation_max_idle
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_MAXIDLE
Type: int
Maximum number of idle connections in the pool.
analytics_storage_config.optimisation_max_active
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_MAXACTIVE
Type: int
Maximum number of connections allocated by the pool at a given time. When zero, there is no limit on the number of connections in the pool. Defaults to 500.
analytics_storage_config.enable_cluster
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_ENABLECLUSTER
Type: bool
Enable this option if you are using a Redis cluster. Default is false.
analytics_storage_config.redis_key_prefix
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_REDISKEYPREFIX
Type: string
Prefix the redis key names. Defaults to “analytics-”.
analytics_storage_config.redis_use_ssl
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_REDISUSESSL
Type: bool
Set this to true to use SSL when connecting to Redis.
analytics_storage_config.redis_ssl_insecure_skip_verify
EV: TYK_PMP_ANALYTICSSTORAGECONFIG_REDISSSLINSECURESKIPVERIFY
Type: bool
Set this to true to tell Pump to ignore Redis' certificate validation.
statsd_connection_string
EV: TYK_PMP_STATSDCONNECTIONSTRING
Type: string
Connection string for StatsD monitoring. For more information, please see the Instrumentation docs.
statsd_prefix
EV: TYK_PMP_STATSDPREFIX
Type: string
Custom prefix value. For example, you can use separate prefixes for production and staging.
log_level
EV: TYK_PMP_LOGLEVEL
Type: string
Set the logger details for tyk-pump. The possible values are: info, debug, error and warn. By default, the log level is info.
log_format
EV: TYK_PMP_LOGFORMAT
Type: string
Set the logger format. The possible values are: text and json. By default, the log format is text.
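For example, to get more verbose, machine-readable output, the two logging options above can be combined like this:

```json
{
  "log_level": "debug",
  "log_format": "json"
}
```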
Health Check
From v2.9.4, we have introduced a /health endpoint to confirm the Pump is running. You need to configure the following settings. This endpoint returns an HTTP 200 OK response if the Pump is running.
health_check_endpoint_name
EV: TYK_PMP_HEALTHCHECKENDPOINTNAME
Type: string
The default is “hello”.
health_check_endpoint_port
EV: TYK_PMP_HEALTHCHECKENDPOINTPORT
Type: int
The default port is 8083.
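As an illustration, the defaults above correspond to the following configuration; with these values the health check should be reachable at http://localhost:8083/hello (assuming the Pump runs locally):

```json
{
  "health_check_endpoint_name": "hello",
  "health_check_endpoint_port": 8083
}
```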
omit_detailed_recording
EV: TYK_PMP_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to false.
max_record_size
EV: TYK_PMP_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for Raw Request and Raw Response logs. This value defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
"csv": {
  "type": "csv",
  "max_record_size": 1000,
  "meta": {
    "csv_dir": "./"
  }
}
omit_config_file
EV: TYK_PMP_OMITCONFIGFILE
Type: bool
Defines whether tyk-pump should ignore all the values in the configuration file. This is especially useful when setting all configuration via environment variables.