Insufficient Logging & Monitoring is not a direct vulnerability or threat; rather, it leaves the organisation blind to current active attacks, to previous attacks, and to the information needed during forensics to determine an attack's impact. Without this insight the organisation is vulnerable to future attacks through the same methods, or through backdoors planted in previous attacks, which may be even more difficult to detect.
The impact varies with the type of business the organisation conducts, but a few relatable examples are:
- A retailer might have customer information exposed, new product lines leaked, or merchandise stolen.
- A manufacturer could have supply chains disrupted, logistics altered, or CAM systems interrupted.
- A bank could have account holder information altered or deleted, financial systems disrupted, or partner integration compromised.
To mitigate Insufficient Logging & Monitoring, organisations must ensure that both application and transaction logs are kept and secured in separate data storage and analytics systems. All systems in the request/response path of the API must send their log information to a separate storage system so it can be correlated during forensics; if an API system is compromised, forensic information stored on that system cannot be trusted. Many options exist for collecting log information, but it is important that the information flow is unidirectional. Data from a compromised system must not be allowed to modify data already sent to the storage and analytics system, so implementations that use a shared drive, or tools like rsync, should be avoided: manipulation on the compromised system would change data on the storage system.
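As a minimal sketch of unidirectional log shipping, an rsyslog rule on each API host can push every log record over TCP to a central collector; the collector hostname below is a placeholder, not a value from this document:

```
# /etc/rsyslog.d/50-forward.conf (sketch)
# Forward all log records over TCP (@@ = TCP, @ = UDP) to the
# central collector. The source host only writes; it holds no
# credentials to read or modify records already stored there.
*.* @@logs.example.internal:514
```

Because delivery is push-only, a later compromise of the API host cannot rewrite records the collector has already persisted, unlike a shared drive or an rsync target.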
|Threat agents/Attack vectors|Security Weakness|Impacts|
|---|---|---|
|API Specific : Exploitability 2|Prevalence 3 : Detectability 1|Technical 2 : Business Specific|
|Attackers take advantage of lack of logging and monitoring to abuse systems without being noticed.|Without logging and monitoring, or with insufficient logging and monitoring, it is almost impossible to track suspicious activities and respond to them in a timely fashion.|Without visibility over on-going malicious activities, attackers have plenty of time to fully compromise systems.|
The APIM plays a significant role in the collection of transaction log information since it is part of the request/response path, in addition to producing its own application logs. The APIM should help facilitate transport of this information to separate storage and analytics systems, so that organisations are not left to build their own solutions, which may be vulnerable to the source system modifying the target system.
Various features of Tyk can enhance data collection:
- Tyk Gateway application logs can be configured to output to various third-party aggregation and error-reporting tools. The verbosity of the log can also be increased if needed.
- Tyk Analytics allows viewing the transaction log metadata and, as a configuration option, the whole request/response payloads.
- Tyk Pump allows shipping of transaction logs into Tyk Analytics as well as other systems.
- Tyk Instrumentation allows sending statistics and metrics to other services such as StatsD and NewRelic.
- Tyk Context Variables can be used to inject a correlation id (request_id) into requests as they pass through the Gateway. This facilitates tracking of requests as they pass through the remaining API infrastructure.
- Audit logs can be generated based on usage of the Tyk Dashboard. This provides a record of all interactions between the Dashboard and its users.
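To illustrate the context-variable feature above, the fragment below sketches a hypothetical Tyk API definition that enables context variables and injects the Gateway's request_id into every upstream request as a correlation header; the API name, version name, and the `X-Request-Id` header name are assumptions for illustration, not values from this document:

```json
{
  "name": "orders-api",
  "enable_context_vars": true,
  "version_data": {
    "versions": {
      "Default": {
        "name": "Default",
        "global_headers": {
          "X-Request-Id": "$tyk_context.request_id"
        }
      }
    }
  }
}
```

Downstream services that log this header allow forensics to stitch together a single request's path across the whole API infrastructure.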
It is important to differentiate, but not separate, application logs from transaction logs. Application logs are the records a program creates during its own operation: warnings, errors, stack traces, object state dumps, and so on. Transaction logs capture the request/response information handled by the application, i.e. the transactions the program operates on. Both are valuable in attack detection and forensics, and should be treated equally with regard to storage mechanisms and retention policies. In addition, the timestamp is often the only way to correlate an attack across different systems, so all systems must keep their clocks in sync; clock drift should be monitored and alerts sent when it exceeds thresholds.
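Since correlation depends on synchronised clocks, the drift alert can be expressed as a simple threshold check. This is a minimal sketch under assumed values: the two-second threshold is arbitrary, and in practice the reference time would come from an NTP source rather than being passed in directly:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical threshold: alert when a host's clock drifts more than
# two seconds from the reference (e.g. NTP-derived) time.
MAX_DRIFT = timedelta(seconds=2)

def drift_exceeded(local_time: datetime, reference_time: datetime,
                   max_drift: timedelta = MAX_DRIFT) -> bool:
    """Return True when the absolute clock drift passes the threshold."""
    return abs(local_time - reference_time) > max_drift

# Example: a host running 5 seconds ahead of the reference clock.
ref = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
print(drift_exceeded(ref + timedelta(seconds=5), ref))  # True -> raise alert
print(drift_exceeded(ref + timedelta(seconds=1), ref))  # False -> within tolerance
```

A monitoring job would run such a check per host and feed alerts into the same analytics system that holds the logs, so that out-of-sync hosts are flagged before their timestamps corrupt a forensic timeline.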