08/16/2022 | Press release | Distributed by Public on 08/16/2022 10:26
Dynatrace Log Monitoring is now enhanced with advanced processing, making the data locked in logs actionable and completing the observability picture on the Dynatrace platform. Easy-to-use pattern matching lets you extract any data from logs, transform and manipulate attributes, and filter or drop insignificant logs.
The data locked in your log files can be a goldmine for your application developers, operations teams, and your enterprise as a whole. However, it can be complicated, expensive, or even impossible to set up robust observability that makes use of this data.
Overcoming these issues can be complex and expensive in terms of additional time and effort. Pre-formatting and unifying data with domain-related attributes at the source, where the information is logged, might require software reconfiguration or even be impossible. Deploying log shippers that are expressly set up to reformat logs before they're sent to the observability tool increases overhead and complexity. This also adds maintenance requirements, slows time to market for new services, blurs the observability picture, and ultimately holds you back from making data-driven decisions.
Log files provide an unparalleled level of detail about the performance of your software. Whether a web server, mobile app, backend service, or other custom application, log data can provide you with deep insights into your software's operations and events. When log data is granularly structured in JSON format, it's fairly easy to extract and observe these details. For example, Dynatrace recently introduced the extraction of log-based metrics for JSON logs.
But there's a catch: there's no universally guaranteed format and structure for logs. For example, Apache access logs store each event as a single line while Java debug logs store each individual event across multiple lines. FortiGate traffic logs store data elements in key-value pairs while NGINX custom access logs store events in arrays. Add multi-event logs (think OpenLDAP access logs), multi-application logs (like Linux Syslog), nested JSONs, and CSV logs to the mix and the complexity becomes overwhelming.
Dynatrace now includes powerful log-processing capabilities for all types of log data. These capabilities make your log data actionable directly on the Dynatrace platform without requiring additional log preprocessors, shippers, handlers, or other software overhead.
Log data is processed on ingest, whether coming from a fleet of OneAgents deployed across your enterprise hosts or generic API ingest from cloud services. Formatting in single- or multi-line logs, key-value pairs or arrays, nested JSON, or CSVs is no longer an obstacle to making your log data actionable in your observability mission.
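To give a flavor of how varied formats become actionable, here is an illustrative sketch of a parsing rule written in Dynatrace Pattern Language; the attribute names (client.ip, http.status) are hypothetical, and the exact matchers available should be confirmed against the DPL reference:

PARSE(content, "IPADDR:client.ip LD 'status=' INT:http.status")

A rule like this would extract a client IP address and an HTTP status code from a free-form log line into typed attributes that can then be filtered, charted, or turned into metrics.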
Log processing enables you to extract data from logs, transform and manipulate attributes, and filter or drop insignificant records as they are ingested. For example, if a log record holds attributes that are irrelevant or need to be removed, the FIELDS_REMOVE() command can filter them out.
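As a minimal sketch, a processing rule that strips such attributes could look like the following; the attribute names are hypothetical examples, not fields from a real log source:

FIELDS_REMOVE(user.password, debug.payload)

Removing sensitive or noisy attributes at ingest keeps the stored log records lean and avoids retaining data you never intend to query.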
Similarly, the FIELDS_ADD() command computes new attributes from existing ones. For example, to combine the run times of two batch jobs into a single attribute:
FIELDS_ADD(batchjobs.time: (batchjob1.time + batchjob2.time))
The new attribute batchjobs.time can then be converted into a metric for monitoring.
The six new commands and dozens of functions for log processing, in combination with the power of Dynatrace Pattern Language, enable you to finally unlock the full value of the data locked in your log files.
Taking full advantage of these new capabilities might seem like a big effort, but it isn't. Dynatrace now offers built-in processing rules to support you in unlocking the full power of Dynatrace log processing.
These built-in rules enrich your logs with distributed traces and links to Real User Monitoring insights. They also cover most common technologies, like NGINX, HAProxy, Elasticsearch, and Cassandra. Logs from these technologies are parsed out of the box for straightforward analysis. At any time, you can inspect these built-in rules or switch them on and off.
Early users have already shared how much more value they get out of their logs with processing. You, too, can start taking full advantage of the insights locked in your log data.
We look forward to seeing how you put log processing to work and reviewing your feedback in the Dynatrace Community.