11/22/2021 | Press release | Distributed by Public on 11/22/2021 03:35
Visibility into system activity and behavior has become increasingly critical given organizations' widespread use of Amazon Web Services (AWS) and other cloud and serverless platforms.
With AWS, applications may be distributed horizontally across worker nodes, and microservices may run in Kubernetes clusters that interact with AWS managed services or in serverless functions. These resources generate vast amounts of data in various locations, including containers, which can be virtual and ephemeral, thus more difficult to monitor. These challenges make AWS observability a key practice for building and monitoring cloud-native applications.
Let's take a closer look at what observability in dynamic AWS environments means, why it's so important, and some AWS monitoring best practices.
Like general observability, AWS observability is the capacity to measure the current state of your AWS environment based on the data it generates, including its logs, metrics, and traces.
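The three signal types can be illustrated with a minimal, self-contained Python sketch. This is not how a real telemetry pipeline is built; the `checkout` logger, in-memory `metrics` dictionary, and `trace_id` are illustrative stand-ins for a proper logging, metrics, and tracing backend:

```python
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("checkout")

metrics = {}  # in-memory counters standing in for a real metrics backend


def record_metric(name, value=1):
    """Metric: a numeric measurement aggregated over time."""
    metrics[name] = metrics.get(name, 0) + value


def handle_request():
    trace_id = uuid.uuid4().hex           # trace: an ID that follows one request end to end
    start = time.perf_counter()
    log.info("request started trace_id=%s", trace_id)   # log: a timestamped event record
    record_metric("requests_total")
    elapsed = time.perf_counter() - start
    record_metric("request_seconds", elapsed)
    log.info("request finished trace_id=%s elapsed=%.6fs", trace_id, elapsed)
    return trace_id


handle_request()
```

Real observability platforms collect these same three signals automatically and correlate them, rather than leaving each application to roll its own.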
With their matrix of cloud services spanning multiple environments, AWS and other multicloud deployments can be more difficult to manage and monitor than traditional on-premises infrastructure. To cope with this complexity, IT pros need a clear understanding of what's happening, the context in which it's happening, and what's affected. With dependable, contextual observability data, teams can develop data-driven service-level agreements (SLAs) and service-level objectives (SLOs) that make their AWS infrastructure more reliable and resilient.
AWS provides a suite of technologies and serverless tools for running modern applications in the cloud, including serverless functions (AWS Lambda), managed Kubernetes (Amazon EKS), serverless containers (AWS Fargate), and monitoring (Amazon CloudWatch).
Serverless technologies can reduce management complexity. But like any other tool used in production, it's critical to understand how these technologies interact with the broader technology stack. If a user encounters an error page on a website, for example, it's vital to trace the behavior to the original source of failure.
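One common technique for tracing a user-facing error back to its original source is propagating a correlation ID through every downstream call, so that the error page and the failing backend log line share the same identifier. The sketch below illustrates the idea with hypothetical `frontend`, `checkout_service`, and `inventory_service` functions; real distributed tracing systems propagate such IDs across process and network boundaries automatically:

```python
import uuid


def frontend(request_id=None):
    """Entry point: assigns a correlation ID if one is not already present."""
    request_id = request_id or uuid.uuid4().hex
    try:
        return checkout_service(request_id)
    except RuntimeError as exc:
        # The shared ID ties the user-facing error page to the failing backend call.
        return f"error page (request_id={request_id}, cause={exc})"


def checkout_service(request_id):
    # Passes the correlation ID along to the next hop in the call chain.
    return inventory_service(request_id)


def inventory_service(request_id):
    # Simulated failure deep in the chain; the ID identifies this request in logs.
    raise RuntimeError(f"inventory lookup failed (request_id={request_id})")


print(frontend("abc123"))
```

Searching logs for the ID shown on the error page then leads directly to the failing service, instead of guessing which of many components misbehaved.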
While AWS provides the foundation for running serverless workloads, along with native tools for monitoring AWS-related workloads, it lacks comprehensive instrumentation for observability across the multicloud stack. As a result, application performance and security problems can go unnoticed without sufficient monitoring.
To gain insight into these problems, software engineers typically deploy application instrumentation frameworks that expose what applications and code are doing. These range from breakpoints and debuggers to logging instrumentation, and even manual processes such as reading log files by hand. The manual approach is usually effective only in smaller environments where applications are limited in scope; larger, multicloud environments call for automated, purpose-built observability tooling.
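The difference between manual and automatic instrumentation can be sketched in a few lines of Python. Here a decorator logs every call, result, and failure of a function without changing the function's own code; the `apply_discount` function is a made-up example, and real instrumentation frameworks apply this pattern across an entire codebase rather than one function at a time:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("instrumentation")


def instrument(func):
    """Wrap a function so every call and failure is logged automatically,
    instead of relying on breakpoints or reading log files by hand."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.info("call %s args=%r kwargs=%r", func.__name__, args, kwargs)
        try:
            result = func(*args, **kwargs)
            log.info("ok %s -> %r", func.__name__, result)
            return result
        except Exception:
            log.exception("fail %s", func.__name__)
            raise
    return wrapper


@instrument
def apply_discount(price, pct):
    return round(price * (1 - pct / 100), 2)


apply_discount(100.0, 15)  # emits call/ok log lines and returns 85.0
```

Because the wrapper is applied uniformly, the resulting telemetry has a consistent shape that downstream tools can parse, which is what makes automated analysis possible at scale.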
Powered by AI and automation at its core, Dynatrace turns your application data and log analytics into actionable insights and automatable SLOs.
As a long-standing AWS Advanced Technology Partner, Dynatrace integrates closely with AWS services with no code changes. Through auto-instrumentation, Dynatrace provides seamless end-to-end distributed tracing for AWS Lambda functions. Using OneAgent with Dynatrace Operator, Dynatrace combines observability for EKS clusters, nodes, and pods on AWS Fargate with distributed tracing, application metrics, and real user monitoring. Dynatrace ingests CloudWatch metrics and, as a launch partner for Amazon CloudWatch Metric Streams, can provide full observability of AWS services with a fast and direct push of metric data from the source to Dynatrace.
To learn more about how Dynatrace manages AWS observability, join us for an on-demand demo, AWS Observability with Serverless.