Precise answers from unified data in context
See how we're different
Observability overview

Dynatrace ingests petabytes of log data into its data lakehouse, Grail, and analyzes it there. The context of observability and security data is retained automatically and efficiently, making Grail the single log management and analytics platform that brings your IT, security, and business teams together to meet your service-level objectives.
Explore our unique approach to log management and analytics to see how Dynatrace is the only platform to provide log data, metrics, and analysis in the context of topology, traces, and user sessions – all with no schemas or storage tiers to manage.
Request demo
Free trial
Download product brief
LOG MANAGEMENT AND ANALYTICS
Logs are integrated throughout the platform, so any user you authorize can access them for troubleshooting and analysis.
Let's start with a simple use case. From the log viewer, examine logs from the production namespace for a Kubernetes frontend deployment, filtered to log level ERROR.
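For readers who prefer queries to clicks, the same filter can also be expressed in DQL. The sketch below is illustrative; the field names (k8s.namespace.name, k8s.deployment.name, loglevel) are assumptions about the semantic attributes attached to these logs, not taken from the tour.

```dql
// Sketch: replicate the log viewer filter in DQL.
// Field names are assumed semantic attributes; adjust to your environment.
fetch logs
| filter k8s.namespace.name == "production"
| filter k8s.deployment.name == "frontend"
| filter loglevel == "ERROR"
| sort timestamp desc
```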
Refine results further using any combination of the available log attributes, which are automatically detected or custom-defined.
Click on a log row to view all the log attributes. Any attribute can be quickly added to the filter.
Topology-based attributes are also automatically connected to log records and are hyperlinked to the associated host, process, cluster, or service view. No tagging required.
Now, navigate to the trace associated with this log event.
The trace view helps you understand code dependencies, timing, errors, and logs in the context of each request in the flow.
For advanced users, the Dynatrace Query Language (DQL) offers maximum flexibility, analyzing all types of observability data in a way that is easy to read, author, and automate.
For example, this query gets a count of created, updated, and deleted configurations by cluster tenant and user, all on demand with no indexing or rehydration.
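The tour doesn't show the query text, but a statement in this spirit might look like the sketch below. The log source and field names (audit.action, tenant, user) are hypothetical placeholders, not the tour's actual query.

```dql
// Hypothetical sketch of a configuration-audit count; field names are assumed.
fetch logs
| filter in(audit.action, {"CREATE", "UPDATE", "DELETE"})
| summarize changes = count(), by:{audit.action, tenant, user}
| sort changes desc
```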
Before we explore how to use DQL for advanced analytics, let's quickly look at how logs support troubleshooting with the Dynatrace software intelligence platform.
Troubleshooting basics

Here on a Problem Card, review the business impact, affected SLOs, and the root cause detected by Davis, Dynatrace's causal AI engine.
Davis has detected the root cause of this problem as a Failure Rate Increase.
Having done simple log filtering in this view, let's move on to advanced log analysis with DQL.
Advanced analytics

The DQL query is pre-loaded, but we want to build out the statement to summarize how many errors occurred each day.
First, the error message we want is embedded in the original log content; with DQL we can extract it with no upfront definition. Other tools force you to define a new custom field and re-ingest the logs.
When the query runs, the error message is parsed into its own field, demonstrating the power of schema-on-read.
Next, isolate the error message using the PARSE command and get a count with the SUMMARIZE command.
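As a sketch of what the PARSE and SUMMARIZE steps might look like together, assuming log lines shaped like "... Error: <message>" (both the pattern and the field name error_message are illustrative assumptions):

```dql
// Sketch: extract the error message at read time, then count occurrences.
// The pattern assumes lines like "... Error: <message>"; adjust to your format.
fetch logs
| filter loglevel == "ERROR"
| parse content, "LD 'Error: ' LD:error_message EOL"
| summarize errors = count(), by:{error_message}
```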
Lastly, adjust the SUMMARIZE command to group by day, and choose the bar chart visualization.

If you like, pin the chart to a dashboard and share the results with your team.
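The finished statement from the steps above might look like this sketch, using bin() to bucket timestamps by day; the parse pattern remains an assumption about the log format:

```dql
// Sketch: count parsed errors per day, ready to chart as a bar chart.
fetch logs
| filter loglevel == "ERROR"
| parse content, "LD 'Error: ' LD:error_message EOL"
| summarize errors = count(), by:{day = bin(timestamp, 1d)}
| sort day asc
```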
Log data is displayed on your dashboard alongside data from your SLOs, custom metrics, applications, services, and user sessions, keeping tabs on overall performance.