With countless monitoring tools and data silos, it's increasingly difficult to monitor and optimize app performance and availability. At Perform 2024, experts discussed how consolidating sprawling toolsets can help, and why unified observability is key.
Today’s organizations are drowning in data; volumes have grown well beyond humans’ ability to manage them. But the data deluge isn’t the only problem facing enterprises, as many also struggle with tool sprawl. Maintaining myriad monitoring tools, and the data silos that come with them, makes it increasingly difficult for enterprises to optimize application performance and availability.
IT and security teams must dedicate effort to managing and maintaining all these different tools, which diverts time and resources from critical innovation. Despite this, organizations continue to adopt multiple tools and software services to address their monitoring needs. In fact, according to recent Dynatrace research, the average multicloud environment spans 12 different platforms and services.
Enter unified observability, which presents data in intuitive, user-friendly ways to enable data gathering, analysis, and collaboration, while reducing mean time to repair (MTTR) and boosting application performance and availability.
At the Dynatrace Perform 2024 conference in Las Vegas, Safia Habib, lead solutions engineer at Dynatrace, joined Chris Conklin, technology executive for enterprise monitoring at TD Bank, as he discussed his journey from traditional monitoring to observability, getting stakeholder buy-in, and other challenges facing businesses along the way.
Tackling the tool sprawl problem
Tool sprawl is an often-overwhelming situation in which an organization uses too many different IT tools, dedicating each to a single use case. These tools and services are often siloed and expensive from both financial and productivity perspectives.
“When people start on that tool sprawl journey, [they] tend to forget the productivity part and only focus on the cost part, which is kind of counterproductive in the end,” Habib says. “If you have 10 tools and your team is looking at those 10 things, then they need to educate themselves on all of those [tools], they need to keep up with all [of them], and they need to keep learning about them, because it’s an ongoing journey.”
So, how do organizations end up in these unwieldy situations? Many organizations are adopting multicloud environments for their various business services. And as these environments flood organizations with data, they need to find ways to manage it, protect it, and derive value from it. This, in turn, leads them to adopt a different tool for each use case.
TD Bank was in such a situation. “We were basically inundated by maintaining products, which means we had to make sure our software was updated, we had to make sure our vulnerabilities were taken care of, monitoring capacity and upgrading infrastructure, and so on,” Conklin says. “Enough turns into enough, and then you have to realize it’s time … to start consolidating down to what provides you the most value.”
Making the journey from traditional monitoring to unified observability
To combat the challenges created by tool sprawl, TD Bank moved from traditional monitoring to unified observability from Dynatrace. Many mistake monitoring and observability for the same thing, but there are key differences to consider. While both collect, analyze, and use data to track the health of an application or service, observability goes a step further, helping teams understand what’s happening in a multicloud environment in context.
“Monitoring, in the traditional sense, is [best described as], ‘Something happened, here’s the alert, try to figure out what’s going on, and you go fix it,’” Conklin says. “Observability is [best described as], ‘Something happened. Here’s what it means impact-wise, and here are some data touchpoints on what else is breaking because of that.’ And having that full context can help you address those things going forward.”
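To make that contrast concrete, here is a minimal, hypothetical Python sketch. The service names, event fields, and functions are illustrative assumptions, not Dynatrace’s implementation: a traditional monitor emits a bare threshold alert, while an observability-style event carries the context Conklin describes, such as the affected service, its downstream dependents, and the user impact.

```python
from dataclasses import dataclass, field

# Traditional monitoring: a bare threshold check that only says *something* broke.
def monitor_cpu(cpu_percent: float, threshold: float = 90.0) -> str | None:
    if cpu_percent > threshold:
        return f"ALERT: CPU at {cpu_percent}% (threshold {threshold}%)"
    return None

# Observability-style event (hypothetical shape): the same signal, enriched with
# context that answers "what does this mean?" -- the impacted service, its
# downstream dependents, and the user-facing impact.
@dataclass
class ObservabilityEvent:
    signal: str
    service: str
    dependents: list[str] = field(default_factory=list)
    user_impact: str = ""

    def summary(self) -> str:
        return (f"{self.signal} on '{self.service}'; "
                f"also degrading: {', '.join(self.dependents) or 'none'}; "
                f"impact: {self.user_impact}")

if __name__ == "__main__":
    print(monitor_cpu(97.0))  # an alert with no context
    event = ObservabilityEvent(
        signal="CPU saturation",
        service="payments-api",
        dependents=["checkout-web", "order-service"],
        user_impact="checkout latency up ~40%",
    )
    print(event.summary())  # the same alert, with blast radius attached
```

The point of the sketch is the data shape: responders who receive the second output start with the blast radius instead of reconstructing it by hand.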
Not all observability is equal, however. Modern observability necessitates a holistic approach to collecting, processing, and analyzing data, going beyond logs, metrics, and traces to encompass security, user experience, and business impact.
“In the observability space, there is a lot of overlap that happens,” Habib says. “Most of the time, what I see in the field is that a lot of customers don’t even know what [the] capabilities are to figure out where that overlap is.”
This confusion inevitably leads to tool sprawl. “The more tools that you have, the more productivity that you’re losing,” Habib adds.
Overcoming resistance to technology change
While the benefits of adopting observability are clear, change is difficult, especially when it comes to technology. Selling the necessary stakeholders on moving away from the tools they’ve become accustomed to can therefore be challenging.
“It took a while for our partners to buy into it and to really see the value in it,” Conklin says. “You’re always going to have people that come in with their own train of thought and with their own preferred tool choice. You [may have someone who has] all the expertise in tool A, but we don’t actually have that or support that. And so, if we want to add that to the mix, then what we have to think about is maintenance, tool sprawl, everything.”
The key to getting organizational buy-in is showing the necessary stakeholders the value and ROI a solution delivers. But how do you measure that?
“I would say marry it to the customer experiences,” Conklin says. “Whatever tool it is, if it’s going to give you those insights instantly or help you minimize the impact on your customer or help you avoid impact on your customer [entirely], that’s the value proposition,” he continues. “That’s everything to do with the mean time to resolve, the impact assessments, the mean time to awareness—every bit of that.”
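As a rough illustration of how those measures can be computed, here is a hedged Python sketch over hypothetical incident records; the timestamps and field names are illustrative assumptions, not TD Bank or Dynatrace data. Mean time to awareness averages the gap between an incident’s onset and its detection, and mean time to resolve averages the gap between onset and resolution.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical incident records: when the problem began, when the team
# became aware of it, and when it was resolved.
incidents = [
    {"began": datetime(2024, 1, 3, 9, 0),
     "detected": datetime(2024, 1, 3, 9, 12),
     "resolved": datetime(2024, 1, 3, 10, 2)},
    {"began": datetime(2024, 1, 9, 14, 30),
     "detected": datetime(2024, 1, 9, 14, 33),
     "resolved": datetime(2024, 1, 9, 15, 1)},
]

def mean_minutes(deltas: list[timedelta]) -> float:
    """Average a list of durations, expressed in minutes."""
    return mean(d.total_seconds() for d in deltas) / 60

# Mean time to awareness: how long incidents go unnoticed on average.
mtta = mean_minutes([i["detected"] - i["began"] for i in incidents])
# Mean time to resolve: how long from onset to fix on average.
mttr = mean_minutes([i["resolved"] - i["began"] for i in incidents])

print(f"MTTA: {mtta:.1f} min, MTTR: {mttr:.1f} min")
```

Trending these two numbers before and after a tooling change is one simple way to express the value proposition Conklin describes in customer terms.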
How unified observability untangles the complicated web that tool sprawl creates
Unified observability and data security are critical to generating trustworthy insights. That’s where Dynatrace can help, combining observability and continuous runtime application security with its Davis AI to provide answers and intelligent automation from data at a massive scale.
“There is tremendous value in having a centralized eye, because you do focus on how we can leverage fewer tool sets to consolidate data and really let that data cultivate and get insights from it,” Conklin says. “If you have too many silos, it’s incredibly difficult to stitch it all together—and you can’t do that if you don’t have that centralized eye [that unified observability provides].”
For the latest news and insights from the Dynatrace Perform 2024 conference, check out our guide.