eBook

5 Key Considerations for Monitoring Pivotal Platform

Software is taking over the world

As a result, every business needs to embrace software as a core competency to ensure survival and prosperity. However, transforming into a software company is a significant task: building and running software today is harder than ever. And if you think it’s hard now, consider that, like most businesses, you are only at the beginning of the journey.

Speed and scale: a double-edged sword

You invested in Pivotal Platform to build and run your software at a speed and scale that will transform your business. And that’s where Pivotal Platform excels. But are you prepared for the complexity of building and running applications at speed and scale? As software development transitions to a cloud-native approach, you will be dealing with hundreds, if not thousands, of microservices and containers, as well as software-defined cloud infrastructure. The complexity you face today will soon be dwarfed, to the point that it becomes too immense for humans to handle.

No doubt you have invested in monitoring tools, probably lots of them over the years. But traditional monitoring tools don’t work in the new, dynamic world of speed and scale that Pivotal Platform enables. That’s why many analysts and industry leaders predict that more than 50% of enterprises will have to entirely replace their traditional monitoring tools in the next few years.

What killed traditional monitoring?
  • Manual effort

    Slow, manual deployment and configuration + manual upgrades and re-work for changing environments = a maximum of just 5% of apps monitored

  • Monitoring tool proliferation

    Multiple monitoring tools for different purposes with siloed teams looking at myopic data sets

  • Agent complexity

    A complex mix of agents for diverse technologies and types, each with different deployment, installation, and configuration processes

  • Charts and data, but no answers

    Data from multiple agents and different sources may look great, but it’s just a bunch of charts on a dashboard with no answers

Which brings us to why we’ve written this guide. We understand how important your software is. And we know that choosing the right monitoring platform is mandatory if you want to live by speed and scale, and not die by speed and scale.

We worked with your peers from across industries to arrive at our insights

As a Pivotal Premier Technology Partner, Dynatrace supports some of the world’s most recognized brands. We help them to automate their operations and release better software faster. We have experience monitoring the largest cloud and Pivotal Platform implementations. This gives us a unique perspective on how enterprises manage the significant complexity challenges of speed and scale. Examples include:

  • A large retailer managing 2,000,000 transactions per second
  • An airline with 9,200 agents on 550 hosts capturing 300,000 measurements per minute and more than 3,000,000 events per minute
  • A large health insurer with 2,200 agents on 350 hosts, with 900,000 events per minute and 200,000 measurements per minute

Read on to reveal five critical factors that dictate the right monitoring platform for Pivotal Platform.

At Dynatrace, we experienced our own transformation—embracing cloud, automation, containers, microservices, and NoOps. We saw the shift early on and transitioned from delivering software through a traditional on-premise model to become the successful hybrid-SaaS innovator we are today. Read the brief, Game changing – From zero to DevOps cloud in 80 days, to learn more.

  • Speed

    26 releases per year

  • Agility

    5,000 cloud deployments

  • Quality

    93% reduction in production bugs

  • Innovation

    Hundreds of developers, no operations

  • Customers

    Ecstatic

Chapter 1

Hybrid, multi-cloud is the norm

Insight
Enterprises are rapidly adopting cloud infrastructure as a service (IaaS), platform as a service (PaaS), and function as a service (FaaS) to increase agility and accelerate innovation. Cloud adoption is so widespread that hybrid, multi-cloud is now the norm. According to RightScale, 81% of enterprises are executing a multi-cloud strategy.

  • Hybrid cloud
    As enterprises migrate applications to the cloud or build new cloud-native applications, they also maintain traditional applications and infrastructure. Over time, this balance will shift from the traditional tech stack to the new stack, but both new and old will continue to coexist and interact.

  • Multi-cloud
    Different cloud platforms have different features and benefits, technologies, levels of abstraction, prices, and geographic footprints. These differences make each cloud suitable for specific services. Enterprises started with a single cloud provider but quickly embraced multiple clouds, resulting in highly distributed application and infrastructure architectures.


Challenge
The result of hybrid multi-cloud is bimodal IT—the practice of building and running two distinctly different application and infrastructure environments. Enterprises need to continue to enhance and maintain existing, relatively static environments. They also need to build and run new applications and scalable, dynamic, software-defined infrastructure in the cloud.

Putting traditional IT to one side for a moment and focusing solely on multiple cloud platforms, the frequent outcome is monitoring tool proliferation. This happens because teams operate in silos, despite critical interdependencies between services running across clouds.

The challenge of multiple monitoring tools across clouds is further compounded when we bring traditional IT back into focus. And with it, the need to monitor and manage a range of existing technologies that also have service interdependencies with cloud environments.


Key consideration
Simplicity and cost savings were the drivers of early cloud adoption. But today, cloud use has evolved into complex, dynamic landscapes that incorporate multiple clouds as well as traditional on-premise technologies. Being able to seamlessly monitor the full technology stack across multiple clouds, as well as traditional on-premise technology stacks, is critical to automating operations, no matter how highly distributed the applications and infrastructure.

Chapter 2

Microservices and containers introduce speed

Insight
Microservices and containers are revolutionizing the way applications are built and deployed. They provide tremendous benefits in terms of speed, agility, and scale. In fact, 98% of enterprise development teams expect microservices to become their default architecture. IDC predicts that by 2022, 90% of all apps will feature microservices architectures.

Challenge
Close to three in four CIOs (72%) say that monitoring containerized microservices in real time is almost impossible. Moving to microservices running in containers makes it harder to gain visibility into environments. Each container acts like a tiny server, multiplying the number of points you need to monitor, and containers live, scale, and die based on health and demand. As you scale your Pivotal Platform environment from on-premise to cloud to multi-cloud, the number of dependencies and the volume of data generated increase exponentially. This makes it seem impossible to understand the system as a whole.

The traditional approach to instrumenting applications involves manual deployment of multiple agents. When environments consist of thousands of containers with orchestrated scaling, manual instrumentation becomes unfeasible and severely restricts your ability to innovate.

Key consideration
A manual approach to instrumenting, discovering, and monitoring microservices and containers will not work. For dynamic, scalable platforms like Pivotal Platform, a fully automated approach becomes a requirement: for agent deployment, for continuous discovery of containers, and for monitoring the applications and services running within them.
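
As an illustration, here is a minimal sketch of what automated agent rollout can look like on a Cloud Foundry-based platform such as Pivotal Platform. It assumes agent injection happens at staging time when an app is bound to a monitoring service instance; the instance name `monitoring-agent` and the parsing of `cf apps` output are illustrative assumptions, not a specific product’s integration.

```python
import subprocess

# Hypothetical service instance whose binding triggers monitoring-agent
# injection at staging time (e.g., via a buildpack integration).
SERVICE_INSTANCE = "monitoring-agent"

def cf(*args: str) -> str:
    """Run a cf CLI command and return its stdout."""
    result = subprocess.run(["cf", *args], capture_output=True,
                            text=True, check=True)
    return result.stdout

def list_apps() -> list[str]:
    """Parse app names from `cf apps` output (first column after the header row)."""
    rows = [line for line in cf("apps").splitlines() if line.strip()]
    header = next(i for i, line in enumerate(rows) if line.startswith("name"))
    return [row.split()[0] for row in rows[header + 1:]]

def instrument(app: str) -> None:
    """Bind the monitoring service and restage so the agent is injected."""
    cf("bind-service", app, SERVICE_INSTANCE)
    cf("restage", app)

if __name__ == "__main__":
    for app in list_apps():
        print(f"Instrumenting {app} ...")
        instrument(app)
```

The point of the sketch is that instrumentation is scripted once and applies to every app in the space, rather than being repeated manually per app and per release.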

72% of CIOs say monitoring containerized microservices in real time is almost impossible.

— Dynatrace CIO Complexity Report 2018

Chapter 3

Not all AI is equal

Insight
Gartner predicts 30% of IT organizations that fail to adopt AI will no longer be operationally viable by 2022. As enterprises embrace hybrid, multi-cloud environments, the sheer volume of data and massive complexity created will make it impossible for humans to monitor, comprehend, and take action in a timely manner. This critical need for machines to solve data volume and speed challenges resulted in Gartner developing a new category for the industry, known as “AIOps” (or AI for IT Operations).

Challenge
There is plenty of hype about AI across industries, and making sense of the market noise is difficult. To help, here are three key AI use cases to keep in mind when considering how to monitor your Pivotal Platform and applications:

  • AI and root cause analysis

    The biggest benefit of AI to monitoring is the ability to automate root cause analysis, enabling problems to be identified and resolved at speed. An AI engine that has access to more complete data (including third-party data) will provide faster, contextual insights.

  • AI and alert storms

    AI is perfectly suited to real-time monitoring and analysis of large data sets to provide the most probable reason for a performance issue. AI can recognize when related anomalies occur within your environment (i.e. when thresholds are broken), preventing alert storms.

  • AI and auto-remediation

    AI should be a part of your CI/CD pipeline, deployment, and remediation processes. Problems can be detected instantly, and bad builds will be identified earlier so you can automatically remediate or roll back to a previous state.

Many enterprises are trying to address these use cases by adding an AIOps solution to the 10-25+ monitoring tools they already have. This approach may have limited benefits, such as alert noise reduction. But it will have minimal impact on addressing root cause analysis and auto-remediation requirements as it lacks the contextual understanding of the data to draw any meaningful conclusions.

You will also find there are many different approaches to AI. Here are a few of the more popular ones you are likely to encounter as you move towards an AIOps strategy:

  • Deterministic AI

    Rating: five stars

    This gives you the ability to discover the topology of your environment and the metrics produced by all components. It works immediately and adapts to changes without having to re-learn patterns. It is also excellent at event noise reduction (alert storms), dependency detection, root cause analysis, and business impact analysis (a toy sketch of topology-based root cause detection follows this list).

  • Machine learning AI

    Rating: two stars

    This is a metrics-based approach. It takes time to build a data set against which it can compare previous states. Its strongest feature is event noise reduction. However, it does not offer root cause or business impact analysis.

  • Anomaly-based AI

    With this form of AI, both event noise reduction and dependency detection are adequate. A major drawback is that it takes a long time to build a metrics model that can show correlations for root cause analysis.
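
To make the deterministic, topology-based approach concrete, here is a toy sketch (with a hypothetical four-service topology) of how knowing the dependency graph lets an AI engine collapse an alert storm into a single root cause, rather than reporting four unrelated anomalies:

```python
# Toy topology: each service maps to the downstream services it calls.
# In a real deployment this dependency graph would be discovered
# automatically, not hard-coded.
CALLS = {
    "frontend": ["checkout", "catalog"],
    "checkout": ["payments-db"],
    "catalog": ["payments-db"],
    "payments-db": [],
}

def root_causes(anomalous: set[str]) -> set[str]:
    """Collapse an alert storm: an anomalous service is only a root
    cause if none of its own dependencies are also anomalous."""
    return {
        svc for svc in anomalous
        if not any(dep in anomalous for dep in CALLS.get(svc, []))
    }

# Four simultaneous alerts collapse into one actionable problem.
storm = {"frontend", "checkout", "catalog", "payments-db"}
print(root_causes(storm))  # {'payments-db'}
```

A metrics-only approach sees four independent anomalies here; with the call graph in context, only the shared database dependency is flagged as the actionable problem.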


Key consideration
Not all AI is created equal. Attempting to enhance existing monitoring tools with AI, such as machine learning or anomaly-based AI, will provide limited value. AI needs to be inherent in all aspects of the monitoring platform and see everything in real time—from the topology of the architecture to dependencies and service flow. AI should also be able to ingest additional data sources for inclusion in its algorithms, rather than requiring people to correlate data via charts and graphs.

30% of IT organizations that fail to adopt AI will no longer be operationally viable by 2022.

— Gartner

Chapter 4

DevOps: Innovation’s soulmate

Insight
DevOps is perhaps the most critical consideration when maximizing investment in Pivotal Platform and other cloud technologies. Implemented and executed correctly, DevOps enhances an enterprise’s ability to innovate with speed, scale, and agility. Research shows that high-performing DevOps teams have 46x more frequent code deployments and 440x faster lead times from commit to deploy.

Challenge
As enterprises scale DevOps across multiple teams, there will be hundreds or thousands of changes a day, resulting in code pushes every few minutes. CI/CD tooling helps mitigate error-prone manual tasks through automated build, test, and deployment. But bad code still has the propensity to make it into production. The complexity of highly dynamic and distributed cloud environments, along with thousands of deployments a day, will only exacerbate this risk.

As the software stakes get higher, shifting performance checks left (that is, earlier in the pipeline) to enable faster feedback loops becomes critical. Yet this is not easy to achieve with a multi-tool approach to monitoring. To be effective, a monitoring solution needs a holistic view of every component and every change. It also needs a contextual understanding of the impact each change has on the system as a whole.

Key consideration
To go fast and not break things, automatic performance checks must happen earlier in the pipeline. This requires a monitoring solution with tight integration into a wide range of DevOps tooling. Combined with the right AI, these integrations will also help support the move to AIOps, enabling automated remediation that will limit the business impact of bad software releases.
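
As a minimal sketch of such an automated performance check, the snippet below could run as a pipeline step after a deployment: it queries a monitoring API for the service’s error rate and fails the build if a threshold is breached. The endpoint shape, metric name, and threshold are illustrative assumptions for this sketch, not any specific product’s API.

```python
import json
import os
import sys
import urllib.request

# Illustrative values: the endpoint shape, metric name, and threshold
# are assumptions for this sketch, not a specific product's API.
MONITORING_API = os.environ.get("MONITORING_API", "https://monitoring.example.com")
API_TOKEN = os.environ["MONITORING_TOKEN"]
METRIC = "service.errors.rate"
MAX_ERROR_RATE = 0.01  # fail the pipeline above a 1% error rate

def fetch_error_rate(service: str) -> float:
    """Fetch the current error rate for a service from the monitoring API."""
    url = f"{MONITORING_API}/metrics/{METRIC}?service={service}"
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {API_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return float(json.load(resp)["value"])

if __name__ == "__main__":
    service = sys.argv[1]
    rate = fetch_error_rate(service)
    print(f"{service}: error rate {rate:.2%} (limit {MAX_ERROR_RATE:.2%})")
    if rate > MAX_ERROR_RATE:
        sys.exit(1)  # non-zero exit fails the CI/CD stage, blocking the build
```

Run as a gate after each canary or staging deployment, a check like this turns monitoring data into an automated go/no-go decision instead of a dashboard someone has to watch.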

When selecting your monitoring solution, check which DevOps tooling it integrates with and supports, as well as how it will affect your ability to automate in the future.

Chapter 5

Digital experiences matter

Insight
Enterprises are striving to accelerate innovation without putting customer experiences at risk. But it’s not just end-customer web and mobile app experiences that are at risk. Apps built on Pivotal Platform support a much broader range of services and audiences, including:

  • Wearables, smart homes, smart cars, and life-critical health devices that have developed rapidly since the consumerization of IT.
  • Corporate employees working remotely who need access to systems that live in the corporate datacenter as well as in the cloud.
  • Office-based employees who rely on smart features for lighting, temperature, safety, and security that depend on machine-to-machine (M2M) communications and the Internet of Things.
  • The rise of the machines

    Machines in every industry are increasingly being connected to the Internet, creating a colossal communication network on a global scale. Gartner estimates connected devices in use worldwide will top 20 billion by 2020.

What was once simply regarded as user experience is now something altogether different. It has evolved into digital experience across end users, employees, and IoT.

Challenge
Enterprise IT departments face mounting pressure to accelerate the speed of innovation. Meanwhile, people’s demands for speed, usability, and availability of applications and services continue to rise unabated. Then there is the explosion of IoT devices and the increasingly vast array of technologies involved. Managing and optimizing digital experiences alongside high frequency software release cycles and operating complex hybrid cloud environments presents a major headache.

If digital experiences are not measured, how can enterprises prioritize and react when problems occur? Are they even aware there are problems? And if experiences are quantified, are they in context with the supporting applications, services, and infrastructure, permitting rapid root cause analysis and remediation? These questions must be answered before enterprises can deliver the extraordinary digital customer experiences that will ensure they stay relevant and prosper.

  • Performance

    53% of mobile users abandon a session if it takes longer than 3 seconds to load

  • Impact

    79% of users will not return after a negative experience

  • Root cause

    75% of customers expect online help resolution within 5 minutes

  • Revenue

    74% of CIOs fear IoT performance problems could derail operations and significantly damage revenues

Key consideration
Enterprises need confidence that they’re delivering—or on the path to delivering—exceptional digital experiences despite increasingly complex environments. To achieve this, they require real-time monitoring and 100% visibility across all types of customer-, employee-, and machine-based experiences. Key things to look for include:

  • Visualizing and prioritizing impact

    Understand how specific issues or overall performance impacts every single user session or device and prioritize by magnitude.

  • Visibility from the edge to the core

    A single view across your entire multi-cloud ecosystem, from the performance of users and edge devices to your applications and cloud platforms, all in context.

  • A single source of truth for all

    Ensure stakeholders, from IT to Marketing, have access to the same data to avoid silos, finger-pointing, and war rooms.

76% of CIOs say multi-cloud deployments make monitoring user experience difficult.

— Dynatrace CIO Complexity Report 2018
