Building resilient systems has been a driving motivation for many architects, performance engineers, and site reliability engineers (SREs) in recent years. There are plenty of blogs, conference talks, and examples showing how companies such as Netflix, Google, Amazon, Microsoft, and Facebook bake resiliency into everything they do. Using monitoring data is a key part of building resilient systems. But instead of focusing only on keeping a system up and running in production through smart auto-remediation, auto-scaling, failover, and high availability, we at Dynatrace believe that building a reliable system must happen throughout your Continuous Delivery pipeline: from Dev all the way to Ops.
To showcase what such an end-to-end delivery pipeline looks like, one that prevents bad code changes or bad situations in production from impacting the reliability of your system, I built my “Unbreakable Delivery Pipeline” tutorial. The tutorial was first given at our Dynatrace PERFORM 2018 User Conference in Las Vegas as a HOT (Hands-On Training) session focused on the integration between Dynatrace and AWS CodePipeline:
Based on the positive feedback at PERFORM, we decided to “go on tour” with our “Unbreakable Pipeline Tutorial” workshop and bring it to cities all around the globe! Having delivered the workshop in several cities, I thought it was time to give a “Mid-Tour Status Report”!
Feedback from cities we toured so far
We started our tour in Atlanta, GA and went on to Columbus, OH. These two cities were essential in shaping the pipeline example. If you look at the GitHub repo, you can see that early feedback from Atlanta and Columbus allowed me to make some necessary improvements to my AWS Lambda functions. THANKS to all the attendees for your constructive feedback!
From there, I continued the tour and brought it to DevOps Fusion in Zurich before heading back to the US hitting Minneapolis, San Francisco (AWS Summit) and Denver:
To sum up what has happened so far:
- About 800 attendees saw the unbreakable pipeline at 2 conference sessions
- About 100 attendees implemented the unbreakable pipeline in hands-on workshops
- Several important changes were added to the GitHub repo based on attendee feedback
One important question I kept hearing in every city: “Andi – you built the Unbreakable Pipeline using AWS CodePipeline, CodeDeploy, DynamoDB, EC2 and Lambdas. Can it also be implemented with other CI/CD tools such as Jenkins, Bamboo, Concourse; deployment automation tools such as Ansible, Puppet, Chef; and testing tools such as JMeter, Neotys, Gatling …?”
The answer is: YES! All the magic happens by leveraging the Dynatrace REST API to implement concepts such as:
- Monitoring as Code: Define and Enforce Monitoring Contracts for apps and services as code
- Shift-Right: Push Deployment Information from your Pipeline to Dynatrace via Events API
- Shift-Left: Pull Performance and AI-Detected Problem details back to the Pipeline to act as automatic Quality Gates via the Problems, Smartscape and Timeseries API
- Self-Heal: Leverage Dynatrace Problem Notification integrations with ServiceNow, xMatters, PagerDuty, Ansible Tower, StackStorm or any custom integration to implement smart auto-remediation in case of issues
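To make the Shift-Right and Shift-Left concepts above concrete, here is a minimal Python sketch that builds a `CUSTOM_DEPLOYMENT` payload for the Dynatrace Events API and implements a simple quality-gate decision on the Problems API feed. The tenant host, API token, and the `unbreakable-pipeline` service tag are placeholder assumptions; substitute your own environment and tagging scheme.

```python
# Sketch of the Shift-Right / Shift-Left integration via the Dynatrace REST API.
# DT_TENANT, DT_API_TOKEN, and the "unbreakable-pipeline" tag are assumptions
# for illustration -- replace them with your own tenant, token, and tags.
import json
import os
import urllib.request

DT_TENANT = os.environ.get("DT_TENANT", "abc12345.live.dynatrace.com")
DT_API_TOKEN = os.environ.get("DT_API_TOKEN", "dt0c01.example")


def build_deployment_event(version, ci_url):
    """Shift-Right: payload for POST /api/v1/events (CUSTOM_DEPLOYMENT)."""
    return {
        "eventType": "CUSTOM_DEPLOYMENT",
        "deploymentName": f"Deploy {version}",
        "deploymentVersion": version,
        "source": "CI/CD pipeline",
        "ciBackLink": ci_url,
        # Attach the event to all services carrying our (assumed) pipeline tag
        "attachRules": {
            "tagRule": [{
                "meTypes": ["SERVICE"],
                "tags": [{"context": "CONTEXTLESS",
                          "key": "unbreakable-pipeline"}],
            }]
        },
    }


def post_deployment_event(payload):
    """Push the deployment event to Dynatrace (network call, for completeness)."""
    req = urllib.request.Request(
        f"https://{DT_TENANT}/api/v1/events",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Api-Token {DT_API_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def quality_gate(problem_feed):
    """Shift-Left: decide on a GET /api/v1/problem/feed?status=OPEN response.

    Returns True (pass) only if Dynatrace reports no open problems, so the
    pipeline stage can fail fast on AI-detected issues.
    """
    open_problems = problem_feed.get("result", {}).get("problems", [])
    return len(open_problems) == 0
```

A pipeline stage would call `post_deployment_event(build_deployment_event(...))` right after a deploy, then later fetch the problem feed and abort the promotion if `quality_gate(...)` returns `False`.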
My team is currently working on building another “Unbreakable Pipeline” demo for Jenkins and Concourse. Until it is ready, you can call these REST APIs directly from the tools in your CI/CD DevOps tool chain. Find the full REST API documentation, leverage the REST API examples from Wolfgang Beer, use my Python-based Dynatrace CLI, or call the bash scripts I published on GitHub!
Join US – but make sure to come prepared!
The workshop typically runs for 3 hours and includes a lot of hands-on work with both Dynatrace and AWS. While I am always happy to help people get started with a new technology, we require attendees to have basic AWS skills when attending this workshop. You must bring your own AWS account and be familiar with navigating the AWS Console. We will create and manipulate S3 buckets, create CloudFormation stacks, explore AWS CodePipeline, and potentially look at some AWS Lambda functions and AWS API Gateway configurations. We recommend that you walk through my 101 AWS Monitoring Tutorial for Dynatrace first. If you master the first 2 labs in that tutorial, you will be READY for our AWS DevOps Tutorial for Dynatrace (aka the Unbreakable Pipeline Tutorial).
And if you really want to know more about what you are getting into, watch my 30-minute Online Performance Clinic, where I give an end-to-end overview of this tutorial, or read the Unbreakable Pipeline, Shift-Left, Shift-Right & Self-Healing blog.
BE PART of the Unbreakable Pipeline Movement
Thanks to all of you that have already participated in one of the workshops, downloaded or cloned the GitHub repo, watched my presentations at conferences, read my blog or watched my Observability Clinic. Please keep giving us feedback on how this concept works for you and your organization. Let us know which additional concepts we should include and share your success stories with us!