I recently joined two industry veterans and Dynatrace partners, Syed Husain of Orasi and Paul Bruce of Neotys, as panelists to discuss how performance engineering and test strategies have evolved as they pertain to customer experience. This blog post summarizes our conversation around the questions posed.
What trends are you seeing in the industry?
We see every industry feeling the pressure to respond to increasing customer demand for full-service web and mobile channels to transact. Company brands are now measured by the “app” and the “app experience,” and customers expect every application to be as fast as Google. During the recent pandemic, organizations that lacked the processes and systems to scale and adapt to remote workforces and increased online shopping felt the pressure even more.
Many organizations recognize that addressing customer demand requires several distinct transformations to accelerate time to market and provide a first-class user experience. Change starts with a thorough evaluation of whether the current architecture, tools, and processes for configuration, infrastructure, code delivery pipelines, testing, and monitoring enable teams to improve customer experience quickly and with high quality.
Organizations we see changing are increasing investment in their service offerings, adopting Scaled Agile Frameworks (SAFe) to deliver incremental value and increase productivity, and making DevOps principles and culture their cornerstone. Rethinking the process means digital transformation: modernizing legacy monolithic applications into microservice architectures that use cloud infrastructure and services and are built, deployed, and operated with “everything as code.” Successful teams are the ones managing complexity by making smaller, more precise changes (think microservices over monolithic systems) and big investments in automating common, toilsome tasks.
What do you see as the biggest challenge for performance and reliability?
Business and technology are getting more complex, not less. This complexity comes in the form of new technologies like microservices and containers, heavy use of third-party integrations, and distributed transactions across multiple cloud environments and data centers. All this complexity raises the bar for end-user and application monitoring, and for many organizations, their existing tooling poses challenges such as:
- A disconnect between actual end-user experience and typical IT data sources such as logs, metrics, and traces.
- Siloed monitoring solutions maintained separately by each team.
- Traditional monitoring tools that don’t scale to dynamic cloud environments, support new technologies, provide distributed tracing, or offer an Application Programming Interface (API) to integrate with testing and workflow platforms.
Another challenge comes from the delays created by hand-offs between development teams and the traditional organizational function of a Performance Center of Excellence (COE). Although these COE teams have highly skilled subject matter experts, rigorous processes, and control over the tools and testing solutions, these centralized models don’t scale to meet the needs of ever-growing distributed Agile teams.
As a counterpoint to the COE challenges, we also observe some challenges within independent Agile development teams:
- Non-functional requirements (NFRs) aren’t captured early in the software development lifecycle (SDLC), due to a disconnect with operations counterparts or a lack of expertise and access to data.
- Constantly reinventing the wheel under a “Not Invented Here” bias. Teams who fall prey to this mentality often simply don’t apply known good practices consistently. We have heard “let’s do some chaos engineering…” from teams that haven’t yet exercised basic practices like load testing and proper system monitoring, both of which are prerequisites for chaos engineering.
What are the key requirements to address these challenges?
It helps to think of the requirements as a “three-body problem,” addressing the People, Process, and Technology perspectives of where you are and where you aspire to be. If you try to solve for a technology change without addressing how it affects the people and processes in place (or vice versa), you won’t see true success. If you want to change the process, you’ll need the right tools, but you’ll also need to engage, enable, and embed with teams outside your own group of subject matter experts.
For the people perspective, the requirement for Quality and Engineering managers and VPs who want to materially improve their systems and performance engineering approach is to make time for learning new tools and skills (technology), automating toil out of their processes, and delivering training to product teams. In highly successful performance engineering, automation follows a strategic plan aligned with the priorities of the business (critical-path systems) and with the customer experience (key workflows that directly affect revenue).
For the process perspective, development teams need self-service access to their environments, data, and performance testing. Automation is key to ensuring teams follow standard operating procedures and remain within the guardrails of your governance model, as the sketch below illustrates.
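As one illustration, here is a minimal sketch of such a guardrail check in Python; every name and limit in it is hypothetical, standing in for whatever your governance model actually defines. A pipeline could run a validation like this automatically before any self-service load test is allowed to start:

```python
from dataclasses import dataclass

# Hypothetical governance limits; real values would come from your
# organization's standard operating procedures.
MAX_VIRTUAL_USERS = 500
MAX_DURATION_MINUTES = 60
APPROVED_ENVIRONMENTS = {"dev", "staging", "perf"}

@dataclass
class LoadTestRequest:
    environment: str
    virtual_users: int
    duration_minutes: int

def guardrail_violations(request: LoadTestRequest) -> list[str]:
    """Return a list of guardrail violations; an empty list means approved."""
    violations = []
    if request.environment not in APPROVED_ENVIRONMENTS:
        violations.append(f"environment '{request.environment}' is not approved for load testing")
    if request.virtual_users > MAX_VIRTUAL_USERS:
        violations.append(f"{request.virtual_users} virtual users exceeds the limit of {MAX_VIRTUAL_USERS}")
    if request.duration_minutes > MAX_DURATION_MINUTES:
        violations.append(f"{request.duration_minutes} minutes exceeds the limit of {MAX_DURATION_MINUTES}")
    return violations

# Example: a team's self-service request, checked before the test runs.
request = LoadTestRequest(environment="staging", virtual_users=200, duration_minutes=30)
problems = guardrail_violations(request)
print("Approved" if not problems else "Rejected: " + "; ".join(problems))
```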
For the technology perspective, providing a single source of truth for Biz, Dev, and Ops teams should be viewed as a strategic investment and an enabler that supports and adapts to complex, dynamic environments. The solution needs to connect the customer experience to backend resource utilization and performance bottlenecks (a brief sketch of this correlation follows the list) for scenarios such as:
- Employees working from home.
- Business events like a marketing campaign.
- Increased reliance on third-party integrations for payment processing, supply chain, or media content.
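To make this concrete, here is a minimal, self-contained sketch of connecting the two layers; the time series and metric names are invented for illustration only. It correlates user-facing response times against candidate backend signals to flag the likeliest bottleneck:

```python
from statistics import correlation  # Pearson correlation, Python 3.10+

# Invented time series, sampled at the same one-minute intervals:
# user-facing response times from real-user monitoring, plus two
# backend signals an observability platform would expose.
response_time_ms = [220, 240, 310, 480, 900, 870, 450, 260]
backend_signals = {
    "db_connection_pool_usage": [0.35, 0.40, 0.55, 0.80, 0.97, 0.95, 0.70, 0.42],
    "host_cpu_usage": [0.30, 0.28, 0.33, 0.35, 0.31, 0.36, 0.32, 0.29],
}

# A backend signal that rises and falls with the customer-facing slowdown
# is the likeliest driver of the degradation users actually experience.
for name, series in backend_signals.items():
    r = correlation(response_time_ms, series)
    print(f"{name}: r = {r:.2f}")
# Here the connection pool tracks the slowdown (r near 1.0) while CPU stays
# flat, pointing at pool saturation rather than compute as the bottleneck.
```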
With “shift left” and enablement being key to transformation, where are the key areas of focus to ensure success?
A proven process for transformation is a current-state assessment of people, process, and technology, followed by a future-state vision and roadmap. This starts with engaging all key stakeholders to understand their business needs and drivers and to identify gaps. Then, using a design-thinking process, ideate on the future and establish the business vision. Building a prototype is a great way to quickly kick off an initiative and show proof of value for a new tool or process.
As you start automating some of your performance practices, you’ll want to start with the “right early customers”: one or two product teams that are looking for fast, reliable feedback on API performance and have a positive, collaborative attitude toward your assistance. As you walk the journey with them, you’ll learn lessons and tweak your approach, usually building out reusable pipelines and infrastructure logistics.
These early successes lead to better relationships between product teams and should be used regularly by quality or practice managers to advocate for modernizing performance in other teams, so the next round of candidate internal customers can be chosen. When organizations with many products and projects see the value coming from this iterative improvement, eventually a Senior VP or CTO asks, “Why isn’t this adopted across the whole organization?” By then, you’ve gone through numerous iterations and improvement cycles, so your performance automation is scalable and produces highly valuable feedback for the product teams who champion it. Leveraging the expertise and implementation methodologies of trusted advisors, such as Orasi, helps lead to improved outcomes.
In continuous delivery, software is broken into smaller components, and each build should be validated. Automated data collection and scoring of response times, resource consumption (such as HTTP or database calls), and resource utilization are achievable today with observability and testing platforms like Dynatrace and NeoLoad, which provide APIs and support use cases such as automated quality gates within the open-source project Keptn and performance testing as a self-service.
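As a simplified illustration of the quality-gate pattern, the sketch below scores a build’s measurements against pass/warn thresholds and decides whether to promote it. The metric names, thresholds, and scoring rules are invented for this example; they mimic the idea behind Keptn’s quality gates rather than its actual SLO schema:

```python
# Hypothetical measurements for one build, e.g., gathered from a load
# test and an observability platform's API after deployment.
build_metrics = {
    "response_time_p95_ms": 410,
    "error_rate_percent": 0.4,
    "db_calls_per_request": 7,
}

# Each objective earns a full point within its "pass" threshold, half a
# point within "warn", and nothing beyond that.
objectives = {
    "response_time_p95_ms": {"pass": 500, "warn": 800},
    "error_rate_percent": {"pass": 1.0, "warn": 2.0},
    "db_calls_per_request": {"pass": 10, "warn": 15},
}

def evaluate(metrics: dict, objectives: dict, promote_at: float = 90.0) -> bool:
    """Score one build against its objectives; the gate opens at promote_at percent."""
    earned = 0.0
    for name, thresholds in objectives.items():
        value = metrics[name]
        if value <= thresholds["pass"]:
            earned += 1.0
        elif value <= thresholds["warn"]:
            earned += 0.5
    score = 100.0 * earned / len(objectives)
    print(f"Quality gate score: {score:.0f}%")
    return score >= promote_at

if evaluate(build_metrics, objectives):
    print("Gate passed: promote the build.")
else:
    print("Gate failed: stop the pipeline and feed the results back to the team.")
```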
Want to learn more about enhancing your customer experience?
Visit our Customer Experience solutions page for additional resources to learn more about the ways Dynatrace helps you monitor, analyze, and boost customer experiences.