ChatGPT and generative AI have become a global sensation, grabbing headlines and sparking debates around the world. Although generative pre-trained transformer (GPT) technology is in its early stages and comes with risks, it has the potential to transform industries, including software development and delivery. Paired with causal AI, organizations can increase the impact of ChatGPT and other generative AI technologies while making their use safer.
With the launch of ChatGPT, an AI chatbot developed by OpenAI, in November 2022, large language models (LLMs) and generative AI have become a global sensation, making their way to the top of boardroom agendas and household discussions worldwide.
GPT technology and the LLM-based AI systems built on it have huge implications and potential advantages for many tasks, from improving customer service to increasing employee productivity.
At Dynatrace, we’ve been exploring the many ways of using GPTs to accelerate innovation for our customers and boost the productivity of our teams. At Perform, our annual user conference in February 2023, we demonstrated how people can use natural or human language to query our data lakehouse. This is one example of the many use cases we’re exploring. It highlights the potential of GPT technology to drive “information democracy” even further. Like others, we’re only starting to scratch the surface of these opportunities, as the technology is in its early stages.
ChatGPT and generative AI: A new world of innovation
Software development and delivery are key areas where GPT technology such as ChatGPT shows potential. For example, it can help DevOps and platform engineering teams write code snippets by drawing on information from software libraries. In addition, it can expedite how teams resolve problems in custom code by feeding root-cause context into a GPT, augmenting problem tickets or alerts with this context, and using it as the base for auto-generated remediation.
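To make the second pattern concrete, here is a minimal Python sketch of augmenting a problem ticket with root-cause context before handing it to a GPT for a remediation proposal. The ticket fields, service names, and root-cause values are hypothetical illustrations, not a specific Dynatrace or OpenAI API; in practice, the resulting prompt would be sent to an LLM endpoint.

```python
from dataclasses import dataclass, field

@dataclass
class ProblemTicket:
    """A simplified problem ticket; all fields are illustrative."""
    title: str
    service: str
    root_cause: str                 # supplied by causal analysis in this sketch
    affected_entities: list = field(default_factory=list)

def build_remediation_prompt(ticket: ProblemTicket) -> str:
    """Augment the raw alert with root-cause context so the GPT can
    propose a specific fix instead of a generic one."""
    return (
        f"Incident: {ticket.title}\n"
        f"Service: {ticket.service}\n"
        f"Root cause (from causal analysis): {ticket.root_cause}\n"
        f"Affected entities: {', '.join(ticket.affected_entities)}\n"
        "Propose a remediation plan as a short, ordered list of steps."
    )

ticket = ProblemTicket(
    title="Checkout latency above SLO",
    service="checkout-service",
    root_cause="Connection pool exhaustion after deployment v2.4.1",
    affected_entities=["checkout-service", "postgres-primary"],
)

print(build_remediation_prompt(ticket))  # would be sent to a GPT in practice
```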
These examples reflect dramatic improvements over existing, time-wasting manual processes, including writing routine and easily replicable code or trawling through countless Stack Overflow pages before finding an answer.
GPTs can also help quickly onboard team members to new development platforms and toolsets. The technology lets people learn about solutions by typing questions into a search bar, such as, “How do I import and export test cases between my environments?” and “What’s the best way to integrate this solution with my toolchain?”
Again, this GPT approach represents a significant productivity and user satisfaction improvement over the current paradigm, where users search documents manually, and the ability to find answers depends on the quality and structure of the resources provided by vendors.
Establishing guardrails to protect intellectual property and data privacy
As DevOps and platform engineering teams use GPTs to accelerate software development, site reliability engineers (SREs) and privacy teams must ensure these technologies have the proper controls in place so they don’t create more problems than they solve.
First, SREs must ensure teams recognize intellectual property (IP) rights on any code shared by and with GPTs and other generative AI, including copyrighted, trademarked, or patented content. It will be equally critical for organizations to prevent ChatGPT and similar technologies from inadvertently sharing their IP or confidential data as they increasingly use repositories such as GitHub in their software development.
Organizations should also consider regional and country-specific privacy and security regulations such as GDPR or the proposed European AI Act to ensure that their teams don’t use GPT technologies in a way that could inadvertently lead to data breaches or fines.
Understanding the risks of GPTs and generative AI
Organizations must be especially mindful that the LLM-based generative AI that powers ChatGPT and similar technologies is susceptible to error and manipulation. It relies on the accuracy and quality of the publicly available information and input it draws from, which may be untrustworthy or biased.
In software development and delivery use cases, those sources could include code libraries that are legally protected or contain syntax errors or vulnerabilities planted by cybercriminals to perpetuate flaws that create more exploit opportunities. Engineering teams will, therefore, always need to check the code they get from GPTs to ensure it doesn’t risk software reliability, performance, compliance, or security.
Mastering prompt engineering: The growing importance of causal AI
Developers provide their code and comments as context for GPT tools, while DevOps, SRE, and platform engineering teams feed this context into the generative AI using prompt engineering techniques. For this to work effectively, the input from prompt engineering needs to be trustworthy and actionable. For example, if GPT tools only have access to general input about a CPU spike, they will provide equally general answers about adding CPUs or scaling out. But if the tools have access to precise details about the conditions behind the spike, they can respond with a specific answer and a detailed root cause. Achieving this precision requires another type of artificial intelligence: causal AI.
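To illustrate the difference, here is a hedged sketch contrasting a context-free prompt with one enriched by causal findings. The metric values, event names, and dependency path below are invented for illustration; a real implementation would pull them from an observability platform.

```python
# Without causal context, the GPT can only answer in generalities.
generic_prompt = "CPU usage on host web-01 spiked to 95%. What should I do?"

# With causal context, the observed dependencies narrow the answer to the
# actual root cause. All values below are illustrative assumptions.
causal_context = {
    "host": "web-01",
    "spike_start": "2023-04-12T14:03:00Z",
    "correlated_event": "deployment of billing-service v3.2",
    "root_cause": "regex backtracking in new input-validation code",
    "dependency_path": "load-balancer -> billing-service -> web-01",
}

enriched_prompt = (
    f"CPU on {causal_context['host']} spiked at {causal_context['spike_start']}, "
    f"immediately after {causal_context['correlated_event']}. "
    f"Causal analysis points to {causal_context['root_cause']} "
    f"(dependency path: {causal_context['dependency_path']}). "
    "Suggest a targeted remediation rather than generic scaling advice."
)

print(enriched_prompt)
```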
Causal AI, like the AI at the core of the Dynatrace platform, draws precise insights in near-real time from continuously observed relationships and dependencies within a technology ecosystem or across the software lifecycle. These dependency graphs or topologies enable causal AI to generate fully explainable, repeatable, and trustworthy answers that detail the cause, nature, and severity of any issue it discovers. Combining causal AI with GPTs will empower teams to automate analytics that explore the impact of their code, applications, and the underlying infrastructure while retaining full context.
Increasing the impact of ChatGPT and generative AI
In the future, combining generative AI with causal AI could become even more powerful, unlocking additional use cases for driving productivity and efficiency in software delivery. For example, by integrating GPTs into the Dynatrace unified observability and security platform, we can combine natural language queries with causal AI-powered answers to provide accurate and clear context. This precise input makes the GPT’s proposals more specific and actionable for remediation and automation.
DevOps and platform teams can use this capability to ask questions such as, “How can I improve the response time of my application?” or execute commands like, “Create an automated workflow that scales my cluster based on actual user experience and my service level” and get precise recommendations for a solution.
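As a minimal sketch of how such a command might be grounded before a GPT turns it into a workflow, the example below pairs the user’s natural language request with live service-level context. The helper functions, field names, and values are assumptions made for illustration; they are not the Dynatrace platform’s actual API.

```python
def fetch_slo_context(service: str) -> dict:
    """Stand-in for a causal-AI / observability query; returns canned data."""
    return {
        "service": service,
        "slo_target": "95% of requests under 300 ms",
        "current_p95_ms": 420,
        "user_experience": "degraded for 12% of sessions",
    }

def build_workflow_request(command: str, context: dict) -> str:
    """Combine the user's command with precise context for the GPT."""
    return (
        f"User command: {command}\n"
        f"Service: {context['service']}\n"
        f"SLO: {context['slo_target']} (current p95: {context['current_p95_ms']} ms)\n"
        f"User experience: {context['user_experience']}\n"
        "Generate a scaling workflow definition that restores the SLO."
    )

command = ("Create an automated workflow that scales my cluster "
           "based on actual user experience and my service level")
print(build_workflow_request(command, fetch_slo_context("checkout-service")))
```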
Generative AI and causal AI are better together
The impact of GPT technology will undoubtedly be profound, and the rapid pace at which people worldwide are adopting it will dramatically affect how many of us work. However, the adage, “garbage in, garbage out,” is highly pertinent.
ChatGPT and similar technologies don’t provide solutions by themselves. Their proposals are only as good as the quality, depth, and precision of the information and context that organizations feed them.
Organizations will be in a much better position to maximize the impact of generative AI by combining it with causal AI, which helps them avoid highly generic or misfitting answers. This combined approach provides reliable answers for two key purposes: first, causal AI drives trustworthy automation that is deterministic and repeatable; second, causal AI supplies the deep, rich context that unleashes GPT’s full potential for software delivery and productivity use cases.
After addressing security and privacy concerns, DevOps and platform engineering teams can leverage automated prompt engineering to feed their GPTs real-time data and causal AI-powered context, allowing the GPTs to drive productivity with suitable and meaningful suggestions.
Combining causal AI and generative AI will eventually give rise to the next phase of GPT-powered innovation. DevOps and platform engineering teams will use causal AI to verify the output of their generative AI – such as code snippets – to ensure they don’t introduce reliability or security problems. They will also use intelligent automation to execute their reliable and secure code automatically.
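A simplified sketch of such a verification gate appears below: a generated snippet only proceeds to automated execution after passing checks. The checks here are deliberately basic placeholders (parseability and a tiny banned-call policy); a production gate would add security scanning, tests, and causal AI’s view of the affected dependencies.

```python
import ast

BANNED_CALLS = {"eval", "exec"}  # illustrative policy, not an exhaustive list

def verify_generated_code(source: str) -> list[str]:
    """Run minimal placeholder checks on a GPT-generated snippet.
    Returns a list of findings; an empty list means the gate passes."""
    try:
        tree = ast.parse(source)   # reject snippets that don't even parse
    except SyntaxError as err:
        return [f"syntax error: {err}"]
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                findings.append(f"banned call: {node.func.id}()")
    return findings

snippet = "eval(input('enter expression: '))"   # a risky generated snippet
issues = verify_generated_code(snippet)
if issues:
    print("Gate failed:", issues)  # block automated execution
else:
    print("Gate passed; snippet may proceed to automated execution.")
```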
As engineering teams progress along this journey, organizations can build a lasting competitive advantage by achieving significant productivity gains and accelerating the speed of software innovation to levels many people would previously have considered impossible.