The phrase “serverless computing” may appear contradictory at first, but for years now, successful organizations have understood the benefit of using serverless technologies to streamline operations and reduce costs. Serverless compute models, also known as serverless architecture, let you run user-defined functions on cloud-managed infrastructure without provisioning or maintaining servers. Today, organizations use serverless computing to build agile, cloud-native ecosystems and expand their cloud-native toolboxes.
That said, serverless architecture is not ideal in every situation, which is why it’s important to understand the pros and cons of this approach so your team can put this powerful technology to best use. But what exactly does “serverless” mean, and how can your organization benefit from it?
What is serverless computing?
Serverless computing is a cloud-based, on-demand execution model where customers consume resources solely based on their application usage. Unlike a traditional virtual machine model, where customers must build and manage an entire VM, serverless computing provides the ability to purchase only the CPU cycles and memory needed to support an application using an event-based, pay-per-use model. This allows teams to sidestep much of the cost and time associated with managing hardware, platforms, and operating systems on-premises, while also gaining the flexibility to scale rapidly and efficiently.
Within this paradigm, it’s possible to run entire architectures without touching a traditional virtual server, either locally or in the cloud. Serverless resources are highly flexible and customized based on the application. REST APIs, authentication, databases, email, and video processing all have a home on serverless platforms. Services scale to meet demand. There is no need to plan for extra resources, update operating systems, or install frameworks, because the provider is essentially your system administrator.
In contrast, traditional computing models rely on virtual or physical machines, where each instance includes a complete operating system along with dedicated CPU and memory. Virtualization is a good example: VMware commercialized the idea of virtual machines (VMs), and cloud providers embraced the same concept with services such as Amazon EC2, Google Compute Engine, and Azure Virtual Machines. Serverless computing is a newer approach that simplifies manageability and reduces costs.
How serverless computing works
To answer the question “what is serverless computing?” it helps to first understand how it works. With this type of architecture, applications are distributed to meet demand and scale requirements efficiently. Customers are only billed when an application is used, so this is particularly cost-effective in environments where applications run on-demand.
AWS Lambda functions are a good example of how a serverless framework works (a minimal code sketch follows the list):
- Developers write a function in a supported language or platform.
- They upload the function, along with configuration describing how to run it, to the cloud.
- The platform handling the function containerizes it.
- The platform provisions the trigger that initiates the function.
- Every time the trigger fires, the function runs on an available resource.
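To make these steps concrete, here is a minimal sketch of the kind of function a developer might write and upload in the first two steps. It follows AWS Lambda’s Python handler convention; the event field is illustrative, since the real event shape depends on the trigger you configure.

```python
# Minimal sketch of a Lambda-style handler (Python).
# The "name" field is illustrative; the real event shape depends on the
# trigger (HTTP request, queue message, file upload, etc.).
import json

def handler(event, context):
    """Runs each time the configured trigger fires."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```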
When a function is triggered after sitting idle, starting it up can introduce latency.
How does serverless computing solve inefficiencies?
Inefficiencies cost technology companies up to $100 billion per year. Performing updates, installing software, and resolving hardware issues can consume up to 17 hours of developer time every week.
Cloud-hosted managed services eliminate the minute day-to-day tasks associated with hosting IT infrastructure on-premises. Security patches, databases, and language runtimes stay up to date in the serverless model with little effort on your part. Amazon Cognito, for example, is billed as an always up-to-date authentication service that complies with rigorous industry standards.
Developers stay focused on developing technologies rather than the underpinnings of applications. Operations teams focus on building consistent delivery pipelines instead of wrangling multitudes of hardware.
Despite being hosted externally, you still have direct control over the finer details, such as the languages and tool versions used. AWS Lambda, for example, lets developers write functions in Node.js or Python while controlling nearly every detail of a REST API.
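As a hedged illustration of that control, the sketch below uses the AWS SDK for Python (boto3) to pin a deployed function’s runtime, memory, and timeout. The function name and values are illustrative, and it assumes configured AWS credentials and an existing function.

```python
# Hedged sketch: adjusting a deployed function's runtime and resources with boto3.
# The function name and settings are illustrative.
import boto3

lambda_client = boto3.client("lambda")

# You pick the language runtime and resource limits; the provider manages
# the servers, OS patching, and scaling underneath.
lambda_client.update_function_configuration(
    FunctionName="my-rest-api-handler",  # illustrative name
    Runtime="python3.12",                # language and version of your choice
    MemorySize=256,                      # memory in MB (CPU scales with it)
    Timeout=10,                          # seconds
)
```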
Pay per use
Serverless computing allows organizations to purchase backend services on a flexible “pay-as-you-go” model. This means they only pay for the services they use.
With a pay-per-use model, resources never go to waste. Unlike on-premises machines, shared servers, or rented virtual machines, there is no cost for idle time. Computing power is no longer a depreciating asset.
Serverless vendors make resources available exactly when you need them and there’s no guesswork about the number of instances you need. Containers spawn and disappear behind the scenes based on need, with billing based on actual usage.
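To see how the billing math works, here is a small worked sketch. The rates are illustrative placeholders rather than actual provider pricing; the point is that cost is a function of invocations and compute time, not provisioned servers.

```python
# Illustrative pay-per-use cost sketch; rates are placeholders, not real pricing.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD per million invocations (illustrative)
PRICE_PER_GB_SECOND = 0.0000167     # USD per GB-second of compute (illustrative)

invocations_per_month = 2_000_000
avg_duration_seconds = 0.120        # 120 ms per invocation
memory_gb = 0.512                   # 512 MB allocated

request_cost = invocations_per_month / 1_000_000 * PRICE_PER_MILLION_REQUESTS
compute_cost = invocations_per_month * avg_duration_seconds * memory_gb * PRICE_PER_GB_SECOND

# Idle time costs nothing: if invocations drop to zero, so does the bill.
print(f"requests: ${request_cost:.2f}  compute: ${compute_cost:.2f}")
```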
When to leverage serverless
Serverless applications can save an organization time, effort, and resources. Still, the benefits are task-specific: serverless services run on the provider’s warehouse-scale infrastructure, which suits event-driven and edge workloads better than high-performance computing.
You’ll benefit from serverless computing when:
- Authenticating users (e.g., Okta, Azure Active Directory)
- Building APIs (e.g., Amazon API Gateway)
- Connecting edge services in the cloud (e.g., AWS for the Edge)
- Sending emails in bulk (e.g., Amazon Simple Email Service)
- Writing highly responsive, real-time applications (e.g., handling high-volume events)
- Creating a prototype (e.g., on Azure)
- Connecting IoT devices (e.g., AWS IoT Device Management)
Developers on DevOps teams can also leverage serverless computing to simplify backend code by composing small functions that each perform a single purpose or task. Making an API call, for example, becomes much easier with serverless computing (see the sketch below).
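For instance, a single-purpose function that wraps one outbound API call might look like the following hedged sketch; the endpoint URL is a placeholder for whatever service your backend talks to.

```python
# Hedged sketch: one function, one job: call a downstream API and return its data.
# The URL is a placeholder; authentication and error handling are omitted for brevity.
import json
import urllib.request

def handler(event, context):
    url = "https://api.example.com/status"  # illustrative endpoint
    with urllib.request.urlopen(url, timeout=5) as response:
        payload = json.loads(response.read())
    return {"statusCode": 200, "body": json.dumps(payload)}
```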
Each of these applications requires minimal compute time. The average request is handled, processed, and returned quickly.
When the serverless model is NOT a benefit
Not every application is an ideal candidate for serverless. Since you pay per transaction, an application that runs under constant heavy load may not be a good fit. Here are some challenges to be aware of:
- Dynamic and ephemeral – Serverless apps are short-lived: they are created and torn down dynamically, often very quickly, which leads to observability challenges in most environments.
- Part of a larger application – Serverless applications rarely exist alone and are typically part of a larger application. An effective observability strategy requires complete visibility into every service included in the application.
- Cloud lock-in – Serverless is typically delivered by cloud providers, and each cloud has a different serverless model. When dealing with multiple cloud platforms, it’s best to work with solutions that offer seamless multi-cloud integration and usability.
- Performance – Unlike traditional services that are online all the time, serverless functions have to start up on demand, which can lead to inconsistent performance.
Furthermore, data usage, request handling, and processing time accumulate. Apps and services running around the clock or under extremely heavy loads become expensive quickly.
At the other end of the spectrum, infrequent requests lead services to terminate the containers running your applications. This creates latency when a container needs to “cold start.”
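One way to see cold-start latency for yourself is to time two back-to-back invocations, as in the hedged sketch below. It uses boto3 and assumes configured AWS credentials and an existing function; the function name is illustrative.

```python
# Hedged sketch: compare a (possibly cold) first invocation with a warm one.
# Assumes configured AWS credentials; the function name is illustrative.
import time
import boto3

lambda_client = boto3.client("lambda")

def timed_invoke(function_name: str) -> float:
    start = time.perf_counter()
    lambda_client.invoke(FunctionName=function_name, Payload=b"{}")
    return time.perf_counter() - start

# The first call after an idle period typically includes container start-up;
# the second call usually reuses the warm container and returns faster.
print("first call :", timed_invoke("my-example-function"))
print("second call:", timed_invoke("my-example-function"))
```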
Benefits of the serverless model
Agile and scalable development practices help an organization stay ahead in a digital market, and this is precisely where serverless computing comes in. When developing your next service or app, consider these benefits of serverless computing.
1. Dynamic scalability
One key feature of serverless is autoscaling: computing power grows and shrinks according to the task at hand. Because the public cloud provider handles infrastructure scaling, scaling is flexible and autonomous, which makes serverless computing attractive for unpredictable workloads. With a serverless design, the scalability of cloud-based applications is not limited by the underlying infrastructure.
2. Cost-friendly hosted solutions
Cloud costs can quickly add up. Serverless computing uses resources more economically than dedicated infrastructure deployments. With this model, you avoid the large CapEx investment that in-house infrastructure demands. Instead, cloud-native serverless computing provides an agile way to consume resources only when you need them.
3. Increased agility
Serverless computing can make a developer’s life easier. These solutions are designed to ease development headaches, automate core processes, and even shorten release cycles. Serverless computing relieves developers of the tedious work of maintaining servers, freeing more time for writing code and creating innovative services. Aside from reducing development time, serverless computing also delivers elastic scalability and further agility because developers don’t have to manage the underlying infrastructure.
4. Faster time to market
Your ability to respond to market demands improves greatly with serverless computing. Applications and services can be developed, deployed, and converted into marketable solutions faster. You won’t need to wait for expensive infrastructure or get hung up on lengthy IT setups. Staff are no longer distracted by updating and maintaining infrastructure and can focus instead on innovating and responding to customer needs.
5. Improved disaster response time
The agile nature of resource allocation within serverless means applications and core services can be up and running far faster than with traditional infrastructure. Recovery following an outage can be almost instantaneous, which reduces costs, effort, and headaches. That said, it’s important to note that anything can fail at any time, so it’s vital to architect disaster recovery into your serverless application design. This means guarding against logic bugs in your code, failing application dependencies, and similar application-level issues.
Failover also affects cost. Many customers in AWS, for example, are not using 100% of their compute capacity at all times; according to Amazon, many customers use only 10–20% of the available capacity in their EC2 fleet at any point in time. This average is also affected by high availability and disaster recovery requirements, which typically result in idle servers waiting for traffic from failovers. This is one reason serverless architectures can lower the overall total cost of ownership (TCO): many of the networking, security, and DevOps management tasks are included in the cost of the service.
Monitoring serverless applications
Because serverless applications typically run in specialized environments, administrators worry about having adequate monitoring and observability capabilities. Serverless platforms do offer basic monitoring and insights, but the features are limited.
In practice, organizations tend to combine multiple services to accomplish an objective. This makes it difficult to achieve observability using built-in monitoring through Amazon CloudWatch or Azure Application Logging, because these tools work on a per-service or even per-function basis.
Meanwhile, the amount of information tracked grows exponentially over time. Vendors offer no off-the-shelf way to track services as logs expand rapidly. You can build your own data warehouse-driven dashboards in Amazon QuickSight or with tools such as Tableau, but creating, maintaining, and modifying these assets can be challenging and time-consuming.
Your team should incorporate performance metrics, errors, and access logs into your monitoring platform. Bugs, security, and throttling-related slowdowns are concerns. Monitoring these issues in a service built from the ground up is a painstaking process.
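One common workaround is to have each function emit its own metrics alongside the provider’s defaults. The hedged sketch below pushes a custom latency metric to Amazon CloudWatch from inside a handler; the namespace and metric name are illustrative.

```python
# Hedged sketch: emit a custom metric from inside a function so dashboards
# can track handler latency across services. Names are illustrative.
import time
import boto3

cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    start = time.perf_counter()
    # ... do the real work of the function here ...
    elapsed_ms = (time.perf_counter() - start) * 1000

    cloudwatch.put_metric_data(
        Namespace="MyApp/Serverless",          # illustrative namespace
        MetricData=[{
            "MetricName": "HandlerLatencyMs",  # illustrative metric name
            "Value": elapsed_ms,
            "Unit": "Milliseconds",
        }],
    )
    return {"statusCode": 200}
```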
Getting a no-hassle bird’s-eye view of serverless architecture
While building custom monitoring can be expensive, an automatic and intelligent observability platform such as Dynatrace lets you monitor your serverless environments without flying blind and without the cost of building code and dashboards from the ground up.
Dynatrace’s Software Intelligence Platform automatically discovers applications, processes, and services running across hybrid, multicloud, and serverless environments in real time. It captures metrics, logs, traces, and user experience data, and it analyzes them in the context of their dependencies among other services and infrastructure. Customizable, no-code dashboards in Dynatrace give you direct insight into every service without scanning through the countless logs generated across your applications.
Dynatrace combines information from more than 500 applications on AWS, Azure, and Google Cloud Platform. Extending real-time observability beyond the tools offered by your service provider helps you monitor your entire hybrid cloud environment with no blind spots, stay on top of the dynamics of your enterprise, and avoid performance issues and cost overruns.
Powerful artificial intelligence automatically consolidates meaningful data to flag slowdowns and pinpoint root causes for quick remediation. Dynatrace connects directly to serverless platforms through a single no-configuration agent, OneAgent. A layer for AWS Lambda and agents for many other tools and platforms let applications push information directly to Dynatrace, providing 360-degree observability of your serverless architecture. OneAgent supports nearly all major programming languages and tools.
Making use of serverless architecture
This type of architecture increases developer efficiency, decreases time to production, and reduces the cost of maintaining on-premises infrastructure or unused VMs. Today, vendors offer dozens of technologies to help you cut back on hardware, but often at the cost of observability.