Characteristics of a Serverless Application

Now that you understand something about the philosophy around serverless, what are some of the characteristics of a serverless application? Though you may get varying answers as to what serverless is, the following are some traits and characteristics that are generally agreed upon by the industry.


Serverless architectures typically allow you to shift more of your operational responsibilities to a cloud provider or third party.

When you decide to implement FaaS, the only thing you should have to worry about is the code running in your function. All of the server patching, updating, maintaining, and upgrading is no longer your responsibility. This goes back to the core of what cloud computing, and by extension serverless, attempts to offer: a way to spend less time managing infrastructure and spend more time building features and delivering business value.


Managed services usually assume responsibility for providing a defined set of features. They are serverless in the sense that they scale seamlessly, don't require any server operations or uptime management, and, most importantly, are essentially codeless.

Benefits of a Serverless Architecture

These days there are many ways to architect an application. The decisions that are made early on will impact not only the application life cycle, but also the development teams and ultimately the company or organization. In this book, I advocate for building your applications using serverless technologies and methodologies and lay out some ways in which you can do this. But what are the advantages of building your application like this, and why is serverless becoming so popular?


One of the primary advantages of going serverless is out-of-the-box scalability. When building your application, you don’t have to worry about what would happen if the application becomes wildly popular and you onboard a large number of new users quickly—the cloud provider will handle this for you.


The pricing models of serverless architectures and traditional cloud-based or on-premises infrastructures differ greatly.

With the traditional approach, you often pay for computing resources whether or not they are utilized. This means that if you want to make sure your application will scale, you need to provision for the largest workload you think you might see, regardless of whether you ever actually reach that point. This approach means you are paying for unused resources for the majority of the life of your application.

With serverless technologies, you pay only for what you use. With FaaS, you're billed based on the number of requests to your functions, the time it takes your function code to execute, and the memory reserved for each function. With managed services like Amazon Rekognition, you are charged only for the number of images and minutes of video processed—again paying only for what you use.
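A back-of-the-envelope cost estimate makes this billing model concrete. The sketch below multiplies out the three FaaS dimensions just described (requests, execution time, and reserved memory); the per-request and per-GB-second rates are illustrative placeholders, not current pricing from any particular provider.

```python
def faas_cost_estimate(requests, avg_duration_ms, memory_mb,
                       price_per_request=0.20 / 1_000_000,
                       price_per_gb_second=0.0000166667):
    """Rough monthly FaaS bill: pay per request plus per GB-second of compute.

    The default rates are illustrative only; real pricing varies by
    provider, region, and free-tier allowances.
    """
    # Compute consumed: duration (seconds) x memory (GB) x request count
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return requests * price_per_request + gb_seconds * price_per_gb_second

# 1M requests at 100 ms each with 512 MB reserved costs on the order of a dollar
monthly = faas_cost_estimate(1_000_000, avg_duration_ms=100, memory_mb=512)
```

The notable point is the zero baseline: if `requests` is zero, the bill is zero, in contrast to an always-on server provisioned for peak load.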

The bill from your cloud provider is only one part of the total cost of your cloud infrastructure—there's also the cost of operations staff. That cost decreases when you need fewer ops resources.

In addition, building applications in this way usually facilitates a faster time to market, decreasing overall development time and, therefore, development costs.


With fewer features to build from scratch, developer velocity increases. Being able to quickly spin up the types of features that are typical of most applications frees you to focus on writing the core functionality and business logic for the features that you want to deliver.


If you are not investing a lot of time building out repetitive features, you are able to experiment more easily and with less risk.

When shipping a new feature, you often assess the risk (time and money involved with building the feature) against the possible return on investment (ROI). As the risk involved in trying out new things decreases, you are free to test out ideas that in the past may not have seen the light of day.

A/B testing (also known as bucket testing or split testing) is a way to compare multiple versions of an application to determine which one performs best. Because of the increase in developer velocity, serverless applications usually enable you to A/B test different ideas much more quickly and easily.
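The core mechanic of an A/B test is assigning each user to a bucket and keeping them there. A common approach—sketched below, not tied to any particular framework—is to hash a stable user identifier so the assignment is deterministic across sessions without storing any state.

```python
import hashlib

def assign_variant(user_id, variants=("A", "B")):
    """Deterministically bucket a user into one of the test variants.

    Hashing the user ID (rather than choosing randomly per request)
    keeps each user in the same bucket on every visit, so the two
    experiences being compared stay consistent per user.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because the assignment is a pure function of the ID, it works equally well inside a stateless FaaS function, which has no session memory between invocations.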


Because the services you subscribe to are the core competency of the provider maintaining them, you usually get something much more polished and more secure than you could have built yourself. Imagine a company whose core business model has been, for many years, the delivery of a pristine authentication service, and that has fixed issues and edge cases for thousands of companies and customers.

Now, imagine trying to replicate a service like that within your own team or organization. Though this is completely possible, choosing to use a service built and maintained by those whose only job is to build and maintain that exact thing is a safe bet that will ultimately save you time and money.

Another advantage of using these service providers is that they will strive for the least amount of downtime possible. This means that they are taking on the burden of not only building, deploying, and maintaining these services, but also doing everything they can to make sure that they are stable.


Most engineers will agree that, at the end of the day, code is a liability. What has value is the feature that the code delivers, not the code itself. When you find ways to deliver these features while simultaneously limiting the amount of code you need to maintain, and even doing away with the code completely, you are reducing overall complexity in your application.

Different Implementations of Serverless

Let’s take a look at the different ways that you can build serverless applications as well as some of the differences between them.


The Serverless Framework, one of the first serverless implementations, is also the most popular. At first, the Serverless Framework supported only AWS, but it has since added support for other cloud providers, including Google Cloud and Microsoft Azure.

The Serverless Framework utilizes a combination of a configuration file (serverless.yml), a CLI, and function code to provide a nice experience for people wanting to deploy serverless functions and other AWS services to the cloud from a local environment. Getting up and running with the Serverless Framework can present a somewhat steep learning curve, especially for developers new to cloud computing. There is a lot of terminology to learn, and a lot goes into understanding how cloud services work before you can build anything more than a "Hello World" application.
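To give a sense of what that configuration file looks like, here is a minimal, hypothetical serverless.yml for a single HTTP-triggered function; the service name, handler path, runtime, and region are placeholders you would replace with your own.

```yaml
# Hypothetical serverless.yml: one HTTP-triggered function deployed to AWS
service: hello-service

provider:
  name: aws
  runtime: nodejs18.x   # placeholder; pick the runtime your code targets
  region: us-east-1

functions:
  hello:
    handler: handler.hello   # the exported function in handler.js
    events:
      - httpApi:
          path: /hello
          method: get
```

Running the framework's deploy command against a file like this provisions the function and its HTTP endpoint, which illustrates the appeal: the infrastructure is described declaratively alongside the function code.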

Overall, the Serverless Framework is a good option if you understand to some extent how cloud infrastructure works, and are looking for something that will work with other cloud providers in addition to AWS.
