
What is Serverless?

Explore serverless computing. Eliminate infrastructure management, focus on code, and embrace auto-scaling and pay-per-use models for modern applications.

Christina Harker, PhD

Serverless computing, also known as Function as a Service (FaaS), is a cloud computing model in which developers build and run applications without managing the underlying server infrastructure, which frees teams to spend their time on the application rather than on operations. In a serverless architecture, the cloud provider dynamically manages the allocation and provisioning of resources based on the application's demand.

What Defines Serverless Computing?

No server management 

With serverless, developers don’t need to provision or manage servers, VMs, or containers. The cloud provider handles all the infrastructure management tasks. This includes things like capacity planning, scaling, and maintenance.

Event-driven execution 

Serverless applications are triggered by events. What kind of events? HTTP requests, database updates, file uploads, or scheduled tasks, for example. When an event occurs, the associated code (known as a function) is executed and the necessary resources are allocated on demand. Between events the application remains dormant, and that results in cost savings.
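
To make that concrete, here is a minimal sketch of what such a function can look like, written as an AWS Lambda-style Python handler responding to an HTTP (API Gateway) event. The handler name and event shape are illustrative assumptions, not a prescription.

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler: it only runs when an event arrives.

    Here we assume an API Gateway (HTTP) event; the same function shape
    applies to file uploads, queue messages, or scheduled triggers.
    """
    # Pull a query parameter out of the incoming HTTP event, if present.
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    # Return an HTTP-style response; after this returns, the platform
    # decides whether to keep the environment warm or tear it down.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```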

Automatic scaling

Serverless platforms automatically scale the resources allocated to a function based on the incoming workload. The provider will make sure there are enough resources available to handle the demand, and this eliminates the need for manual scaling.

Pay-per-use pricing 

Serverless computing follows a pay-as-you-go pricing model. Users are billed based on the actual usage of computing resources. In turn, that means that costs are only incurred when functions are executed, and users aren’t charged for idle time.
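
As a rough illustration of what pay-per-use means in practice, the sketch below estimates a monthly bill from invocation count, average duration, and memory size. The prices are assumptions for illustration only, roughly in line with typical published FaaS rates; always check your provider's current price list.

```python
# Back-of-envelope cost estimate for a pay-per-use FaaS platform.
# Both prices below are illustrative assumptions, not quoted rates.
PRICE_PER_GB_SECOND = 0.0000166667   # USD per GB-second of compute
PRICE_PER_REQUEST = 0.0000002        # USD per invocation

def monthly_cost(invocations: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Estimate a monthly bill from usage figures."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# Example: 2 million requests per month, 120 ms average, 256 MB of memory.
print(f"${monthly_cost(2_000_000, 120, 256):.2f} per month")
```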

What are the Benefits of Serverless Computing?

Reduced operational overhead 

Serverless abstracts away infrastructure management. That abstraction allows developers to focus on writing code rather than dealing with server provisioning, scaling, and maintenance tasks.

Improved scalability 

Serverless platforms automatically scale resources in response to workload changes, so applications can handle sudden spikes in traffic without manual intervention. That makes it easier to maintain performance and responsiveness.

Cost efficiency

Pay-per-use pricing in serverless computing eliminates the need for overprovisioning or paying for idle resources. Users only pay for the actual execution time and resources used by their functions, potentially resulting in cost savings.

Faster time to market

Serverless architectures can also help by promoting faster development cycles. They allow devs to focus on building core application logic, instead of having to spend time on managing infrastructure. Because it frees up time, serverless also enables rapid prototyping, deployment, and iterative development.

Serverless computing works well for event-driven and highly scalable applications. This includes things like microservices, real-time data processing, chatbots, IoT backends, and web APIs. But serverless may not be right for applications with long-running or continuously active workloads. That’s because serverless functions have certain execution time and resource limitations that are imposed by cloud providers.

Popular serverless platforms include AWS Lambda, Azure Functions, Google Cloud Functions, and IBM Cloud Functions. These platforms all support different programming languages. And they provide integration with other cloud services and event sources.

What are the Risks of Going Serverless?

While serverless computing offers numerous advantages, there are also some potential disadvantages you will need to consider.

Cold start latency 

When a serverless function is triggered after a period of inactivity, or on its first invocation, there will likely be a noticeable delay known as a cold start. Cold starts happen because the cloud provider needs to allocate and initialize the necessary resources before it can execute the function. This latency hits hardest in applications that require near-instantaneous response times.
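
One common way to soften cold starts is to do expensive setup once, at initialization time, so that warm invocations can reuse it. The sketch below assumes an AWS Lambda-style Python runtime; the "heavy client" is just a placeholder for whatever expensive object your function needs.

```python
import time

# Module-level code runs once per cold start, when the platform spins up
# a fresh execution environment. Expensive setup (SDK clients, config,
# connection pools) belongs here so warm invocations can reuse it.
_START = time.time()
heavy_client = object()  # placeholder for an expensive client/connection

def handler(event, context):
    # On a cold start this delta includes the init above; on a warm
    # invocation the environment (and heavy_client) is reused.
    return {
        "statusCode": 200,
        "body": f"environment age: {time.time() - _START:.2f}s",
    }
```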

Execution time limitations 

Serverless functions have execution time limits imposed by cloud providers. Functions that exceed these limits are terminated abruptly, which can be problematic for tasks that require longer processing times. Some providers also enforce limits on the amount of memory or CPU resources a function can use.
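
To avoid being cut off mid-task, a function can watch its remaining time budget and stop cleanly before the hard limit. The sketch below uses AWS Lambda's context.get_remaining_time_in_millis(); the event shape and the do_work helper are assumptions for illustration, and re-queueing the leftovers is left out.

```python
def do_work(item):
    """Stand-in for real per-item processing."""
    return item

def handler(event, context):
    """Process a batch of items, but stop before the platform's hard timeout."""
    items = event.get("items", [])
    processed = []

    for item in items:
        # Leave a safety margin (here 10 seconds) so we can return cleanly
        # instead of being terminated mid-item.
        if context.get_remaining_time_in_millis() < 10_000:
            break
        processed.append(do_work(item))

    # Leftover items would typically be re-queued (e.g. to a message queue)
    # for a follow-up invocation; that wiring is omitted here.
    return {"processed": len(processed), "remaining": len(items) - len(processed)}
```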

Vendor lock-in 

Serverless platforms often have proprietary programming models and specific integration points with other cloud services. Unfortunately, this can lead to vendor lock-in, making it harder to migrate applications to alternative platforms or run them in a hybrid or multi-cloud environment.
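
One common mitigation is to keep the core logic provider-agnostic and isolate provider-specific code in thin adapters, so only the adapters need to change when moving platforms. The sketch below is illustrative; the handler names and event shapes are assumptions.

```python
import json

def greet(name: str) -> dict:
    """Pure application logic with no knowledge of any cloud provider."""
    return {"message": f"Hello, {name}!"}

# AWS Lambda-style adapter (API Gateway proxy event).
def lambda_handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps(greet(name))}

# Google Cloud Functions-style HTTP adapter (Flask request object).
def gcf_handler(request):
    name = request.args.get("name", "world")
    return greet(name)
```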

Limited control over infrastructure 

Serverless abstracts away infrastructure management. While this does provide a lot of convenience, the tradeoff is that you have limited control. Developers end up with less control over the underlying servers, networking configurations, or low-level system operations. If you have an application with specific infrastructure requirements or complex networking setups, this can definitely turn into a disadvantage.

Debugging and testing challenges 

Testing and debugging serverless applications can be more complicated than with traditional architectures. You have to take into account the distributed nature of serverless systems. On top of that, you’re dealing with event-driven workflows and need to set up appropriate logging and monitoring mechanisms so that you can diagnose and troubleshoot issues effectively.
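
Local unit tests that invoke the handler with hand-built events and a fake context object go a long way here. The sketch below shows one possible setup, assuming a simple HTTP-style handler; the event shape and FakeContext are illustrative stand-ins for the real provider objects.

```python
import json
import unittest

# The handler under test; in a real project it would be imported from its module.
def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"Hello, {name}!"})}

class FakeContext:
    """Minimal stand-in for the provider's context object."""
    function_name = "hello"

    def get_remaining_time_in_millis(self):
        return 30_000

class HandlerTest(unittest.TestCase):
    def test_greets_by_name(self):
        event = {"queryStringParameters": {"name": "Divio"}}
        response = handler(event, FakeContext())
        self.assertEqual(response["statusCode"], 200)
        self.assertIn("Divio", response["body"])

if __name__ == "__main__":
    unittest.main()
```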

Continuous execution costs 

While serverless architectures can be cost-effective for low or sporadic workloads, the same is not true for apps with continuous or high-frequency execution patterns. For those kinds of apps, serverless can quickly become very expensive. Essentially, the pay-per-use pricing model can lead to unexpected costs if the application experiences frequent spikes in traffic or sustained high compute demand, which, ironically, is exactly the traffic pattern you want if you’re growing.
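
The rough break-even sketch below shows how quickly a sustained request rate can catch up with the cost of an always-on instance. All prices here are assumptions for illustration, not quoted rates.

```python
# Rough break-even sketch: pay-per-use compute vs. an always-on instance.
PRICE_PER_GB_SECOND = 0.0000166667   # USD, illustrative FaaS rate
ALWAYS_ON_MONTHLY = 30.0             # USD, illustrative small always-on VM

def faas_monthly(requests_per_sec: float, duration_ms: float, memory_mb: int) -> float:
    """Estimate monthly FaaS compute cost for a sustained request rate."""
    seconds_per_month = 60 * 60 * 24 * 30
    invocations = requests_per_sec * seconds_per_month
    gb_seconds = invocations * (duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND

for rps in (0.1, 1, 10, 50):
    cost = faas_monthly(rps, 100, 512)
    print(f"{rps:>5} req/s -> ~${cost:,.0f}/month vs ${ALWAYS_ON_MONTHLY}/month always-on")
```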

Stateless nature 

Serverless functions are typically stateless, meaning they don’t retain any in-memory state between invocations. If an application requires maintaining session data or shared state, additional mechanisms like external databases or cache systems have to be used. And that only adds complexity to the architecture.
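
A typical pattern is to push session state into an external store and treat the function itself as disposable. The sketch below uses DynamoDB via boto3 as one example; the table name and key schema are assumptions for illustration, and any external database or cache would serve the same purpose.

```python
import boto3

# Because a function's memory is not guaranteed to survive between
# invocations, session state lives in an external store. The table name
# "sessions" and its "session_id" key are illustrative assumptions.
table = boto3.resource("dynamodb").Table("sessions")

def handler(event, context):
    session_id = event["session_id"]

    # Read whatever state a previous invocation left behind.
    item = table.get_item(Key={"session_id": session_id}).get("Item", {})
    count = int(item.get("count", 0)) + 1

    # Persist the updated state for the next invocation to pick up.
    table.put_item(Item={"session_id": session_id, "count": count})
    return {"session_id": session_id, "count": count}
```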

Conclusion

It's important to evaluate these disadvantages against the specific requirements and characteristics of your applications. Some limitations can be handled through careful architecture design, leveraging additional services, or combining serverless components with other deployment models. But that doesn’t mean serverless is automatically the right choice for you or your applications. If you’re trying to figure it out and could use some free cloud advice, reach out to Divio to discuss your needs. Even if you don’t end up using the Divio PaaS, we’re happy to share our expertise and knowledge with you and help you make a choice that you’ll feel confident about.