Chapter 1. Introducing Serverless
In this chapter we’re first going to go on a little history lesson to see what led us to Serverless. Given that context, we’ll describe what Serverless is. Finally, we’ll close out by summarizing why Serverless is both part of the natural growth of the cloud and a jolt to how we approach application delivery.
Setting the Stage
To place a technology like Serverless in its proper context, we must first outline the steps along its evolutionary path.
The Birth of the Cloud
Let’s travel back in time to 2006. No one has an iPhone yet, Ruby on Rails is a hot new programming environment, and Twitter is being launched. More germane to this report, however, is that many people are hosting their server-side applications on physical servers that they own and have racked in a data center.
In August of 2006 something happened which would fundamentally change this model. Amazon’s new IT Division, Amazon Web Services (AWS), announced the launch of Elastic Compute Cloud (EC2).
EC2 was one of the first of many Infrastructure as a Service (IaaS) products. IaaS allows companies to rent compute capacity—that is, a host to run their internet-facing server applications—rather than buying their own machines. It also allows them to provision hosts just in time, with the delay from requesting a machine to its availability being on the order of minutes.
EC2’s five key advantages are:
- Reduced labor cost: Before Infrastructure as a Service, companies needed to hire specific technical operations staff who would work in data centers and manage their physical servers. This meant everything from power and networking, to racking and installing, to fixing physical problems with machines like bad RAM, to setting up the operating system (OS). With IaaS all of this goes away and instead becomes the responsibility of the IaaS service provider (AWS in the case of EC2).
- Reduced risk: When managing their own physical servers, companies are exposed to problems caused by unplanned incidents like failing hardware. This introduces downtime periods of highly volatile length, since hardware problems are usually infrequent and can take a long time to fix. With IaaS, the customer, while still having some work to do in the event of a hardware failure, no longer needs to know how to fix the hardware itself. Instead the customer can simply request a new machine instance, available within a few minutes, and reinstall the application, limiting exposure to such issues.
- Reduced infrastructure cost: In many scenarios the cost of an EC2 instance is lower than that of running your own hardware once you take into account power, networking, etc. This is especially true when you only want to run hosts for a few days or weeks, rather than many months or years at a stretch. Similarly, renting hosts by the hour rather than buying them outright allows different accounting: EC2 machines are an operating expense (Opex) rather than the capital expense (Capex) of physical machines, which typically allows more favorable accounting flexibility.
- Scaling: Infrastructure costs drop significantly when considering the scaling benefits IaaS brings. With IaaS, companies have far more flexibility in scaling the numbers and types of servers they run. There is no longer a need to buy 10 high-end servers up front because you think you might need them in a few months’ time. Instead you can start with one or two low-powered, inexpensive instances, and then scale the number and types of instances up and down over time without any negative cost impact.
- Lead time: In the bad old days of self-hosted servers, it could take months to procure and provision a server for a new application. If you came up with an idea you wanted to try within a few weeks, then that was just too bad. With IaaS, lead time goes from months to minutes. This has ushered in the age of rapid product experimentation, as encouraged by the ideas of Lean Startup.
Infrastructural Outsourcing
Using IaaS is a technique we can define as infrastructural outsourcing. When we develop and operate software, we can break down the requirements of our work into two groups: those that are specific to our needs, and those that are the same for other teams and organizations working in similar ways. This second group of requirements we can define as infrastructure, and it ranges from physical commodities, such as the electric power to run our machines, right up to common application functions, like user authentication.
Infrastructural outsourcing can typically be provided by a service provider or vendor. For instance, electric power is provided by an electricity supplier, and networking is provided by an Internet Service Provider (ISP). A vendor is able to profitably provide such a service through two types of strategies: economic and technical, as we now describe.
Economy of Scale
Almost every form of infrastructural outsourcing is at least partly enabled by the idea of economy of scale—that doing the same thing many times in aggregate is cheaper than the sum of doing those things independently due to the efficiencies that can be exploited.
For instance, AWS can buy the same specification server for a lower price than a small company because AWS is buying servers by the thousand rather than individually. Similarly, hardware support cost per server is much lower for AWS than it is for a company that owns a handful of machines.
Technology Improvements
Infrastructural outsourcing also often comes about partly due to a technical innovation. In the case of EC2, that change was hardware virtualization.
Before IaaS appeared, a few IT vendors had started to allow companies to rent physical servers as hosts, typically by the month. While some companies used this service, the alternative of renting hosts by the hour was much more compelling. However, hourly rental only became feasible once physical servers could be subdivided into many small virtual machines (VMs) that could be rapidly spun up and down. Once that was possible, IaaS was born.
Common Benefits
Infrastructural outsourcing typically echoes the five benefits of IaaS:
- Reduced labor cost—fewer people and less time required to perform infrastructure work
- Reduced risk—fewer subjects to be expert in, and more real-time operational support capability
- Reduced resource cost—smaller cost for the same capability
- Increased flexibility of scaling—more resources, and different types of similar resources, can be accessed and then disposed of without significant penalty or waste
- Shorter lead time—reduced time-to-market from concept to production availability
Of course, infrastructural outsourcing also has its drawbacks and limitations, and we’ll come to those later in this report.
The Cloud Grows
IaaS was one of the first key elements of the cloud, along with storage, e.g., the AWS Simple Storage Service (S3). AWS was an early mover and is still a leading cloud provider, but there are many other vendors from the large, like Microsoft and Google, to the not-yet-as-large, like DigitalOcean.
When we talk about “the cloud,” we’re usually referring to the public cloud, i.e., a collection of infrastructure services provided by a vendor, separate from your own company, and hosted in the vendor’s own data centers. However, we’ve also seen a related growth of cloud products that companies can run in their own data centers using tools like OpenStack. Such self-hosted systems are often referred to as private clouds, and the practice of using your own hardware and physical space is called on-premises (or just on-prem).
The next evolution of the public cloud was Platform as a Service (PaaS). One of the most popular PaaS providers is Heroku. PaaS layers on top of IaaS, adding the operating system to the infrastructure being outsourced. With PaaS you deploy just applications, and the platform is responsible for OS installation, patch upgrades, system-level monitoring, service discovery, etc.
PaaS also has a popular self-hosted open source variant in Cloud Foundry. Since PaaS sits on top of an existing virtualization solution, you can host a “private PaaS” either on-premises or on lower-level public cloud IaaS services. Using both public and private cloud systems simultaneously is often referred to as hybrid cloud; being able to run one PaaS across both environments can be a useful technique.
An alternative to using a PaaS on top of your virtual machines is to use containers. Docker has become incredibly popular over the last few years as a way to more clearly delineate an application’s system requirements from the nitty-gritty of the operating system itself. There are cloud-based services that host and manage/orchestrate containers on a team’s behalf, often referred to as Containers as a Service (CaaS); a public cloud example is Google’s Container Engine. Self-hosted CaaS options include Kubernetes and Mesos, which you can run privately or, like PaaS, on top of public IaaS services.
Both vendor-provided PaaS and CaaS are further forms of infrastructural outsourcing, just like IaaS. They mainly differ from IaaS by raising the level of abstraction further, allowing us to hand off more of our technology to others. As such, the benefits of PaaS and CaaS are the same as the five we listed earlier.
Slightly more specifically, we can group all three of these (IaaS, PaaS, CaaS) as Compute as a Service; in other words, different types of generic environments that we can run our own specialized software in. We’ll use this term again soon.
Enter Serverless, Stage Right
So here we are, a little over a decade since the birth of the cloud. The main reason for this exposition is that Serverless, the subject of this report, is most simply described as the next evolution of cloud computing, and another form of infrastructural outsourcing. It has the same general five benefits that we’ve already seen, and is able to provide these through economy of scale and technological advances. But what is Serverless beyond that?
Defining Serverless
As soon as we get into any level of detail about Serverless, we hit the first confusing point: Serverless actually covers a range of techniques and technologies. We group these ideas into two areas: Backend as a Service (BaaS) and Functions as a Service (FaaS).
Backend as a Service
BaaS is all about replacing server-side components that we code and/or manage ourselves with off-the-shelf services. It’s closer in concept to Software as a Service (SaaS) than it is to things like virtual instances and containers. SaaS is typically about outsourcing business processes, though—think HR or sales tools, or, on the technical side, products like GitHub—whereas with BaaS, we’re breaking up our applications into smaller pieces and implementing some of those pieces entirely with external products.
BaaS services are domain-generic remote components (i.e., not in-process libraries) that we can incorporate into our products, with an API being a typical integration paradigm.
BaaS has become especially popular with teams developing mobile apps or single-page web apps. Many such teams are able to rely significantly on third-party services to perform tasks that they would otherwise have needed to do themselves. Let’s look at a couple of examples.
First up we have services like Google’s Firebase (and, before it was shut down, Parse). Firebase is a fully vendor-managed database product (Google being the vendor in this case) that can be used directly from a mobile or web application without the need for our own intermediary application server. This represents one aspect of BaaS: services that manage data components on our behalf.
BaaS services also allow us to rely on application logic that someone else has implemented. A good example here is authentication—many applications implement their own code to perform signup, login, password management, etc., but more often than not this code is very similar across many apps. Such repetition across teams and businesses is ripe for extraction into an external service, and that’s precisely the aim of products like Auth0 and Amazon’s Cognito. Both of these products allow mobile apps and web apps to have fully featured authentication and user management, but without a development team having to write or manage any of the code to implement those features.
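To make this concrete, here is a minimal sketch of delegating user signup to Amazon Cognito. It uses Python and the boto3 SDK purely for illustration; a mobile or web app would more typically call Cognito through its client-side SDKs, and the app client ID below is a placeholder.

```python
# A minimal sketch: delegating user signup to Amazon Cognito via boto3.
# Assumes an existing Cognito user pool; the app client ID is a placeholder
# and error handling is omitted for brevity.
import boto3

cognito = boto3.client("cognito-idp")

def sign_up_user(email: str, password: str) -> str:
    response = cognito.sign_up(
        ClientId="YOUR_APP_CLIENT_ID",  # placeholder for your user pool app client
        Username=email,
        Password=password,
        UserAttributes=[{"Name": "email", "Value": email}],
    )
    # Cognito stores the user and owns the confirmation flow, password policy,
    # and token issuance; none of that code lives in our application.
    return response["UserSub"]  # the new user's unique identifier
```

The point is not the specific API, but that the user store and all of the surrounding authentication logic are run and operated by the vendor rather than by us.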
Backend as a Service as a term became especially popular with the rise in mobile application development; in fact, the term is sometimes referred to as Mobile Backend as a Service (MBaaS). However, the key idea of using fully externally managed products as part of our application development is not unique to mobile development, or even front-end development in general. For instance, we might stop managing our own MySQL database server on EC2 machines, and instead use Amazon’s RDS service, or we might replace our self-managed Kafka message bus installation with Kinesis. Other data infrastructure services include filesystems/object stores and data warehouses, while more logic-oriented examples include speech analysis as well as the authentication products we mentioned earlier, which can also be used from server-side components. Many of these services can be considered Serverless, but not all—we’ll define what we think differentiates a Serverless service in Chapter 5.
Functions as a Service/Serverless Compute
The other half of Serverless is Functions as a Service (FaaS). FaaS is another form of Compute as a Service—a generic environment within which we can run our software, as described earlier. In fact some people (notably AWS) refer to FaaS as Serverless Compute. Lambda, from AWS, is the most widely adopted FaaS implementation currently available.
FaaS is a new way of building and deploying server-side software, oriented around deploying individual functions or operations. FaaS is where a lot of the buzz about Serverless comes from; in fact, many people think that Serverless is FaaS, but they’re missing out on the complete picture.
When we traditionally deploy server-side software, we start with a host instance, typically a virtual machine (VM) instance or a container (see Figure 1-1). We then deploy our application within the host. If our host is a VM or a container, then our application is an operating system process. Usually our application contains code for several different but related operations; for instance, a web service may allow both the retrieval and updating of resources.
FaaS changes this model of deployment (see Figure 1-2). We strip away both the host instance and application process from our model. Instead we focus on just the individual operations or functions that express our application’s logic. We upload those functions individually to a vendor-supplied FaaS platform.
Unlike in a traditional system, however, the functions are not constantly active in a server process, sitting idle until they need to be run (Figure 1-3). Instead, the FaaS platform is configured to listen for a specific event for each operation. When that event occurs, the vendor platform instantiates the function and then calls it with the triggering event.
Once the function has finished executing, the FaaS platform is free to tear it down. Alternatively, as an optimization, it may keep the function around for a little while until there’s another event to be processed.
FaaS is inherently an event-driven approach. Beyond providing a platform to host and execute code, a FaaS vendor also integrates with various synchronous and asynchronous event sources. An example of a synchronous source is an HTTP API gateway. Examples of asynchronous sources include a hosted message bus, an object store, or a scheduled event (similar to cron).
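To give a feel for the shape of such a function, here is a minimal sketch of a FaaS handler written in Python, assuming a synchronous API Gateway (HTTP) trigger; the greeting logic and the event fields used are illustrative only.

```python
import json

# A minimal sketch of a FaaS function (an AWS Lambda-style handler in Python),
# assuming it is wired to an API Gateway HTTP event source. The platform
# instantiates the function when a request arrives and passes the request
# details in as the 'event' argument.
def handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # With API Gateway's proxy integration, the returned dictionary describes
    # the HTTP response sent back to the caller.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

Notice what is missing: there is no server process, no port to listen on, and no framework bootstrapping. The platform invokes the function when an event arrives, and tears it down (or caches it) afterward.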
AWS Lambda was launched in the Fall of 2014 and since then has grown in maturity and usage. While some usages of Lambda are very infrequent, just being executed a few times a day, some companies use Lambda to process billions of events per day. At the time of writing, Lambda is integrated with more than 15 different types of event sources, enabling it to be used for a wide variety of different applications.
Beyond AWS Lambda there are several other commercial FaaS offerings from Microsoft, IBM, Google, and smaller providers like Auth0. Just as with the various other Compute-as-a-Service platforms we discussed earlier (IaaS, PaaS, CaaS), there are also open source projects that you can run on your own hardware or on a public cloud. This private FaaS space is busy at the moment, with no clear leader, and many of the options are fairly early in their development at the time of writing. Examples are Galactic Fog, IronFunctions, and Fission (which uses Kubernetes), as well as IBM’s own OpenWhisk.
The Common Theme of Serverless
Superficially, BaaS and FaaS are quite different—the first is about entirely outsourcing individual elements of your application, and the second is a new hosting environment for running your own code. So why do we group them into the one area of Serverless?
The key is that neither requires you to manage your own server hosts or server processes. With a fully Serverless app you are no longer thinking about any part of your architecture as a resource running on a host. All of your logic—whether you’ve coded it yourself or are integrating with a third-party service—runs within a completely elastic operating environment. Your state is also stored in a similarly elastic form. Serverless doesn’t mean the servers have gone away; it means that you don’t need to worry about them anymore.
Because of this key theme, BaaS and FaaS share some common benefits and limitations, which we look at in Chapters 3 and 4. There are other differentiators of a Serverless approach, also common to FaaS and BaaS, which we’ll look at in Chapter 5.
An Evolution, with a Jolt
We mentioned in the preface that Serverless is an evolution. The reason for this is that over the last 10 years we’ve been moving more of what is common about our applications and environments to commodity services that we outsource. We see the same trend with Serverless—we’re outsourcing host management, operating system management, resource allocation, scaling, and even entire components of application logic, and considering those things commodities. Economically and operationally there’s a natural progression here.
However, there’s a big change with Serverless when it comes to application architecture. Most cloud services, until now, have not fundamentally changed how we design applications. For instance, when using a tool like Docker, we’re putting a thinner “box” around our application, but it’s still a box, and our logical architecture doesn’t change significantly. When hosting our own MySQL instance in the cloud, we still need to think about how powerful a virtual machine we need to handle our load, and we still need to think about failover.
That changes with Serverless, and not gradually, but with a jolt. Serverless FaaS drives a very different type of application architecture through a fundamentally event-driven model, a much more granular form of deployment, and the need to persist state outside of our FaaS components (we’ll see more of this later). Serverless BaaS frees us from writing entire logical components, but requires us to integrate our applications with the specific interface and model that a vendor provides.
So what does a Serverless application look like if it’s so different? That’s what we’re going to explore next, in Chapter 2.