The ideas and concepts behind serverless computing have been around for a while. However, implementations have only emerged in the last two to three years. AWS was one of the first major cloud providers to offer a serverless computing solution, called Lambda, at the end of 2014.
What is Serverless Computing?
The basic idea is to provide the developer with another abstraction layer that enables API-driven creation and management of the underlying infrastructure – in other words, servers, storage and networking become invisible to the developer. Looked at in detail, serverless computing can cause some confusion, as the name might imply that no servers are needed – this is not true. Servers, as well as other infrastructure components, are still needed; it is just that they are managed and controlled by software, without the need to install, start or stop a virtual or physical server.
As noted above, AWS developed a serverless capability called Lambda. In detail, Lambda is:
“a compute service that executes code written in JavaScript (node.js), Python, or Java on AWS infrastructure. Source code is deployed to an isolated container that has its own allocation of memory, disk space, and CPU. The combination of code, configuration and dependencies, is typically referred to as a Lambda function. The Lambda runtime can invoke a function multiple times in parallel. Lambda supports push and pull event models of operation and integrates with a large number of AWS services. Functions can be invoked by an HTTP request through the API Gateway, or run on a scheduler.”
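To make the quoted description concrete, here is a minimal sketch of what such a function might look like in Python. It is a hypothetical example: the handler name, the event fields and the API Gateway proxy-style response shape are illustrative assumptions, not taken from the text above.

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda handler invoked by an HTTP request through the API Gateway.

    `event` carries the request data; `context` exposes runtime information
    such as the remaining execution time and the allocated memory.
    """
    # Read a query-string parameter if the caller supplied one (hypothetical field).
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    # Return a proxy-integration style response: status code, headers and a JSON body.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

The code, its configuration (memory, timeout, execution role) and any bundled dependencies together form the Lambda function; the runtime can invoke this handler many times in parallel without the developer provisioning any servers.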
Peter Sbarski and Sam Kroonenburg outlined the concept behind Lambda: to create a serverless computing platform.
For serverless to work, the underlying infrastructure has to be fully “invisible” and has to follow a Lego-based approach. Invisibility is achieved by complying with the following key principles: Utility-Based, Software-Based, Open-Standards-Based, Virtualised, Service-Oriented, Horizontally and Elastically Scalable, Fit-for-Purpose, Fully Automated, Lego-Based Modularity, Globally Operated, Secured, and Fully SLA and KPI Cognisant.
In more detail, a serverless computing model works as follows (again using AWS as an example).
AWS Lambda is essentially an abstraction layer between the calling code and the actual provisioning of infrastructure services. Using the Lambda API (application programming interface), the requestor invokes the necessary Lambda functions in a stateless fashion to execute the required function. Note that Lambda is stateless; if state is needed, services such as S3 or DynamoDB are required.
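Because each invocation starts with no memory of previous calls, any state has to be written to an external service. The sketch below illustrates this with DynamoDB via boto3; it is an assumption-laden example – the table name "visit-counter" and its key schema are hypothetical and would need to exist in your own account.

```python
import json
import boto3

# boto3 picks up credentials and region from the Lambda execution environment.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("visit-counter")  # hypothetical table with partition key "page"

def lambda_handler(event, context):
    """Stateless handler that keeps its state in DynamoDB rather than in memory."""
    page = (event.get("queryStringParameters") or {}).get("page", "home")

    # Atomically increment a counter; the function itself retains nothing between calls.
    result = table.update_item(
        Key={"page": page},
        UpdateExpression="ADD #v :one",
        ExpressionAttributeNames={"#v": "visits"},
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    visits = int(result["Attributes"]["visits"])

    return {
        "statusCode": 200,
        "body": json.dumps({"page": page, "visits": visits}),
    }
```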
Why is Serverless Computing needed?
The idea of “NoOps”, as well as “only focus on developing functionality”, has been around for a long time. Solutions like IaaS and PaaS have addressed some of these aspects. However, only when software and infrastructure patterns converged did it become possible for developers to just write code, compile it and simply upload it, without having to have a black belt in bash or Puppet.
The drive towards serverless computing is fuelled by the digital agenda: to build and consume a compelling digital platform featuring APIs, open datasets, service catalogues, integration, frameworks, solutions guidance, tools and collaboration support. Such a platform enables businesses to quickly create their own market-focused solutions, leveraging enterprise-grade information and services, without having to get “hands on” with infrastructure.
What are the advantages?
In both models (BaaS and FaaS) there are several advantages:
- Time savings: As the event-triggered containers are managed by a third party, developers can focus on the functional aspects of their code, rather than also having to develop backend or other supporting functionality.
- Cost savings: Using FaaS and/or BaaS can save cost, as the third-party hosting provider only charges for the time the code/function actually runs. There is also no need to own, host and operate your own infrastructure, since with FaaS/BaaS all of that is outsourced.
- Better scaling: As the third-party provider invokes whatever server infrastructure is needed to handle the triggering event, there is no need to deal with scaling; it is all automatic and built-in.
- No system administration: In a full serverless model (everything API-based) there is no need for any system-related administration. There are no bash shell scripts, no stopping and starting of services; all that is needed is to compile, zip and upload your code to a FaaS platform (a minimal deployment sketch follows this list).
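As a hedged illustration of the “zip and upload” step mentioned in the last point, the sketch below packages a single handler file in memory and registers it with Lambda using boto3. The file name, function name, runtime version and IAM role ARN are placeholders chosen for illustration; the same can be achieved with the AWS CLI or the console.

```python
import io
import zipfile
import boto3

def deploy(handler_file: str = "handler.py", function_name: str = "hello-world") -> str:
    """Zip a single-file handler in memory and register it as a new Lambda function."""
    # Build the deployment package in memory: just the handler module.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(handler_file)
    buf.seek(0)

    client = boto3.client("lambda")
    response = client.create_function(
        FunctionName=function_name,
        Runtime="python3.12",                                # assumed runtime
        Role="arn:aws:iam::123456789012:role/lambda-exec",   # placeholder execution role
        Handler="handler.lambda_handler",                    # module.function inside the zip
        Code={"ZipFile": buf.read()},
        Timeout=10,
        MemorySize=128,
    )
    return response["FunctionArn"]

if __name__ == "__main__":
    print(deploy())
```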
What are the disadvantages?
In both models there are several disadvantages:
- Vendor lock-in: Both models use a third-party API, and the developer “outsources” certain functions to that third party. This creates dependencies similar to those of COTS (commercial off-the-shelf) products in terms of API, cost, functionality, upgrade path and support.
- Multitenancy: Economies of scale in a BaaS and FaaS model can only be achieved if the (backend/function) code runs on shared infrastructure. This can mean that customers with significant security requirements have limited options or are unable to adopt a public BaaS/FaaS model.
- Maturity: Serverless computing has only been around for about three years, and as with any new technology, it will take time to reach a stable, mature state.
Summary
A number of vendors have launched serverless computing platforms since 2015: Google Cloud Functions, IBM’s OpenWhisk and Microsoft’s Azure Functions, as well as AWS Lambda. Each platform comes with a different set of functions and features, and given the relatively young age of serverless computing, many new capabilities will follow. Whilst serverless computing allows the developer to focus on writing the actual code that delivers the required functionality, it still requires server infrastructure, albeit invisible to the developer.
Thanks for Reading.
Gunnar Menzel is VP and chief architect officer for Capgemini’s Cloud Infrastructure Business. Read more Capgemini blogs here.