
Saturday, June 13, 2020

Serverless - Less Server

Traditional applications, once developed, need to be deployed to servers before they can serve customers. The problem with this approach is that it requires capacity planning, procurement of hardware, installation of software, and getting everything ready for the application. Setting all of this up normally takes anywhere from weeks to months. It is time consuming and brings both upfront and running expenses.
 

The introduction of the Cloud, with its on-demand hardware, minimizes capacity planning and solves many of these issues. There are still costs involved, CAPEX (capital expenditure) and OPEX (operational expenditure), but it decreases deployment time and removes the need for staff to manage hardware. In this article we will see what serverless is and how it can be used.

Introducing Serverless

Serverless can be defined as “an approach that replaces long running machines with ephemeral compute power that comes into existence on request and disappears immediately after use”

Understanding Serverless
Serverless doesn't mean there is no server; it means a server we don't need to manage. Instead of maintaining a server and running our services on it, we delegate the maintenance of the server to a third party and concentrate on developing the services that run on it. Generally, these third parties that manage the servers are cloud vendors.

It is far more useful to concentrate on developing the service than on managing a server, and this is where serverless comes into the picture. Serverless, or serverless computing, is an execution model in which we run our services on hardware provided by a cloud vendor such as AWS, Google or Azure. The hardware is managed by the cloud, and resources are attached and detached based on the requirements. The cost is based on the amount of resources the service consumes, and this is what makes it different from other models. Normally we buy a server, run our services on it, and manage it ourselves, adding memory or CPU when required; with serverless, the management of the server, its resources and everything else is handled by the cloud. All we need to do is run our services on it.

As we already said, serverless does not require pre-defined hardware for the execution of the application. Instead, the application triggers an action that causes the hardware to be created, and the application is executed on it. Once the execution is completed, the hardware is stopped until another action is triggered.

For instance, let's say we have a content management application where users upload an image for the articles they write. In a serverless architecture built with AWS Lambda, the image is first uploaded to an S3 bucket, which triggers an event. The trigger causes an AWS Lambda function (written in any of the supported programming languages) to resize and compress the image so it displays well on multiple devices. The Lambda code, or functions, executed by these triggered events runs on hardware provisioned on demand by the cloud provider. Once the execution is complete, the hardware is stopped and waits for further triggers.
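To make this concrete, here is a minimal sketch of what such a function could look like in Python. It is only an illustration: it assumes the Python Lambda runtime, that Pillow is packaged with the function (for example via a Lambda layer, since it is not in the default runtime), and a hypothetical destination bucket for the resized copies.

import io
import os

import boto3
from PIL import Image  # Pillow: assumed to be packaged with the function, e.g. via a Lambda layer

s3 = boto3.client("s3")

# Hypothetical destination bucket for the resized copies
RESIZED_BUCKET = os.environ.get("RESIZED_BUCKET", "my-articles-resized")


def handler(event, context):
    """Triggered by an S3 ObjectCreated event; resizes and compresses the uploaded image."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download the original image into memory
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Resize to a size that fits article pages (800px is an assumption)
        image = Image.open(io.BytesIO(original)).convert("RGB")
        image.thumbnail((800, 800))

        # Compress and write the copy to the destination bucket
        buffer = io.BytesIO()
        image.save(buffer, format="JPEG", quality=80)
        buffer.seek(0)
        s3.put_object(Bucket=RESIZED_BUCKET, Key=key, Body=buffer, ContentType="image/jpeg")

Notice that the function contains only the image logic; provisioning the machine it runs on and tearing it down afterwards is entirely the provider's job.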

Serverless Providers

Most of the major cloud computing providers have their own serverless offerings. AWS Lambda, launched in 2014, was the first and is the most mature one. Lambda supports multiple programming languages like Node.js, Java, Python and C#, and the best part is that it integrates with many other AWS services.

Google Cloud Functions is also available, as are Azure Functions from Microsoft and OpenWhisk, an open source serverless platform run by IBM. Other serverless options include Iron.io, Webtask, etc.

Function as a Service or FaaS
When we say that servers are dynamically created and managed only when we want to run a service, the idea is that we write our application code in the form of functions.

FaaS is the concept at the heart of serverless computing: software developers deploy an individual “function”, action, or piece of business logic. Functions are expected to start within milliseconds, process individual requests, and then the process ends. The developer does not need to write code that fits the underlying infrastructure and can concentrate on writing the business logic.

One important thing to understand here is that when we deploy the function, we then need a way to invoke it, and that is done in the form of an event. The event can be anything: an HTTP request coming through API Gateway, an event from another serverless function, or an event from another cloud service such as S3.

The cloud provider executes the function code on your behalf. The cloud provider also takes care of provisioning and managing the servers to run the code upon invocation.
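As a rough illustration, a function invoked through API Gateway can be as small as the following Python handler. This is a minimal sketch: the event shape assumes the Lambda proxy integration, and the greeting logic is just a placeholder for real business logic.

import json


def handler(event, context):
    """Invoked by API Gateway for each HTTP request (Lambda proxy integration)."""
    # Only the business logic lives here; provisioning and scaling are the provider's job
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }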

Pros & Cons

Serverless offers developers many pros, but it has cons as well. Here are a few pros:

Pay only for what we use : the first pro is that we don't pay for idle server time, only for the time our code actually executes on the server. While our code is not running, the servers sit idle or are used for other workloads.

Elasticity : with a serverless architecture, our application automatically scales up to accommodate spikes in traffic and scales down when there are fewer users. The cloud vendor takes the responsibility of scaling the application up and down based on the traffic.

Less time and money spent on management : since most of the infrastructure work, such as provisioning hardware and scaling the service up and down, is taken care of by the vendor, and there is no hassle of managing hardware, organizations spend less money, time and resources, leaving them free to focus on the business.

Reduces development time and time to market : serverless architecture gives developers and organizations more time to focus on building the product. The vendor takes care of the hardware, deployment of the services, and managing and scaling them, leaving organizations to focus on building the product and releasing it to the market. There are no operating systems to select, secure, or patch.

Microservice approach : microservices are a popular approach where developers build modular software that is more flexible and easier to manage than a monolithic application. With the same approach, developers can build smaller, loosely coupled pieces of software that run as functions.

Here are the cons:
Vendor lock-in & decreased transparency : this is one of the major concerns in moving to serverless in the cloud. The backend is completely managed by the vendor, and once the functions are written, moving them to a different cloud can require major changes to the application. Not just the code: the other services linked to the functions, such as the database, access management and storage, take a lot of time, money and resources to port to a different cloud.

Limited programming language support : since the functions have to be written in the languages the platform supports, not every language is available. AWS Lambda directly supports Java, C#, Python, and Node.js (JavaScript); Google Cloud Functions works with Node.js; Microsoft Azure Functions works with JavaScript, C#, F#, Python, and PHP; and IBM OpenWhisk supports JavaScript, Python, Java, and Swift. For some other languages, such as Scala and Golang, serverless support is still a work in progress.

Not suitable for long running tasks : since the functions are event based in nature, serverless is not a good fit for long running tasks. The timeout limit on Lambda used to be 5 minutes, but as it was a major barrier for some applications, it was increased, and since Oct 2018 Lambda can run for up to 15 minutes (see the short configuration sketch after this list).

On other serverless platforms, the maximum varies from 9 minutes to 60 minutes. Many long-running use cases, such as video processing, big data analysis, bulk data transformation, batch event processing, very long synchronous requests, and statistical computations, are therefore not a good fit for serverless computing.

Potentially tough to debug : there are tools that allow remote debugging, and some services (e.g., Azure) provide a mirrored local development environment, but there is still a need for improved tooling.

Hidden Costs : Auto-scaling of function calls often means auto-scaling of cost. This can make it tough to gauge your business expenses.

Need for better tooling : you now have a ton of functions deployed and it can be tough to keep track of them. This comes down to a need for better tooling: developmental (scripts, frameworks), diagnostic (step-through debugging, local runtimes, cloud debugging), and visualization (user interfaces, analytics, monitoring).

Higher latency in responding to application events : since the hardware sits idle for some time, when an event triggers a function the server can take a moment to wake up (a cold start) before running the function.

Learning curve : serverless does have a learning curve in defining our software in the form of functions. Converting a monolithic application into microservices and then into functions requires a deep understanding of the architecture and how the pieces work together.
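As a small illustration of the timeout limit mentioned above, the sketch below raises a function's timeout to the current 15-minute (900 second) maximum using boto3; the function name is hypothetical.

import boto3

lambda_client = boto3.client("lambda")

# Raise the timeout of a (hypothetical) function to the 15-minute maximum.
# Work that cannot finish within this window is a sign the job may not fit serverless.
lambda_client.update_function_configuration(
    FunctionName="video-thumbnail-generator",  # hypothetical function name
    Timeout=900,  # seconds
)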


Hope this helps in giving a basic understanding of serverless.
