Trilogix Cloud

Serverless overview, principles, case example on AWS


Proof-of-concept (POC) projects are the best way to determine which serverless approach to use.  Cost, management overhead, simplicity, concurrency, and the use of micro-services or containers are some of the key principles and constraints to consider.  Because serverless is 'flexible' and each Cloud platform has various offerings, the right architecture, with the right mix of cost, scalability, manageability, and performance, can only be ascertained through Agile POCs.


2. What is Serverless, Benefits, Demerits

Serverless generally has 2 definitions or applications in Cloud architectures.  The first refers to the creation of small, self-contained 'functions': code which implements a particular set of instructions.  On AWS, for example, Lambda provides discrete functionality to perform a task in an event-driven architecture.  An example might be a code repository being updated and Lambda being used to integrate the changed classes into a CI-CD pipeline.
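A minimal sketch of such an event-driven function, written as a standard Lambda handler in Python. The event shape (a webhook body listing changed files) and the idea of queuing them for a build are illustrative assumptions, not a real pipeline integration:

```python
import json

def lambda_handler(event, context):
    """Hypothetical handler: a repo-update event arrives (e.g. relayed
    through API Gateway as a webhook) and we report which files would be
    handed to a downstream CI-CD build step."""
    body = json.loads(event.get("body", "{}"))
    changed = body.get("changed_files", [])
    # A real pipeline would call CodeBuild or CodePipeline here; this
    # sketch just echoes what would be queued for the build.
    return {
        "statusCode": 200,
        "body": json.dumps({"queued_for_build": changed}),
    }
```

The point is the shape: one small, stateless function, invoked only when the event fires, with no server to provision.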

Serverless also refers to a second general category, namely using a PaaS (eg a database) without having to be concerned with the underlying compute infrastructure.  A managed database of this kind is known as DBaaS: a type of scalable server infrastructure in which auto-scaling is built in and there is no hands-on operational management.

Serverless is usually associated with a 'cloud-native' architecture which enables a firm to offload the operational management of services to the Cloud provider, e.g. AWS.  This allows firms to build and run applications and services without worrying about the underlying server-compute infrastructure.


Why use it?

Serverless eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity planning.  The idea is that developers can concentrate on writing better code, often in containers or within a micro-services design pattern.  Serverless is inherently scalable and a great pattern for handling concurrency (for example, migrating a Mainframe-Cobol-3270 app to the cloud means you will have to handle concurrent access).

Interestingly, some firms are also using serverless to 'collapse' complex middleware or middle tiers in their architecture, pushing business logic either to the client application or to serverless functions.


Some issues

Serverless can suffer from the following pain points:

  • Costs (API Gateway, Lambda, and FaaS-managed containers can all add up)
  • Complexity and management
  • Proper monitoring of the functions
  • Bottlenecks and Latency

POCs will reveal the best mix of serverless and non-serverless services and patterns for your particular use case.


3. Key Principles

1. Develop a single-purpose stateless function

By default, the recommended practice is to write single-purpose code, since functions are stateless and exist only for a limited duration.  Doing this limits execution time, which is directly proportional to overall cost.

Single-purpose functions are also easier to test, deploy and release, which improves software agility.  The problem with multi-purpose functions running your entire application is that you end up scaling the entire application rather than a particular piece of functionality.

For example, suppose one function serves two workloads, where the first serves thousands of requests while the second serves perhaps one hundred.  In this case, split the function and make a separate one for each workload, simplifying the process.
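The split above can be sketched as follows. The handler names and the search/report workloads are illustrative assumptions; the point is that each resulting function scales, deploys, and is billed independently:

```python
# Instead of one function that branches on the caller...
def monolithic_handler(event, context):
    if event.get("type") == "search":       # thousands of requests
        return {"result": f"searched for {event['query']}"}
    return {"result": "report generated"}   # ~one hundred requests

# ...split into two single-purpose handlers, so the high-traffic
# search path scales on its own without dragging the report path along.
def search_handler(event, context):
    return {"result": f"searched for {event['query']}"}

def report_handler(event, context):
    return {"result": "report generated"}
```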

2. Create a powerful frontend

Move some of the logic and state control to the frontend of the app.  A weak frontend requires a lot of traffic between the frontend and the backend, which creates latency.  Best practice is to separate the frontend from the backend.

Executing more complex business functionality at the frontend through a rich client-side application framework can potentially reduce cost by minimizing function calls and execution times.  The best way to do this is to completely decouple the backend logic from the frontend.  This allows more services to be accessed from the frontend, resulting in better application performance and user experience.

3. Design a trigger-based event-driven pattern

A decoupled application can provide scalability.  It also helps reduce interdependencies between functions, so that failures do not impact other components, and it enables data synchronization across the web application layer.

Designing a push-based, event-driven architecture pattern assumes a chain of events instantiated without user input.  This pattern gives serverless its scalability and concurrency management.
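A common concrete instance of this pattern on AWS is S3 pushing an object-created event to a Lambda function. A minimal sketch, using the standard S3 event record shape (the downstream work is an assumption, left as a comment):

```python
def s3_event_handler(event, context):
    """Push-based trigger: S3 invokes this function on every object
    upload; no user input or polling is involved."""
    keys = [
        record["s3"]["object"]["key"]
        for record in event.get("Records", [])
    ]
    # Downstream work (resize an image, index a document, fan out to
    # SQS/SNS, ...) would go here.
    return {"processed": keys}
```

Because S3 invokes one function execution per event batch, concurrency scales with the event volume rather than with any pre-provisioned capacity.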

4. Incorporate standard security mechanisms

Functions are governed by different security policies.  This makes it important to build a proper security mechanism for all serverless services, such as API Gateway, Lambda, or S3.  An IAM policy will cover access controls, identity and access management, authentication management, encryption and decryption, and much more.  The guiding security principle is that of Least Privilege.
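A least-privilege IAM policy for a function that only needs to read objects from a single bucket might look like the sketch below (the bucket name is illustrative). Building the policy as a Python dict makes it easy to generate and validate before attaching it:

```python
import json

# Least privilege: the function may read objects from one bucket and
# nothing else -- no writes, no other services.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-app-bucket/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Granting `s3:*` or `Resource: "*"` here would work just as well at runtime, which is exactly why least privilege has to be an explicit discipline rather than a default.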


4. AWS project example

AWS provides a number of Serverless services:

  • Compute:  Lambda, Fargate
  • Storage: S3, EFS
  • Databases:  DynamoDB, Aurora Serverless
  • API:  API Gateway
  • Application Integration:  SNS, SQS, EventBridge, AppSync
  • Orchestration: Step Functions
  • Analytics: Kinesis, Athena
  • Development:  SDKs, IDEs

Bustle, a digital media company, was originally running on a third-party platform as a service (PaaS) but migrated to AWS OpsWorks to gain better scale and availability.

Problem 1:  The site was built as a monolith and had trouble scaling.

Problem 2: There was also a fair amount of server management, automation, and monitoring involved in order to keep the website running smoothly; this raised the barrier of entry for new engineers to roll out new code changes.

Problem 3: The engineering team wanted to focus on areas where they thought they brought the customer the most value, which was building front-end features instead of dealing with operations.

The Solution

Bustle started using AWS Lambda to process high volumes of site-metric data from Amazon Kinesis Streams in real time. This allowed the team to get data more quickly so it could understand how new site features affect usage. The team can also now measure user engagement, allowing better data-driven decisions.
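A sketch of what such a Kinesis-triggered Lambda looks like: each record in the event carries base64-encoded data, which the handler decodes and tallies. The `page_view` event type and the metric being counted are illustrative assumptions, not Bustle's actual schema:

```python
import base64
import json

def kinesis_metrics_handler(event, context):
    """Lambda is invoked with a batch of Kinesis records; each record's
    'data' field is base64-encoded JSON.  Here we count page views."""
    views = 0
    for record in event.get("Records", []):
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload.get("event") == "page_view":
            views += 1
    # A real handler would write the aggregate to a store or dashboard.
    return {"page_views": views}
```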

The team also decided to explore using AWS Lambda and Amazon API Gateway to run an entirely serverless website. To test this, Bustle built a site using Ember.js and Riot.js running on a serverless backend comprising AWS Lambda, Amazon API Gateway, and a self-managed Redis data store.
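In this setup the client-side framework calls API Gateway over HTTPS, which invokes a Lambda proxy-integration handler. A minimal sketch of such a handler (the greeting endpoint is an invented example, but the event/response shapes follow API Gateway's Lambda proxy format):

```python
import json

def api_handler(event, context):
    """Lambda proxy integration: API Gateway passes the HTTP request as
    'event' and expects statusCode/headers/body back."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"greeting": f"hello, {name}"}),
    }
```

With the backend reduced to handlers like this, there is no web server to run: routing, TLS, and scaling all belong to API Gateway and Lambda.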

Bustle also built their own AWS Lambda and Amazon API Gateway software-delivery tool, allowing developers to easily do integration tests and deployments when they are ready to release their code into production.

The serverless back end supports the Romper website and iOS app as well as the Bustle iOS app. Bustle plans to migrate all of its properties to the serverless back end.


The Benefits

With AWS Lambda, the engineering team now puts zero thought into scaling applications. There is an extremely low cost for any engineer to deploy production-ready code that will scale. With no operational maintenance of servers, the team can remain small, with half the people normally required to build and operate sites of Bustle’s scale.  Bustle has also experienced approximately 84% cost savings by moving to a serverless architecture.


