Serverless Software Development: Focus on Features, Not the Infrastructure
Serverless, as the name implies, is a technology that lets you run your backend applications without managing server infrastructure. You deploy your code, and your cloud provider manages the servers under the hood. It allows you to focus on your code, minimizing the effort spent on infrastructure.
No need to pay for idle servers. With Serverless, you pay only for the time it takes to run your application. If your system isn't handling requests, you don't incur any runtime costs. That makes it ideal for development, staging, and QA environments, which aren't used 24/7.
Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and many other providers support Serverless technology, and most continue to integrate it into more of their products. Amazon Aurora Serverless and the Fargate container engine already follow the same Serverless principle of paying only when resources are used. Serverless is entering the mainstream now, and there are plenty of resources and libraries online to help you.
Serverless resources grow and shrink automatically with demand. More resources are spun up as more requests come in; if there are no requests at all, no resources are running.
Running on your own hardware is complicated and unnecessary (unless you're at the scale of Google or Amazon), and so is running on dedicated virtualized servers (like AWS EC2). Over the next five years, Serverless will become the norm and the default choice for new development. Working with new technology is exciting for you and your team! Don't miss the opportunity to be ahead of the game instead of playing catch-up.
Serverless functions plug directly into your API Gateway or GraphQL endpoints. This means they can power your client applications, such as mobile apps, websites, and IoT devices, as well as the rest of your backend microservices.
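As a sketch of what "plugging into an endpoint" looks like, here is a minimal function in the shape AWS Lambda expects for an API Gateway proxy integration. The in-memory `_USERS` lookup is a made-up placeholder for a real data store.

```python
import json

# Stand-in for a real data lookup; a real service would query a database.
_USERS = {"42": {"id": "42", "name": "Ada"}}

def handler(event, context):
    """Minimal AWS Lambda handler behind an API Gateway proxy integration.

    API Gateway passes the HTTP request in `event` and expects a dict
    with statusCode/headers/body in return.
    """
    user_id = (event.get("pathParameters") or {}).get("id")
    user = _USERS.get(user_id)
    if user is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(user),
    }
```

Because the handler is just a function taking a dict, you can invoke it locally, e.g. `handler({"pathParameters": {"id": "42"}}, None)`, with no server running at all.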
In our day and age, having access to relevant data is everything. It's very straightforward to connect a serverless function to a queue, a pub-sub system, a database, or even a file storage bucket. On AWS, all of the major data services, including S3, DynamoDB, RDS, SNS, and SQS, can trigger a serverless function to take the data that just arrived, transform it, and deliver it to the relevant parties. There's no need to build long-running batch ETL jobs that introduce update delays: process the data as it comes in and deliver it to your data warehouse in near real time.
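A queue-triggered function is a minimal sketch of this process-as-it-arrives pattern. The handler below assumes an SQS-style event (messages under `event["Records"]` with a JSON `body`); `load_into_warehouse` and the field names in `transform` are hypothetical placeholders.

```python
import json

def transform(record: dict) -> dict:
    """Example transformation step: normalize a raw order event for loading."""
    return {"order_id": record["id"],
            "total_cents": int(round(record["total"] * 100))}

def load_into_warehouse(rows):
    # Placeholder: in production, write `rows` to the data warehouse here
    # (e.g. a Firehose put or a batched INSERT).
    print(f"loaded {len(rows)} rows")

def handler(event, context):
    """Lambda handler triggered by an SQS queue.

    Each message is transformed the moment it arrives instead of waiting
    for a nightly batch ETL job.
    """
    rows = [transform(json.loads(r["body"])) for r in event.get("Records", [])]
    load_into_warehouse(rows)
    return {"processed": len(rows)}
```

The same handler shape works whether one message or a thousand arrive; the provider simply invokes more copies in parallel.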
No need to pay for servers sitting around doing no work, right? The same goes for your database: if it's not being used in your QA environment overnight, why pay for it? By moving more and more resources to Serverless, you reduce your costs and make your CFO happy 💸
Business opportunities come and go quickly. Being able to develop your software fast, deploy it, and have it run automatically is key for growing your business. Serverless technology allows you to do exactly that. Just focus on your features and let your cloud provider take care of the infrastructure for you.
Ever gotten a notification about your servers going down because a marketing campaign brought in more traffic? With a well-architected Serverless system, with all bottlenecks removed (or moved to Serverless), you'll be able to support any unexpected spike in traffic. As more traffic comes into the system, more serverless functions are spun up to support the increased demand. And as the traffic tails off, fewer serverless functions are invoked, keeping your system cost-effective.
Putting together a solid architecture goes a long way. To take advantage of Serverless technology, the key is to start thinking about your feature development in a new way.
Monolithic architectures don't scale well. To take advantage of infinitely scalable Serverless technology, we need to start thinking about our systems as a set of modules: one module for user management, another for order processing, and so on. All modules (aka microservices) talk to each other, forming a robust, scalable system where each module can be scaled individually instead of scaling the entire monolith.
Software systems are built to automate complex processes, and each process consists of operations. If all operations are a sequential set of steps built in a monolithic way, the process is hard to scale, maintain, and develop, because all the steps are tightly coupled. To take full advantage of Serverless, we need to break complex processes down into individual steps triggered by events. An example is triggering a Lambda function when a file is uploaded to an S3 bucket: once the file arrives, the Lambda is triggered to process it and write the result to a data warehouse. This lets us process as many files as we want, leaning on the cloud provider to do the scaling work for us.
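The S3-to-warehouse step described above can be sketched as a handler that reads the bucket and key out of the S3 notification event. The event shape matches what S3 actually delivers (object keys arrive URL-encoded); `process_file` is a hypothetical stand-in for the real transform-and-load step.

```python
import urllib.parse

def process_file(bucket: str, key: str):
    # Placeholder: download s3://bucket/key, transform it, and load the
    # result into the data warehouse.
    print(f"processing s3://{bucket}/{key}")

def handler(event, context):
    """Lambda handler triggered by an S3 ObjectCreated event.

    S3 delivers the bucket name and object key inside event["Records"];
    keys are URL-encoded and must be unquoted before use.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        process_file(bucket, key)
        processed.append(f"s3://{bucket}/{key}")
    return processed
```

Each uploaded file produces its own invocation, so a thousand uploads simply mean a thousand parallel, independently billed executions.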
Automatic Deployments (CI/CD)
By automating away the process of deploying our code to the cloud, we enable our developers to focus on what matters most: writing code for new features. To do so, we build an automated deployment pipeline with a few major stages: building the code, running the automated test suite, and deploying the artifact to the cloud.
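A pipeline like this boils down to running each stage in order and stopping at the first failure. The sketch below uses trivial placeholder commands; a real pipeline would invoke your linter, your test runner, and a deployment tool such as the Serverless Framework or the AWS SAM CLI.

```python
import subprocess
import sys

# Ordered pipeline stages. The commands are placeholders that merely print;
# substitute your real build, test, and deploy commands.
STAGES = [
    ("build",  [sys.executable, "-c", "print('compiling...')"]),
    ("test",   [sys.executable, "-c", "print('running tests...')"]),
    ("deploy", [sys.executable, "-c", "print('deploying...')"]),
]

def run_pipeline(stages=STAGES) -> bool:
    """Run each stage in order; abort on the first nonzero exit code."""
    for name, cmd in stages:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"stage '{name}' failed, aborting")
            return False
    return True
```

The fail-fast behavior is the important design choice: a broken test run must stop the deploy stage from ever executing.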
You can read more about CI/CD pipelines in our CI/CD solution.
Efficient Monitoring and Alerting
Each cloud provider exposes a specific set of metrics for each Serverless resource. AWS Lambda, for example, reports the number of invocations, per-function logs, failure counts, and more into CloudWatch. We can set alerts on CloudWatch metrics to report to Slack, email, or text when unusual activity happens. To make monitoring efficient, alerts should be precise and specific, so that when they arrive, it's clear what happened and what to do next. Each alert has a severity level: if something is slightly off, it can go to Slack or email, while a critical component outage can go to text, or even a call to the on-call person via a PagerDuty alert.
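The severity routing described above can be captured in a small lookup. The channel names and severity levels here are illustrative; the actual wiring (Slack webhooks, PagerDuty integration keys) would live in your alerting configuration.

```python
# Illustrative mapping from alert severity to notification channels.
SEVERITY_CHANNELS = {
    "info": ["slack"],
    "warning": ["slack", "email"],
    "critical": ["text", "pagerduty"],
}

def route_alert(severity: str, message: str) -> list:
    """Return the channels an alert should be delivered to.

    Unknown severities escalate to the critical path rather than being
    silently dropped.
    """
    channels = SEVERITY_CHANNELS.get(severity, SEVERITY_CHANNELS["critical"])
    for channel in channels:
        print(f"[{channel}] {severity.upper()}: {message}")
    return channels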
Manual testing is great, with one drawback: you need to do it again and again every time you push new code, which leads to costly big-bang releases that are no fun. You don't want to waste precious engineering time on manual testing. Instead, a comprehensive set of unit tests, functional tests, and integration tests will go a long way. Add them to your CI build pipeline to run automatically every time you deploy your code.
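Unit tests for serverless code are especially cheap to write when the business logic is a pure function. The example below shows a hypothetical pricing helper and its tests as plain asserts, runnable directly or collected by a test runner such as pytest inside the CI pipeline.

```python
# A small pure function from a hypothetical order service, with unit tests
# written as plain asserts.

def apply_discount(total_cents: int, percent: int) -> int:
    """Return the total after a percentage discount, truncated to a cent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return total_cents * (100 - percent) // 100

def test_apply_discount():
    assert apply_discount(1000, 10) == 900
    assert apply_discount(999, 0) == 999
    assert apply_discount(500, 100) == 0

if __name__ == "__main__":
    test_apply_discount()
    print("all tests passed")
```

Because nothing here touches the network or the cloud, these tests run in milliseconds on every push, long before any deployment happens.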
Right Balance is your development partner, helping you seamlessly integrate Serverless into your technology stack. The Right Balance team has helped many successful businesses use Serverless to achieve their business goals. Most of these companies run on the latest and greatest technology stack, shipping lots of features with increased velocity. A company running on the latest technology stack is a fun and exciting place to work, which helps attract quality candidates in today's competitive job market.
New technology can be integrated in the following ways:
When the current system needs to stay in place and still functions well, a gradual introduction of new technology makes the most sense. The advantage of this approach is flexibility: it allows existing feature development to continue on the current platform while the new technology is introduced at the same time, without any disruption.
In this case, we'll start by integrating Serverless into one part of the overall system. Writing a new microservice from scratch and connecting it to the rest of the system via API calls puts the architectural patterns in place and lays the groundwork for new development. Once the initial architecture is in place, we can expand further, either migrating the current platform over to the new stack step by step or continuing on the new stack for new feature development.
When the entire system doesn't perform to expectations, current feature development is extremely slow, and the existing staff doesn't want to deal with the system anymore, it's easier and more effective to rewrite the entire system at once.
Here, we'll build an entirely new system on the latest technology stack while keeping the old system running. Then we'll start migrating user data over to the new system. Once the migration is complete, we'll switch over to the new system and turn off the old one. Users will see the new system as just a regular upgrade the next time they log in.
This is also a great opportunity to delight users with new features available only on the new system. It's also a great time to update the look and feel of the product on the front end.