With serverless computing, you gain time and efficiency, but you also sacrifice some control and transparency. When a company adopts a serverless service, it becomes locked into a particular vendor's implementation, such as AWS Lambda or Microsoft Azure. Due to a lack of standardisation across cloud platforms, businesses end up stuck with one vendor, because switching to another vendor's platform involves considerable effort, time, and cost. This can be highly problematic if something goes awry or you need to migrate data to a new platform, since there are often legal, financial, and resource constraints, as well as data formatting incompatibilities.
Serverless is amongst the most desirable skills in 2020 as the trend towards serverless architecture grows, creating high demand but low supply. Serverless is a hazy, and still relatively new, concept, which means experience, skills, and qualifications are hard to find in the candidate market. This makes it incredibly difficult to hire the right people. Having 'serverless' in a job title is likely to shrink your candidate pool, in a market where finding qualified people is already difficult. Even if you're willing to take on developers without serverless experience and invest in training them, they may be put off applying because they don't feel they fit the specification.
Often, it’s easier to use a consultancy to help you identify where this talent is, what compromises you can reasonably take, and how to find a great candidate.
When using serverless you will have to deal with cold starts (the time it takes to bring up a new container instance when there are no warm containers available for a request). Serverless containers will stay active as long as they are being used, but after a period of inactivity, your cloud provider will drop the container, and your function will become inactive (cold). When you then execute an inactive function, a cold start happens. The delay comes from your cloud provider provisioning your selected runtime container and then running your function, which increases your overall execution time.
A simple workaround for this is to keep functions warm by hitting them at regular intervals; however, this is difficult for more complex workflows or larger functions. Therefore, there are several things to bear in mind with serverless when it comes to reducing cold starts:
1. Architecture: keep your serverless functions small and focused
2. Language: Python & Go tend to have considerably lower cold start times, whereas C# & Java are notorious for the highest
3. VPCs: cold start times increase due to extra overhead of provisioning networking resources
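The warming workaround mentioned above can be sketched as a handler that short-circuits on a scheduled ping. This is a minimal illustration, not any provider's official API: the `{"warmup": true}` payload shape and the handler's real work are assumptions, and in practice the ping would come from a cron-style cloud scheduler invoking the function every few minutes.

```python
import json

def handler(event, context=None):
    # A scheduled ping just keeps the container warm; exit early so the
    # warm-up invocation does no real work and costs almost nothing.
    if isinstance(event, dict) and event.get("warmup"):
        return {"statusCode": 200, "body": "warm"}

    # Normal request path: the function's actual (small, focused) work.
    name = (event or {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"greeting": f"hello {name}"})}
```

Keeping the warm-up branch first also reinforces the architecture point: the smaller and more focused the function, the cheaper both the ping and the cold start.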
There are many considerations, and with such a new technology, there is often a lack of skill and knowledge to make informed decisions.
Given that serverless computing is a relatively new technology, many development and security teams struggle with understanding and dealing with the unique security risks it creates. Doug Dooley, COO of Data Theorem explains;
“Many of the current generation security tools rely on being able to attach to the underlying servers, virtual machines, guest operating systems, databases, virtual containers, and virtual network interfaces. Once an application developer chooses to build upon a serverless infrastructure, those underlying components are no longer persistent nor readily accessible. As a result, many enterprise security teams are scrambling to come up with new solutions that will work to secure modern applications and APIs built on serverless frameworks such as Amazon Lambda, Azure Functions, and Google Cloud Functions.”
When a business goes serverless, a large portion of the responsibility for security passes to the cloud provider. However, the developer still shares responsibility for security in terms of application logic, code, data, and application-layer configurations. We45 have shared a list of ten security risks they have encountered with serverless computing:
1. Function Event-Data Injection
2. Broken Authentication
3. Insecure Serverless Deployment Configuration
4. Over-Privileged Function Permissions and Roles
5. Inadequate Function Monitoring and Logging
6. Insecure 3rd Party Dependencies
7. Insecure Application Secrets Storage
8. Denial of Service and Financial Resource Exhaustion
9. Functions Execution Flow Manipulation
10. Improper Exception Handling & Verbose Error Messages
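To make the first risk on that list concrete, here is a hedged sketch of guarding against function event-data injection: treat every field of the incoming event as untrusted and validate it against a strict allow-list before it reaches anything sensitive. The `username` field and its permitted format are illustrative assumptions, not part of any specific framework.

```python
import re

# Allow-list pattern for the (assumed) "username" event field:
# letters, digits, and underscores only, 1-32 characters.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{1,32}$")

def safe_handler(event):
    username = str(event.get("username", ""))
    # Reject anything outside the allow-list rather than trying to
    # strip out dangerous characters after the fact.
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username in event payload")
    # Only validated data reaches the rest of the function.
    return f"processing request for {username}"
```

The same pattern applies to any event source a function can be triggered from (HTTP requests, queues, storage events): because serverless functions accept input from many channels, each field needs its own validation.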
However, the National Cyber Security Centre suggests that serverless computing makes it easier to monitor and improve cyber security, stating that "our experiences with Serverless have shown great security benefits." In practice, the outcome likely depends on a business's security resources.
With all these downsides, why do companies opt to use serverless? Well, despite the disadvantages above, there are many advantages to serverless computing. Plus, as the market matures and develops, more solutions will start to emerge to lessen the issues discussed above. The biggest advantages of serverless include:
Essentially, serverless computing saves resources and money whilst enabling scaling and growth, which is often reason enough for most businesses to overlook the disadvantages. It's important to remember that not all advantages and disadvantages will impact businesses in the same way, which is why businesses should always consider their specific needs, goals, strengths, and weaknesses before making the decision to go serverless, or not.
I'm really keen to hear your thoughts, whether you've successfully or unsuccessfully implemented serverless, or if it's just something you're thinking about! Drop me a message if you're interested in discussing serverless more, or if you'd like some more information on an upcoming event I am working on, all about serverless computing.
Sources & Further Reading