Containers or FaaS? Don't worry, you are OK!

I recently witnessed and participated in intense debates about the superiority of either “serverless”/FaaS or containers in any form when it comes to management and orchestration. Often, people are very vocal about an extreme position on the spectrum, while I pursue a much more relaxed, yet no less opinionated, approach. This post briefly summarizes my opinion.

I will not dive into the details of containers and FaaS more than necessary, to avoid redundancy and keep the post focused.

Why we originally preferred Containers and FaaS over VMs and Bare Metal

In the early 2000s, we moved away from having a separate server for every use case. Instead, we made ourselves independent of the bare metal by introducing production-grade virtualization as an abstraction layer. It allowed us to dynamically provision computing power with reasonable effort, made scaling easier, and dramatically reduced the lead time for introducing a new use case. It didn’t make the machines faster, but it made them much more flexible.

Containers

Containers made the deal even better. After their breakthrough, driven in large part by the additional comfort and features of Docker, they took hold in the early 2010s. They increased the flexibility we were used to from virtual machines, incentivized applying the Unix philosophy even to system architectures, and were much more lightweight. In terms of lead time and flexibility, they were (and are) a massive improvement. Moreover, depending on how well the containerized software could cope with it, container engines were capable of spinning instances up and down extremely quickly, even based on rules and metrics, so scalability became something we got out of the box.

Thus, containers were a game changer. They brought flexibility and scalability, and thanks to standards, they were extremely portable.

For instance, my private vServers all use containers. Should I ever have to relocate my infrastructure to another hosting provider, it is (neglecting DNS TTLs) a matter of bandwidth and usually single-digit minutes to switch my entire infrastructure to another server, or even to my Raspberry Pi cluster at home. For important services, such as my email server, this happens automatically. Love it.

Flexibility and Resilience

Many people associate containers and FaaS with speed. I don’t completely agree with that; rather, I consider speed a side effect of how well a containerized solution is set up.

A single bare-metal server with a lean Go API service and a single-page web application can serve thousands of users concurrently without too much struggle. Suppose the server were replaced with a Kubernetes cluster that dynamically adds pods when the existing ones exceed certain metrics and removes them again when load drops: our perceived speed is a result of our flexibility.
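To make that concrete, here is a minimal sketch of such a lean Go API service; the endpoint and payload are made up for illustration:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	// A single, cheap endpoint; Go's standard library alone can serve
	// thousands of concurrent connections on modest hardware.
	http.HandleFunc("/api/status", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Whether one instance of this runs on bare metal or twenty pods run behind a Kubernetes service, the code stays the same; the flexibility lives in the orchestration layer.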

So, one might argue that speed never was a real problem we couldn’t solve. Instead, our real challenges derived from our lack of flexibility in the following respects:

  • Getting a new use case provisioned
  • Adapting to short- and long-term load changes
  • Being less vulnerable to regional failures and disturbances
  • Reacting to technical changes
  • Interference with other systems

Containers made it significantly easier to implement scaling, failover and resilience scenarios.

Functions as a Service (FaaS)

Let’s face it: Containers are great, but they leave a tiny gap that FaaS systems can fill perfectly. Despite their technical excellence, containers still follow the same pattern as a bare-metal installation to a certain extent: they spin up a process that has to deal with its input, output, and integration matters by itself, while FaaS solutions allow the developer to concentrate on business logic only. Moreover, a function, such as an Azure Function or an AWS Lambda, can (theoretically) scale down to zero and up to very high numbers depending on the load, which changes the economic figures for certain infrequent use cases.

With very little effort, most use cases can be set up as a completely event/message-driven architecture without having to spin up any service that waits for incoming data. Instead, our handlers are spun up automatically as soon as a new event arrives.
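As a hedged illustration, here is a sketch of what such a handler can look like in Go, assuming the official aws-lambda-go library and an SQS event source (just one possible combination):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler is invoked by the platform whenever new messages arrive;
// there is no long-running server process waiting for input.
func handler(ctx context.Context, ev events.SQSEvent) error {
	for _, msg := range ev.Records {
		log.Printf("processing message %s: %s", msg.MessageId, msg.Body)
	}
	return nil
}

func main() {
	lambda.Start(handler)
}
```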

Without digging too deep into FaaS here, I can only say that I really like it. Furthermore, let’s make an assumption in order to streamline the argument: FaaS is containers with a very sophisticated and flexible management engine on top. End of recipe.

On the other hand, let’s face the downside: the effort of relocating a container-based system is very low, while relocating FaaS solutions is somewhere between hard and impossible.

The Vendor Lock-in Problem

Warning: This chapter is highly opinionated; your mileage may vary.

Comfort comes at a cost. Let’s start with something most people can agree on, because there is a massive controversy over whether the vendor lock-in problem needs to be considered at all.

The fact is that containers are extremely portable. As written above, one can relocate an entire containerized infrastructure within minutes. The only requirement for the new home is that it provides a common denominator, such as a container runtime or a Kubernetes cluster. This means the exit plan for drastic infrastructure changes is easy.

With FaaS, the project is tied to the implementation of a particular FaaS cloud provider. Their libraries reach deep into the domain logic code. Given a Lambda function on AWS, the code almost certainly starts with imports from the AWS SDK, some of the last lines interact with AWS to pass the result on to another party, and many lines in the middle handle tasks such as publishing messages or talking to databases directly. The exit plan is much harder, involving lots of abstractions, scripts, and tinkering.
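In Go terms, the situation typically looks something like this hedged sketch (using aws-sdk-go v1; the Order type and the topic ARN are hypothetical), where the provider’s SDK sits right inside the domain logic:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sns"
)

// Order is a stand-in for some piece of domain data.
type Order struct {
	ID string
}

// handleOrder mixes business logic with provider-specific calls:
// walking away from AWS means rewriting this function, not just the deployment.
func handleOrder(order Order) error {
	sess := session.Must(session.NewSession())
	client := sns.New(sess)
	_, err := client.Publish(&sns.PublishInput{
		TopicArn: aws.String("arn:aws:sns:eu-central-1:123456789012:orders"), // hypothetical topic
		Message:  aws.String(order.ID),
	})
	return err
}

func main() {
	if err := handleOrder(Order{ID: "42"}); err != nil {
		log.Fatal(err)
	}
}
```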

In most cases, the exit plan simply is to have none at all. And this is perfectly fine, given that the use case has a limited lifetime or relevance.

As a “real digital native”, I heavily oppose centralism and consider it dangerous. The internet should be decentralized; this is what made it open, inclusive, and successful in the first place. Having additional comfort is fine, but it is important to be able to walk away from any single peer at any time without causing massive downtime.

Furthermore, cloud providers have unfortunately proven not to be neutral and reliable partners for everybody. Even the big cloud providers have a track record of cancelling customers who did not fit their (and mostly my) narrative. In my opinion, it is absolutely vital to maintain a technically, culturally, and commercially decentralized internet in order to guarantee the visibility and diversity of opinion.

Conclusion: Vendor lock-in is a real matter, as long as you have to care.

Foundational vs. temporary use cases

We as developers can agree on a simple value:

Use the right tool for the job.

So, given that vendor lock-in is a problem only as long as you have to care, why not simply decide which way to go per use case, just as we do for any other technology choice? Cloud never was a business strategy; it simply is another option to suit implementation requirements.

Let’s use some of my projects and infrastructure decisions for illustration:

For some use cases, it makes perfect sense to use containers, for instance when the use case is absolutely essential for the core business. My email infrastructure, for example, is containerized. It runs at the same load 24/7, and the container engine grants it a little more memory just in case a spam wave comes in and causes a little more load on the filter software. So, this is either a container use case or something to be offloaded to an external service provider. Mostly personal choice, opinion, and the need for flexibility.

My core software versioning and CI/CD infrastructure is containerized, too, simply because I really need it to be flexible, available, and reliable. In normal times, it runs a Gitea server (awesome, fast, tiny, written in Go, please check it out) and a homemade CI/CD solution in a set of containers. Depending on the workload, it spins up more build node (a.k.a. build slave) containers when a lot of builds are queued; in heavy-load times, I usually attach containers on my server farm at home, and even some more nodes running on AWS Fargate. So, this is a hybrid-cloud containerized solution which, on the other hand, could also be completely offloaded to a SaaS provider if you prefer.
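The scaling rule behind that is not rocket science. Here is a heavily simplified sketch of the idea, shelling out to the Docker CLI; my actual solution is homemade, and the image name and threshold below are made up:

```go
package main

import (
	"log"
	"os/exec"
)

// scaleBuildNodes starts another throwaway build-node container
// whenever the queue outgrows the running capacity.
func scaleBuildNodes(queued, running int) error {
	if queued <= running*2 {
		return nil // enough capacity for now
	}
	// --rm makes the node disposable: it vanishes once the build is done.
	cmd := exec.Command("docker", "run", "-d", "--rm", "example/ci-build-node:latest")
	return cmd.Run()
}

func main() {
	if err := scaleBuildNodes(9, 2); err != nil {
		log.Fatal(err)
	}
}
```

In reality, the rule also drains idle nodes again, but the principle stays the same: disposable build capacity on demand.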

I also operate a set of sensors around my house, recording a lot of real-world events for a machine learning project I am working on. Here, around 50 ESP8266s and Raspberry Pis emit a high volume of messages around the clock. And even though I could easily have set up an event-processing architecture for that myself, I didn’t bother, preferred a much shorter lead time, and offloaded the entire processing to a cloud solution on Azure, just for the sake of playing with it. So, this is a SaaS solution, with some additional services providing the glue.
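To give a rough idea of the sensor side, here is a hedged sketch of a small Go publisher using the Eclipse Paho MQTT client; the broker address, topic, and payload shape are invented, and the real devices speak to an Azure endpoint instead:

```go
package main

import (
	"encoding/json"
	"log"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

// reading is a made-up payload shape for illustration.
type reading struct {
	Sensor string  `json:"sensor"`
	Value  float64 `json:"value"`
	TS     int64   `json:"ts"`
}

func main() {
	// Hypothetical broker address; stands in for the cloud ingestion endpoint.
	opts := mqtt.NewClientOptions().AddBroker("tcp://broker.example.com:1883")
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}
	payload, _ := json.Marshal(reading{Sensor: "garden-temp", Value: 21.5, TS: time.Now().Unix()})
	// Fire and forget: the cloud side scales its handlers with the message volume.
	client.Publish("sensors/garden/temp", 0, false, payload).Wait()
	client.Disconnect(250)
}
```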

Even though I consider myself a cloud and FaaS enthusiast, I also love all the other ways to get the job done, depending on how well they serve the problem I am about to solve.

Conclusion: You are all OK!

There is no right or wrong, and no “silver bullet” for every use case.

As always, we as developers are responsible for making the right technology choices, and there are a lot of things to consider before a choice is right. It might even be necessary to reconsider those choices down the road. In other words: business as usual.

“Cloud”, and all the technology options tied to it, is just a great additional toolbox and should not be confused with an overall business strategy or some new thing that wipes all existing solutions away. It isn’t. The same applies to all solution architecture patterns one can implement using cloud services only.

So whatever you are working on, just inspect the requirements, the environmental factors, and every other piece of information you can get, and make a reasonable choice without being biased towards any option in the first place.

You are all OK!
