
Containers or FaaS? Don't worry, you are OK!




I recently witnessed and participated in intense debates about whether “serverless” (FaaS) or containers, in whatever form of management and orchestration, are the superior choice. People often take extreme stances on the subject, while I prefer a more nuanced and less dogmatic approach. This post aims to briefly summarize my perspective.

I won’t delve too deeply into the technical details of containers and FaaS, as I want to avoid redundancy and keep the discussion focused.

Why We Originally Preferred Containers and FaaS Over VMs and Bare Metal
#

In the early 2000s, we moved away from maintaining dedicated servers for every use case. Instead, we became independent of bare metal by introducing production-grade virtualization as an abstraction layer. This shift allowed us to dynamically provision computing power with minimal effort, simplified scaling, and drastically reduced the lead time required to introduce new use cases. While virtualization didn’t necessarily make machines faster, it significantly increased their flexibility.

Containers
#

Containers took this evolution even further. Following their breakthrough—largely driven by the convenience and features offered by Docker—they began gaining traction in the early 2010s. Containers enhanced the flexibility we were accustomed to with virtual machines, encouraged the application of the Unix Philosophy even at the system architecture level, and were far more lightweight. In terms of lead time and flexibility, they represented (and still represent) a substantial improvement.

Additionally, depending on how well the containerized software handled it, container engines could rapidly spin up and down instances based on rules and metrics, providing scalability out of the box.
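The underlying rule is simple: watch a metric, add instances when it crosses a threshold, remove them when it falls back. As a deliberately naive illustration, here is a toy control loop in Go with made-up thresholds; this is only a sketch of the idea, not how any real engine such as the Kubernetes Horizontal Pod Autoscaler is implemented:

```go
// Toy illustration of metric-driven scaling: scale out above a threshold,
// scale in below another one. Real engines do this far more robustly.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	replicas := 1
	for i := 0; i < 10; i++ {
		// Pretend this is a CPU or request-rate metric scraped from the containers.
		load := rand.Float64()

		switch {
		case load > 0.8 && replicas < 10:
			replicas++ // scale out
		case load < 0.2 && replicas > 1:
			replicas-- // scale in
		}
		fmt.Printf("load=%.2f replicas=%d\n", load, replicas)
		time.Sleep(100 * time.Millisecond)
	}
}
```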

In essence, containers were a game-changer. They delivered flexibility, scalability, and—thanks to standardization—portability.

For instance, my personal virtual servers all utilize containers. If I ever need to migrate my infrastructure to a different hosting provider, it’s merely a matter of bandwidth and, apart from DNS TTLs, takes just a few minutes to move everything to another server or even to my Raspberry Pi cluster at home. For critical services like my email server, this migration happens automatically. I absolutely love this flexibility.

Flexibility and Resilience
#

Many people associate containers and FaaS with speed. I see it differently; I believe speed is more of a byproduct of how well a containerized solution is implemented.

A single bare metal server running a lean Go API service and a single-page web application can easily handle thousands of concurrent users. If that server were replaced with a Kubernetes cluster capable of dynamically adding or removing pods based on performance metrics, the perceived speed would result from enhanced flexibility rather than raw processing power.
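For context, the kind of lean Go API service I have in mind is nothing exotic. A minimal sketch, using only the standard library and a hypothetical health endpoint, looks roughly like this:

```go
// A minimal HTTP API of the kind referred to above: Go's standard library
// alone can serve a large number of concurrent connections on modest hardware.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/api/health", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```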

One could argue that speed was never the real challenge. Instead, our greatest obstacles stemmed from a lack of flexibility in the following areas:

  • Provisioning new use cases
  • Adapting to short- and long-term fluctuations in load
  • Mitigating the impact of regional failures and disruptions
  • Responding to technological changes
  • Integrating with other systems

Containers significantly simplified scaling, failover, and resilience scenarios.

Functions as a Service (FaaS)
#

Let’s be honest: Containers are fantastic, but they leave a small gap that FaaS solutions fill perfectly. Although containers are technically excellent, they still require spinning up processes to manage input, output, and integration independently. In contrast, FaaS allows developers to focus solely on business logic.

Moreover, functions—like Azure Functions or AWS Lambdas—can theoretically scale from zero to thousands of instances based on demand, making them economically attractive for infrequent use cases.

With minimal effort, most use cases can be configured as entirely event-driven architectures, eliminating the need to maintain persistent services to listen for incoming data. Instead, handlers are automatically triggered by new events.
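As a rough sketch of what “handlers triggered by new events” looks like in practice, here is a minimal AWS Lambda handler in Go using the official aws-lambda-go library (Azure Functions follow the same spirit); the event source and message contents are hypothetical, and the function body is nothing but business logic:

```go
// Sketch of a FaaS handler: the platform listens for events, scales instances,
// and manages the process lifecycle; the code only contains the business logic.
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handle is invoked by the platform for each batch of incoming queue messages;
// there is no server to run and no listener to maintain.
func handle(ctx context.Context, evt events.SQSEvent) error {
	for _, msg := range evt.Records {
		// In a real function this would parse and act on the payload.
		fmt.Printf("processing message %s: %s\n", msg.MessageId, msg.Body)
	}
	return nil
}

func main() {
	lambda.Start(handle)
}
```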

To keep this concise, I’ll simply say that I genuinely appreciate FaaS solutions. Let’s also make an assumption to streamline the discussion: FaaS is essentially composed of containers managed by an advanced and flexible orchestration engine. End of story.

However, there’s a downside: Migrating a container-based system is relatively straightforward, whereas relocating FaaS solutions is significantly more complex—sometimes even impossible.

The Vendor Lock-In Problem
#

Warning: This section is highly opinionated, and your experience may differ.

Convenience comes at a cost. Let’s start with the one thing everyone seems to agree on: there is considerable debate about whether vendor lock-in should be a concern at all.

Containers are highly portable. As mentioned earlier, one can migrate an entire containerized infrastructure within minutes. The only requirement is that the new environment supports a common runtime, like Docker or Kubernetes. This makes exit strategies for infrastructure changes straightforward.

In contrast, FaaS solutions are deeply tied to specific cloud providers. For example, a JavaScript AWS Lambda function likely starts with `import * as AWS from "aws-sdk"`, interacts with AWS services throughout its code, and relies on AWS-specific messaging and database integrations. This tight coupling makes exit strategies complicated, often requiring extensive abstractions, scripts, and workarounds.
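To make the “extensive abstractions” point a bit more tangible, here is a minimal sketch in Go; the names and interfaces are hypothetical and only illustrate the pattern of keeping business logic behind a narrow, provider-neutral interface:

```go
// Sketch only: business logic depends on a narrow interface, and all
// provider-specific code lives in one adapter that can be swapped out.
package messaging

import "context"

// Publisher is the only contract the business logic ever sees.
type Publisher interface {
	Publish(ctx context.Context, topic string, payload []byte) error
}

// ProcessOrder knows nothing about SNS, Azure Service Bus, or any other broker.
func ProcessOrder(ctx context.Context, pub Publisher, orderID string) error {
	// ... actual business logic would go here ...
	return pub.Publish(ctx, "orders", []byte(orderID))
}

// An AWS adapter would implement Publisher using the AWS SDK; migrating to
// another provider means writing a new adapter, not rewriting the logic.
```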

In many cases, the exit strategy is simply not to have one—and that’s fine, as long as the use case has a limited lifespan or limited strategic importance.

As someone who values digital decentralization, I view centralization as inherently risky. The internet’s openness, inclusivity, and success are rooted in its decentralized nature. While convenience is great, the ability to sever ties with any provider at any moment without catastrophic consequences is crucial.

Moreover, cloud providers are not always neutral or reliable partners. They have been known to terminate customer accounts for political or ideological reasons. To ensure visibility and diversity of opinion, maintaining a technically, culturally, and commercially decentralized internet is essential.

Conclusion: Vendor lock-in is a real concern—if it matters to you.

Foundational vs. Temporary Use Cases
#

As developers, we can all agree on a simple principle:

Use the right tool for the job.

If vendor lock-in is a concern, why not evaluate it on a case-by-case basis, just as we do with any other technology decision? Cloud computing is not a business strategy; it’s merely another tool for meeting implementation requirements.

Examples from My Projects
#

  • Containers for Core Services: My email infrastructure is containerized. It maintains a consistent load around the clock, and the container engine dynamically adjusts memory to handle occasional spikes. Containers offer the flexibility and reliability I need for critical systems.
  • Hybrid Cloud for CI/CD: My core versioning and CI/CD setup runs in containers for maximum flexibility and availability. It dynamically scales build nodes depending on workload, sometimes using my home server farm or even AWS Fargate.
  • SaaS for Experimental Projects: I run a machine learning project that collects data from about 50 sensors. To minimize lead time, I opted for an event-processing architecture hosted on Azure. This is purely experimental, so portability wasn’t a priority.

Even as a cloud and FaaS enthusiast, I believe in choosing the best solution for each problem.

Conclusion: There’s No One-Size-Fits-All
#

There is no universally “right” or “wrong” solution. As developers, we are responsible for making well-informed technology choices, and sometimes those decisions need to be revisited as requirements evolve.

Cloud services, containers, and FaaS are just tools—not all-encompassing strategies. They complement existing solutions but do not replace them.

Ultimately, the key is to evaluate each project’s unique requirements and make unbiased decisions without preconceived preferences.

You’re all doing just fine—keep building!
