Traditional compute cloud vs. serverless vs. infraless

A comparison of cloud generations.

Cloud generations and ease of use

All have their merits, but depending on your needs, one may be a better fit than the others. The table below highlights the progressive abstraction and simplification from traditional cloud computing to infraless computing, with serverless computing as an intermediate step. Infraless compute cloud extends the serverless model by abstracting away not just compute resources but also a wide range of additional services and infrastructure components, providing a development environment that is simultaneously simpler, more comprehensive, and more integrated.

First, we’ll go through how the different cloud generations define developers’ involvement in and responsibility for setting up and keeping the cloud running, and then how they affect software delivery performance.

Note: a “←” in a column means the features are identical to the column to the left, and a “+” means the features come in addition to the ones to its left.

Traditional
Serverless
Infraless
Additional info
Level of infrastructure abstraction
You handle the setup and ongoing management of virtual machines, servers, and infrastructure, which requires significant effort and expertise. As an example, AWS alone has more than 240 services. That might seem like an advantage, but in reality it takes years of experience to figure out which are required or relevant for you, and how to set them up and maintain them.
The provider provisions and manages the core infrastructure: servers, scaling, and maintenance.

You can focus on your product and non-cloud related tools and applications.
 ← + The provider manages additional infrastructure, such as authentication, databases, file storage, caching, and job queues, by abstracting the service-to-service communication and coordination away behind a message-passing interface. So you get all the advantages without having to learn and understand the underlying tools.

You can focus solely on your product.
You can find a brief explanation of serverless here, and of infraless here.
Platform performance
Scalability
Scaling is manual or requires pre-configured auto-scaling rules. You must anticipate demand to avoid over- or under-provisioning, which either hurts your wallet or your software's performance.
Automatic and instantaneous (zero cold-start) scaling based on demand without manual intervention, handled by the platform. Resources are dynamically allocated as needed.
Performance
Potential latency during high traffic, limited by capacity if you have under-provisioned resources or run into a manual scaling limitation.
Faster response times via distributed requests and automatic adjustment to traffic fluctuations, minimizing latency and performance bottlenecks while also removing the risk of over-paying for unused resources.
← + optimized performance via AI/machine learning, and integrated caching and workflows.
 
Availability & redundancy
Requires you to configure and maintain backups, failover, and disaster recovery setups.
Redundancy is built into the platform: your code is automatically distributed across multiple physical locations, with disaster recovery managed by the platform.
 
Security
Security setup is largely your responsibility, including patching, updates, and monitoring.
Many security responsibilities are handled by the provider (e.g., encryption, patching), but you still manage application-level security.
← + all security related to additionally abstracted infrastructure.
 
Monitoring & logging
You implement and maintain monitoring and logging solutions yourself.
The provider offers tools, often requiring additional setup.
Integrated monitoring and logging solutions are included.
 
Resulting developer...
 ... maturity
DevOps, as you know it.
DevOps, but slightly lighter on the Ops-part.
Full-on NoOps, setting developers free.
 ... productivity
Developers are forced to split their time and attention between product- and operations-related tasks and deadlines, resulting in longer time-to-market.
Developers are free from cloud management and the related tools, so they can focus more on their product, but they still have to tediously set up and maintain all the usual non-cloud infrastructure tools and apps.
Developers can focus their time and attention on their product, while the provider manages everything cloud-related and many other tools and apps in a documented, self-service enabled environment, drastically reducing time-to-market.
 
 ... experience
Developers have a high risk of burnout, particularly due to issues with prioritizing between development and operations.
Developers have less risk of burnout, as cloud operations are no longer a competing priority. Instead, their time and attention can be almost fully focused on their product.
Developers are set free from not only operations, but almost all of the tedious infrastructure tasks, which significantly reduces their cognitive load and allows for deeper focus, resulting in actual developer happiness.
 
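To make the message-passing abstraction from the infrastructure row above more concrete, here is a minimal publish/subscribe sketch. All names (`subscribe`, `postEvent`, the event shape) are invented for illustration and are not any provider’s actual API:

```typescript
// Minimal publish/subscribe sketch. All names here are invented for
// illustration; this is not a real provider API.

type AppEvent = { type: string; payload: unknown };

const subscribers = new Map<string, ((e: AppEvent) => void)[]>();

// A service registers interest in an event type.
function subscribe(type: string, handler: (e: AppEvent) => void): void {
  const list = subscribers.get(type) ?? [];
  list.push(handler);
  subscribers.set(type, list);
}

// A producer posts an event without knowing (or coordinating with) the
// services that consume it; returns how many handlers received it.
function postEvent(e: AppEvent): number {
  const handlers = subscribers.get(e.type) ?? [];
  for (const h of handlers) h(e);
  return handlers.length;
}

// Two independent services react to the same event; the producer
// never references either of them directly.
const log: string[] = [];
subscribe("user-signed-up", (e) => log.push(`billing: ${JSON.stringify(e.payload)}`));
subscribe("user-signed-up", (e) => log.push(`mailer: ${JSON.stringify(e.payload)}`));

const delivered = postEvent({ type: "user-signed-up", payload: { email: "a@example.com" } });
```

Adding a third consumer is just another `subscribe` call; the producer is untouched, which is why an indirect architecture reduces coordination between services and teams.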

Cloud generations and software delivery performance

With that in place, we’ll look at how the different platforms impact the four keys of software delivery performance, cf. DORA’s State of DevOps 2024 report.

Note that for infraless, the answers are based on Merrymake’s offering, as there aren’t many providers of infraless cloud yet. Thus, the answers are not restricted to differences between the cloud technologies, but also reflect the different approaches to software development that typical providers enable.

With that said, great software delivery performance requires:

Traditional
Serverless
Infraless
Additional info
Low change lead time
As you are responsible not just for your product, but also for cloud provisioning and all infrastructure (incl. containers), change lead time will have room for improvement.
Freed from cloud operations, serverless enables you to focus and decrease lead times.

But you still have to manage your deployment processes.
You focus solely on application code. Merrymake’s platform manages provisioning, deployment pipelines, and additional infrastructure, enabling you to develop changes much faster.
Another common hindrance is a direct communication architecture, because it requires coordination between services and developer teams before changes can be made.

Merrymake has, on top of its abstractions, an indirect, two-tier communication architecture, which makes it significantly easier and faster for developers to implement changes.

Watch a quick 3 min 20 sec demo showing in real time how fast it is to deploy and split a service in two.
High deployment frequency
You set up and manage custom CI/CD pipelines and are responsible for the process and deployment time.

In many cases, this motivates developers to pile up changes and deploy only once or twice per day, going against the best practice of deploying small batches of changes frequently. This significantly increases both the risk of failures and the failure recovery time.
The provider (often) offers simplified deployment processes, with automated CI/CD pipelines.

And the easier it is to deploy, the more often it will be done.
Merrymake has fully automated deployments with built-in CI/CD. Deploy with Git or a single command in the CLI, and be live in 10 seconds.

This fosters a culture of small, frequent deployments, reducing fail rates and recovery times when something does fail.
Low change fail rate
You are responsible for all testing strategies, deployment processes and handoffs, and worst of all, keeping a test environment up to date with production, all of which increases the risk of a production-breaking deployment.
The provider might offer tools that automate testing and deployment, if you set them up.

And you still depend on managing a test environment (if you want one), or e.g. testing in production with feature flags, which reduces fail rates at the expense of increased change lead time.
Merrymake’s indirect, two-tier communication architecture removes the need for test environments or feature flags - instead, test risk-free directly in production.

Merrymake also includes an automatic “init container” (smoke test) on deploy, further reducing the risks of failures.
 
Low failure recovery time
Failure recovery times depend on the processes and tools that you have set up and maintained, and not only for failure handling, but also for deployment (remember, large batches of changes make it more difficult to identify the cause of a failure).
Merrymake's offline simulator lets you spin up a local version of your entire system, or a relevant subsystem, instantaneously. Because the platform is event-based, you can post an event identical to the one that caused the unexpected behavior and see what happens at each step, essentially giving you a slow-motion replay (aka event sourcing). You can replay it as many times as you want, add more logging, or even change the services locally to see how that affects behavior. This lets you test solutions in an isolated environment and, once you have proven their effect, deploy them to production.
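The replay workflow described above can be illustrated with a generic event-sourcing sketch; the handler and event names below are invented for illustration and this is not Merrymake’s actual API:

```typescript
// Generic event-sourcing sketch: record production events, then replay
// them locally against a fresh handler to reproduce behavior step by
// step. All names are invented for illustration, not Merrymake's API.

type RecordedEvent = { type: string; payload: number };

// The service logic under investigation (here: a simple running total).
function makeHandler() {
  let total = 0;
  return {
    handle(e: RecordedEvent): void { total += e.payload; },
    state: (): number => total,
  };
}

// Events captured from production, in order.
const recorded: RecordedEvent[] = [
  { type: "add", payload: 5 },
  { type: "add", payload: -3 },
  { type: "add", payload: 10 },
];

// Replay the identical events into a fresh local instance, capturing
// the state after each step: a "slow-motion" reproduction of the run.
function replay(events: RecordedEvent[]): number[] {
  const handler = makeHandler();
  const steps: number[] = [];
  for (const e of events) {
    handler.handle(e);
    steps.push(handler.state());
  }
  return steps;
}

const steps = replay(recorded);
```

Because each replay starts from a fresh handler, the run is deterministic: you can add logging or change the handler locally and replay the same events to see exactly how behavior changes.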
 

But what does it cost, and what can I use it for?

Traditional
Serverless
Infraless
Additional info
Use Cases
Applications with consistent workloads, and/or monoliths.
Microservice architectures with variable workloads (e.g. APIs, real-time data-processing, event-driven applications, recurring data processing, and IoT).
← + complex applications, with multiple integrated services.
Cost Model
You pay for reserving a fixed compute capacity, even during low-usage periods.
You pay-per-use, charged for actual resource consumption, not reservation.

Pricing is typically comparable to traditional cloud, but without wasting money on over-provisioned/unused resources.
← + On top of only paying for what you actually use, you also gain organizational benefits from increased developer productivity and happiness.
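As a toy illustration of the cost-model difference (the rates and utilization below are made up, not taken from any provider’s pricing):

```typescript
// Toy cost comparison; all numbers are invented for illustration.

// Traditional: pay for reserved capacity around the clock.
function reservedCost(instances: number, hourlyRate: number, hours: number): number {
  return instances * hourlyRate * hours;
}

// Serverless/infraless pay-per-use: pay only for hours actually consumed.
function payPerUseCost(usedHours: number, hourlyRate: number): number {
  return usedHours * hourlyRate;
}

// A workload that is busy 25% of the time over a 720-hour month,
// at the same (hypothetical) hourly rate in both models:
const hoursPerMonth = 720;
const rate = 0.25; // hypothetical $/instance-hour
const reserved = reservedCost(2, rate, hoursPerMonth);           // 2 instances reserved 24/7
const payPerUse = payPerUseCost(2 * hoursPerMonth * 0.25, rate); // same work, billed by use
```

At equal rates, the pay-per-use bill scales with utilization, which is where the “no over-provisioned/unused resources” saving in the table comes from.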

Ready for infraless and all its benefits?

Go straight to the tutorial, ask your questions on Discord, or book a meeting!