What is serverless?
The term "serverless" can be a bit misleading. It doesn't mean servers are no longer involved. Instead, it signifies a shift in how cloud users manage and pay for server resources. In a serverless architecture, developers can build and run applications without managing the underlying infrastructure.
Think of it this way: in the early days of the web, you had to buy and maintain your own physical servers – a costly and complex undertaking. Then came cloud computing, where you could rent fixed amounts of server resources. That reduced initial costs, but not necessarily ongoing ones: to handle traffic spikes without hitting quotas and bringing down your system, you typically had to over-provision. Nor did it reduce complexity much, given all the tooling you had to learn to run applications in the cloud. Serverless takes cloud computing a step further: you don't specify which or how many server resources you need, and you pay only for the exact compute your application uses, when it uses it.
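The cost difference can be made concrete with a back-of-the-envelope calculation. The sketch below compares a fixed, over-provisioned deployment with a pay-per-use one; all prices and usage figures are made-up illustrative numbers, not any provider's actual rates.

```python
# Back-of-the-envelope comparison of fixed provisioning vs. pay-per-use.
# All rates and usage numbers below are assumptions for illustration only.

HOURS_PER_MONTH = 730

# Fixed model: you provision for peak traffic and pay around the clock,
# even when the servers sit idle.
instance_price_per_hour = 0.10          # assumed hourly rate
peak_instances = 10                     # sized for traffic spikes
fixed_cost = peak_instances * instance_price_per_hour * HOURS_PER_MONTH

# Serverless model: you pay only for compute actually consumed.
price_per_gb_second = 0.0000167         # assumed pay-per-use rate
gb_seconds_used = 2_000_000             # actual monthly consumption
serverless_cost = gb_seconds_used * price_per_gb_second

print(f"fixed:      ${fixed_cost:.2f}/month")       # $730.00/month
print(f"serverless: ${serverless_cost:.2f}/month")  # $33.40/month
```

With these assumed numbers the serverless bill is a fraction of the fixed one; the gap narrows (and can reverse) as sustained utilization approaches the provisioned capacity.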
So instead of estimating your capacity requirements upfront and paying for a fixed amount of bandwidth or a set number of compute instances based on that estimate, you let the cloud provider allocate resources automatically, and you're charged only for the computation actually consumed. It's like switching from a fixed-rate phone plan to a pay-as-you-go one.
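In practice, "not managing servers" usually means writing just a function the platform invokes per request. The sketch below is a minimal, hypothetical handler modeled loosely on the signature common to function-as-a-service platforms; the event shape and field names are assumptions, not any specific provider's API.

```python
# A minimal sketch of a serverless-style function handler (hypothetical,
# loosely modeled on common function-as-a-service handler signatures).
def handler(event, context=None):
    # The platform calls this once per incoming request and scales the
    # number of concurrent invocations up or down automatically; you
    # never provision or manage the machines running it.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local invocation for testing; in production the cloud provider
# invokes the function in response to events (HTTP requests, queue
# messages, timers, and so on).
if __name__ == "__main__":
    print(handler({"name": "serverless"}))
```

The key point is what is absent: no server process, no port binding, no capacity configuration. The code expresses only the per-request logic, and billing follows actual invocations.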