This post is a short opinion piece on one of the key characteristics of serverless architectures: precise usage-based accounting.
The context: Sam Newman of some microservices fame gave an excellent talk called “Confusion In The Land Of The Serverless” at Craft Conference, where he digs behind the scenes of this new buzzword. During his talk, Sam mentions five traits of a serverless platform which Mike Roberts has assembled:
The Key Traits of Serverless:
- Does not require managing a long-lived host or application instance
- Self auto-scales and auto-provisions, dependent on load
- Has costs that are based on precise usage, up from and down to zero usage
- Has performance capabilities defined in terms other than host size/count
- Has implicit high availability
You can read more on this in either Mike Roberts' blog posts or his free eBook What Is Serverless?
During his talk, Sam comments on these traits, and one particular comment caught my attention. On the third trait ("costs are based on precise usage"), Sam partly disagrees, because inside an organization you probably wouldn’t need or want to do precise cost accounting. This trait seems more applicable to public serverless platform offerings.
I agree with Sam that most organizations currently wouldn't want to set it up that way. But maybe they should? I think that “precise usage accounting” is one of the main innovations that serverless (more precisely, its FaaS, Function-as-a-Service, part) has brought to the table. And I would argue that this trait is the key to the success of serverless.
Why does this matter?
With this trait, developers are now incentivised to look at the runtime characteristics of their application logic: How much memory do I need? And how long will my function be running? Developers have done this before when choosing the right EC2 instances, but with FaaS these assessments and decisions can differ for every function. The number of decisions to be made has increased by an order of magnitude, and this influences the architecture. In a FaaS environment developers suddenly ask themselves: How can I split this function so that it falls below a threshold? How can I move from waiting on this callback to an event trigger?
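To make this concrete, here is a minimal back-of-the-envelope sketch of what per-function cost reasoning looks like. The pricing constants and the `monthly_cost` helper are assumptions for illustration, loosely modelled on the common GB-second-plus-per-request billing scheme; they are not any provider's actual price list.

```python
# Back-of-the-envelope cost accounting for a single FaaS function.
# The prices below are illustrative assumptions, not real provider rates.

PRICE_PER_GB_SECOND = 0.0000166667  # assumed compute price
PRICE_PER_REQUEST = 0.0000002       # assumed per-invocation price


def monthly_cost(memory_mb: int, avg_duration_ms: float, invocations: int) -> float:
    """Estimate the monthly bill for one function from its memory size,
    average duration and invocation count."""
    gb_seconds = (memory_mb / 1024) * (avg_duration_ms / 1000) * invocations
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST


# Every single function now gets its own sizing decision:
print(f"{monthly_cost(128, 200, 5_000_000):.2f}")   # small memory, longer running
print(f"{monthly_cost(1024, 50, 5_000_000):.2f}")   # more memory, faster
```

The point is not the exact numbers but that memory, duration and invocation count become visible, per-function knobs that developers are now nudged to think about.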
A hypothesis I would put out there is that the economics of FaaS drive more architectural decisions than many of the established best practices for designing these applications do. Note that these design decisions aren't necessarily for the better. For example, I've already heard about teams adding more functionality to a function just because they had 20ms of average computing time left. That said, the incentive to create small, independent functions seems to play nicely with many modern design patterns. It will be interesting to see what other implications this precise usage-based accounting has and what patterns derive from it.
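The 20ms anecdote makes sense if you assume a platform that rounds billed duration up to fixed increments (100ms has historically been a common granularity). A quick sketch under that assumption:

```python
import math

BILLING_INCREMENT_MS = 100  # assumed rounding granularity


def billed_duration_ms(actual_ms: float) -> int:
    """Round a measured duration up to the next billing increment."""
    return math.ceil(actual_ms / BILLING_INCREMENT_MS) * BILLING_INCREMENT_MS


# A function averaging 80ms is billed like one averaging 100ms,
# so squeezing extra work into the remaining 20ms costs nothing extra.
print(billed_duration_ms(80))   # 100
print(billed_duration_ms(101))  # 200
```

Under such a scheme the "left-over" 20ms is effectively free, which is exactly the kind of economic nudge that can distort a design.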
Beyond architectural concerns
As you might have guessed, people a lot smarter than me have already thought about this. The strategist and cloud visionary Simon Wardley has written an extensive piece, “Why the fuss about serverless?”. In that post he writes that “Monitoring by cost of function changes the way we work — well, it changed me and I’m pretty sure this will impact all of you.” On top of that, he describes “worth based development”, where the business metrics of functions are used to drive development. Very, very interesting.
TL;DR
A major trait of serverless / FaaS architectures is precise usage accounting. This influences decisions about the application's architecture. It remains to be seen which concrete patterns derive from this.