Amazon to Offer Autoscaling for DynamoDB

Here at ProgrammableWeb, we've been paying very close attention to the explosive growth of, and interest in, serverless technologies (for example, Amazon's Lambda, Google's Cloud Functions, and Microsoft's Azure Functions). Recently, even Twilio got into the game with its highly specialized serverless offering for Twilio developers.

More commonly, these offerings are referred to as FaaS: Functions-as-a-Service. In a nutshell, a FaaS is pretty straightforward. Instead of writing a whole bunch of source code that's full of re-usable functions, all of which you have to host on some full stack of software that you somehow provisioned, you just outsource the hosting of each function to a FaaS. If you want to give it a try, we just published a tutorial on using Google's FaaS.
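To make the idea concrete, here's a minimal sketch of what one such outsourced function might look like in Python, the handler style that Lambda and similar platforms use. The event shape ("item", "quantity") is hypothetical, for illustration only:

```python
# A minimal FaaS-style handler: the platform invokes this one function
# per event; there is no server, framework, or app container to manage.
# The "order" event fields used here are hypothetical.
def handler(event, context=None):
    # The platform passes the triggering event (e.g., an HTTP request
    # body or a queue message) plus a context object with runtime metadata.
    item = event.get("item", "unknown")
    quantity = int(event.get("quantity", 1))
    return {
        "statusCode": 200,
        "body": f"Ordered {quantity} x {item}",
    }
```

The function itself is the entire unit of deployment; everything beneath it (OS, runtime, scaling) belongs to the provider.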

The approach may not be ideal for reconstituting some sort of single tenant, single threaded application. But, for apps that trigger threads, resources, and workflows as new identities or things (as in the Internet of Things) enter active states, a FaaS approach redefines how compute capacity is dynamically adjusted to match load, while the idea of paying only for the capacity you use involves even less overhead than before (even less than with IaaS).

Not surprisingly, Amazon had a bit of a first mover advantage in the FaaS arena, as it traditionally has in most XaaS arenas. Now the company is fine-tuning another of its offerings -- DynamoDB -- to help developers further capitalize on the benefits of FaaS. If Lambda is a FaaS, you can think of DynamoDB as rows and tuples as a service (R&TaaS?). Especially now.

According to a blog post by Amazon chief evangelist Jeff Barr, DynamoDB was already a good match for developers looking to power their serverless apps because, like the way Lambda supports functions in FaaS style, "you don’t have to think about things like provisioning servers, performing OS and database software patching, or configuring replication across availability zones to ensure high availability – you can simply create tables and start adding data, and let DynamoDB handle the rest."
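Barr's point about simply creating tables can be sketched in terms of the AWS SDK for Python (boto3). The table and attribute names below are hypothetical, and the actual API call is shown only as a comment since it requires AWS credentials:

```python
# Hypothetical table definition, expressed as keyword arguments for
# DynamoDB's CreateTable API. Note what's absent: no servers, no OS
# patching, no replication config -- just a key schema and throughput.
create_table_params = {
    "TableName": "Orders",  # hypothetical table name
    "KeySchema": [
        {"AttributeName": "order_id", "KeyType": "HASH"},  # partition key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "order_id", "AttributeType": "S"},  # string type
    ],
    "ProvisionedThroughput": {  # the capacity units you pay for
        "ReadCapacityUnits": 5,
        "WriteCapacityUnits": 5,
    },
}

# With AWS credentials configured, the call itself would be:
#   import boto3
#   dynamodb = boto3.client("dynamodb")
#   dynamodb.create_table(**create_table_params)
```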

Starting today, though, DynamoDB supports auto-scaling. As Barr wrote in his blog post, the new feature will "help automate capacity management for your tables and global secondary indexes. You simply specify the desired target utilization and provide upper and lower bounds for read and write capacity. DynamoDB will then monitor throughput consumption using Amazon CloudWatch alarms and then will adjust provisioned capacity up or down as needed. Auto Scaling will be on by default for all new tables and indexes, and you can also configure it for existing ones."
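The knobs Barr describes -- a target utilization plus upper and lower capacity bounds -- map to Amazon's Application Auto Scaling API. Here's a sketch of that configuration in boto3 terms; the table name and capacity numbers are hypothetical, and the calls are commented out since they require AWS credentials:

```python
# Hypothetical auto-scaling setup for one table's read capacity,
# expressed as Application Auto Scaling API parameters.

# Step 1: declare the lower and upper bounds for read capacity.
scalable_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/Orders",  # hypothetical table
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "MinCapacity": 5,     # lower bound
    "MaxCapacity": 500,   # upper bound
}

# Step 2: declare the desired target utilization; behind the scenes,
# CloudWatch alarms drive capacity up or down toward this target.
scaling_policy = {
    "PolicyName": "OrdersReadScaling",
    "PolicyType": "TargetTrackingScaling",
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/Orders",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 70.0,  # target utilization, in percent
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
}

# With AWS credentials configured:
#   import boto3
#   aas = boto3.client("application-autoscaling")
#   aas.register_scalable_target(**scalable_target)
#   aas.put_scaling_policy(**scaling_policy)
```

A matching pair of declarations handles write capacity (`dynamodb:table:WriteCapacityUnits`).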

Amazon's long term investments in both cloud infrastructure and APIs are what put the company in position to make such an improvement to DynamoDB in a way that lets serverless apps instantly take advantage of the capability. Making this work obviously required DynamoDB in the first place. But then, Amazon also had to have something like CloudWatch in place as well. The services undoubtedly talk to each other across Amazon's fabric of services-oriented interfaces, which the company's founder and CEO Jeff Bezos famously mandated years ago, threatening to fire anyone who didn't comply.

So, not only is there a bit of news for you FaaS buffs in this story, there's a reminder of how transforming to an API-led enterprise will enable all sorts of agility, speeding new offerings to market in ways that a big ball of mud would have made impossible.

David Berlind is the editor-in-chief of ProgrammableWeb. Connect to David on Twitter at @dberlind or on LinkedIn, put him in a Google+ circle, or friend him on Facebook.