How Microservices Offer Scale, Velocity, and Risk Reduction

"Microservice architecture" is a topic that's hard to get away from in today's technology landscape. It's gotten to the point that the term is overloaded, and some would argue that nobody really knows what a microservice is, even though they're "so hot right now." I would argue that, no matter the exact definition, there are lessons to be learned from the trend.
Let's start by getting a little linguistic and breaking the term down. Two individual words are hidden in there and unceremoniously smashed together, each with its own meaning. First, we have the "micro-" prefix, which clearly means small, but we can do better than that. Lots of small services, taken together, are not small overall, so what's the advantage?

Separation of concerns (explained here by Netflix vice president of Edge Engineering Daniel Jacobson) is one of the built-in advantages of going "micro": smaller, independent deployments each carry less risk. Partitioning services by feature follows the Unix philosophy of doing "one [kind of] thing, and doing it well." There's also the added benefit that organizing services this way helps narrow scope during troubleshooting.

What Is the Deal with These Services?

Next we have "services," plural. There are a few of them, maybe even a lot of them, so you'll need to think about how they interact and identify each one's audience. That audience could be a human, whether an external customer or an internal user on a different team. It could also be another service, or perhaps a stateful system like a database.

Breaking systems down into smaller services reduces cognitive load for the teams that build, maintain, and deploy them. Smaller systems are easier to reason about, and much easier to effectively document. These properties will pay dividends down the line as you grow.

Zero to Microservices: When To Start?

Established Infrastructure

"Monolithic" and "legacy" are words that are often used to put down older systems, but older isn't necessarily a bad thing. When code has been running in production for long enough that it's considered "legacy," usually it's been directly making money, and/or consistently doing its job. When it comes time to scale a legacy system, though, that can be quite a challenging and time-consuming process. What benefits can we get in exchange for the effort of moving toward a microservice architecture?

Speed is an important benefit. Creating a new instance in the cloud service of your choice is quite a bit faster than buying a server, racking it, hooking it up to the network, installing and configuring the OS, and so on. It’s also possible to scale down when a burst of traffic is done. Most traditional vendors would get annoyed quickly if you returned a server every week. Another benefit is lowered complexity. Simple things are easier to scale, which is a win for smaller, perhaps even stateless services.

Starting the migration process can be as simple as copying everything that runs behind the scenes of one feature, for example, logins. The vast majority of that type of traffic, session validation, can be handled statelessly and in memory. The key here is to use a signed token, such as a JSON Web Token (JWT), that can be verified without hitting a database, using only a shared secret between the issuer and verifier.
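A minimal sketch of that idea, using only the standard library and a hand-rolled HS256 signature. A real deployment would use a vetted JWT library; the secret, claim names, and function names here are illustrative:

```python
# Stateless session validation with an HMAC-signed token (a simplified
# JWT, HS256). The shared secret is the only state the verifier needs,
# so no database lookup is required per request.
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-secret-between-issuer-and-verifier"  # hypothetical value

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(user_id: str, ttl_seconds: int = 3600) -> str:
    """Build header.payload.signature for the given user."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({
        "sub": user_id,
        "exp": int(time.time()) + ttl_seconds,
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str):
    """Return the claims dict if the token is valid, else None."""
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None  # malformed token
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: reject with no database hit
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims.get("exp", 0) < time.time():
        return None  # expired
    return claims
```

Because verification is pure computation, a login microservice built this way can be scaled horizontally without any coordination between instances.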

Live sessions wouldn't need to be affected in this example. Established systems can clone HTTP traffic from production to a test system, which allows thorough testing of microservice re-implementations without affecting existing behavior. It also lets you test with real traffic to see whether horizontal scaling is needed, then flip the switch and migrate with confidence.
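One way to do this, as a sketch, is nginx's mirror module, which duplicates each incoming request to a shadow backend and discards the shadow's responses. The hostnames and ports below are illustrative:

```nginx
# Mirror every production request to a shadow copy of the new
# microservice; mirror responses are discarded, so live traffic
# is unaffected.
server {
    listen 80;

    location / {
        mirror /shadow;            # fire-and-forget copy of each request
        proxy_pass http://legacy-monolith:8080;
    }

    location = /shadow {
        internal;                  # not reachable from outside
        proxy_pass http://login-microservice:8080$request_uri;
    }
}
```

Comparing the shadow system's logs and metrics against production is what gives you the confidence to flip the switch.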

Greenfield Project

Starting a new project is great fun. Inevitably, you'll end up venturing down a rabbit hole or two, and shaving a few yaks. Should you use a microservice architecture for your new project, though? In a word, no. At least not right away. Most systems will not see enough traffic to justify it right off the bat. Given the choice, it's much more valuable to spend time implementing features than investing up front in the ability to scale for a surge.

Dave Gray is a Principal Software Engineer at SparkPost. Throughout his 18-year career, he has had roles in Engineering and Professional Services, working directly with customers, and is currently assisting the Growth team on initiatives to increase interaction with customers. Dave came to SparkPost from the automated publishing industry, where he helped customers increase their agility, since delays could end up grounding fleets of aircraft until manuals could be updated. Making customers and co-workers more efficient is Dave's focus and passion, and enabling them to be proactive through tools and APIs is a key part of that. You can follow Dave on Twitter @yargevad and via SparkPost @sparkpost, or connect with him on LinkedIn.