3 MINUTE READ | November 15, 2014
Microservices & Statefulness — Two Big Lessons from AWS re:Invent
Last week, I had the pleasure of attending AWS’s re:Invent conference. In addition to all the announcements of new services — some of which are extremely exciting — there were a multitude of great sessions from people doing amazing things on Amazon Web Services.
In the midst of all the specifics and huge architectures, I think there are two big lessons on the state of cloud computing today that we can take away.
Nearly every talk I attended contained at least some mention of microservices. Even when speakers didn't use the term directly, they still talked about the architectural decoupling that microservices entail.
Microservices mean decoupling parts of your application into their own small applications, each exposed by some sort of API. This allows each of those parts to scale independently. With a monolithic application, the only sort of scaling available to you is undifferentiated. Need to scale up? Copy the entire application and run it, in parallel, on another machine. Or throw more hardware at it and run the same application on a bigger server.
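To make "exposed by some sort of API" concrete, here is a minimal sketch of a hotspot pulled out of a monolith and put behind HTTP. The "pricing" service, its route, and the data it returns are all illustrative inventions, not anything from the talks at re:Invent.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A hypothetical "pricing" hotspot extracted into its own tiny service.
class PricingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Treat the request path as a SKU and answer with JSON.
        body = json.dumps({"sku": self.path.strip("/"), "price_cents": 1999}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Port 0 means "any free port"; run the server on a background thread.
server = HTTPServer(("127.0.0.1", 0), PricingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def get_price(sku):
    # The monolith now calls over the network instead of in-process,
    # so this one piece can be scaled or replaced on its own.
    url = f"http://127.0.0.1:{server.server_port}/{sku}"
    return json.loads(urlopen(url).read())

print(get_price("widget-42"))  # {'sku': 'widget-42', 'price_cents': 1999}
```

The point of the sketch is the boundary, not the server: once the hotspot lives behind a network API, you can run two of it, ten of it, or a rewritten version of it without touching the rest of the application.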
That’s lame. And it might not be cost effective. It’s possible that only a small part of the application is a hotspot and needs to scale. Microservices let you pull that hotspot out into its own application and scale it alone.
Microservices aren’t a panacea, and they come with their own set of challenges: service discovery, request fan-out, service hotspots, bottlenecks, and reliability.
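Service discovery, the first of those challenges, is worth a sketch: once the application is many small services, each one needs a way to find the others. This toy client-side registry with round-robin resolution is purely illustrative; real deployments use tools like Consul, etcd, ELB, or plain DNS.

```python
import itertools

# A toy client-side service registry: service name -> known instances.
class Registry:
    def __init__(self):
        self._instances = {}
        self._cyclers = {}

    def register(self, service, address):
        # Record the instance and rebuild the round-robin cycle for it.
        self._instances.setdefault(service, []).append(address)
        self._cyclers[service] = itertools.cycle(self._instances[service])

    def resolve(self, service):
        # Round-robin across instances spreads load and helps route
        # around any single instance becoming a hotspot.
        return next(self._cyclers[service])

registry = Registry()
registry.register("pricing", "10.0.0.5:8080")
registry.register("pricing", "10.0.0.6:8080")

print(registry.resolve("pricing"))  # 10.0.0.5:8080
print(registry.resolve("pricing"))  # 10.0.0.6:8080
print(registry.resolve("pricing"))  # 10.0.0.5:8080
```

Even this toy shows why discovery is a real problem: the registry itself now has to be available, consistent, and kept up to date as instances come and go, which is exactly what the dedicated tools handle.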
To go along with microservices, it’s important to build applications that run in environments that aren’t snowflakes.
In other words: servers aren’t special. Servers should have the capability to be created and destroyed as necessary. State is the enemy of that capability. State means that a server has to carry some knowledge of the world in order to run: it requires setup, and it means that destroying a server is a major event. Common sources of that state include:

- Storing file uploads and downloads on the server
- Keeping the database and other persistent storage on the same server
- Requiring configuration on the server outside of the application’s normal environment
There are a lot of tools available to help with this. They range from obvious ones, like offloading file storage to services such as S3 or using a separate database server, to less obvious ones, like using containers, creating server-specific packages (RPMs or .deb files), and building custom AMIs for every deploy.
The takeaway here is to think carefully about how disposable your servers are. A resilient architecture is one that can respond to failures quickly, replace servers easily, and scale up and down as necessary. Managing state — and avoiding it where you can — is part of that architecture.
Interestingly, services like the newly announced AWS Lambda seem to reinforce this move away from statefulness.
Photo by Klearchos Kapoutsis.
Posted by Christopher Davis