About “That” Prime Engineering article

Everyone who has worked with managed cloud services has experienced the moment when it made sense to move away from them.

Turns out, so does Prime Video.

Amazon Prime Video recently wrote about how moving away from managed services and writing a more integrated application saved them money. Despite being a few months old, the article seemed to blow up this week, and predictably has caused some cries of “SEE, SEE YOU SHOULD JUST RUN EVERYTHING YOURSELF”.

But to those of us who have been building on AWS (and other providers) for many years, it’s not a surprise, and we all have stories where we’ve done something similar.

I say this as someone who is an annoying cheerleader for serverless compute and managed services, but despite that, I have home-rolled things when it made sense.

How do you solve a problem

When you’re solving a problem, you look at the managed services you have available, considering factors like:

  • Your team’s experience with the service
  • The service’s limitations, and what it was intended for, versus what you’re doing
  • What quotas apply that you might hit
  • How the pricing model works

While pricing for managed services is generally based on usage, sometimes specific costs will weigh more heavily on your workload: e.g. if you’re serving small images, you’ll be more impacted by the per-request cost than by the bandwidth charges.
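
To put rough numbers on that small-image case, here’s a quick sketch. The prices are placeholders picked to make the arithmetic easy (check the real price sheet, not this), but the shape of the result is the point:

    requests_per_month = 100_000_000      # 100M requests for tiny thumbnails
    avg_object_kb = 2                     # each object is only a couple of KB
    price_per_million_requests = 0.40     # placeholder $ per 1M GET requests
    price_per_gb_transferred = 0.09       # placeholder $ per GB transferred out

    request_cost = requests_per_month / 1_000_000 * price_per_million_requests
    bandwidth_cost = requests_per_month * avg_object_kb / 1_048_576 * price_per_gb_transferred

    print(f"requests:  ${request_cost:.2f}")    # ~$40
    print(f"bandwidth: ${bandwidth_cost:.2f}")  # ~$17 -- the per-request charge dominates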

I would be surprised if an experienced architect hasn’t faced a situation where “Service X would be perfect for this, if only it didn’t have this restriction Y, or wasn’t priced for Z”.

My example

We’d built out a system that performed three distinct processing steps on large files.

The system had been built out incrementally, and we had the three steps on three different auto-scale groups, fed by queues.

While some of the requests could be processed from S3 as a stream, one task required downloading the file to a filesystem, and that download took time.

The users wanted to reduce the end-to-end processing time. Some of the tasks were predicated on the prior steps passing, so we didn’t want to run the steps in parallel.

Attempt 1: “EFS looks good”

We used the ‘relatively’ new Elastic File System service from AWS… The first component downloaded the file, and subsequent operations used that download.

This also had the advantage that, since the ‘smallest’ service was first, you paid for that download on the cheapest instance, and the more expensive instances didn’t have to download it.

We developed, deployed, and for the first morning it was really quick… until we discovered that we were using burst quota, and spent the afternoon rolling back.

Filesystem throughput was allocated based on the amount stored on the filesystem, but as our files were transient, we weren’t storing enough to replenish the burst credits quickly, and we didn’t like the idea of just writing large random files to earn them.
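
For a sense of scale, here’s a back-of-the-envelope sketch. The figures are from my recollection of how the bursting mode worked at the time (roughly 50 MiB/s of baseline throughput per TiB stored), so treat them as illustrative rather than gospel:

    stored_gib = 20                              # our transient working set
    baseline_mib_s = 50 * (stored_gib / 1024)    # sustained throughput earned by stored data
    burst_mib_s = 100                            # what the first morning felt like

    print(f"baseline: {baseline_mib_s:.2f} MiB/s, burst: {burst_mib_s} MiB/s")
    # Burst credits accrue at the baseline rate, so a filesystem holding only
    # ~20 GiB of transient files drops from ~100 MiB/s to ~1 MiB/s once the
    # credits are spent -- hence the fast morning and the slow afternoon.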

Now you can just pay for provisioned throughput, perhaps in some small part because of a conversation we had with the account managers.

Attempt 2: “Sod it, just combine them”

The processes varied in complexity: there was effectively a trivial, a medium, and a high-complexity task… So the second solution we approached was combining all the tasks into a single service… the computing power needed for the most complex task would zoom through the other two, and so we combined them into what I jokingly called “the microlith”.

We didn’t touch the other parts of the pipeline or the database (they remained in other services), but combining the three steps worked.
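
In spirit, the microlith looked something like this sketch (the step functions here are empty placeholders; in reality they were the existing three services’ code):

    def trivial_step(path):  return True    # cheap validation runs first...
    def medium_step(path):   return True    # ...so a failure here means
    def complex_step(path):  return True    # the expensive work never starts

    STEPS = [trivial_step, medium_step, complex_step]

    def process(local_path):
        # The file is downloaded once, onto one instance, and every step then
        # reads it from local disk instead of re-fetching it from S3.
        for step in STEPS:
            if not step(local_path):
                return False
        return True

The ordering keeps the property we wanted all along: the cheap checks still gate the expensive work, it just all happens on the same box now.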

What did we gain

The system was faster and, even more usefully to the operators, more predictable.

Once processing had started you could tell, based on the file size, when the items would be ready…

Much like a higher p90 but lower maximum can feel better for user experience, this consistency was great.

What did we lose

Two of the three components had external dependencies, which meant the combined service was one of the less ‘safe’ things to deploy, and while tests built up to defend against that… the impact of a failed deploy was larger than you’d want.

In Conclusion

There are always times when breaking your patterns makes sense; the key is knowing both what you’re gaining and what you’re losing, and taking decisions for the right reasons at the right times.

Prime Video refining an architecture to better meet scaling and cost models, making it less “Pure”, isn’t the gotcha against these services that some people would have you believe.

“Pure” design doesn’t win prizes.

Suitable design does.

Monoliths or Microservices: how about a middle way?

Should we deploy as micro-services or monoliths? How about neither.

The latest argument that we’re having (again) is how we should deploy our systems, and we’re asking “micro-services” or “monolith”.

Now, I’ll try to skip past what we mean by all of those things (because it’s covered better elsewhere), but in essence, we’re asking “does our software live in 80 repos or 1 repo?”.

TL;DR How about we aim for 8?

What does good deployment/development look like

In an ideal world, we’d have the following properties in our deployment:

  • It would have appropriate tests and automation, so deployment is easy and doesn’t feel risky
  • The potential impact of a deployment should be predictable; something over there shouldn’t impact something over here
  • It should be clear where to look to change code

Problems with Micro-services

  • If you go properly granular, it can be difficult to know which repo code resides in – if you pick up a ticket, you shouldn’t need to spend 15 minutes identifying where that code lives
  • Deployment of related services may need to be coordinated more closely than you’d like, ensuring that downstream components are ready to accept any new messages/API calls when they arrive
  • Setting up deployment for each new component can be time-consuming (although with things like CDK/Terraform etc., it should be possible to template much of this down to a config file for the deployment system – see the sketch after this list)
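
Something like the sketch below is what I mean by templating it down to config: each new component becomes a few lines of data, and one shared function is where the CDK construct or Terraform module would actually be instantiated (the component names and config keys here are invented for illustration):

    COMPONENTS = [
        {"name": "ingest-worker", "queue": "incoming-events", "memory": 512},
        {"name": "public-api",    "route": "/api",            "memory": 256},
        {"name": "reporter",      "schedule": "rate(1 hour)", "memory": 512},
    ]

    def synth(component):
        # In a real pipeline this is where the shared IaC template would be
        # instantiated with the component's settings.
        print(f"creating deployment for {component['name']}: {component}")

    for component in COMPONENTS:
        synth(component)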

Problems with Monoliths

  • Code can potentially leak into production more easily – requiring more robust feature-flagging to hide non-live code (see the sketch after this list). While this is good practice anyway, it becomes a requirement in larger repos – you can avoid it by keeping ‘dev’ deployments out of trunk, but that’s a different kind of deployment complexity
  • Spinning up another instance of “the system” for testing of a single component may be more expensive and fiddlier than duplicating an individual component
  • The impact of a deployment may not be known: you may need to assess whether other commits included in what you’re putting live could break things, which increases deployment friction
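
On the feature-flagging point, the mechanics can be as small as the sketch below; the flag store is just an environment variable here for illustration, where in practice it would be a config table or a flag service:

    import os

    def flag_enabled(name: str) -> bool:
        return os.environ.get(f"FEATURE_{name.upper()}", "off") == "on"

    def current_flow(request):
        return "existing behaviour"

    def new_flow(request):
        return "new behaviour"    # merged to trunk and deployed, but dark

    def handle(request):
        return new_flow(request) if flag_enabled("new_flow") else current_flow(request)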

How about Service Cluster Deployments

In a prior engagement, we built what was really a task management system.

  1. Messages would arrive which could potentially create new tasks in the system, or update existing tasks: these were handled by the task-creator-and-updater
  2. The task-viewer would access the database of tasks, cross-reference it with other services, and create a unified view of the task list
  3. An automation component would use the output of the task-viewer to initiate actions to resolve the tasks, which would ultimately result in more messages arriving, which then updated the task database

In our deployment, these components were all in three different projects: the micro-service model. And it worked, but it’s also an example of where these three components could have been combined into one functional service repo.

This makes sense to me because the three services are closely coupled, especially the task-creator-and-updater and the task-viewer. So maybe they could have been in a combined repo, task-management.
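
Something like this is what I have in mind: one codebase, a shared task model, and three entry points that can still be deployed and run separately (all the names here are invented for illustration):

    import sys

    def run_creator():
        ...    # consume incoming messages, create/update tasks

    def run_viewer():
        ...    # cross-reference the task database and build the unified view

    def run_automation():
        ...    # act on the viewer's output to resolve tasks

    ENTRY_POINTS = {
        "creator": run_creator,
        "viewer": run_viewer,
        "automation": run_automation,
    }

    if __name__ == "__main__":
        # e.g. `python task_management.py viewer` runs just that piece, so a
        # deployment can still target one component at a time.
        ENTRY_POINTS[sys.argv[1]]()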

With this setup I could still feel safe doing a deployment on a Friday afternoon to one component, because even if the task management system failed entirely, the manual processes were in place to allow recovery until the system could be rolled back.

Meanwhile, for another one of our components, the cost of a failed deployment was so high, and the work so time-critical even if it could be recovered, that we only deployed during ‘off-peak’ periods of the week. Could it have been made more robust? Probably, but it was also a relatively static system – that effort was better spent on other components that were more ‘active’.

In summary

Your deployment should work for your team. It should be based on templated conventions that allow easy configuration of new deployments, and it should be as granular as makes sense.

Instead of worrying about being “truly micro-services” or a “fire & forget monolith”, find the smallest number of functional groups to keep your code in. That way you can have scope-limited deployments without having hundreds of repos.

Finally, please, just-name-your-repos-like-this: it’s funny at first giving things amusing names, but honestly, kitchen-cooking-oven is far more supportable than the-name-of-a-dragon because it gets really hot.