Performance: still hard

Performance is still hard: Artur Bergman of Fastly talks about what you’re doing wrong.

I watched @crucially’s video from the Velocity conference; once again it’s a good talk, with him playing the Grumpy Bastard with aplomb. Soon, soon I promise, I will “Buy a Fucking SSD”.

That’s Magic!

If you don’t understand stuff, it’s magic. And if you’re relying on something that’s magic, your platform can disappear in a puff of smoke. This is especially true of newer things – I don’t understand MySQL, but it’s long enough in the tooth that I can (mostly) trust it. Some of the newer NoSQL techs don’t have that lineage…

Open Source allows you to get under the hood of all these things, to look behind the curtain and reverse-engineer what is going on. You invariably have to, as the documentation is a TODO item. This means that when you do hit those extreme edge-case situations you can fix them, eventually.

But that’s only once you’ve really understood the problem. In black-box situations it’s all too easy to pull the levers you have until the problem seems to have gone away, when all you’ve done is masked, displaced, or deferred it. You have to understand the whole stack and not just “your bit”. (This reminded me a bit of a conversation with a friend who does network security, where decisions not to collect some data for “safety” actually made potential targets more obvious.)

There are no gremlins

My favourite point was this: Computers are (mostly) deterministic.

We talk about bugs, issues, intermittent and transient faults – almost resigning ourselves to the idea that sometimes “things just happen”.

As Artur points out, computers are deterministic state machines; this randomness doesn’t really exist. Yes, the complex interplay of our interconnected systems can give the appearance of a random system, but that is just the appearance.

There is a pattern in there, and when you find it, you can fix it. How? Lots of monitoring, lots of measuring, and good old-fashioned investigation.
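To make the “lots of measuring” point concrete – this sketch is mine, not something from the talk, and the function below is a placeholder – even a crude timing wrapper turns “it’s sometimes slow” into a dataset you can go pattern-hunting in:

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("timing")

def timed(func):
    """Log how long each call takes, so 'random' slowness becomes a dataset."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("%s took %.1f ms", func.__name__, elapsed_ms)
    return wrapper

@timed
def fetch_ticket_page(event_id):
    # Placeholder for the real work: a DB query, an HTTP call, etc.
    time.sleep(0.05)
    return f"page for event {event_id}"

if __name__ == "__main__":
    for event_id in range(5):
        fetch_ticket_page(event_id)
```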

Stop throwing boxes & sharding at things

The easy availability of horizontal scale-out makes us lazy and complacent: “we’ll just throw another Amazon instance at the problem”. That can be a valid approach, but only when your existing instances are actually spending all of their time doing meaningful work and not stuck queuing on some random service. If your site is sluggish because of poor code or poor database performance and tuning, you’re not really solving the problem.

Latency is even more critical (Google PDF), and scaling out a broken system may just let more people use it slowly – not make it faster.

Post-Cloud Call to Arms?

Scaling was hard: ordering servers took ages and it was all confusing. CDNs cost lots of money, were hard to use, and were only for the big boys.

Then “The Cloud” appeared: Amazon and others made it cheaper and faster to get hold of machines. For a while we could ignore the complexity and just throw money at it.

But latency isn’t as simple as capacity, and we’re back to a situation where the answer isn’t always throwing more boxes into the battle.

Anatomy of Ticketing ‘Fail’

Having failed to get tickets for something after an annoying day of pressing reload, I try to write something constructive about scaling for big things.

(Or what happens when a company that isn’t eventbrite tries to be eventbrite)

A friend wanted to book some tickets for an event. I had some time today, so I said I’d book them.

For reasons of politeness, I’m not going to name the company. The event was massively over-subscribed; there were always going to be people who were annoyed (kinda like the Olympics). I’m just annoyed because I saw things done in a generally ad-hoc way, and specific technical bugs hit me.

Tickets were delivered in tranches. This is a sure sign there will be massive peaks in demand…

The hour arrived and, in an all too typical scenario, www.example.com rapidly stopped responding.

A few minutes later everything was returning 403s as they killed all access to the server to get the load down. Not great, but it’s a sign somebody is looking at the problem.

example.com then started redirecting to http://xxxxxxxxxxxxx.cloudfront.net/url1, with all the individual ticket pages iframed through to an Amazon EC2 instance (http://ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com/blah).

The IDs for the events were sequential, some had already been released, and you started to think that people had been gaming the system and ordering tickets prior to their availability windows. This was later denied by the company (which I accept), but given the way the scaling was going, at that time it was all too easy to think they were using security by obscurity to prevent access to the events.

Later in the day, when tickets appeared, it was announced via a tweet. The tweet, though, didn’t link to the site but to a mailing list post, which again didn’t reference the actual site.

The site had now changed: example.com was redirecting to http://xxxxxxxxxxxxx.cloudfront.net/url2, again passing through to an EC2 instance. Later, many people complained on Facebook that they were looking at the old page and pressing reload.

Anyway, I tapped. I was at the gym. I was on my iPhone. But I know my credit card number, I know my PayPal password, I can even use that tiny keyboard. I’ve topped up my Starbucks card while in the queue. I can do this.

There was even a mobile site.

Except that the mobile site was erroring because it was asking for a non-existent field/table. I had no way to change my user-agent (and wouldn’t have trusted Opera with my credentials), and in the 10 minutes it took me to get back to my laptop all the tickets had been sold.

No tickets for me+mate. Grumbly me having seen things done badly.

As many will say, this is not life and death – but example.com is not primarily a ticketing company, and that showed today.

If you’re going to compete with the likes of eventbrite, you’re going to have to be as good as eventbrite.

The Constructive “What can we learn” Bit

1. Believe it could happen, no matter how unbelievable.

Ask yourself: “if we get off-the-scale load, how will we fix it?” Working out volumetrics and scaling is hard, so alongside your “reasonable” load calculations of “we can turn off these bits of our site”, have your plans for “how we’ll move to something big, cloudy and scalable if the unbelievable happens”.

Are there components that you should move upfront? You have something like 15 to 30 minutes of goodwill. What do you need to do upfront so that, within that downtime, you can come back up fully scaled?

If you’re looking at a scalable, elastic thing, look at how much it would cost to just start in that state anyway.

2. Architect things to give you agility

If you can’t host all of your website on a scalable platform, subdomains, DNS expiry times and proxy-passes give you room to move – but only if they’re set up ahead of time.

Had tickets.example.com been available, example.com wouldn’t have had to disappear until tomorrow, as it has done. You don’t want your website down for that length of time.

DNS changes can take time – much less if you dial down the expiry times, but again you have to do this prior to the event. Amazon’s Route 53 is cheap, so move the domains ahead of time and set the TTLs (times to live) appropriately.
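With Route 53 the TTL is just a field on the record set. As a minimal sketch – using boto3, which is my choice of library; the zone ID and address below are placeholders, and it assumes AWS credentials are already configured – dropping a record to a 60-second TTL ahead of the event looks like:

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical hosted zone ID and target address - substitute your own.
HOSTED_ZONE_ID = "Z0000000000000"
RECORD_NAME = "tickets.example.com."
TARGET_IP = "203.0.113.10"

# UPSERT the record with a short TTL so a later switch propagates quickly.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Drop the TTL ahead of the ticket release",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 60,  # seconds - keep this low before the event
                    "ResourceRecords": [{"Value": TARGET_IP}],
                },
            }
        ],
    },
)
```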

While you’re waiting for that propagation, proxy-passing is a useful technique to bounce traffic to the new server. Proxy-passing also means that example.com/tickets could have been redirected, rather than an entire domain.
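In practice that would be an Apache ProxyPass or nginx proxy_pass directive; purely to show the shape of the idea, here is a toy Python sketch (the backend address is a placeholder and error handling is omitted) that bounces /tickets requests through to a new backend while the DNS catches up:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Placeholder for the new backend (e.g. a freshly launched EC2 instance).
NEW_BACKEND = "http://203.0.113.10:8000"

class TicketProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/tickets"):
            # Forward the request to the new backend and relay its response.
            with urlopen(Request(NEW_BACKEND + self.path)) as upstream:
                body = upstream.read()
                status = upstream.status
                content_type = upstream.headers.get("Content-Type", "text/html")
            self.send_response(status)
            self.send_header("Content-Type", content_type)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Listen on the old host and quietly pass ticket traffic through.
    HTTPServer(("", 8080), TicketProxy).serve_forever()
```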

Are you caching what you can, at the HTTP level with Varnish or at the service level with memcache?
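The service-level half of that is just cache-aside: check the cache, and only build the page on a miss. A small sketch using the pymemcache library (my choice – the event ID, key name and page-building function are placeholders) shows how even a short TTL soaks up a reload storm:

```python
from pymemcache.client.base import Client

# Assumes a memcached instance on the usual port; the address is a placeholder.
cache = Client(("127.0.0.1", 11211))

def build_ticket_page(event_id):
    # Stand-in for the expensive part: database queries, templating, etc.
    return f"<html>tickets for event {event_id}</html>"

def get_ticket_page(event_id):
    key = f"ticket-page:{event_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached.decode("utf-8")
    page = build_ticket_page(event_id)
    # Even a 30-second TTL means the expensive work happens once per window,
    # however many people are hammering reload.
    cache.set(key, page, expire=30)
    return page

if __name__ == "__main__":
    print(get_ticket_page(42))
```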

3. Be careful sending people onto new URLs that won’t update

Taking the ticketing system off their main website was a good move, but the static page should have remained there. The second they redirected to CloudFront, people were looking at a page that would go stale.

Many people would have pressed reload expecting the tickets to appear, but they didn’t see them because, as you can see from above, the URL had changed. The CloudFront invalidation API could have been used to refresh the stale page, but it wasn’t.
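For reference, pushing a fresh copy of a stale path out through CloudFront is a one-call job. A hedged sketch with boto3 (my choice of library; the distribution ID is a placeholder and /url1 stands in for the real path):

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Invalidate the stale page so edge caches fetch a fresh copy from the origin.
cloudfront.create_invalidation(
    DistributionId="E0000000000000",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/url1"]},
        # CallerReference must be unique per invalidation request.
        "CallerReference": str(time.time()),
    },
)
```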

4. Remember data protection issues

This company used the Virginia data centre (which I think is the AWS default). Without going into the whole world of pain that is data protection and EU borders – Dublin would have had lower latency and been less problematic compliance-wise.

5. Testing is good, as is automatic deployment

There were not many tickets and the load was huge; those things were not avoidable. I can’t say the same about the erroring mobile site – that should not have occurred.
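Even a crude smoke test run before each deploy would have caught it. A minimal sketch using the requests library (the URL, user-agent string and error check are all placeholders of mine):

```python
import requests

# A typical mobile user-agent, to exercise the mobile code path.
MOBILE_UA = ("Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) "
             "AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9A334")

def test_mobile_ticket_page():
    resp = requests.get("https://www.example.com/tickets",
                        headers={"User-Agent": MOBILE_UA}, timeout=10)
    assert resp.status_code == 200, f"unexpected status {resp.status_code}"
    assert "error" not in resp.text.lower(), "page contains an error message"

if __name__ == "__main__":
    test_mobile_ticket_page()
    print("mobile smoke test passed")
```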

6. Rehearse

It’s not fun doing disaster recovery, but if you’re receiving catastrophic load then that is what you’re doing.

Write the script. Have someone else test it.

It’s not a valid plan until you have shown it works.