The One Boring Reason Why People Use the AWS Service

One of my clients recently started using a relatively new AWS CI/CD service, and I just stumbled on a defensive, marketing-type post from one of the traditional providers. It made me realise how badly vendors can miss the reason people choose to go with the AWS/GCP/Azure service, even when it’s inferior.

Aside: I’m not going to link to the article because they don’t deserve the clicks.

Back to their post, it went through a familiar structure:

  1. “But it doesn’t have all the features, our lovely features”
  2. “You can’t self-host, you’re LOCKED-IN!”
  3. “Why not buy into our broader platform?”

I’ll go through these in turn, before getting to the actual reasons.

“It doesn’t have the features…”

It doesn’t. It’s version 1 of an AWS product… they always launch very lean and gain features over time.

And yes, it only supports 3 integrations while Vendor supports around 30. It turns out, though, that those 3 are the most important ones. Others will be added, I’m sure, but only where people will actually use them.

“You can’t self-host, you’re LOCKED-IN”

Good. I literally don’t want to.

I know that some Ops teams feel happier when they can touch a container or an instance, but this is a product that can be replaced quite easily, including by this Vendor’s, should the need arise.

They do have a SaaS offering you can pay for, but it’s relatively expensive for small teams. (And we’ll come on to the legal side later.)

“Why not buy into our broader platform?”

Lock-in to your cloud provider is bad, but if you use all of their products you can get a great unified experience… which sounds a little like, erm, lock-in.

The simple reason people choose the service on their Cloud… procurement

Companies generally make buying stuff difficult. Every new vendor means a new round of legal review, and potentially a procurement exercise. It’s a painful affair.

This Vendor does sell their SaaS platform on the AWS Marketplace, but it’s another End User License Agreement (EULA) that needs to be accepted. And that means it has to be evaluated by a legal team: like most other EULAs, the lawyers will probably go “Yeah, it’s got a bunch of stuff in it that nobody could ever enforce, so proceed at a tiny risk”.

When you already have a cloud provider, and the legal/finance agreements are in place, it’s just easier to use the provided service.

The ‘default’ product may well be inferior, have fewer features, and even be more expensive: but if I can click “use this” without involving legal – it’s the one I’ll likely choose.

My workload is too special for Serverless

A few years back it was “My workload would cost more in the cloud”, which, while I’m sure was true for some workloads, applied to a small and falling number of them. It fell even further when you actually costed in all the admin you were doing for your “cheap” servers.

Now it’s “my workload is cheaper on servers than serverless”. Again, this will be true for some workloads, but the percentage is falling every month as features increase.

Time for the Horror Story…

With every new technology, we need the horror story to dismiss it.

“bUt wHAT aBOUT tHe COld-StArT PeNalTy, thaT meANS tHiS IS uNusABlE fOr ME”

Serverless Function Refusenik

Yes, cold starts are clunky, and if you’re on Amazon (at the time of writing), you cannot feasibly start a Lambda inside a VPC because the startup penalty is too painful. Fixing this is apparently on their roadmap for this year.

Microsoft are launching a pricing model that allows you to pay for some pre-warmed functions, which could give you the best of both worlds – easy scaling without the cold starts – if the pricing is acceptable.

Anyway, for a lot of these cases, the API Gateway cache, or a CDN in front of your APIs, should be offloading a lot of traffic and ensuring that common items are rapidly available.
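
To make that concrete, here’s a minimal sketch (TypeScript, using the standard Lambda proxy response shape) of a function marking its responses as cacheable, so the API Gateway cache or a CDN such as CloudFront can serve repeat requests without invoking the function at all. The handler and the 60-second TTL are illustrative assumptions, not recommendations:

    import { APIGatewayProxyHandler } from 'aws-lambda';

    // Hypothetical lookup handler: the interesting bit is the Cache-Control
    // header, which lets a shared cache absorb repeat requests for common items.
    export const handler: APIGatewayProxyHandler = async (event) => {
      const id = event.pathParameters ? event.pathParameters.id : undefined;
      const item = { id, fetchedAt: new Date().toISOString() };
      return {
        statusCode: 200,
        headers: {
          'Content-Type': 'application/json',
          'Cache-Control': 'public, max-age=60', // shared caches may hold this for 60s
        },
        body: JSON.stringify(item),
      };
    };

Every request a cache absorbs is a request that never sees a cold start.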

Stop swimming upstream

All the effort in IT infrastructure is heading towards serverless functions, container orchestration, and containers without actively managed container hosts. The choice of hosted database and database-like storage services on offer can make it confusing to decide between them, but the answer is almost never “I’ll run something myself”.

Shunning these modern hosting options because you genuinely feel that your service is so special is, in nearly all cases, choosing to take the hard path for little reason. And someone else will use them, spend far more time working on functional code and far less on overheads, and could offer a cheaper/better product than you.

Yes, I know that when you are at the scale of one of the top ten internet giants it can make sense – Dropbox moved their storage to their own appliances – but you’re not really Dropbox, are you?

AWS Launches MediaConnect and almost gives us multicast

It’s Re:invent time, and Amazon have launched a new service to make video routing to the cloud reliable and easier to set up.

A few weeks back I was at the brilliant DPP Leaders Summit, which was held under the Chatham House Rule.1 There were some great speakers, and I particularly loved the exec who said, to paraphrase, “If it doesn’t work without months of professional services, THEN IT ISN’T AN ACTUAL PRODUCT.”2

Anyway, one of the speakers was facing rebuilding their entire stack due to ownership changes, and wanted to do so in the cloud. They said “We need multicast and Precision Time Protocol”. Which I can understand: for playout or production applications, the need for those two is pretty clear.

It’s now Re:invent season, which is the point in the year when AWS tend to release a lot of their good stuff. And yesterday they unveiled a new media ingest service, AWS Elemental MediaConnect.

It’s a managed service to get your video signals to/from/between your Amazon clouds.

This has historically been a pain: back when I was working on the Video Factory project, we initially mooted a box in the cloud that we would send the signal to, which would then fan out to both archiving and live streaming. This was hard to do, so we side-stepped the issue and just rapidly uploaded the stream to S3 in consistently sized chunks instead. Later, something was put in place to do the streaming, using something that I don’t think has been spoken about much in public, so I shan’t detail it here.

Anyway, this new service allows you to send content to/from an endpoint using standard RTP (with or without Forward Error Correction) or the more reliable but commercial Zixi protocol. Each flow has an Amazon ARN, which means external accounts can be granted permission to subscribe to the stream; the documentation says a ‘flow’ can have up to 20 outputs.
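
As a rough sketch of what driving that looks like with the AWS JavaScript SDK (the flow name, CIDR, and account ID are made up, and I’m reading the parameter shapes from the MediaConnect docs, so verify them before leaning on this):

    import * as AWS from 'aws-sdk';

    const mediaconnect = new AWS.MediaConnect({ region: 'eu-west-1' });

    async function createContributionFlow(): Promise<void> {
      // Create a flow that ingests RTP with Forward Error Correction.
      const { Flow } = await mediaconnect
        .createFlow({
          Name: 'studio-contribution', // illustrative name
          Source: {
            Name: 'studio-encoder',
            Protocol: 'rtp-fec', // or 'rtp' / 'zixi-push'
            IngestPort: 5000,
            WhitelistCidr: '203.0.113.0/24', // who may send to the flow
          },
        })
        .promise();
      if (!Flow) {
        return;
      }

      // The flow's ARN is what other accounts are granted access to.
      console.log('Flow ARN:', Flow.FlowArn);

      // Entitle another (made-up) AWS account to subscribe to the flow.
      await mediaconnect
        .grantFlowEntitlements({
          FlowArn: Flow.FlowArn,
          Entitlements: [{ Subscribers: ['111122223333'] }],
        })
        .promise();
    }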

How are we going to use this?

  1. Contribution to streaming output: fire the video somewhere and you don’t have to know if/where it’s being used
  2. Contribution for programming: using a few Amazon regions, broadcasters could very easily build a global contribution network to backhaul outside broadcasts
  3. Contribution from a playout appliance: if your cloud playout outputs to a MediaConnect flow, then you can output that flow to your broader distribution chain, allowing re-routing of things downstream.

It isn’t multicast within a VPC, and it’s not PTP; I suspect the latency involved may be too great to allow it to be used to route between different stages in a virtual playout chain.3

MediaConnect does however simplify integrating cloud processing workflows by providing fixed points at the edges in and out of the cloud.

I’ll be interested to see how people use it.

  1. That it is a singular rule is one of those bits of pedantry I cannot let go of
  2. This is probably a topic for another time, but the fact that so many enterprise vendors expect you to pay for their ‘product’ then explain that ‘oh, no, you can’t just use it out of the box even in a basic manner’ is a bit of a joke
  3. I could be very wrong here, I don’t have one of those hanging around to test

Data collection at the job fair

Last weekend I went to a tech recruitment event, and I was a little shocked at how badly some employers did data collection.

When making enquiries with potential employers, people have a vague expectation of privacy. This is lost when:

  1. Data collection is adding your details to a sign-up sheet, with the ability to see the details of everyone who did so before you
  2. Data collection is adding yourself as a contact on an iPad. This has all the problems of solution 1, but with the added ability to send yourself any contacts you like while you’re entering your data

Finally, don’t collect what you don’t need. Do you need to capture gender? And if you do, consider that for some people the options might not be as simple as “Male/Female”.

Recipe for success

What does a team need to deliver a successful software project? I’m starting to think about what I’ll want in my next engagement.

There’s plenty left to do, but as I approach the end of my current main assignment as a Technical Architect, I’m starting to think about what my future engagements should have.

This is my starter for ten (well, five):

  1. Anything but waterfall
  2. Genuine Public Cloud, with a hint of lock-in
  3. Internal users matter just as much
  4. Partnership with your Product Owner
  5. Embedded QA, seen as a benefit, not a drag

Anything but waterfall

Scrum? Kanban? Scrumban? I don’t really care exactly what it is, more that it works for the project, and that everyone understands and supports it.

I hate designing things entirely upfront; it just seems so conceited to think you can genuinely design an entire system without trying to build any of it. While I know this doesn’t apply when you’re building a rocket1 or CERN, you’re not doing that, are you?

Yes, you absolutely need a sense of roughly where you’re heading, and ideally an end goal that you’re heading towards – but you also need the pragmatism to know that if you try to build that from the start, you’re going to burn lots of rubber on the road while making very little progress.

Show your dev teams that you can and do go back to make things better. Build the sense of trust that when you say “Just build the slightly-hacky ‘tactical’ thing, we will fix it later”, you do go back and fix it.

You’ll free everyone up from the performance anxiety of “Must get it right first time, because I can’t go back and fix it”.

Genuine Public Cloud, with a hint of lock-in

I would like to think that cloud is a given, but I still face people who say things like “It’s just someone else’s computer” (yes, but in general they have better capacity planning than you), or “I could do x for cheaper” (which I’m sure you could, but you’re usually not factoring in the hidden costs).

The main system we built does have an on-premise element, but it’s controlled by the cloud, and deployed in a similar way.

We host the core of the system in the cloud, and that gives us an agility in scale and deployment we don’t have on-premise. Could we get that on-premise in time? I’m sure we could, but then we’d lose the benefits of the AWS value-add services…

“we use Amazon, but we only use EC2 and we don’t use any of their special services, so we’re not locked-in”

Speaking of which, when I hear that particular line, I want to congratulate the person on ensuring they’ve deployed their software in a way that will either cost them more, or be less reliable, or both.

At some level, to get the best value out of a cloud provider, you do need to be using their value-add services, meaning you can run bits of your application serverless, and other bits as more scalable stateless systems.

Yes, if you write a Lambda, you can’t instantly port it to Google Cloud Functions, but given they both run Node, provided you put the thing that does the work in a scoped module, migrating should just mean writing the Google invocation code.
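
A minimal sketch of what I mean, with invented names: the work lives in a cloud-agnostic module, and each provider gets a thin shim, so migrating means rewriting a dozen lines rather than the logic.

    // work.ts – the cloud-agnostic core (names invented for illustration)
    export function doTheWork(input: { name: string }): { greeting: string } {
      return { greeting: `Hello, ${input.name}` };
    }

    // aws.ts – thin AWS Lambda shim
    import { APIGatewayProxyHandler } from 'aws-lambda';
    import { doTheWork } from './work';

    export const handler: APIGatewayProxyHandler = async (event) => ({
      statusCode: 200,
      body: JSON.stringify(doTheWork(JSON.parse(event.body || '{}'))),
    });

    // gcp.ts – thin Google Cloud Functions (HTTP) shim; GCF hands you
    // Express-style request/response objects
    import { Request, Response } from 'express';
    import { doTheWork } from './work';

    export function httpHandler(req: Request, res: Response): void {
      res.json(doTheWork(req.body));
    }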

I’m not saying use every service, but starting from the position that you’re only going to use Infrastructure as a Service is too dogmatic.

Internal users matter just as much

Yes it’s an internal system. Yes it’s not public facing.

Yes it should still be as performant and usable as your public properties.

Facebook probably does more than your system. Facebook is generally fast to use, and yet nobody gets training in how to use it. If your system requires lots of training, are you doing things as well as you could?

Consumer technology and services are good. Very good. Your users expect your system to match that, and when you give people tools that work well, they’re freed from hating the system they are using, and allowed to actually focus on the tasks they’re doing.

Focussing on my current engagement: a partnership with our core users meant they took on some extra manual work while we ran the extended migration. They only agreed to that once we had earned their trust, and they realised that “could you do this for 3 months” was just that. (Granted, it was more like 4 months.)

Partnership with your Product Owner

Product Management is still a relatively new discipline, so there is no one-true-way, and I hope there doesn’t become one, because not all products are the same.

Regardless, partnership with your Product Owner is crucial, and if they’re technical you want to work hand-in-hand with them on key design decisions. If they’re less so, you need their trust and for them to delegate responsibility.

Embedded QA, seen as a benefit, not a drag

The embedded tester in the team is a key resource. They should ask questions, spot the things we didn’t, and invariably they’re the first call for “do we know what happens in situation x?”.

For all the frustration that Test Driven Development can cause when doing genuine micro-services, the testing framework it provides means that we never ship the same bug twice. Sometimes, when we’ve suspected bugs, modifying an existing test has helped us check our hypotheses quickly.

Easy regression testing makes you far more able to build and iterate quickly.
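
As a purely hypothetical illustration of the “never ship the same bug twice” habit (the module, function, and bug number are all invented): when a bug turns up, pin it with a test before fixing it, and the regression suite keeps it fixed.

    import { parsePartNumber } from './parts'; // hypothetical module

    describe('parsePartNumber', () => {
      // Pinned to (invented) bug #1234: lower-case prefixes were rejected.
      // This test failed before the fix; it fails again if the bug returns.
      it('accepts lower-case prefixes (bug #1234)', () => {
        expect(parsePartNumber('ab-0042')).toEqual({ prefix: 'AB', serial: 42 });
      });
    });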

In conclusion

You can’t make a project a success, but there are things you can do that increase the chances…


  1. And talking of rockets, look at what SpaceX have done, which looks pretty much like rapid evolution of a rocket platform, adding more capabilities…

Re-use more than code?

“You can just re-use the code from x, can’t you..?” is a common call in organisations, but does it always make sense?

I’ve been working on a project recently, and when it started, we were “just going to use the components from <another project>”.

You’ve written many lines before, so why wouldn’t you re-use them? In the abstract it seems a pretty sensible thing, but it rarely works out so well in practice.

It’s unlikely your company is writing something as fundamental as a security library, where the domain is fixed, or as universal as the company Active Directory, where you only need one.

What you likely have is a series of tactical solutions that meet the needs of each silo, which isn’t a bad thing, because they’re probably bits of code that actually got delivered. How often have we waited for the ‘generic’ solution that didn’t really work for anyone?

Now, I’m not saying you should avoid code re-use where something is genuinely re-usable. If the domain is simple and generic enough, converge on one library. But code isn’t the only thing you can re-use.

Going back to the specific example: I spoke to the architects from the project we were going to lift-and-shift from, and we discussed how the new things AWS has launched made much of it moot, or far more heavyweight than you’d build if you were starting today. “You could re-use this, but why don’t you look at doing that” was the outcome.

Instead, the value came from talking about the things that they couldn’t (feasibly) change now but would want to: “We have too much data in this account, and we can’t ever move it”. We used those lessons as a basis, so we didn’t end up in the same situation.

Experiences, and things learned along the way, are just as valuable as avoiding writing some code.

Some great new/newish podcasts

If you’re searching for a new podcast after Serial, there are loads of them to choose from right now.

Podcasting, after many of the UK newspapers pulled out of it, is going through a resurgence. Here are some suggestions of additions to your listening list if you’re feeling a bit lost without Serial.

(I’ve still not listened to Serial, please don’t hate me).

NPR’s Invisibilia is from the same stable as RadioLab, but isn’t quite as heavily produced. Delving into the mind, the first few episodes have been really enjoyable.

Alex Blumberg (formerly of This American Life and Planet Money) has a meta-podcast, Startup, about the launch of his podcasting empire (the episode about the mistake is great listening for everyone who’s ever made one in business), and it has already stolen the hosts of internet show TL;DR to give us Reply All. It’s basically the same format: quirky stories about people and the internet.

Meanwhile, back at WNYC, TL;DR has a new host and is still worth a listen.

Finally, Helen Zaltzman from Answer Me This now hosts a show about words, The Allusionist. It’s much shorter than AMT, and the first episode, describing her suffering at her family’s puns, will be all too real to anyone who listens to The Bugle.

You’ll be literally drowning in MailChimp mentions and Squarespace promo codes. Did you know they’ve just launched Squarespace 7, which integrates Getty Images… THEY’VE GOT TO ME.

Blogging about your Cloud Tech is only interesting when it’s Novel

If you’re blogging about moving to the cloud, you have to write about the interesting things in your migration, and not just how you did Best-Practice.

So a while back I bitched about Why The Cloud Is Oversold, talking more generally about the supposed other-worldly experience that having Sensibly Flexible Virtualised IT is… well, I’ve a new pet hate: organisations Overselling Their Adoption Of The Cloud.

I know transparency is good. It’s also pragmatic because if the information is on a computer that is even near another computer that’s on the internet, it’s going to be leaked.1

It’s genuinely interesting when people share the unique work they’ve done, especially when public bodies do stuff: look at how much gov.uk open-sourced, and how much of that govt.nz reused. We’ll not mention that Scottish Government developers can’t access the gov.uk repo, as GitHub is blocked as a “file-sharing” site.

The team I worked with at the BBC have spoken widely about how they turn ongoing streams of video into neatly segmented files that are uploaded to S3 at more than 1 gigabit a second, and how these are made into the things you see on /iplayer.2

Alongside the stuff that’s of sufficient scale to be interesting, Video Factory also uses a load of standard enterprise patterns: micro-services, communication through queues, separation of concerns, etc. They’ve spoken about these, but very much in a “we’re just doing best-practice after a big monolithic system pissed us off too much” way.

Anyway, I just read a blog post by another public body, documenting their transition to the Cloud and a new Responsive Website.3

Turns out sometimes they get a lot of load, and this is a problem they’ve had to solve. I’ll give you a second to think about how you’d solve bursty load on AWS.

Have you guessed?

They’ve only cached the site behind Varnish, and are running that in an auto-scale group behind an Elastic Load Balancer.

That’s a pretty standard best-practice. Perhaps the novelty is that they’re a Public Sector body doing a sensible thing.4

But best-practice, by its very definition, just isn’t interesting blog-fodder: “Hey, We Do The Thing That Everyone Else Is Doing”.5

This leaves me wondering what next from this organisation:

  • “Our Windows PC Estate uses Microsoft Update Server to ensure they’re patched”
  • “We make our endpoints run anti-virus and disable USB ports on front-line single-use machines”
  • “We use Active Directory federation to provide single sign-on across all of our desktop applications”

If we’re really lucky maybe they’ll tell us: “How We Use Chaos-Monkey to Simulate Cloud Error-Situations”

I can’t wait.

  1. That is an exaggeration, but not nearly as much as I’d like it to be
  2. I helped make this bit and I’m still disproportionately proud of it
  3. The kind you hate on the desktop because of all the white-space, and where the custom fonts don’t look quite right
  4. I could link to numerous projects here, so here is a small selection of failure
  5. Netflix get to do it, because they’re one of the groups setting out best-practice in AWS

Perfect is indeed the enemy of good

The desire to do things well stops us doing them at all.

I re-connected with someone on LinkedIn the other week. (Yes, I actually use it like that). And he sent a lovely, long, detailed reply. One that I was delighted to read. One that I want to reply to.

But I haven’t.

Any time someone sends me a nice, long, structured message, on pretty much any medium, it falls into the awful silo of “well, I need to sit down and write a nice reply”.

And it stays in that silo, along with all the other things like that.

So instead, I’ll write a little blog post about not being able to write, using up some of my daily word-quota in the process, and making the writing of the reply, even less likely.


Secret Cinema’s PR Car-crash

Secret Cinema showed how not to communicate after the opening night of their latest event was cancelled.

Lots of modern knowledge-based skills are like Search Engine Optimisation: the first 80% of SEO is “build a decent website” and the last 20% is the ever-changing dark-magic that few people really understand.

I’m adding “communications in a crisis” to this list.

Secret Cinema have cancelled the opening shows of Back To The Future, the first show cancelled about 2 hours before it was due to start. The comments on that post are just about as awful for the company as you’d expect.

The company is replying, but with a statement usually along the lines of “please address your concerns to us at this email”. Unsurprisingly, this isn’t meeting with much understanding from their customers.

As I type this on Friday evening, they’ve just cancelled the weekend shows, and the “situations beyond their control” appear to be that the council aren’t satisfied the venue is safe.

Predictably, their Facebook wall has been carnage. People explaining how they’ve travelled far for this event, and are feeling let down. Now if you travel to a faraway place for a pop-up event, by a company who have cancelled opening nights before1, caveat emptor comes to mind. I’m not saying I don’t have sympathy, but I doubt I’d travel myself in the circumstances…

Crisis comms are hard

There are companies who charge you an awful lot of money for just this. The ones you call when things are really bad: like when your product kills people. But much like SEO, companies can do the simple things to get the first part themselves.

4 Basic Steps to Delivery, You’ll Never Guess What Happens When You Don’t Do Them:

  1. Project Management is your friend: if they didn’t know until the first day that they had these problems, they don’t have a decent project/production management team. This isn’t a hobby; this is a company that takes a lot of money from people, and they need a decent delivery function that can warn ahead of time.
  2. Honesty within the company: can your delivery team tell you that there are possible problems, or are you stuck in an organisation where the status report has to be green? Or worse, one that denies possible problems until they’ve actually happened?
  3. Run pilot events: this is the kind of thing where you probably want a few preview nights, beyond rehearsing with the cast – rehearsing with audiences there so you can check things work. You can set expectations for these nights better, with lower ticket prices, and frame them as a community rather than a customer experience. Scratch that, apparently their preview on Wednesday was also cancelled.
  4. Prioritise: there will have been things here key to the experience, and things that were icing. Build and get approval for the main stuff first. If you can’t do the other bits that’s a shame, and the pilot/early nights might be impaired. But at least they can run.

The 5 Secrets to Basic Crisis Comms Techniques They Don’t Want You To Know:

  1. Don’t Weasel Word: Be very careful about the phrase “beyond our control”. I watched a documentary about Crossrail last week. The crane they needed one weekend didn’t turn up because only 2 of them are in the country, and the one they’d booked was delayed. That is “beyond their control”. I say this with no insider knowledge beyond the news articles, but Secret Cinema were in control of applying for and meeting council safety approvals. Saying it’s “beyond your control” makes an organisation look like it’s in denial.
  2. Appear Open: They should have published their compensation policy and directed people to that. Telling people to “address concerns” privately makes it look like the organisation has something to hide.
  3. Appear Honest: This isn’t an outage of a complex system that takes time to diagnose. Saying you’ll post “more information later” just makes it look like an organisation in disarray.
  4. Take the Hits upfront: They could have cancelled more shows upfront, still disappointed people, but put them in control earlier. Drip-feeding cancellations just continues the uncertainty, again adding to the appearance of disarray.
  5. Finally, you’ve broken promises: Don’t make any other promises you can’t keep. It seems so minor, but saying you’ll update at 11am and failing to post anything until after 12 just continues the appearance of the organisation in crisis and denial.

I suspect this incident will be a case-study for crisis PR for years to come.