
SRE Weekly Issue #8


If you only read two articles this week, make them the first two. They’re excellent and exactly the kind of content I’m looking for. If you come across (or write!) anything that would fit well in SRE Weekly, I’d love it if you’d toss a link my way.

Articles

Liz Fong-Jones, a Googler and co-chair of SRECon, describes a scale of activities SRE teams engage in, from the basics (keeping the service operating) to having the freedom to improve the service.

This is a really awesome paper. Two Googlers describe in detail the pitfalls of failover-based systems and explain how they design multi-homed active/active services. If Google has learned a lesson, we’d all do well to learn from it, too:

Our experience has been that bolting failover onto previously singly-homed systems has not worked well. These systems end up being complex to build, have high maintenance overhead to run, and expose complexity to users. Instead, we started building systems with multi-homing designed in from the start, and found that to be a much better solution. Multi-homed systems run with better availability and lower cost, and result in a much simpler system overall.
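The paper is about large data processing pipelines, but the contrast is easy to picture even for a simple request-serving tier: in an active/active design there is no idle standby waiting for a failover decision, so losing a region just shrinks the pool. A hypothetical sketch (the region names and health-check hook are made up for illustration, not from the paper):

```python
import random

# Hypothetical region endpoints; names are illustrative only.
REGIONS = {
    "us-east": "https://us-east.example.internal",
    "eu-west": "https://eu-west.example.internal",
}

def healthy_regions(health_check):
    """Return the regions currently passing their health check."""
    return [name for name in REGIONS if health_check(name)]

def pick_region(health_check):
    """Active/active: every request chooses among *all* healthy regions.
    There is no cold standby to 'fail over' to -- losing one region
    simply removes it from the candidate pool."""
    candidates = healthy_regions(health_check)
    if not candidates:
        raise RuntimeError("no healthy regions")
    return REGIONS[random.choice(candidates)]
```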

A review of CloudHarmony’s numbers on various cloud providers’ availability in 2015 versus 2014, along with a discussion of how customers deal with outages. I’m a little puzzled by this one:

That’s also partly why most public cloud workloads aren’t used for production or mission-critical applications.

I’m pretty sure plenty of mission-critical stuff is running in EC2, for example.

The team at parall.ax chose Lambda because there are no long-lived servers, and they could offload all the work of scaling their app up and down with demand to Amazon.
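For the unfamiliar, a Lambda function is just a handler that Amazon invokes on demand; there's no fleet to provision or scale. A minimal Python handler might look like the sketch below (the event fields are illustrative, not from the parall.ax app):

```python
import json

def handler(event, context):
    """AWS Lambda entry point: Amazon runs as many concurrent copies of
    this as demand requires, so there are no long-lived servers to manage
    or scale. The event fields here are purely illustrative."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```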

Randall Munroe takes on an important question: is it possible to siphon water from Europa to Earth? Okay, the only relation to SRE is that a team of Google SREs submitted the question, but I really love What If.

VictorOps distilled their Minimum Viable Runbooks series (featured here previously) into a polished PDF, in their usual high-quality style.

During an outage this week, Vodafone admitted that they forgot to update their status site. They are looking into an automated system to make updates during outages.

Most of the jobs I’ve worked haven’t compensated for on-call, though one did. Compensation is nice, but there it was offsetting a truly heinous level of pages, so it was small comfort. If you have any good articles about the merits and pitfalls of on-call compensation, please send them my way.

Outages

Lots of downtime this week, including some recurrences and some big names.

SRE Weekly Issue #7

A big thanks to Charity Majors (@mipsytipsy) for tweeting about SRE Weekly and subsequently octupling my subscriber list!

Articles

This article is gold. CatieM explains why clients can’t be trusted, even when they’re written in-house. She describes how her team avoided an outage during the Halo 4 launch by turning off non-essential functionality. Had she trusted the clients, she might not have built in the kill switches that let her shed the excessive load caused by a buggy client.
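The talk doesn’t spell out how the Halo 4 kill switches were built, but the general pattern is simple: gate every non-essential feature behind a server-side switch you can flip at runtime, without a deploy. A hypothetical sketch:

```python
# Hypothetical server-side kill switches, e.g. backed by a config store
# so they can be flipped at runtime without a deploy.
KILL_SWITCHES = {
    "presence_updates": False,  # non-essential: who's online right now
    "match_history": False,     # non-essential: stats lookups
}

def handle_presence_update(request, do_update):
    """Shed non-essential load during an incident by flipping the switch.
    The server decides what gets dropped -- which is the point: you can't
    trust clients (even in-house ones) to degrade gracefully on their own."""
    if KILL_SWITCHES["presence_updates"]:
        return {"status": 503, "retry_after": 300}
    return do_update(request)
```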

Facebook recently released a live video streaming feature. Because they’re Facebook, they’re dealing with a scale that existing solutions can’t even come close to supporting (think millions of viewers for celebrity live video broadcasts). This article goes into detail about how they handle that level of concurrency for live streaming. I especially like the bit about request coalescing.
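Request coalescing (sometimes called “singleflight”) means that when many viewers ask for the same not-yet-cached segment, only one request goes to the origin and everyone else waits for that result. A minimal single-process sketch of the idea, not Facebook’s implementation:

```python
import threading

class Coalescer:
    """Collapse concurrent fetches of the same key into one origin call."""
    def __init__(self, fetch):
        self._fetch = fetch            # the expensive origin fetch
        self._lock = threading.Lock()
        self._in_flight = {}           # key -> (Event, result holder)

    def get(self, key):
        with self._lock:
            entry = self._in_flight.get(key)
            if entry is None:
                entry = (threading.Event(), {})
                self._in_flight[key] = entry
                leader = True            # this caller fetches from origin
            else:
                leader = False           # everyone else waits for the leader
        event, holder = entry
        if leader:
            try:
                holder["value"] = self._fetch(key)
            finally:
                with self._lock:
                    del self._in_flight[key]
                event.set()
        else:
            event.wait()
        return holder.get("value")
```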

Best. I pretty much only like the parodies of Uptown Funk.

This is a really great little essay comparing running a large infrastructure with flying a plane by instruments. Paying attention to just one or two instruments without understanding the big picture results in errors.

Thanks to DevOps Weekly for this one.

An excellent incident response summary for an outage caused by a domain name expiration. The live Grafana charts are great, along with the dashboard snapshot. It’s exciting to see how far that project has come!

Calculating availability is hard. Really hard. First, you have to define just what constitutes availability in your system. Once you’ve decided how you calculate availability, you’ve defined the goalposts for improving it. In this article, VividCortex presents a general, theoretical formula for availability and a corresponding 3D graph that shows that improving availability involves both increasing MTBF and reducing MTTR.
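For reference, the textbook steady-state formula is availability = MTBF / (MTBF + MTTR); whether or not it’s exactly the one VividCortex plots, it illustrates why both knobs matter. A quick worked example:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the system is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Halving MTTR improves availability exactly as much as doubling MTBF:
print(availability(720, 1.0))   # ~0.99861  (fail monthly, 1 hour to fix)
print(availability(720, 0.5))   # ~0.99931  (same MTBF, 30 minutes to fix)
print(availability(1440, 1.0))  # ~0.99931  (double the MTBF, 1 hour to fix)
```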

TechCentral.ie gives us this opinion piece on the frequency of outages in major cloud providers. The author argues that, though reported outages may seem major, they still rarely cause violation of SLAs, and service availability is still probably better than individual companies could manage on their own.

Full disclosure: Heroku, my employer, is mentioned.

An external post-hoc analysis of the recent outage at JetBlue, with speculation on the seeming lack of effective DR plans at JetBlue and Verizon. The article also mentions the massive outage at 365 Main’s San Francisco datacenter in 2007, which is definitely worth a read if you missed that one.

Linden Lab Systems Engineer April wrote up a detailed postmortem of the multiple failures that added up to a rough weekend for Second Life users. During my several years at Linden, I worked on recovery from more than a few failures of that central database, and managing the thundering herd that floods through the gates when you reopen them is tricky. Good luck, folks, and thanks for the excellent write-up!

Netflix has taken the Chaos Monkey to the next level. Now their automated system investigates the services a given request touches and injects artificial failures in various dependencies to see if they cause end-user errors. It takes a lot of guts to decide that purposefully introducing user-facing failures is the best way to ultimately improve reliability.

…we’re actually impacting 500 members requests in a day, some of which are further mitigated by retries. When you’re serving billions of requests each day, the impact of these experiments is very small.
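This isn’t Netflix’s actual FIT tooling, but the basic shape of dependency-level fault injection is easy to sketch: wrap a dependency client and, for a small targeted fraction of requests, raise the failure you want the fallback path to handle.

```python
import random

class FaultInjectingClient:
    """Wraps a dependency client and injects failures for a tiny fraction
    of requests so fallback paths get exercised in production.
    Purely a sketch, not Netflix's implementation."""
    def __init__(self, real_client, failure_rate=0.0001):
        self._real = real_client
        self._failure_rate = failure_rate

    def call(self, request):
        if random.random() < self._failure_rate:
            # Simulated dependency outage; the caller's fallback
            # (cached/default response, retry, etc.) should absorb this.
            raise TimeoutError("injected fault: dependency unavailable")
        return self._real.call(request)
```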

Outages

Only a few this week, but they were whoppers!

  • Twitter
    • Twitter suffered a massive outage at least two hours long, with sporadic availability for several hours after. Hilariously, they posted status updates about the outage on Tumblr.

  • Comcast (SF Bay area)
  • Africa
    • This is the first time I’ve had an entire continent in this section. Most of Africa’s Internet was cut off from the rest of the world due to a pair of fiber cuts. South Africa was hit especially hard.

SRE Weekly Issue #6

Articles

A discussion of failing fast, degrading gracefully, and applying back-pressure to avoid cascading failure in a service-oriented architecture.

Many times, it’s our own internal services which cause the biggest DoS attacks on ourselves.
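One concrete way to apply back-pressure is a bounded queue in front of each worker pool: when it’s full, reject immediately (fail fast) rather than letting latency grow until callers time out and retry their way into a cascading failure. A minimal sketch, with an illustrative bound:

```python
import queue

class BackpressureError(Exception):
    """Raised when the service is saturated; callers should degrade or retry later."""

work_queue = queue.Queue(maxsize=100)  # bound chosen purely for illustration

def submit(job):
    """Fail fast instead of buffering unboundedly: a quick, explicit
    rejection lets upstream services degrade gracefully rather than
    pile on retries and amplify the overload."""
    try:
        work_queue.put_nowait(job)
    except queue.Full:
        raise BackpressureError("queue full, shed load upstream")
```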

A SUSE developer introduces kGraft, SUSE’s system for live kernel patching. Anyone who survived the AWS reboot-a-thon is probably a big fan of live kernel patching solutions.

Avoiding on-call burnout is critical. This article is a description of the “urgency” feature in PagerDuty, but it makes a generally applicable point: don’t wake someone for something just because it’s critical; only wake them if it needs immediate action. A generic sketch of the rule follows.
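This isn’t PagerDuty’s API, just the underlying rule worth encoding in any alerting pipeline: severity says how bad something is, urgency says whether a human needs to act right now.

```python
# Hypothetical routing table: only alerts that need immediate human
# action page someone; everything else waits for business hours.
ROUTING = {
    "page_now":      {"notify": "phone", "hours": "24x7"},
    "next_business": {"notify": "email", "hours": "business"},
}

def route_alert(needs_immediate_action: bool) -> dict:
    """An alert can be critical without being urgent; only urgency
    decides whether someone gets woken up."""
    return ROUTING["page_now" if needs_immediate_action else "next_business"]
```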

This is a review/update of the 1994 article. The fallacies still hold true, and anyone designing a large-scale service should heed them. The fallacies:

  1. The network is reliable.
  2. Latency is zero.
  3. Bandwidth is infinite.
  4. The network is secure.
  5. Topology doesn’t change.
  6. There is one administrator.
  7. Transport cost is zero.
  8. The network is homogeneous.

As I get into SRE Weekly, I repeatedly run across articles that I probably should have read long ago in my career. Hopefully they’re new to some of you, too.

Every position I’ve held has involved supporting reliability in a 24/7 service, but let’s be realistic: it’s unlikely someone would have died as a result of an outage. In cars, reliability takes on a whole new meaning. I first got interested in MISRA and the other standards surrounding the code running in cars when I read some technical write-ups of the investigation into the “unintended acceleration” incidents a few years back. This article discusses how devops practices are being applied in the development of vehicle code.

Evidence has come out that the recent major power outage in Ukraine was a network-based attack (I can’t make myself say “cyber-” anything).

I should have seen this coming.

One blogger’s take on the JetBlue outage.

It’s very hard to create an entirely duplicate universe where you can test plan B.  And it’s even hard to keep on testing it regularly and make sure it actually works. To wit: Your snow plow often doesn’t start after the first snow because it’s been sitting idle all summer.

The SRECon call for participation is now open!

Sean Cassidy has discovered a simple phishing method for LastPass in Chrome that’s indistinguishable from the real thing, along with a slightly less seamless but still effective method for Firefox. This one’s important for availability because many organizations rely heavily on LastPass. Compromising the right employee’s vault could spell big trouble and possibly downtime.

Outages

SRE Weekly Issue #5

Articles

What does owning your availability really mean? Brave New Geek argues that it simply means owning your design decisions. I love this quote:

An SLA is not an insurance policy or a hedge against the business impact of an outage, it’s merely a refund policy.

Apparently last week’s BBC outage was “just a test”. Now we have to defend our networks against misdirected hacktivism?

Increased deployment automation leads to the suggestion that developers can now “do ops” (see also: “NoOps”). This author explains why operations is much more than deployment.

Full disclosure: Heroku, my employer, is briefly mentioned.

Tips on how to move toward rapid releases without drastically increasing your risk of outages. They cite the Knight Capital automated trading mishap as a cautionary example, along with Starbucks and this week’s Oyster outage.

Facebook uses configuration for many facets of its service, and they embrace “configuration as code”. They make extensive use of automated testing and canary deployments to keep things safe.

Thousands of changes made by thousands of people is a recipe for configuration errors – a major source of site outages.
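Facebook’s tooling is internal, but the safeguards generalize: treat config like code by validating it automatically and rolling it out to a small canary slice before it goes everywhere. A hypothetical sketch (the checks, `push`, and `health_ok` hooks are stand-ins, not Facebook’s system):

```python
def validate(config: dict) -> None:
    """Cheap automated checks catch the config errors that cause outages."""
    assert 0 < config["timeout_ms"] <= 30000, "timeout out of range"
    assert config["replicas"] >= 2, "need redundancy"

def deploy(config: dict, push, health_ok) -> None:
    """Canary first: push to a small slice, verify health, then roll wide.
    `push` and `health_ok` stand in for real deployment/monitoring hooks."""
    validate(config)
    push(config, fraction=0.01)        # 1% canary
    if not health_ok():
        raise RuntimeError("canary unhealthy, aborting rollout")
    push(config, fraction=1.0)
```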

PagerDuty shares a few ideas about how and why to do retrospective analysis of incidents.

Another talk from QCon. Netflix’s Nitesh Kant explains how an asynchronous microservice architecture naturally supports graceful degradation. (thanks to DevOps Weekly for the link)

“The network is reliable” is one of the fallacies of distributed computing. This ACM Queue article is an informal survey of all sorts of fascinating ways that networks fail.

Outages

SRE Weekly Issue #4

Articles

A nifty-looking packet generator with packets crafted by Lua scripts. If this thing lives up to the hype in its documentation, it’d be pretty awesome! Thanks to Chris Maynard for the link and for the sleepless days and nights we spent mucking with trafgen’s source.

Just as we design systems to be monitored, this article suggests that we should design systems to be audited. Doing the work up front and incrementally rather than as an afterthought can take the pain out of auditing.

A nice intro to structured logging. I’m a big fan of ELK, and especially using Logstash to alert on events that might be difficult to catch otherwise.
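If you haven’t tried structured logging yet, the change is small: emit JSON (or key/value) events instead of free-form strings, so Logstash and friends can filter and alert on fields rather than regexes. A minimal sketch using Python’s standard logging module:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line, easy for Logstash to parse."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "event": record.getMessage(),
            **getattr(record, "fields", {}),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Fields become queryable and alertable instead of buried in a string.
log.info("payment_failed", extra={"fields": {"order_id": 1234, "gateway": "acme"}})
```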

I looked at a few “lessons learned from black Friday 2015” articles, but they’re all low on good technical detail. My consolation prize is this article that seems eerily appropriate, given Target’s outage on Cyber Monday.

The strategy of turning away only some requesters to avoid a full site outage is interesting, but if it isn’t done carefully I could see it causing a thundering herd problem: folks who get turned away just keep hitting reload, generating even more traffic. Cooperative clients help, as in the sketch below.
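If clients retry rejected requests with exponential backoff and jitter, the reload storm gets spread out instead of synchronized. A hypothetical client-side sketch (it assumes the server sheds load with 503s):

```python
import random
import time

def fetch_with_backoff(do_request, max_attempts=5, base_delay=1.0, cap=60.0):
    """Retry rejected requests with exponential backoff plus full jitter,
    so turned-away clients don't all hammer the site at the same instant."""
    for attempt in range(max_attempts):
        response = do_request()
        if response.status != 503:     # assumption: load shedding returns 503
            return response
        delay = random.uniform(0, min(cap, base_delay * 2 ** attempt))
        time.sleep(delay)
    return response                     # give up after the final attempt
```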

These “predictions” (suggestions, really) about load testing may be old news to some, but this article caught my interest because it was the first time I’d heard the term Performance Engineering. It’s definitely a field worth paying attention to as it becomes more prevalent, given its overlap with SRE.

Modern medicine has been working through very similar issues to SRE, related to controlling the impact of human error through process design and analysis of human factors. We stand to learn a lot from articles such as this one. For example, they’ve been doing the “blameless retrospective” for a long time:

As the attitude to adverse events has changed from the defensive “blame and shame culture” to an open and transparent healthcare delivery system, it is timely to examine the nature of human errors and their impact on the quality of surgical health care.

A speedy and detailed postmortem from Valve on the Steam issue on Christmas.

Outages

This issue covers Christmas and New Year’s, and we have quite a list of outages. Notably lacking from this list is Xbox Live, despite threats reported in the last issue.
