SRE Weekly Issue #22

Articles

Landon McDowell, my (incredibly awesome) former boss at Linden Lab, wrote this article in 2014 detailing a spate of bad luck and outages they’d suffered. Causes included hardware failures, DDoS, and an integer DB column hitting its maximum value.

I worked on testing the new class of database hardware mentioned in the previous article. In order to be sure the new hardware could handle our specific query pattern, I captured and replayed production queries in real time using Apiary, an open source tool written years earlier at Linden Lab. This simple but powerful concept (capture and replay) was first introduced to me by one of Apiary’s co-authors, Charity Majors. I’ve since hacked a ton on Apiary and used it at two subsequent jobs.
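
Since capture and replay comes up a lot, here’s a minimal sketch of the core idea (mine, not Apiary’s actual implementation, which does far more): replay a captured query log against a target database while preserving the original inter-query timing, so the load pattern resembles production. The capture format and the sqlite3 stand-in target are assumptions for illustration.

    # Minimal capture-and-replay sketch (not Apiary itself).
    import sqlite3
    import time

    # Hypothetical capture format: (seconds since capture start, SQL).
    captured = [
        (0.00, "CREATE TABLE t (id INTEGER)"),
        (0.05, "INSERT INTO t VALUES (1)"),
        (0.35, "SELECT * FROM t"),
    ]

    def replay(conn, queries):
        start = time.monotonic()
        for offset, sql in queries:
            # Sleep until this query's original offset from capture start,
            # so the replayed load has production's pacing.
            delay = offset - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            conn.execute(sql)

    conn = sqlite3.connect(":memory:")  # stand-in for the hardware under test
    replay(conn, captured)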

A group calling themselves the Armada Collective has been making DDoS extortion threats to many companies recently. Cloudflare called them out as entirely toothless, with no actual attacks, but apparently some companies have paid anyway.

An excellent deep dive into a performance issue (which really equals a reliability issue), including some good lessons learned.

This is specifically referring to disaster scenarios such as hurricanes, but the general idea of a “resiliency cooperative” intrigues me.

A review of the Fire and Emergency Services response found flaws in the actions and procedures of the incident commander, who was the active fire chief at the time. The NTSB said the commander had not had training on the incident management system that would have prepared him to better command the response.

Mathias Lafeldt goes deeper into chaos engineering in this latest installment of his series. He also introduces his Dockerized version of Netflix’s Chaos Monkey and shows how to automate chaos experiments to gain further confidence in your infrastructure’s reliability.
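
If you want to try a chaos experiment of your own, here’s the simplest possible sketch of the pattern: kill one random container, then test the hypothesis that the service still answers. It assumes the Docker SDK for Python, and the health-check URL is my invention, not anything from the article.

    # Toy chaos experiment in the spirit of Chaos Monkey: kill one
    # random running container and verify the service survives.
    import random
    import urllib.request

    import docker  # pip install docker

    client = docker.from_env()
    victim = random.choice(client.containers.list())  # running containers
    print(f"terminating {victim.name}")
    victim.kill()

    # Hypothesis: the service tolerates the loss of any single container.
    resp = urllib.request.urlopen("http://localhost:8080/health", timeout=5)
    assert resp.status == 200, "service did not tolerate the failure"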

A great overview of the difficulties inherent in anomaly detection and alerting. Note that this article is written by OpsClarity and the end reads a bit like an ad for their service.

I’m not sure exactly what it is they’re offering now that they weren’t before, but this seems important. I think.

Outages

SRE Weekly Issue #21

This week’s themes seem to be human error and network debugging. If you’re like me, you rarely have time to sit down and listen to podcasts, but if you ever get in the mood, this first link is a must-listen. I really can’t do it justice with my summary, but I’m very glad I listened to it, and I think you’ll like it too.

Articles

We can try to train our workers to avoid error. We can design our systems to make errors less likely. This podcast argues that we should go one step further and design our systems to be resilient in the face of inevitable error. Human error is normal and expected. Where are we one error away from a serious adverse event?

In this Velocity keynote, Steven Shorrock discusses human error from his point of view as an ergonomist and psychologist.

My old coworker (and network wizard) at Linden Lab wrote up this fascinating episode of network debugging. Sometimes you have to get really deep into the stack to track down reliability issues.

While we’re on the topic of debugging complicated networking failures, here’s PagerDuty’s analysis of a bug in ZooKeeper. It turned out that triggering this bug required the confluence of three other bugs that conspired to deliver a malformed packet to ZooKeeper, which caused it to blow up. Yeesh.

If you’re in the mood to read one more really deep and detailed network debugging session, this one’s for you. It goes through the process of gathering enough information to confidently implicate ELB as the source of abrupt connection closures.

John Vincent, featured here last week for his review of the new SRE book, writes this week about the burnout he’s suffering. I think it could best be described as operational risk burnout. I’m not sure what the solution is, but I’m really interested in the problem, and I hope that John considers writing more if he has any useful realizations. Good luck, John.

I couldn’t see anything but the largest configuration because all I could see was places where there was a risk. There were corners I wasn’t willing to cut (not bad corners like risking availability but more like “use a smaller instance here”) because I could see and feel and taste the pain that would come from having to grow the environment under duress.

How do you collaborate remotely during an incident? Some companies use conference bridges, but my former boss (and all-around incredible engineer and manager) Landon McDowell advocates for text-based chat. I started my career as part of the Ops team he describes, so I might be biased, but I totally agree: chat is far superior to phone bridges or VoIP.

This article starts out as a basic introduction to load balancing, but where it goes next is really interesting. The author discusses how load balancing can go wrong (think cascading failure, as each remaining backend receives ever more traffic) and how to combat the pitfalls. Finally, the author suggests two very intriguing concepts for smart load-balancing systems that really got me thinking.
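
To make the cascading-failure pitfall concrete, here’s a toy model with made-up numbers: a ten-backend pool running at 95% of capacity loses a single node, and the redistributed traffic pushes the survivors over the edge one by one.

    # Toy model of a load-balancer cascade. All figures are illustrative.
    def cascade(total_rps=950.0, backends=10, capacity_rps=100.0):
        alive = backends - 1  # one backend has just failed
        while alive > 0:
            per_backend = total_rps / alive
            print(f"{alive} backends left, {per_backend:.1f} rps each")
            if per_backend <= capacity_rps:
                print("pool restabilizes")
                return
            alive -= 1  # the most loaded survivor tips over next
        print("cascading failure: the entire pool is down")

    cascade()

Losing one backend out of ten is enough to take out the whole pool; mitigations like load shedding exist precisely to break that loop.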

Outages

SRE Weekly Issue #20

Articles

Here’s a fairly negative review of the new Google SRE book. The author makes some well-articulated points against the tone of the book and its applicability outside Google. I’ve been hearing some talk of a condescending tone in the book, along with a tendency to claim credit for “inventing” things that others also invented elsewhere. My copy arrives next week — should be an interesting read, for better or worse.

Full disclosure: Heroku, my employer, is mentioned.

A discussion of the impact of an outage on a company’s brand. Skip the last bit; it’s an ad. The rest is worth reading, though.

Reputation and customer loyalty suffers dramatically. The Boston Consulting Group reports that over a quarter of users (28%) never return to a company’s web site if it doesn’t perform sufficiently well.

Conflict between “dev” and “ops” (whatever they’re called at a given company) can create reliability problems. SRE is in part an effort to relieve that tension, either through embedding or enacting process changes. This article gathers opinions and ideas from ops and dev engineers and proposes three methods for alleviating the tension.

Another interesting survey-based report.

When asked what is the acceptable “downtime window” to finish migrations to minimize downtime, almost half (44%) of respondents said they cannot afford any downtime or, at most, just for under 1 hour.

I’ve done both kinds, and in my experience, migrations with planned downtime end up being the more painful ones, as one is under pressure to meet a predefined outage window, which inevitably slips.

In practice, there’s a point of diminishing returns after which you’re wasting money to get more availability than you need. That’s the crux of this article, and it’s an interesting read.
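
The arithmetic behind the diminishing returns is easy to run yourself: each additional nine cuts your downtime budget by a factor of ten, while the cost of achieving it tends to climb much faster. Whether the next nine is worth it depends on what a minute of downtime actually costs you.

    # Downtime budget per year at each availability level.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for availability in (0.99, 0.999, 0.9999, 0.99999):
        downtime = (1 - availability) * MINUTES_PER_YEAR
        print(f"{availability}: {downtime:8.1f} minutes of downtime/year")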

Haven’t gotten your fill from SRE Weekly? Here’s a long list of curated SRE-related links to peruse.

Here’s a classic from the venerable John Allspaw of Etsy on running gameday scenarios in production. The general process is to brainstorm possible failures, improve the system to handle them, and then test by actually inducing the failures in production.

Imagining failure scenarios and asking, “What if…?” can help combat this thinking and bring a constant sense of unease to the organization. This sense of unease is a hallmark of high-reliability organizations. Think of it as continuously deploying a BCP (business continuity plan).

(emphasis mine)

Yup, turns out it was a hoax. Still generated an interesting conversation though.

Outages

SRE Weekly Issue #19

Articles

I just love this story. I heard Rachel Kroll tell it during her keynote at SREcon, and here it is in article form. It’s an incredibly deep dive through a gnarly debugging session, and I can’t recommend enough that you read it. NSFL (not safe for the library), because it’s pretty darned hilarious.

Christine Spang of Nylas shares a story of migrating from RDS to sharded self-run MySQL clusters using SQLProxy. I love the detail here! I’m looking to get more deeply technical articles in SRE Weekly, so if you come across any, I’d love it if you’d point them out to me.
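
If you haven’t worked with sharding before, here’s a deliberately oversimplified sketch of the routing idea (mine, not Nylas’s actual scheme, which the article describes in far more detail): each account maps to exactly one MySQL cluster, and every query for that account is sent there.

    # Oversimplified shard routing. Real systems usually use a lookup
    # table instead of modulo so shards can be rebalanced; these
    # hostnames are invented.
    SHARDS = [
        "mysql-shard-0.internal",
        "mysql-shard-1.internal",
        "mysql-shard-2.internal",
        "mysql-shard-3.internal",
    ]

    def shard_for(account_id: int) -> str:
        return SHARDS[account_id % len(SHARDS)]

    assert shard_for(42) == "mysql-shard-2.internal"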

Here’s the latest in Mathias Lafeldt’s Production Ready series. He makes the argument that too few failures can be a bad thing and argues for a chaos engineering approach.

Complacency is the enemy of resilience. The longer you wait for disaster to strike in production — merely hoping that everything will be okay — the less likely you are to handle emergencies well, both at a technical and organizational level.

Timesketch is a tool for building timelines. It could be useful for building a deeper understanding of an incident as part of a retrospective.

Anthony Caiafa shares his take on what SRE actually means. To me, SRE seems to be a field even more in flux than DevOps, and definitions have yet to settle. For example, I feel that there’s a lot that a non-programmer can add to an SRE team — you just have to really think about what it means to engineer reliability (e.g. process design).

GitHub details DGit, their new high-availability solution for storing Git repos internally. Previously, they used pairs of servers, each with RAID mirroring, synchronized using DRBD.

An early review of Google’s new SRE book by Mike Doherty, a Google SRE. He was only peripherally involved in the publication and gives a fairly balanced take on the book. For an outside perspective, see danluu’s detailed chapter-by-chapter notes.

Amazon.com famously runs on AWS, so any AWS outage could potentially impact Amazon. Google, on the other hand, doesn’t currently run any of its external services on Google Cloud Platform. This article makes the argument that doing so would create a much bigger incentive to improve and sustain GCP’s reliability.

However, when Google had its recent 12-hour outage that took Snapchat offline, it didn’t impact any of Google’s real revenue-generating services. […] What would the impact have been if Google Search was down for 12 hours?

Thanks to Charity for this one.

Oops.

Note that there’s been some question on hangops #sre as to whether this is a hoax. Either way, I could totally see it happening.

I love the fact that statuspage.io is the author of this article. How many of us have agonized over the exact wording of a status site post?

Outages

  • Yahoo Mail
  • Business Wire
  • Google Compute Engine
    • GCE suffered a severe network outage. It started as increased latency and at worst became a full outage of internet connectivity. Two days after the incident, Google released the best postmortem I’ve seen in a very long time. Full transparency, a terrible juxtaposition of two nasty bugs, a heartfelt apology, fourteen(!) remediation items… it’s clear their incident response was solid and they immediately did a very thorough retrospective.

  • North Korea
    • North Korea had a series of internet outages, each of the same length at the same time on consecutive days. It’s interesting how people are trying to learn things about the reclusive country just from this pattern of outages.

  • Blizzard's Battle.net
  • Twitter
  • Misco
  • Two Alt-Coin exchanges (Shapeshift and Poloniex)
  • Home Depot

SRE Weekly Issue #18

SREcon16 was awesome! Sorry for the light issue this week — still recovering from my con-hangover. I had an incredible time, and I enjoyed meeting many of you, both old subscribers and new. Thank you all for your support! When USENIX posts their recordings, I’ll share links to some of my favorite talks.

QotW, from Charity Majors’s day 1 closing keynote (paraphrased):

There are no bad decisions. We make the best decisions we can with the information we have at the time.

Love it. The second QotW was from Rachel Kroll’s day 1 opening keynote, which included a hilarious and cringe-worthy story of investigating a very well-hidden bug with an incredibly bizarre set of symptoms. I can’t recommend enough watching the keynotes, and, well, every talk.

More content next week, after I’ve caught up on my RSS feeds. Thanks again for the huge amount of support you all have shown me — all 250+ of you (and that’s just email subscribers)!

Articles

Telstra exec Kate McKenzie detailed some findings from internal investigations into the recent spate of Telstra incidents. There’s some nice detail here, including possible remediation items and an implication that Telstra is using a blameless retrospective process.

This is a short but excellent template for incident retrospectives in the form of a series of questions. A great place to start if you’re looking to improve your retrospective process.

Etsy’s morgue, a tool for tracking information related to postmortem investigations.

A rockin’ postmortem detailing the failure and recovery of a 1.7 PB filesystem, featuring the creation of a 3 TB ramdisk(!) to speed up the operation.

Thanks to phill-atlassian on hangops #incident_response for this one.

Outages

A production of Tinker Tinker Tinker, LLC