SRE Weekly Issue #315

I’m going on vacation, so I’m going to prepare next week’s issue in advance. It’ll look much like most issues, except there won’t be an Outages section. See you all in two weeks!

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom call; paging and adding responders; building the postmortem timeline; setting up reminders; and more. Book a demo (+ get a snazzy Rootly lego set):
https://rootly.com/demo/

Articles

The previous articles in this series described a process of interviewing incident responders before the full retrospective meeting. This one discusses what to do if you can’t conduct those interviews, the particular challenges that brings, and how to deal with them.

  Emily Ruppe — Jeli

Some interesting ideas on potential downsides of circuit breakers and how we might ameliorate them.

  Marc Brooker

GitHub has had a bit of a hard time lately. Here’s an update on what they’re dealing with and how they’re planning to address it.

  Keith Ballinger — GitHub

All sorts of “mean time to” metrics, including 6(!) different MTTR metrics and how they might be used.

  Alex Ewerlöf — InfoQ

This is a huge report, over 100 pages, on the benefits of a model in which development teams own the operation of their systems. There’s a lot in here, with carefully spelled-out pros/cons and cost/benefit analyses. Need to convince someone? Send them this.

We’ve written this playbook for CxOs, product managers, delivery managers, and
operations managers.

  Bethan Timmins and Steve Smith — Equal Experts

It’s easy to forget about MTUs until they sneak up on you and cause really confusing problems.

  Aaron Kalair — Hudl

Should you compensate for on-call? How? I really want to see more articles about this, so send them my way if you see or write any.

  Chris Evans — Incident.io

Some good tips in this article, and I love the case studies.

  Prathamesh Sonpatki — Last9

Outages

SRE Weekly Issue #314

Articles

The first episode of this new podcast answers the question in three ways: what Google says SRE is, what the podcast host thinks it is, and how people seem to be practicing SRE.

  Stephen Townsend — Slight Reliability

This aircraft accident report puts heavy emphasis on the deeper contributing factors rather than a seemingly obvious single root cause.

  Mentour Pilot

Google posted an incident report for the March 8 incident involving Traffic Director.

  Google

This one includes some neat graphs made by showing load and theoretical success rates for various strategies such as no retries, N retries, token buckets, and circuit breakers.

  Marc Brooker
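
As an illustration of the token-bucket strategy mentioned above (my own sketch, not code from the article): successes slowly earn retry tokens and each retry spends one, so retries can’t amplify load during an outage.

```python
class RetryTokenBucket:
    """Cap retries so they can't amplify load during an outage.

    Successes slowly refill the bucket; each retry spends a whole token.
    When the bucket is empty, callers fail fast instead of retrying.
    """

    def __init__(self, capacity=10.0, refill_per_success=0.1):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_success = refill_per_success

    def record_success(self):
        self.tokens = min(self.capacity, self.tokens + self.refill_per_success)

    def try_acquire_retry(self):
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # bucket empty: fail fast, don't retry


bucket = RetryTokenBucket(capacity=2.0)
assert bucket.try_acquire_retry()      # 2.0 -> 1.0
assert bucket.try_acquire_retry()      # 1.0 -> 0.0
assert not bucket.try_acquire_retry()  # empty: retry suppressed
bucket.record_success()                # earns a fraction of a token back
assert not bucket.try_acquire_retry()  # still under one full token
```

The key property is that during a sustained outage, when nothing succeeds, the bucket never refills, so the retry storm dies out on its own.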

What if your alerting system goes down? These folks set up a dead man’s switch to handle that situation.

  Miedwar Meshbesher — Nanit
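
The idea can be sketched in a few lines (a hypothetical illustration, not Nanit’s implementation): the alerting system emits periodic heartbeats, and an independent checker raises an alarm when they stop.

```python
import time


class DeadMansSwitch:
    """Fire an alarm when heartbeats from the alerting system stop.

    The monitored system calls heartbeat() periodically; an independent
    checker polls is_tripped() and alerts if the pings have gone quiet.
    """

    def __init__(self, timeout_seconds=60.0, clock=time.monotonic):
        self.timeout = timeout_seconds
        self.clock = clock
        self.last_heartbeat = clock()

    def heartbeat(self):
        self.last_heartbeat = self.clock()

    def is_tripped(self):
        return self.clock() - self.last_heartbeat > self.timeout


# Simulated clock so the example runs instantly.
now = [0.0]
switch = DeadMansSwitch(timeout_seconds=60.0, clock=lambda: now[0])
now[0] = 30.0
assert not switch.is_tripped()   # heartbeat still fresh
now[0] = 120.0
assert switch.is_tripped()       # silence exceeded the timeout: alert
switch.heartbeat()
assert not switch.is_tripped()   # a heartbeat resets the switch
```

The checker has to run somewhere independent of the alerting system it watches; otherwise it shares the same failure mode.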

Strategies for creating concise, efficient communication between teams during incidents and operational surprises.

[…] communications must be precise and descriptive to minimize confusion and accelerate a responder’s ability to assess and remedy the situation.

  Steve Stevens — Transposit

I really love these articles about hardware errors. They’re more common than we tend to realize.

  Harish Dattatraya Dixit — Facebook

Outages

SRE Weekly Issue #313

Articles

Do you need an incident commander? (Yes.) This article is about how to staff your incident command rotation through a couple of different strategies.

  Ryan McDonald — FireHydrant

What an interesting idea: an insurance plan that pays out automatically when a cloud provider has an outage.

  L.S. Howard — Insurance Journal
Full disclosure: Fastly, my employer, is mentioned.

LaunchDarkly revamped the way that their on-call system works. Learn about the experience through the eyes of a newly-onboarded engineer.

  Anna Baker — LaunchDarkly (via The New Stack)

Catchpoint’s yearly SRE Report is out with four key findings. You have to fill out a form with your email address, and then the link to download the report is presented in your browser.

  Catchpoint

This article shows why one-thread-per-request can be a bottleneck and presents alternatives.

  Ron Pressler — Parallel Universe (via High Scalability)
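
To make the bottleneck concrete, here’s a hedged sketch (in Python’s asyncio, not the article’s own examples) of the alternative: an event loop lets many waiting requests share one thread instead of pinning an OS thread each.

```python
import asyncio

# Thread-per-request pins one OS thread per in-flight request, so slow
# backend calls exhaust the thread pool long before the CPU is busy.
# An event loop multiplexes all the waits onto a single thread instead.


async def handle_request(request_id: int) -> str:
    await asyncio.sleep(0.01)  # stands in for a slow backend call
    return f"response-{request_id}"


async def serve(n_requests: int) -> list:
    # All requests wait concurrently; total time is roughly one backend
    # call, not n_requests of them, and no extra threads are created.
    return await asyncio.gather(
        *(handle_request(i) for i in range(n_requests))
    )


responses = asyncio.run(serve(1000))
assert len(responses) == 1000
```

With thread-per-request, those 1000 concurrent waits would need 1000 threads; here they share one.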

And this is a truth about incidents: there are always more signals than there is attention available.

It’s so true.

  Fred Hebert — Honeycomb

If you’ve ever even considered running a retrospective, read this article.

This is my favorite piece of advice from this article:

If you think ‘this might be a stupid question,’ ask it.

  Emily Ruppe — Jeli

I’m still not sure how I feel about AIOps. Fortunately, this article takes a measured stance while providing some useful insight.

Conclusion: AI won’t replace SREs – but it can help

  JJ Tang — Rootly
This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

Outages

SRE Weekly Issue #312

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom call; paging the right team; building the postmortem timeline; setting up reminders; and more. Book a demo (+ get a snazzy Rootly shirt):
https://rootly.com/demo/?utm_source=sreweekly

Articles

There’s a really great discussion of “pilot error” at the end of this air accident summary video.

  Mentour Pilot

There are some really great names and talks on the agenda for this half-day virtual conference on April 1.

  IRConf

This article is about building a framework, rather than using one off-the-shelf, to ensure that it’s tailored to the needs of your organization.

  Ethan Motion

When are you smarter than your playbooks, and when are your playbooks smarter than you?

  Andre King — Rootly
This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

This one is about piecing together the story of how an incident unfolded. One interviewee might mention something new, and then you can ask later interviewees about it.

  Cory Watson — Jeli

All about alert fatigue: how to recognize it and how to fix it once you notice it.

  Emily Arnott — Blameless

This one includes a summary of their February 2 outage:

[…] a routine deployment failed to generate the complete set of integrity hashes needed for Subresource Integrity. The resulting output was missing values needed to securely serve Javascript assets on GitHub.com.

  Jakub Oleksy — GitHub
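
For context on the quote, here’s a minimal sketch of how a Subresource Integrity value is computed (an illustration of the mechanism, not GitHub’s build pipeline):

```python
import base64
import hashlib


def sri_hash(asset_bytes: bytes) -> str:
    """Compute a Subresource Integrity value for a static asset.

    The result goes in the integrity="..." attribute of a <script> or
    <link> tag; browsers refuse to run the asset if its contents no
    longer match the hash.
    """
    digest = hashlib.sha384(asset_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")


value = sri_hash(b"console.log('hello');")
assert value.startswith("sha384-")
```

If the build emits the page with a missing or stale hash, the asset itself may be fine, but browsers will still refuse to execute it, which matches the failure described in the quote.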

Following up on last week’s article about the term “postmortem”, this one has even more great reasons to pick a different word.

  Blameless

This article recommends a two-stage approach to writing an incident retrospective report: a “calibration document” and then the final report.

  Thai Wood — Jeli

Outages

  • Tasmania
  • Discord
    • Something’s on fire! We’re looking into it, hang tight.

SRE Weekly Issue #311

I’m dedicating this issue to the people of Ukraine, and also those in Russia that are protesting the invasion.

Articles

In this episode of the podcast Page it to the Limit, they discuss learning how to be an incident commander.

There was a major AWS outage, and on the second day I was incident commander.

  Kat Gaines, with guest Iris Carrera — Page it to the Limit

This article discusses three aspects of fully owning your systems: mandate, knowledge, and responsibility. After defining those terms, it goes on to discuss what happens if one of the three is missing.

  Alex Ewerlöf

I really like the “Managing High RPS” section, especially the part about ignoring events if they’re too old to be relevant any longer.

  Ankush Gulati and David Gevorkyan — Netflix

Cool idea! When a process is overloaded, the system drops requests based on heuristics until the overload condition has passed.

  Bryan Barkley — LinkedIn
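
One common heuristic can be sketched simply (my own illustration, not LinkedIn’s implementation): if a request sat in the queue longer than a threshold before being served, the process is behind, so drop it with a fast error until the backlog drains.

```python
import time


class LoadShedder:
    """Shed load when the process falls behind.

    Heuristic: queue delay is a proxy for overload. If a request waited
    longer than the threshold before we got to it, serving it would only
    deepen the backlog, so we drop it instead.
    """

    def __init__(self, max_queue_delay=0.5, clock=time.monotonic):
        self.max_queue_delay = max_queue_delay
        self.clock = clock

    def should_shed(self, enqueued_at: float) -> bool:
        return self.clock() - enqueued_at > self.max_queue_delay


# Simulated clock so the example runs instantly.
now = [10.0]
shedder = LoadShedder(max_queue_delay=0.5, clock=lambda: now[0])
assert not shedder.should_shed(enqueued_at=9.8)  # waited 0.2s: serve it
assert shedder.should_shed(enqueued_at=9.0)      # waited 1.0s: drop it
```

Dropping stale requests is often kinder than serving them: the client has likely already timed out, and the fast error frees capacity for requests that can still succeed.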

Here’s another take on incident severity and priority levels. The two terms are different and mean specific things.

  Robert Ross — FireHydrant

Can we please agree to stop calling them “postmortems”?

  Ash P — Cruform Newsletter

The term “service level” goes back to the US highway system maintenance procedures, among others.

  Akshay Chugh and Piyush Verma — Last9

Charity Majors has railed against metrics for years. Now, her company Honeycomb has a metrics product offering. How does she square it?

  Charity Majors — Honeycomb

Despite the December AWS outage, folks aren’t fleeing AWS, and multi-cloud designs for reliability still don’t make sense, according to this cloud consultant. The media angle is fascinating.

  Lydia Leong — Cloud Pundit

This article has a great list of ideas of who to talk to, plus a section on how to prioritize when you’re short on time.

  Daniela Hurtado — Jeli

Outages

A production of Tinker Tinker Tinker, LLC