SRE Weekly Issue #299

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom bridge; paging the right team; building the postmortem timeline; setting up reminders; and more. Book a demo:
https://rootly.com/?utm_source=sreweekly

Articles

Lacking enough incidents to learn from, NASA “borrowed” incidents from outside of their organization and wrote case studies of their own!

  John Egan — InfoQ

In this interview, they hit hard on the importance of setting and adhering to clear work hours when working remotely as an SRE.

  Ben Linders (interviewing James McNeil) — InfoQ

Here’s a clever way to put a price on how much an outage cost the company.

  Lorin Hochstein

This article introduces error budgets through an analogy to feedback loops in electrical engineering.

  Sjuul Janssen — Cloud Legends
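
To make the analogy concrete, here’s a minimal sketch of the underlying error-budget arithmetic (the 99.9% target and 30-day window are illustrative, not from the article):

  # Illustrative error-budget arithmetic; numbers are hypothetical.
  WINDOW_MINUTES = 30 * 24 * 60  # 30-day rolling window
  SLO_TARGET = 0.999             # 99.9% availability target

  budget_minutes = WINDOW_MINUTES * (1 - SLO_TARGET)  # ~43.2 minutes
  downtime_minutes = 12                               # observed so far
  remaining = budget_minutes - downtime_minutes

  print(f"budget: {budget_minutes:.1f} min, remaining: {remaining:.1f} min")
  # The feedback loop: as `remaining` approaches zero, slow down risky
  # changes; as the budget refills, speed back up.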

[…] saturation SLOs have always been a point of discussion in the SRE community. Today, we attempt to clarify that.

  Last9

Here’s how the GitHub Actions engineering team uses ChatOps. I love the examples!

  Yaswanth Anantharaju — GitHub

This contains some pretty interesting details on their major outage last month.

  GitHub

In the last few weeks, I’ve been working on an extensible, general-purpose shard coordinator, Shardz. In this article, I will explain the main concepts and future work.

Lots of deep technical detail here.

  Jaana Dogan

They constructed a set of git commits, one for each environment variable, then used git bisect to figure out which variable was causing the failure. Neat trick!

  Diomidis Spinellis
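
To see why this works, here’s a rough Python sketch of the setup (the variable names and run_test.sh are hypothetical, not from the article):

  # One commit per suspect environment variable, then let `git bisect
  # run` find the first commit (i.e. the first variable) that makes the
  # test fail. Bisect's monotonicity assumption holds because once the
  # culprit is added, every later commit still contains it.
  import os
  import subprocess

  def git(*args):
      subprocess.run(["git", *args], check=True)

  suspects = [("FOO", "1"), ("BAR", "2"), ("BAZ", "3")]

  os.makedirs("bisect-env", exist_ok=True)
  os.chdir("bisect-env")
  git("init")

  open("env.sh", "w").close()  # start with an empty environment
  git("add", "env.sh")
  git("commit", "-m", "empty environment")

  for name, value in suspects:
      with open("env.sh", "a") as f:
          f.write(f"export {name}={value}\n")
      git("commit", "-am", f"add {name}")

  # HEAD (all variables set) is bad; HEAD~3 (none set) is good.
  # run_test.sh is a hypothetical script that sources env.sh, runs the
  # failing program, and exits non-zero on failure.
  git("bisect", "start", "HEAD", f"HEAD~{len(suspects)}")
  git("bisect", "run", "../run_test.sh")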

Outages

SRE Weekly Issue #298

Email subscribers, my apologies for the double-send last week. I upgraded WordPress and subsequently further cemented my distrust of all version upgrades ever.

I carefully tested a fix in staging before rolling it out gradually in preparation for this week’s issue. Just kidding, I hacked on it live until I got it fixed. Sorry about all those testing tweets. #testinproduction #yolo #SREWeeklydoesnotpracticeSRE

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom bridge; paging the right team; building the postmortem timeline; setting up reminders; and more. Book a demo:

https://rootly.com/?utm_source=sreweekly

Articles

This is Google’s detailed report from their outage last week. This one’s really worth a read; I promise you won’t be disappointed!

  Google

I really like this guide and template for writing incident reports. Each section comes with an explanation of what goes there, along with examples.

  Lorin Hochstein

Booking.com developed their Reliability Collaboration Model to guide the engagement between SRE and product development teams and the responsibilities assigned to each.

  Emmanuel Goossaert — Booking.com

Especially timely now, in the thick of the holiday on-call period.

  James Frost — Ably

Great tips. I hope your Black Friday / Cyber Monday is going well!

  Quentin Rousseau — Rootly

This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

I thought it might be better to try a new approach: defining what SRE is by looking at what it’s not. Or to put it another way, what can you remove from SRE and have it still be SRE?

  Niall Murphy

Instead of asking that question, this article urges us to understand what happened.

Another reason that imagining future scenarios is better than counterfactuals about past scenarios is that our system in the future is different from the one in the past.

  Lorin Hochstein

Outages

SRE Weekly Issue #297

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom bridge; paging the right team; building the postmortem timeline; setting up reminders; and more. Book a demo:
https://rootly.com/?utm_source=sreweekly

Articles

It’s that time of year again, but maybe it’s time to rethink that code freeze.

Robert Ross — FireHydrant

This article really gets to the heart of why I love a good incident. I mean, obviously, I want to minimize incidents. I swear.

Lisa Karlin Curtis — incident.io

This article draws on incident reports from The VOID to show how root cause analysis can be problematic.

Courtney Nash — Verica

It’s interesting to read this article after reading the previous one. In the “my car won’t start” example, I found myself immediately wondering: why was the vehicle not maintained? What factors contributed to that?

Søren Pedersen — DZone

These are the “phases”, although they stress that aiming for Visionary doesn’t make sense for all organizations.

  • Absent
  • Reactive
  • Proactive
  • Strategic
  • Visionary

Google

Not the field I would have expected to look to for lessons, but it totally works!

Paul Marsicovetere — Formidable

This article introduces a 3-phased approach for safe database schema changes: Expand, Rollout, and Contract.

Alex Yates — Octopus Deploy
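
As a sketch of the pattern (a hypothetical column rename, not an example taken from the article), each phase ships as its own migration:

  # Expand/rollout/contract for renaming users.name to users.full_name
  # without breaking whichever application version is currently running.

  EXPAND = "ALTER TABLE users ADD COLUMN full_name TEXT;"
  # Old and new columns coexist; existing readers are unaffected.

  ROLLOUT = "UPDATE users SET full_name = name WHERE full_name IS NULL;"
  # Backfill while the app writes both columns and reads the new one.

  CONTRACT = "ALTER TABLE users DROP COLUMN name;"
  # Only after no deployed version reads or writes the old column.

  for step in (EXPAND, ROLLOUT, CONTRACT):
      print(step)  # in practice, each runs as a separate deploy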

Try to run a program, and you get “No such file or directory”, even though the program is right there. How can this happen?

Julia Evans
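
One classic culprit is a dangling shebang line (a missing dynamic linker produces the same symptom). A quick reproduction, for the curious:

  # The script exists, but its shebang points at an interpreter that
  # doesn't, so execve() fails with ENOENT naming the *script*.
  import os
  import stat
  import subprocess

  with open("hello.sh", "w") as f:
      f.write("#!/nonexistent/bin/sh\necho hello\n")
  os.chmod("hello.sh", os.stat("hello.sh").st_mode | stat.S_IXUSR)

  try:
      subprocess.run(["./hello.sh"])
  except FileNotFoundError as e:
      print(e)  # [Errno 2] No such file or directory: './hello.sh'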

Outages

  • Google Cloud Load Balancing
    • Google had a major outage that took down many sites and services. Notably, users of these sites were greeted with a Google 404 page with no branding related to the site they were attempting to access.
  • Grab
  • Tesla
    • Tesla owners were locked out of their cars or unable to start them during the outage.

SRE Weekly Issue #296

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom bridge; paging the right team; building the postmortem timeline; setting up reminders; and more. Book a demo:
https://rootly.com/?utm_source=sreweekly

Articles

WOW! This is the longest, most detailed public incident post I’ve ever seen from any company. I’ve linked to their short(er) summary, but be sure to check out the long version for all the juicy details.

If we operate too far from the edge, we lose sight of it and can’t anticipate when corrective work should be emphasized. If we operate too close to it, we are constantly in high-stakes situations and firefighting.

Fred Hebert — Honeycomb

This article goes through the actual math of creating an alert for an SLO, including how to avoid alerting for the entire sliding window even after the problem is fixed.

Ervin Barta
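
The usual fix for the stuck-alert problem is a multi-window burn-rate condition; here’s a toy Python version (the window sizes and 14.4x threshold are a commonly cited pairing, not necessarily the article’s numbers):

  # Alert only when BOTH a long and a short window are burning budget
  # fast. After a fix, the short window recovers quickly and the alert
  # clears instead of staying red for the whole sliding window.
  SLO_TARGET = 0.999
  BUDGET = 1 - SLO_TARGET

  def burn_rate(errors: int, requests: int) -> float:
      """Observed error rate as a multiple of the allowed budget."""
      return (errors / requests) / BUDGET

  def should_alert(long_w, short_w, threshold=14.4):
      return burn_rate(*long_w) > threshold and burn_rate(*short_w) > threshold

  # (errors, requests) over a 1-hour and a 5-minute window:
  print(should_alert(long_w=(900, 60_000), short_w=(80, 5_000)))  # True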

This reddit thread doesn’t have any firm answers, but the discussion is pretty interesting.

u/faidoc and others — reddit

Good advice for writing resumes in general, with some SRE-specific tips. There are also links to example SRE resumes.

Quentin Rousseau — Rootly

This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

Turns out they have runbooks too — or I guess you could say we have SOPs.

Hugh Brien — Transposit

What do you do about developers who just don’t want to be on call?

Charity Majors — Honeycomb

Before opening their new API up to the public, Ably walloped it with Locust.

Denis Sellu — Ably
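
For anyone who hasn’t met Locust: it’s a Python load-testing tool, and a minimal test file looks roughly like this (the endpoint here is hypothetical, not Ably’s actual suite):

  # Run with: locust -f loadtest.py --host https://api.example.com
  from locust import HttpUser, task, between

  class ApiUser(HttpUser):
      wait_time = between(0.5, 2)  # seconds of think time between tasks

      @task
      def get_status(self):
          self.client.get("/status")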

Outages

SRE Weekly Issue #295

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom bridge; paging the right team; building the postmortem timeline; setting up reminders; and more. Book a demo:
https://rootly.com/?utm_source=sreweekly

Articles

I love this crystal-clear argument based on statistics and research: MTTR as a metric is simply meaningless.

Courtney Nash — Verica

Their steps for better communication during an outage:

  • Provide context to minimise speculation
  • Explain what you’re doing to demonstrate you’re ‘on it’
  • Set some expectations for when things will return to normal
  • Tell people what they should do
  • Let folks know when you’ll be updating them next

Chris Evans — incident.io

Despite checking in advance to be sure their systems would support the new Let’s Encrypt certificate chain, they ran into trouble.

[…] we discovered that several HTTP client libraries our systems use were using their own vendored root certificates.

Heroku
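
You can spot this class of problem in a Python stack, for example, where requests pins certifi’s vendored bundle instead of the OS trust store (illustrative; the excerpt doesn’t name which libraries Heroku hit):

  # Compare the OS trust store with the bundle certifi vendors; clients
  # that pin the latter won't pick up system-level certificate updates.
  import ssl
  import certifi

  print("system default:", ssl.get_default_verify_paths().cafile)
  print("certifi bundle:", certifi.where())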

This is the best case I’ve seen yet against multi-cloud infrastructure. I really like the airline analogy.

Lydia Leong

Roblox had a major, several-day outage starting on October 28. I don’t usually include game outages in the Outages section, since they’re so common and there’s not usually much information to learn from, but I sure do like a good post-incident report. Thanks, folks!

David Baszucki — Roblox

When you’re sending small TCP packets, two optimizations can conspire to introduce an artificial 40 millisecond (not megasecond…) delay.

Vorner
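
The two optimizations are Nagle’s algorithm and delayed ACKs, and the standard escape hatch is TCP_NODELAY. A minimal example:

  # Disable Nagle so small writes go out immediately instead of waiting
  # (up to ~40 ms) for an ACK that the peer's delayed-ACK logic is
  # itself holding back.
  import socket

  sock = socket.create_connection(("example.com", 80))
  sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
  sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
  print(sock.recv(200).decode(errors="replace"))
  sock.close()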

Here’s Google’s follow-up report on their October 25-26 Meet outage.

Google

Should you count failed requests toward your SLI if the client retries and succeeds? A good argument can be made on either side.

u/Sufficient_Tree4275 and other Reddit users
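
A toy illustration of the two counting choices (numbers hypothetical):

  # One logical call: two failed attempts, then a success on retry.
  attempts = ["fail", "fail", "ok"]

  per_request = attempts.count("ok") / len(attempts)  # 0.33: retries count
  per_outcome = 1.0 if "ok" in attempts else 0.0      # 1.00: retry hid it
  print(per_request, per_outcome)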

Mercari restructured its SRE team, moving toward an embedded model to adapt to its growing microservice architecture.

ShibuyaMitsuhiro — Mercari

There’s a really great discussion in this episode about leaving slack in the system in the form of bits of capacity and inefficiency that can be drawn upon to buy time during an outage.

Courtney Nash, with guests Liz Fong-Jones and Fred Hebert — Verica

Here’s how non-SREs can use SRE principles to improve their systems.

Laurel Frazier — Transposit

Outages

A production of Tinker Tinker Tinker, LLC