SRE Weekly Issue #336

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channel, Jira and Zoom, paging and adding responders, postmortem timeline, setting up reminders, and more. Book a demo (+ get a snazzy Rootly lego set):
https://rootly.com/demo/

Articles

In this article, I will introduce several improvements being made by the Microservices SRE Team, embedded with other teams.

  Mizumoto Shota — Mercari

What really stood out to me in this article is the Service Info section. A dashboard will quickly atrophy and lose its meaning without an explanation of what it’s for.

  Ali Sattari

When things go wrong, who is in charge? And what does it feel like to be in that role?

This is a summary of a forum discussion about incident command, in case you don’t have time to listen to the whole thing.

  Emily Arnott — Blameless

Complex systems are weird, and a traditional deterministic view like the one in older ITIL iterations doesn’t capture the situation. We need to evolve our practices.

  Jon Stevens-Hall

How can you design and interpret metrics for systems optimized for latency or throughput?

  Dan Slimmon

You can optimize a given system for latency or for throughput, but not both, since the two are directly at odds (see the toy batching example below).

  Dan Slimmon
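
A hedged aside from me, not from either post: here’s a toy batching model, with entirely invented numbers, that shows the tension being described. Bigger batches raise throughput, but every request spends longer waiting.

    # Toy model: a server that processes requests in fixed-size batches.
    # Each batch costs a fixed overhead plus a per-request cost, so larger
    # batches improve throughput while making every request wait longer.

    BATCH_OVERHEAD_MS = 5.0   # fixed cost per batch (assumed)
    PER_REQUEST_MS = 1.0      # marginal cost per request (assumed)

    def batch_stats(batch_size: int) -> tuple[float, float]:
        """Return (throughput in req/s, mean latency in ms) for a batch size."""
        batch_time_ms = BATCH_OVERHEAD_MS + PER_REQUEST_MS * batch_size
        throughput = batch_size / (batch_time_ms / 1000.0)
        # A request waits about half a batch interval to be admitted,
        # then the full batch processing time.
        mean_latency_ms = batch_time_ms / 2.0 + batch_time_ms
        return throughput, mean_latency_ms

    for size in (1, 10, 100):
        tput, lat = batch_stats(size)
        print(f"batch={size:>3}  {tput:7.0f} req/s  mean latency {lat:6.1f} ms")

Going from batch size 1 to 100, throughput climbs by a factor of about six while mean latency climbs more than seventeen-fold: the tradeoff the two posts above approach from the measurement side.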

SRE Weekly Issue #335

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channel, Jira and Zoom, paging and adding responders, postmortem timeline, setting up reminders, and more. Book a demo (+ get a snazzy Rootly lego set):
https://rootly.com/demo/

Articles

I really like that “Missing” section in their incident retrospective template. Gotta be careful with “Missed”, though; that sounds like it could slide toward blame.

  Varun Achar — Razorpay

“Unreasonable” is a great way to avoid learning from an incident:

Labeling the responders’ actions as unreasonable enables us to explain away the failures in the law enforcement response as deficiencies with the individual responders.

  Lorin Hochstein

The author of this post doesn’t dispute that Fastly is clearly a single point of failure for many of its customers. But does that really matter?

  Jon Stevens-Hall
Full disclosure: Fastly, my employer, is mentioned.

Small problems can pile up unnoticed and interact weirdly to make a Big Problem that is incredibly hard to untangle. Maybe we should hunt down the small problems before they have a chance to trigger a Big one.

  Dan Slimmon

Apologizing for bugs encourages a lot of problematic thought patterns, much in the same way as blaming people for incidents.

  Dan Slimmon

SRE Weekly Issue #334

I’ll be on vacation starting next Sunday (yay!). That means the next two issues will be prepared in advance, so there won’t be an Outages section.

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channel, Jira and Zoom, paging and adding responders, postmortem timeline, setting up reminders, and more. Book a demo (+ get a snazzy Rootly lego set):
https://rootly.com/demo/

Articles

Should you go multi-cloud? What should you do during an incident involving a third-party dependency? What about after? Read this one for all that and more.

  Lisa Karlin Curtis — incident.io
Full disclosure: Fastly, my employer, is mentioned.

An introduction to the concept of common ground breakdown, using the Uvalde shooting in the US as a case study.

  Lorin Hochstein

The comments section is full of some pretty great advice, including questions you can ask while interviewing to suss out whether the on-call culture is going to be livable.

  u/dicksoutfoeharambe (and others) — reddit

From the archives, this is an analysis of a report on the 2018 major outage at TSB Bank in the UK.

  Jon Stevens-Hall

You can determine whether backoff will actually help your system, and this article does a great job of telling you how; a small sketch of the mechanism itself follows below.

  Marc Brooker
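
The article is about whether backoff helps at all, which no snippet can answer for you; still, for readers who want the shape of the mechanism, here is a minimal sketch of capped exponential backoff with full jitter. The names and constants are mine, not from the article.

    import random
    import time

    BASE_DELAY_S = 0.1   # first retry delay (assumed)
    MAX_DELAY_S = 10.0   # cap so a retry never waits unboundedly long (assumed)

    def call_with_backoff(operation, max_attempts: int = 5):
        """Retry `operation`, sleeping a jittered, exponentially growing delay."""
        for attempt in range(max_attempts):
            try:
                return operation()
            except Exception:
                if attempt == max_attempts - 1:
                    raise
                # Full jitter: pick a random delay up to the exponential cap,
                # which spreads retries out instead of synchronizing them.
                cap = min(MAX_DELAY_S, BASE_DELAY_S * 2 ** attempt)
                time.sleep(random.uniform(0, cap))

Whether a loop like this actually reduces load on the struggling dependency is exactly the question the article helps you answer before you reach for it.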

I’ve read (and written) plenty of IC training guides, but this is the first time I’ve come across the concept of a “Hands-Off Update”. I’m definitely going to use that!

  Dan Slimmon

This is a really great explanation of observability from an angle I haven’t seen before.

a metric dashboard only contributes to observability if its reader can interpret the curves they’re seeing within a theory of the system under study.

  Dan Slimmon

Outages

  • Twitter
  • Google Search
    • Did you catch the Google search outage? I’ve never seen one like it — that’s how rare they are. Google shared a tidbit of information about what went wrong — and it wasn’t the datacenter explosion folks speculated about.

  • Peloton

SRE Weekly Issue #333

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channel, Jira and Zoom, paging and adding responders, postmortem timeline, setting up reminders, and more. Book a demo (+ get a snazzy Rootly lego set):
https://rootly.com/demo/

Articles

They asked four people and got four answers that run the gamut.

  Jeff Martens — Metrist

How Airbnb automates incident management across a complex, rapidly evolving ensemble of microservices.

Includes an overview of their ChatOps system that would make for a great blueprint to build your own.

  Vlad Vassiliouk — Airbnb

Rigidly categorizing incidents can cause problems, according to this article.

From the customer’s viewpoint… well why would they care what kind of technical classification it is being forced into?

  Jon Stevens-Hall

Lots of great advice in this one, including these rules of thumb (sketched as code after the list):

  • If no human needs to be involved, it’s pure automation.
  • If it doesn’t need a response right now, it’s a report.
  • If the thing you’re observing isn’t a problem, it’s a dashboard.
  • If nothing actually needs to be done, you should delete it.

   Leon Adato — New Relic
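
Here’s that sketch: a hedged translation of the four rules into code, in the order they’re given. The Signal shape and its field names are mine, not New Relic’s.

    from dataclasses import dataclass

    @dataclass
    class Signal:
        """A hypothetical description of an existing alert."""
        needs_human: bool         # does a person ever have to act on it?
        needs_response_now: bool  # does it have to be handled right now?
        indicates_problem: bool   # is the thing it watches actually a problem?
        has_action: bool          # is there anything that needs to be done?

    def triage(signal: Signal) -> str:
        """Apply the article's four rules of thumb, in order."""
        if not signal.needs_human:
            return "pure automation"
        if not signal.needs_response_now:
            return "report"
        if not signal.indicates_problem:
            return "dashboard"
        if not signal.has_action:
            return "delete it"
        return "keep it as an alert"

    # Example: important, but nobody has to get out of bed for it.
    print(triage(Signal(True, False, True, True)))  # -> "report"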

Using the recent Atlassian outage as a case study, this article explains the importance of communication during an incident, then goes over best practices.

  Martha Lambert — incident.io

My favorite part about this is the advice to “lower the cost of being wrong”. Important in any case, but especially during incident response.

  Emily Arnott — Blameless

There are some interesting incidents in this issue: one involving DNS and another with an overload involving over-eager retries.

  Jakub Oleksy — GitHub

A great read both for interviewers and interviewees.

  Myra Nizami — Blameless

Their main advice is to avoid starting with a microservice architecture, and only transition to one after your monolith has matured and you have a good reason to do so.

  Tomas Fernandez and Dan Ackerson — semaphore

Outages

SRE Weekly Issue #332

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channel, Jira and Zoom, paging and adding responders, postmortem timeline, setting up reminders, and more. Book a demo (+ get a snazzy Rootly lego set):
https://rootly.com/demo/

Articles

Their notification service had complex load characteristics that made scaling up a tricky proposition.

  Anand Prakash — Razorpay

Coalescing alerts and adding dependencies in AlertManager were the key to reducing this team’s excessive pager load.

  steveazz — GitLab

Lorin Hochstein has started a series of blog posts on what we can learn about incident response from the Uvalde school shooting tragedy in the US. This article looks at how an organization’s perspective can affect their retrospective incident analysis.

  Lorin Hochstein

My claim here is that we should assume the officer is telling the truth and was acting reasonably if we want to understand how these types of failure modes can happen.

Every retrospective ever:

We must assume that a person can act reasonably and still come to the wrong conclusion in order to make progress.

  Lorin Hochstein

How do you synchronize state between multiple browsers and a backend, and ensure that everyone’s state will eventually converge? These folks explain how they did it, and a bug they found through testing (a toy convergence sketch follows below).

  Jakub Mikians — Airspace Intelligence
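
The article walks through their actual protocol and the bug their tests caught, which I won’t try to reproduce here. As a generic illustration of the “everyone eventually converges” property, here is a toy last-writer-wins merge; it’s a stand-in of mine, not their design.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Entry:
        """A value tagged with (timestamp, writer_id), used to break ties."""
        value: str
        timestamp: float
        writer_id: str

    def merge(a: dict[str, Entry], b: dict[str, Entry]) -> dict[str, Entry]:
        """Last-writer-wins merge of two replicas' key/value state.

        merge() is commutative, associative, and idempotent, so replicas
        that have seen the same set of updates end up identical regardless
        of the order those updates arrived in.
        """
        merged = dict(a)
        for key, entry in b.items():
            current = merged.get(key)
            if current is None or (entry.timestamp, entry.writer_id) > (
                current.timestamp,
                current.writer_id,
            ):
                merged[key] = entry
        return merged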

MTTR is a mean, so it doesn’t tell you anything about the number of incidents, among other potential pitfalls (a quick numeric example follows).

  Dan Slimmon
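
A quick, entirely made-up illustration of that pitfall: two quarters with identical MTTR and very different realities.

    # Hypothetical incident durations (minutes); the numbers are invented.
    q1_durations = [30, 30, 30]                # 3 incidents
    q2_durations = [30, 30, 30, 30, 30, 30,
                    30, 30, 30, 30, 30, 30]    # 12 incidents

    mttr_q1 = sum(q1_durations) / len(q1_durations)
    mttr_q2 = sum(q2_durations) / len(q2_durations)

    print(mttr_q1, mttr_q2)                      # 30.0 30.0 -- "nothing changed"?
    print(sum(q1_durations), sum(q2_durations))  # 90 vs 360 minutes of incident time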

Last week, I included a GCP outage in europe-west2. This week, Google posted this report about what went wrong, and it’s got layers.

Bonus: another GCP outage report

  Google

Meta wants to do away with leap seconds, because they make it especially difficult to create reliable systems.

  Oleg Obleukhov and Ahmad Byagowi — Meta

If you’re anywhere near incident analysis in your organization, you need to read this list.

  Milly Leadley — incident.io

Outages

A production of Tinker Tinker Tinker, LLC