General

SRE Weekly Issue #295

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channel, Jira and Zoom, paging the right team, postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.com/?utm_source=sreweekly

Articles

I love this crystal clear argument based on statistics and research. MTTR as a metric is simply meaningless.

Courtney Nash — Verica

Their steps for better communication during an outage:

  • Provide context to minimise speculation
  • Explain what you’re doing to demonstrate you’re ‘on it’
  • Set some expectations for when things will return to normal
  • Tell people what they should do
  • Let folks know when you’ll be updating them next

Chris Evans — incident.io

Despite checking in advance to be sure their systems would support the new Let’s Encrypt certificate chain, they ran into trouble.

[…] we discovered that several HTTP client libraries our systems use were using their own vendored root certificates.

Heroku
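
(A quick way to check whether a client trusts the system CA store or its own vendored bundle, sketched here in Python; certifi and the paths shown are illustrative assumptions, not the specific libraries Heroku ran into.)

    # Sketch: locating the trust stores a Python HTTP stack might use.
    # Assumes the 'certifi' package is installed; libraries like requests
    # use certifi's bundled CA file unless told otherwise.
    import ssl
    import certifi

    # CA paths the interpreter's OpenSSL build falls back to
    print("system default CA paths:", ssl.get_default_verify_paths())

    # the vendored bundle shipped with certifi
    print("vendored CA bundle:", certifi.where())

    # A vendored bundle that hasn't picked up ISRG Root X1 would fail to
    # verify certificates issued against the newer Let's Encrypt chain.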

This is the best case I’ve seen yet against multi-cloud infrastructure. I really like the airline analogy.

Lydia Leong

Roblox had a major, several-day outage starting on October 28. I don’t usually include game outages in the Outages section, since they’re so common and there’s not usually much information to learn from, but I sure do like a good post-incident report. Thanks, folks!

David Baszucki — Roblox

When you’re sending small TCP packets, two optimizations can conspire to introduce an artificial 40 millisecond (not microsecond…) delay.

Vorner
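
(The stall described above is presumably the classic interaction between Nagle’s algorithm and delayed ACKs; here is a minimal Python sketch of the usual workaround, disabling Nagle via TCP_NODELAY. The host, port, and request are placeholders.)

    # Sketch: turn off Nagle's algorithm so small writes go out immediately
    # instead of waiting on the peer's delayed ACK (~40 ms on Linux).
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    sock.connect(("example.com", 80))
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(sock.recv(1024))
    sock.close()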

Here’s Google’s follow-up report for their October 25-26 Meet outage.

Should you count failed requests toward your SLI if the client retries and succeeds? A good argument can be made on either side.

u/Sufficient_Tree4275 and other Reddit users
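
(A toy calculation, with made-up numbers not taken from the thread, showing how far the two counting choices can diverge:)

    # Toy numbers: the same traffic measured two ways.
    logical_requests = 1000    # requests as users see them
    first_try_failures = 50    # attempts that failed and were retried
    retry_failures = 5         # retries that also failed (user saw an error)

    # Count every attempt the backend served (retries add attempts):
    total_attempts = logical_requests + first_try_failures
    failed_attempts = first_try_failures + retry_failures
    per_attempt_sli = 1 - failed_attempts / total_attempts

    # Count only what the user ultimately experienced:
    per_user_sli = 1 - retry_failures / logical_requests

    print(f"per-attempt SLI:    {per_attempt_sli:.3%}")  # ~94.762%
    print(f"user-perceived SLI: {per_user_sli:.3%}")     # 99.500%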

Mercari restructured its SRE team, moving toward an embedded model to adapt to their growing microservice architecture.

ShibuyaMitsuhiro — Mercari

There’s a really great discussion in this episode about leaving slack in the system in the form of bits of capacity and inefficiency that can be drawn upon to buy time during an outage.

Courtney Nash, with guests Liz Fong-Jones and Fred Hebert — Verica

Here’s how non-SREs can use SRE principles to improve their systems.

Laurel Frazier — Transposit

Outages

SRE Weekly Issue #294

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channel, Jira and Zoom, paging the right team, postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.com/?utm_source=sreweekly

Articles

The steps are:

  • Know How Much Time Is Spent On Toil
  • Find The Toil
  • Determine The Root Causes Of Toil
  • Find And Prioritize The Low-Hanging Fruit
  • Promote Toil Reduction

Aater Suleman — Forbes

I like how they try to strike a balance and avoid going too deep in the review, while still hitting everything important.

Milan Plžík — Grafana Labs

Lots of good stuff in this one about one of my favorite topics, service ownership.

Kenneth Rose — OpsLevel

This is the intro I needed to understand Conflict-Free Replicated Data Types.

Jo Stichbury — Ably

Availability, maintainability and reliability all have distinct—if related—meanings, and they each play different roles in reliability operations.

JJ Tang — DevOps.com

The five Ps come from medicine and the study of medical accidents, but they apply equally well to analyzing incidents in IT.

Lydia Leong

I really love the focus on de-emphasizing finding action items in incident retrospectives, in favor of learning.

Gergely Orosz — The Pragmatic Engineer

Outages

  • AT&T SMS in the US
    • This week, I saw several status pages point to some kind of problem in their ability to send SMS notifications to AT&T phones. I thought this was interesting because usually I don’t learn about an outage solely from other companies’ status pages.
  • Google Meet
  • Tesco
  • Coinbase
  • Zomato
  • Barclays
  • HSBC

SRE Weekly Issue #293

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channel, Jira and Zoom, paging the right team, postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.io/?utm_source=sreweekly

Articles

It’s one thing to say you accept call-outs of unsafe situations — it’s another to actually do it. This cardiac surgeon shares what it’s like when high reliability organizations get it wrong.

Robert Poston, MD

The game has been a victim of its own success, and the developers have had to put in quite a lot of work to deal with the load.

PezRadar — Blizzard

This includes some lesser-known roles like Social Media Lead, Legal/Compliance Lead, and Partner Lead.

JJ Tang — Rootly

This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

There are a couple of great sections in this article, including “blameless” retrospectives that aren’t actually blameless, and being judicious in which remediation actions you take.

Chris Evans — incident.io

I love the idea that Chaos Monkey could actually be propping your infrastructure up. Oops.

Lorin Hochstein

I have to say, I’m really liking this DNS series.

Jan Schaumann

What? Why the heck am I including this here?

First, let’s all keep in mind that this situation is still very much unfolding, and not much is concretely known about what happened. It’s also emotionally fraught, especially for the victims and their families, and my heart goes out to them.

The thing that caught my eye about this article is that this looks like a classic complex system failure. There’s so much at play that led to this horrible accident, as outlined in this article and others, like this one (Julia Conley, Salon).

Aya Elamroussi, Chloe Melas and Claudia Dominguez — CNN

Outages

SRE Weekly Issue #292

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channel, Jira and Zoom, paging the right team, postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.io/?utm_source=sreweekly

Articles

The lessons:

  1. Acknowledge human error as a given and aim to compensate for it
  2. Conduct blameless post-mortems
  3. Avoid the “deadly embrace”
  4. Favor decentralized IT architectures

There have been quite a few of these “lessons learned” articles that I’ve passed over, but I feel like this one is worth reading.

Anurag Gupta — Shoreline.io

Niall Murphy

Could us-east-1 go away? What might you do about it? Let’s catastrophize!

I love catastrophizing!

Tim Bray

When evaluating options, this article focuses on reliability, both of the service itself and the options it provides for building reliable services on it.

Quentin Rousseau — Rootly

This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

This one answers the questions: what are failure domains, and how can we structure them to improve reliability?

brandon willett

It’s a great list of questions, and it covers a lot of ground. SREs wear many hats.

Opsera

I’ve always been curious about how Prometheus and similar time-series DBs compress metric data. Now I know!

Alex Vondrak — Honeycomb
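
(Prometheus’s chunk encoding builds on the Gorilla paper; here is a much-simplified Python sketch of the delta-of-delta idea for timestamps, not the real on-disk format.)

    # Simplified delta-of-delta encoding for timestamps: regular scrape
    # intervals make the second-order deltas mostly zero, so they pack
    # into a few bits each instead of 64 bits per raw timestamp.
    def delta_of_delta(timestamps):
        deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
        dods = [b - a for a, b in zip(deltas, deltas[1:])]
        return timestamps[0], deltas[0], dods

    ts = [1000, 1015, 1030, 1045, 1061, 1076]  # ~15 s scrape interval
    print(delta_of_delta(ts))  # (1000, 15, [0, 0, 1, -1])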

This one has some unconfirmed (but totally plausible!) deeper details about what might have gone wrong in the Facebook outage, sourced from rumors.

rachelbythebay

There’s a really intriguing discussion in here about why organizations might justify a choice of profit at the expense of safety, and how the deck is stacked.

Rob Poston

Outages

SRE Weekly Issue #291

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channel, Jira and Zoom, paging the right team, postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.io/?utm_source=sreweekly

Articles

Facebook’s outage caused significantly increased load on DNS resolvers, among other effects. Cloudflare also published this follow-up article with more findings.

Celso Martinho and Sabina Zejnilovic — Cloudflare

Shell (the oil company) reduced accidents by 84% by teaching roughnecks to cry. Listen to this podcast (or check it out in article form) to find out how. Can we apply this to SRE?

Alix Spiegel and Hanna Rosin — NPR’s Invisibilia

Don’t have time to read Google’s entire report? Here are the highlights.

Quentin Rousseau — Rootly

I really like how open Facebook engineering has been about what went wrong on Monday. This article is an update on their initial post.

Santosh Janardhan — Facebook

Want to learn about BGP? Ride along as Julia Evans learns. I especially like how she whipped out strace to figure out how traceroute was determining ASNs.

Julia Evans

The Verica Open Incident Database is an exciting new project that seeks to create a catalog of public incident postings. Click through to check out the VOID and read the inaugural paper with initial findings. I’m really excited to see what this project brings!

Courtney Nash — Verica

Printing versus setting a date — they’re only separated by a typo. Perhaps something similar happened with Facebook’s outage.

rachelbythebay

Adopting a microservice architecture can strain your SRE team. This article highlights an oft-missed section of the SRE book about scaling SRE.

Tyler Treat

Outages
