SRE Weekly Issue #293

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira issue, and Zoom meeting, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.io/?utm_source=sreweekly

Articles

It’s one thing to say you accept call-outs of unsafe situations — it’s another to actually do it. This cardiac surgeon shares what it’s like when high reliability organizations get it wrong.

Robert Poston, MD

The game has been a victim of its own success, and the developers have had to put in quite a lot of work to deal with the load.

PezRadar — Blizzard

This includes some lesser-known roles like Social Media Lead, Legal/Compliance Lead, and Partner Lead.

JJ Tang — Rootly

This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

There are a couple of great sections in this article, including “blameless” retrospectives that aren’t actually blameless, and being judicious in which remediation actions you take.

Chris Evans — incident.io

I love the idea that Chaos Monkey could actually be propping your infrastructure up. Oops.

Lorin Hochstein

I have to say, I’m really liking this DNS series.

Jan Schaumann

What? Why the heck am I including this here?

First, let’s all keep in mind that this situation is still very much unfolding, and not much is concretely known about what happened. It’s also emotionally fraught, especially for the victims and their families, and my heart goes out to them.

The thing that caught my eye about this article is that this looks like a classic complex system failure. There’s so much at play that led to this horrible accident, as outlined in this article and others, like this one (Julia Conley, Salon).

Aya Elamroussi, Chloe Melas and Claudia Dominguez — CNN

Outages

SRE Weekly Issue #292

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira issue, and Zoom meeting, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.io/?utm_source=sreweekly

Articles

The lessons:

  1. Acknowledge human error as a given and aim to compensate for it
  2. Conduct blameless post-mortems
  3. Avoid the “deadly embrace” (see the sketch below)
  4. Favor decentralized IT architectures

There have been quite a few of these “lessons learned” articles that I’ve passed over, but I feel like this one is worth reading.
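
On lesson 3: the “deadly embrace” is the old name for a deadlock, where two parties each hold a resource the other needs. As a toy illustration (mine, not from the article), here’s a minimal Python sketch of two workers grabbing the same pair of locks in opposite order:

    import threading
    import time

    lock_a = threading.Lock()
    lock_b = threading.Lock()

    def worker(first, second, name):
        with first:
            time.sleep(0.1)  # give the other worker time to grab its first lock
            # In a real deadly embrace this second acquire blocks forever;
            # the timeout is only here so the demo terminates.
            if not second.acquire(timeout=1):
                print(f"{name}: deadly embrace detected, giving up")
                return
            second.release()
            print(f"{name}: finished")

    # Each worker takes the locks in the opposite order -- the classic recipe.
    t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "worker-1"))
    t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "worker-2"))
    t1.start(); t2.start()
    t1.join(); t2.join()

The usual fix is a single agreed-upon acquisition order, or removing the circular dependency entirely.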

Anurag Gupta — Shoreline.io

Niall Murphy

Could us-east-1 go away? What might you do about it? Let’s catastrophize!

I love catastrophizing!

Tim Bray

When evaluating options, this article focuses on reliability: both the reliability of the service itself and the options it provides for building reliable services on top of it.

Quentin Rousseau — Rootly

This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

This one answers the questions: what are failure domains, and how can we structure them to improve reliability?
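
To give a flavor of the idea, here’s a toy of my own (not from the article): treat availability zones as failure domains and spread each shard’s replicas across them, so losing any one zone never takes out every copy.

    from itertools import cycle

    # Hypothetical failure domains (availability zones), for illustration only.
    ZONES = ["us-east-1a", "us-east-1b", "us-east-1c"]

    def place_replicas(shard, replicas=3):
        if replicas > len(ZONES):
            raise ValueError("more replicas than failure domains; some must share a zone")
        zones = cycle(ZONES)
        return [(f"{shard}/r{i}", next(zones)) for i in range(replicas)]

    print(place_replicas("users-shard-7"))
    # [('users-shard-7/r0', 'us-east-1a'),
    #  ('users-shard-7/r1', 'us-east-1b'),
    #  ('users-shard-7/r2', 'us-east-1c')]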

brandon willett

It’s a great list of questions, and it covers a lot of ground. SREs wear many hats.

Opsera

I’ve always been curious about how Prometheus and similar time-series DBs compress metric data. Now I know!
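
The broad trick (Prometheus’s TSDB follows the approach from Facebook’s Gorilla paper) is delta-of-delta encoding for timestamps plus XOR compression for the float samples. Here’s a toy Python sketch of just the timestamp half, as an illustration rather than the real encoder:

    def delta_of_delta(timestamps):
        # Toy encoder; assumes at least two samples.
        deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
        dods = [b - a for a, b in zip(deltas, deltas[1:])]
        return timestamps[0], deltas[0], dods

    # A 15-second scrape interval with a little jitter:
    ts = [1000, 1015, 1030, 1046, 1060, 1075]
    print(delta_of_delta(ts))  # (1000, 15, [0, 1, -2, 1])

Because scrapes are nearly periodic, the delta-of-deltas are mostly zero, and the real encoder bit-packs them into a bit or two per sample.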

Alex Vondrak — Honeycomb

This one has some unconfirmed (but totally plausible!) deeper details about what might have gone wrong in the Facebook outage, sourced from rumors.

rachelbythebay

There’s a really intriguing discussion in here about why organizations might justify a choice of profit at the expense of safety, and how the deck is stacked.

Rob Poston

Outages

SRE Weekly Issue #291

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira issue, and Zoom meeting, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.io/?utm_source=sreweekly

Articles

Facebook’s outage caused significantly increased load on DNS resolvers, among other effects. Cloudflare also published this followup article with more findings.

Celso Martinho and Sabina Zejnilovic — Cloudflare

Shell (the oil company) reduced accidents by 84% by teaching roughnecks to cry. Listen to this podcast (or check it out in article form) to find out how. Can we apply this to SRE?

Alix Spiegel and Hanna Rosin — NPR’s Invisibilia

Don’t have time to read Google’s entire report? Here are the highlights.

Quentin Rousseau — Rootly

I really like how open Facebook engineering has been about what went wrong on Monday. This article is an update on their initial post.

Santosh Janardhan — Facebook

Want to learn about BGP? Ride along as Julia Evans learns. I especially like how she whipped out strace to figure out how traceroute was determining ASNs.

Julia Evans

The Verica Open Incident Database is an exciting new project that seeks to create a catalog of public incident postings. Click through to check out the VOID and read the inaugural paper with initial findings. I’m really excited to see what this project brings!

Courtney Nash — Verica

Printing versus setting a date — they’re only separated by a typo. Perhaps something similar happened with Facebook’s outage.

rachelbythebay

Adopting a microservice architecture can strain your SRE team. This article highlights an oft-missed section of the SRE book about scaling SRE.

Tyler Treat

Outages

SRE Weekly Issue #290

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira issue, and Zoom meeting, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.io/?utm_source=sreweekly

Articles

Despite carefully testing how they would handle this week’s expiration of the root CA that cross-signed Let’s Encrypt’s CA certificate, they had an outage. The reason? Poor behavior in OpenSSL. See the next article for a deeper explanation of what went wrong with OpenSSL.

Oren Eini — RavenDB

This article explains why some versions of OpenSSL are unable to validate certificates issued by Let’s Encrypt now, even though the certificates should be considered valid.

Ryan Sleevi

This says it all:

It turns out that the path to safety isn’t increased complexity.

Matt Asay — TechRepublic

The thrust of this article is that reliability applies to and should matter to the entire company, not just engineering. I really like the term “pitchfork alerting”.

Robert Ross — FireHydrant

Lesson learned: always make your application server’s timeout longer than your reverse proxy’s.
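
To make that concrete, here’s a hypothetical example of mine (not from the article), assuming nginx in front of gunicorn; a gunicorn config file is plain Python:

    # gunicorn.conf.py -- illustrative values only
    #
    # Suppose nginx in front is configured with:
    #   proxy_connect_timeout 5s;
    #   proxy_read_timeout    30s;
    #
    # Keeping the application server's timeout longer than the proxy's means
    # the proxy times out cleanly (504) instead of the worker being killed
    # mid-response while the proxy is still waiting on it.
    timeout = 60            # seconds a worker may spend on one request
    graceful_timeout = 30   # seconds workers get to finish in-flight requests on restart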

Ivan Velichko

Who deploys the deploy tool? The deploy tool, obviously — unless it’s down.

Lorin Hochstein

Their approach: group tables into “schema domains”, make sure that queries don’t span schema domains, and then move a schema domain to its own separate database cluster.
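
As a rough sketch of the “queries don’t span schema domains” rule (my toy, not GitHub’s actual tooling), a linter-style check might look like this:

    import re

    # Hypothetical table-to-domain mapping, for illustration only.
    SCHEMA_DOMAINS = {
        "repositories": "repositories",
        "issues": "repositories",
        "gists": "gists",
        "notifications": "notifications",
    }

    def domains_for_query(sql):
        # Very rough: pull table names after FROM/JOIN and map them to domains.
        tables = re.findall(r"\b(?:from|join)\s+([a-z_]+)", sql, flags=re.IGNORECASE)
        return {SCHEMA_DOMAINS[t] for t in tables if t in SCHEMA_DOMAINS}

    def assert_single_domain(sql):
        domains = domains_for_query(sql)
        if len(domains) > 1:
            raise ValueError(f"query spans schema domains {sorted(domains)}: {sql}")

    # Fine: both tables live in the 'repositories' domain.
    assert_single_domain("SELECT * FROM issues JOIN repositories ON ...")

    # Flagged: a cross-domain join would block moving 'gists' to its own cluster.
    try:
        assert_single_domain("SELECT * FROM issues JOIN gists ON ...")
    except ValueError as err:
        print(err)

Once no query crosses a domain boundary, a domain’s tables can be moved to their own cluster without breaking any joins.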

Thomas Maurer — GitHub

Groot is about helping figure out what’s wrong during an incident, not about analyzing an incident after the fact. I totally get why they need this tool, since they have over 5000 microservices!

Hanzhang Wang — eBay

SRE is a broad, overarching responsibility that needs a multitude of role considerations to pull off properly.

Ash P — Cruform

Outages

  • Heroku
    • (also this one) Heroku had a major outage that coincided with an Amazon EBS failure in a single availability zone in us-east-1. Customers of Heroku such as Dead Man’s Snitch were impacted.
  • Slack
    • Slack had a big disruption related to DNSSEC. Here’s an interesting analysis of what may have gone wrong (link).
  • Let’s Encrypt
    • Let’s Encrypt saw heavy traffic as everyone clamored to renew their certificates, causing certificate issuance to slow down.
  • Microsoft 365
  • Apple’s “Find My” service
  • Signal
  • Xero
    • This one coincided with the same Amazon EBS outage mentioned above. Xero also had another outage on October 1.

SRE Weekly Issue #289

A message from our sponsor, StackHawk:

Semgrep and StackHawk are showing you what’s new with automated security testing on September 30. Grab your spot:
https://sthwk.com/whats-new-webinar

Articles

Here are some things that make SREs a unique breed in software work:

The one about Scrum caught my eye, and I followed the links through to the Stack Overflow post about SRE and Scrum.

Ash P — Cruform

An in-depth explainer on the Linux page cache, full of details and experiments.

Viacheslav Biriukov

There’s some great advice in this reddit thread… and maybe some tongue-in-cheek advice too.

Take production down the first day they give access — then it’s nothing but up from there!

Various — reddit

Using two real-world case studies, this article explains how developer self-service can go wrong, and then discusses how to avoid these pitfalls.

Kaspar von Grünberg — humanitec

What a great idea! I found it especially interesting that only 34% of SRE job postings mention defining SLIs/SLOs/error budgets.

Pruthvi — Spike.sh

For the first time, we’ve created the State of Digital Operations Report which is based on PagerDuty platform data.
[…]
we will walk through some of these findings and share 10 questions teams can ask themselves to improve their incident response.

Hannah Culver — PagerDuty

Incident response so often gets mired in assumptions that need to be re-evaluated. This article uses an incident as a case study.

Lawrence Jones — incident.io

This one lays out clear definitions of SRE and DevOps and compares and contrasts them.

Mateus Gurgel — Rootly

This week, Salesforce released Merlion, a Python library for time series machine learning and anomaly detection. Linked is an in-depth research paper on Merlion, explaining its theory of operation and experimental results.

Bhatnagar et al. — Salesforce

Outages
