SRE Weekly Issue #179

A message from our sponsor, VictorOps:

A good SRE manager can make or break your site reliability engineering team. Learn all about the duties of an SRE manager and the best practices for building a highly effective SRE program:

http://try.victorops.com/sreweekly/duties-of-effective-sre-managers

Articles

This is an engrossing write-up of the Chernobyl incident from the perspective of complex systems and failure analysis.

Barry O’Reilly

Slack’s Disasterpiece Theater isn’t quite chaos engineering, but it’s arguably better in some ways. They carefully craft scenarios to test their system’s resiliency, verifying (or disproving!) their hypothesis that a given disruption will be handled by the system without an incident. They share three riveting stories of lessons learned from past exercises.

The process each Disasterpiece Theater exercise follows is designed to maximize learning while minimizing risk of a production incident.

Richard Crowley — Slack

The above is the title of this YouTube playlist curated by John Allspaw.

My favorite sentence:

If you think an incident is “too common” to get its own postmortem that’s a good indicator that there’s a deeper issue that we need to address, and an excellent opportunity to apply our postmortem process to it.

Fran Garcia — HostedGraphite

In this post, we’ll share the algorithms and infrastructure that we developed to build a real-time, scalable anomaly detection system for Pinterest’s key operational timeseries metrics. Read on to hear about our learnings, lessons, and plans for the future.
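
If you haven't worked with timeseries anomaly detection before, here's a rough idea of the shape of the problem. This is only a generic rolling-median/MAD sketch to set the scene; the window size, threshold, and choice of statistic are my own assumptions, not Pinterest's algorithm.

```python
# A minimal rolling-statistics anomaly detector for an operational
# timeseries. Generic illustration only; window, threshold, and the
# median/MAD choice are assumptions, not Pinterest's approach.
from statistics import median

def detect_anomalies(values, window=30, threshold=5.0):
    """Flag points that deviate sharply from the recent median.

    Median absolute deviation (MAD) is used so a single spike
    doesn't inflate the baseline the way a mean/stddev would.
    """
    anomalies = []
    for i in range(window, len(values)):
        recent = values[i - window:i]
        med = median(recent)
        mad = median(abs(v - med) for v in recent) or 1e-9
        score = abs(values[i] - med) / mad
        if score > threshold:
            anomalies.append((i, values[i], score))
    return anomalies

# Example: a flat metric with one sudden spike.
series = [100 + (i % 3) for i in range(60)] + [400] + [100, 101, 102]
print(detect_anomalies(series))   # flags the 400
```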

I sure do love a good debugging story.

Eve Harris — Ably

When an incident occurs, your company is faced with a choice: do you seek to learn as much as possible about how it happened, or do you seek to find out who messed up?

Phillip Dowland — Safety Differently

Outages

SRE Weekly Issue #178

A message from our sponsor, VictorOps:

Containers and microservices can improve development speed and service flexibility. But more complex systems have a higher potential for incidents. Learn how SRE teams are building more reliable services and adding context to microservices and containerized environments:

http://try.victorops.com/sreweekly/container-monitoring-and-alerting-best-practices

Articles

Imagine a database that promises consistency except in the case of a network partition, in which case it favors availability. That’s conditional consistency, and it’s effectively the same as no consistency.

Daniel Abadi
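
Here's a tiny sketch of why that guarantee is worthless in practice: the client can never tell whether a partition is in progress, so every read has to be treated as potentially stale anyway. The store and replica classes below are invented for illustration, not any real database's API.

```python
# A minimal sketch of why "consistent except during a network partition"
# offers no usable guarantee. The class names and the partitioned flag
# are illustrative assumptions, not a real database's interface.
class Replica:
    def __init__(self):
        self.value = None

class ConditionallyConsistentStore:
    def __init__(self):
        self.primary = Replica()
        self.secondary = Replica()
        self.partitioned = False   # clients cannot observe this flag

    def write(self, value):
        self.primary.value = value
        if not self.partitioned:
            self.secondary.value = value   # replication stops during a partition

    def read(self):
        # Reads are served from a secondary to stay available.
        return self.secondary.value

store = ConditionallyConsistentStore()
store.write("v1")
store.partitioned = True    # partition begins, invisibly to the client
store.write("v2")
print(store.read())         # "v1": stale, yet indistinguishable from a fresh read
```

Since the client can't distinguish a stale read from a fresh one, it has to code defensively for staleness on every read, which is exactly what "no consistency" would require.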

This is a story about distributed coordination, the TCP API, and how we debugged and fixed a bug in Puma that only shows up at scale.

Richard Schneeman — Heroku

Here’s more on the Australian Tax Office outage earlier this month.

Max Smolaks — The Register

Ever experience a total outage while your cloud provider still reports 99.999% availability? This one’s for you.

rachelbythebay
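
Some back-of-the-envelope arithmetic shows how this happens: availability averaged across a huge fleet barely notices one customer's total outage. The fleet size and outage length below are made-up numbers for illustration.

```python
# How fleet-wide availability can stay at "five nines" while one
# customer sees a total outage. All numbers are invented.
fleet_instances = 1_000_000
month_minutes = 30 * 24 * 60

your_outage_minutes = 120                      # your only instance was hard down for 2 hours
fleet_downtime_minutes = your_outage_minutes   # everyone else was fine

fleet_availability = 1 - fleet_downtime_minutes / (fleet_instances * month_minutes)
your_availability = 1 - your_outage_minutes / month_minutes

print(f"Provider-reported availability: {fleet_availability:.7%}")  # well above five nines
print(f"Your availability:              {your_availability:.3%}")   # roughly 99.7%
```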

What’s good or bad to do in production? And how do you transfer knowledge when new team members want to release production services or take ownership of existing services?

Jaana B. Dogan (JBD)

The internet is a series of tubes — the kind that transmit light. Favorite thing I learned: fiber optic cables are sheathed in copper that powers repeaters along their length.

James Griffiths — CNN

How do you build a reliable network when faced with highly skilled and motivated adversaries?

Alex Wawro — DARKReading

Outages

SRE Weekly Issue #177

A message from our sponsor, VictorOps:

[Free Webinar] VictorOps partnered with Catchpoint to put downtime to death with actionable monitoring and incident response practices. See how SRE teams are being more proactive toward service reliability:

http://try.victorops.com/sreweekly/death-to-downtime

Articles

The point of this thread is to bring attention to the notion that our reactions to surprising events are the fuel that effectively dictates what we learn from them.

John Allspaw — Adaptive Capacity Labs

This article is an attempt to classify the causes of major outages at the big three cloud providers (AWS, Azure, and GCP).

David Mytton

It was, wasn’t it? Here’s a nice summary of the recent spate of unrelated major incidents.

Zack Whittaker — TechCrunch

Calculating CIRT (Critical Incident Response Time) involves ignoring various types of incidents to try to get a number that is more representative of the performance of an operations team.

Julie Gunderson, Justin Kearns, and Ophir Ronen — PagerDuty
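
As a rough illustration, a CIRT-style calculation looks something like the sketch below. The specific exclusion rules (severity, auto-resolved alerts, sub-minute blips) are my own stand-ins, not PagerDuty's published definition.

```python
# A hedged sketch of a CIRT-style metric: average response time over
# only the incidents deemed "critical". The filters are illustrative
# assumptions, not PagerDuty's definition.
from dataclasses import dataclass

@dataclass
class Incident:
    severity: str          # "critical", "warning", ...
    auto_resolved: bool    # resolved with no human action
    response_minutes: float

def cirt(incidents):
    considered = [
        i for i in incidents
        if i.severity == "critical"
        and not i.auto_resolved
        and i.response_minutes >= 1     # drop sub-minute blips as noise
    ]
    if not considered:
        return None
    return sum(i.response_minutes for i in considered) / len(considered)

incidents = [
    Incident("critical", False, 18.0),
    Incident("warning",  False, 45.0),   # excluded: not critical
    Incident("critical", True,   7.0),   # excluded: self-healed
    Incident("critical", False, 0.5),    # excluded: transient blip
]
print(cirt(incidents))   # 18.0
```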

There is so much great detail in this followup article about Cloudflare’s global outage earlier this month. Thanks, folks!

John Graham-Cumming — Cloudflare

Outages

  • Statuspage.io
  • NS1
  • PagerDuty
  • Nordstrom
    • Nordstrom’s site went down at the start of a major sale.
  • Twitter
  • Heroku
  • Honeycomb
    • Honeycomb had an 8-minute outage preceded by 4 minutes of degradation. Click through to find out how their CI pipeline surprised them and what they did about it.
  • LinkedIn
  • Australian Tax Office
  • Reddit
  • Stripe
    • […] two different database bugs and a configuration change interacted in an unforeseen way, causing a cascading failure across several critical services.

      Click through for Stripe’s full analysis.

  • Discord

SRE Weekly Issue #176

A message from our sponsor, VictorOps:

[Free Guide] VictorOps partnered with Catchpoint and came up with six actionable ways to transform your monitoring and incident response practices. See how SRE teams are being more proactive toward service reliability:

http://try.victorops.com/sreweekly/transform-monitoring-and-incident-response

Articles

[…] spans are too low-level to meaningfully be able to unearth the most valuable insights from trace data.

Find out why current distributed tracing tools fall short, and read the author’s vision for the future of distributed tracing.

Cindy Sridharan

If I wanted to introduce the concept of blameless culture to execs, this article would be a great starting point.

Rui Su — Blameless

When we look closely at post-incident artifacts, we find that they can serve a number of different purposes for different audiences.

John Allspaw — Adaptive Capacity Labs

When you meant to type /127 but entered /12 instead

Oops?
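
For a sense of the scale of that typo, here's the address math, using a documentation prefix rather than the one from the incident.

```python
# A /127 versus a /12, via Python's ipaddress module. The prefix is an
# IPv6 documentation address, not the one involved in the incident.
import ipaddress

intended = ipaddress.ip_network("2001:db8::/127")
fat_fingered = ipaddress.ip_network("2001:db8::/12", strict=False)

print(intended.num_addresses)       # 2 addresses: a point-to-point link
print(fat_fingered.num_addresses)   # 2**116 addresses: a vast slice of the IPv6 space
```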

The early failure injection testing mechanisms from Chaos Monkey and friends were like acts of random vandalism. Monocle is more of an intelligent probe, seeking out any weakness a service may have.

There’s a great example of Monocle discovering a mismatched timeout between client and server and targeting it for a test.

Adrian Colyer (summary)

Basiri et al., ICSE 2019 (original paper)
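
To make the timeout example concrete, here's a sketch of the kind of mismatch involved: if a server's own timeout-and-retry budget can exceed what the client is willing to wait, the client will abandon and retry requests the server is still working on. The config shape and numbers are my assumptions, not Netflix's.

```python
# An illustrative check for a client/server timeout mismatch of the kind
# the paper describes Monocle surfacing. Parameter names and values are
# assumptions for this sketch, not Netflix's configuration format.
def timeout_mismatch(client_timeout_s, server_timeout_s, server_retries):
    """Compare worst-case server-side latency with the client's patience.

    If the server (including its internal retries) can legitimately take
    longer than the client will wait, the client times out and retries
    while the server keeps burning capacity on the abandoned request.
    """
    worst_case_server = server_timeout_s * (1 + server_retries)
    return worst_case_server > client_timeout_s

# Example: client waits 1s, but the server's own budget is 2 * 1.5s = 3s.
print(timeout_mismatch(client_timeout_s=1.0,
                       server_timeout_s=1.5,
                       server_retries=1))   # True: a latent failure mode worth injecting
```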

Take the axiom of “don’t hardcode values” to an extreme, and you end up right back where you started.

Mike Hadlow
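
A toy example of that round trip, with invented names and values:

```python
# Hour 0 on the clock: the value lives in code.
def shipping_cost(weight_kg):
    return 5 + 2 * weight_kg

# A few hours later: "no hardcoded values" has pushed the whole formula
# into configuration, so the config is now an interpreted mini-language
# that is harder to read, test, and debug than the function it replaced.
RULES = {"shipping_cost": "5 + 2 * weight_kg"}

def shipping_cost_from_config(weight_kg):
    return eval(RULES["shipping_cost"], {"weight_kg": weight_kg})  # config is code again

print(shipping_cost(3), shipping_cost_from_config(3))   # same result either way
```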

Outages

A production of Tinker Tinker Tinker, LLC