SRE Weekly Issue #304

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom call; paging the right team; building the postmortem timeline; setting up reminders; and more. Book a demo (+ get a snazzy Rootly shirt):
https://rootly.com/demo/?utm_source=sreweekly

Articles

Ably processes a lot of messages, so when they have to redesign a core part of their architecture, it gets pretty interesting.

  Simon Woolf — Ably

If you asked any Site Reliability or DevOps engineer how they felt about a deployment plan with over 300 single points of failure, you’d see a lot of nauseated faces and an outbreak of nervous tics!

Nevertheless, that was the best design. Read on to find out why.

  Robert Barron

Slack had three separate incidents while trying to deploy DNSSEC for slack.com. This article goes into deep detail on what went wrong each time and what they learned.

Yes, it was an oversight that we did not test a domain with a wildcard record before attempting slack.com — learn from our mistakes!

  Rafael Elvira and Laura Nolan — Slack

The specializations outlined in this article include:

  • The Educator
  • The SLO Guard
  • The Infrastructure Architect
  • The Incident Response Leader

  Emily Arnott — Blameless

If you had to design a WhatsApp today to support its current load, how would you go about it? Here’s one possible design.

  Ankit Sirmorya — High Scalability

Yesterday I asked on Twitter why you might want to run your own DNS servers, and I got a lot of great answers that I wanted to summarize here.

  Julia Evans

In this podcast interview, find out more about why Courtney Nash created the VOID and how posting an incident report can benefit your company. Transcript available.

  Mandy Walls (with guest Courtney Nash) — Page It to the Limit

Drawing on Cynefin, this article explains why debugging by feel and guesswork won’t suffice anymore; we need to be methodical.

  Pete Hodgson — Honeycomb

Outages

SRE Weekly Issue #303

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom call; paging the right team; building the postmortem timeline; setting up reminders; and more. Book a demo:
https://rootly.com/demo/?utm_source=sreweekly

Articles

There are way too many gorgeous, mind-blowing ways for incidents to occur without a single change to code being deployed.

That last hot take is the kicker: even if you don’t do a code freeze in December (in the US), you’ll still see a lot of the same pitfalls as you would have if you did.

  Emily Ruppe — Jeli

Ah, IaC, the tool we use to machine-gun our feet in a highly-available manner at scale. This analysis of an incident from back in August tells what happened and what they learned.

  Stuart Davidson — Skyscanner

By establishing a set of core principles (Response, Observability, Availability, and Delivery), aka our “ROAD to SRE”, we now have clarity on which areas our SRE team should focus on, and we avoid the common pitfall of becoming just another platform or ops team.

  Bruce Dominguez

In this blog post, we’ll look at:

  • The advantages of an SRE team where each member is a specialist.
  • Some SRE specialist roles and how they help.

  Emily Arnott — The New Stack

I love these “predictions for $YEAR” posts. What are your predictions?

  Emily Arnott — Blameless

Deployment Decision-Making During the Holidays Amid the COVID-19 Pandemic

A sneak peek into my forthcoming MSc. thesis in Human Factors and Systems Safety, Lund University.

  Jessica DeVita (edited by Jennifer Davis) — SysAdvent

This article covers what to do as an incident commander, how to handle long-running incidents, and how to do a post-incident review.

  Joshua Timberman — SysAdvent

So in this post I’m going to go over what makes a good metric, why data aggregation on its own loses resolution and the messy details that are often critical to improvements, and how good uses of metrics are recognizable by their ability to assist changes and adjustments.

  Fred Hebert
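To make that point about lost resolution concrete, here’s a minimal sketch (illustrative numbers of my own, not from Fred’s post) of how a mean can hide exactly the tail behavior you need to see:

    import statistics

    # Hypothetical latencies in ms: most requests are fast, a few are awful.
    latencies = [20] * 97 + [2500, 3000, 3500]

    cuts = statistics.quantiles(latencies, n=100)
    print(f"mean: {statistics.mean(latencies):.0f} ms")  # ~109 ms -- looks healthy
    print(f"p50:  {cuts[49]:.0f} ms")                    # 20 ms
    print(f"p99:  {cuts[98]:.0f} ms")                    # ~3500 ms -- what users feel

The single aggregated number says everything is fine; the resolution you lose is precisely the handful of requests worth investigating.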

Here’s a great tutorial to get started with eBPF through a (somewhat convoluted) “Hello World” exercise.

  Ania Kapuścińska (edited by Shaun Mouton) — SysAdvent
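For a taste of where that exercise ends up, here’s a minimal sketch in the same spirit (the classic bcc hello-world pattern, not necessarily the tutorial’s exact code; assumes the bcc Python bindings, kernel headers, and root privileges):

    from bcc import BPF

    # A tiny eBPF program: print a message every time clone() is called.
    program = r"""
    int hello(void *ctx) {
        bpf_trace_printk("Hello, World!\n");
        return 0;
    }
    """

    b = BPF(text=program)
    b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
    b.trace_print()  # stream output from the kernel trace pipe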

The concept of engineering work being about resolving ambiguity really resonates with me.

  Lorin Hochstein

This appears to have caused a problem with Microsoft Exchange servers. Maybe this belongs in the Outages section…

  rachelbythebay

Outages

SRE Weekly Issue #302

Happy holidays, for those who celebrate! I put this issue together in advance, so no Outages section this week.

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom call; paging the right team; building the postmortem timeline; setting up reminders; and more. Book a demo:
https://rootly.com/demo/?utm_source=sreweekly

Articles

This is another great deep-dive into strategies for zero-downtime deploys.

  Suresh Mathew — eBay

How do you make sure your incident management process survives the growth of your team? This article has a useful list of things to cover as you train new team members.

  David Caudill — Rootly
This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

The trends in this article are:

  • AIOps and self-healing platforms
  • Service Meshes
  • Low-code DevOps
  • GitOps
  • DevSecOps

  Biju Chacko — Squadcast

I can’t get enough of these. Please write one about your company!

  Ash Patel

My favorite part is the discussion of Kyle Kingsbury’s work on Jepsen. Would distributed systems have even more problems if Kingsbury did not shed light on them?

  Dan Luu

PagerDuty analyzed usage data for their platform in order to draw inferences about how the pandemic has affected incident response.

  PagerDuty

There’s a ton of interesting stuff in here about confirmation bias and fear in adopting a new, objectively less risky process.

  Robert Poston, MD

SRE Weekly Issue #301

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom call; paging the right team; building the postmortem timeline; setting up reminders; and more. Book a demo:

https://rootly.com/demo/?utm_source=sreweekly

Articles

This one perhaps belongs in a security newsletter, but the failure mode is just so fascinating. A CDN bug led to the loss of millions of dollars’ worth of Bitcoin.

  Badger

Google posted a report for the Google Calendar outage last week.

  Google

Jeli, authors of the Howie post-incident guide, have published their own “howie”. It’s a great example of a thorough incident report.

  Vanessa Huerta Granda — Jeli

Hopefully not too late, here are some tips as we head into the thick of it.

  JJ Tang — Rootly
This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

Using their own incident retrospective template, Blameless shows us how to write an incident retrospective.

  Emily Arnott — Blameless

Meta has their own in-house-built tool for tracking and reporting on SLIs.

  A Posten, Dávid Bartók, Filip Klepo, and Vatika Harlalka — Meta

These folks put everyone on call by default, and they also automatically pay extra for each shift, even for covering a coworker’s shift.

  Chris Evans — incident.io

Code that was deployed under a feature flag inadvertently affected all traffic, even with the flag disabled.

  Steve Lewis — Honeycomb
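The general trap is worth illustrating. Here’s a hypothetical sketch (my own invented code, not Honeycomb’s) of how state shared with the flagged path can leak to all traffic even when the flag is off:

    DEFAULT_SAMPLE_RATE = 1
    sample_rate_cache = {}  # shared across all requests

    def flag_enabled(flag, user):
        return user == "beta-tester"  # flag is off for everyone else

    def sample_rate_for(dataset, user):
        if flag_enabled("new-sampler", user):
            rate = 50
            sample_rate_cache[dataset] = rate  # flagged path warms a shared cache
        else:
            rate = DEFAULT_SAMPLE_RATE
        # Bug: the cache is read unconditionally, so values written by
        # flag-on traffic leak to flag-off traffic on the same dataset.
        return sample_rate_cache.get(dataset, rate)

    print(sample_rate_for("prod", "beta-tester"))  # 50, as intended
    print(sample_rate_for("prod", "alice"))        # 50 -- flag off, still affected!

Disabling the flag stops the writes but not the reads, so the bad state lingers until the cache is cleared.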

By creating SLOs for microservices at various levels of the request tree, they ended up with a morass of arbitrary targets that didn’t relate clearly to the user experience.

  Ben Sigelman — Lightstep

Outages

  • AWS us-west-1 and us-west-2
    • Hot on the heels of last week’s us-east-1 outage, AWS had a shorter outage in us-west-1 and us-west-2.

  • PagerDuty
    • PagerDuty alert notifications were affected by the AWS us-west-2 outage, and the impact lasted about twice as long as AWS’s.

  • Slack
  • Cloudflare
  • Solana

SRE Weekly Issue #300

300 issues. 6 years. Wow! I couldn’t have done it without all of you wonderful people, writing articles and reading issues. Thanks, you make curating this newsletter fun!

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom call; paging the right team; building the postmortem timeline; setting up reminders; and more. Book a demo:
https://rootly.com/demo/?utm_source=sreweekly

Articles

This is the best thing to hit incident analysis since the Etsy Debriefing Facilitation Guide and the PagerDuty retrospective guide! This one’s even better because it’s not just about the retrospective, but the whole incident analysis process.

BONUS CONTENT: A preview/introduction by Lorin Hochstein.

  Jeli

SysAdvent is back!

When teams only consult briefly on reliability or operational concerns, the final output often doesn’t adequately reflect customer or engineering expectations for the reliability of the product or the operability of its internals.

  Martin Smith (edited by Jennifer Davis) — SysAdvent

What can Dungeons and Dragons teach us about SRE?

  Jennifer Davis — SysAdvent

It’s so true. Don’t forget to read the alt text.

  Randall Munroe

This talk (with transcript) includes three stories about how incident analysis can be super effective.

  Nora Jones — InfoQ

I know this is SRE Weekly and not Security Weekly, but this vulnerability is so big that I’m sure it triggered incident response processes for many of us, and some of us may even have had to take services down temporarily.

  John Graham-Cumming — Cloudflare

What a colorful metaphor. This article discusses an effective technique for breaking up a monolith, one piece at a time.

  Alex Yates — Octopus Deploy

This article proposes a method of eliminating the need for a central team of architects, and it strikes me as very similar to the practice of SRE itself.

  Andrew Harmel-Law

More from the VOID, this piece is about the importance of analyzing “near miss” events.

  Courtney Nash — Verica

If you load-test in production, don’t include your load-test traffic in your SLO calculation.

  Liz Fong-Jones — Honeycomb
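One way to put that into practice, sketched with assumed field names (not Honeycomb’s actual schema): tag load-test traffic at the source and exclude it when computing the SLI:

    def availability_sli(events):
        """Fraction of good events, excluding synthetic load-test traffic."""
        real = [e for e in events if not e.get("is_load_test")]
        if not real:
            return 1.0
        good = sum(1 for e in real if e["status"] < 500)
        return good / len(real)

    events = [
        {"status": 200},
        {"status": 503, "is_load_test": True},  # stress failure: ignored
        {"status": 200},
    ]
    print(availability_sli(events))  # 1.0 -- load tests don't burn error budget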

Outages

  • AWS us-east-1 region (and half the web)
    • Between the AWS outage and log4j, it’s been a busy week. Amazon has already posted a write-up about the incident, which includes the notable tidbit that their circuit-breaker/back-off code failed.

A production of Tinker Tinker Tinker, LLC