
SRE Weekly Issue #302

Happy holidays, for those who celebrate! I put this issue together in advance, so no Outages section this week.

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channels, Jira tickets, and Zoom meetings, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.com/demo/?utm_source=sreweekly

Articles

This is another great deep-dive into strategies for zero-downtime deploys.

  Suresh Mathew — eBay

How do you make sure your incident management process survives the growth of your team? This article has a useful list of things to cover as you train new team members.

  David Caudill — Rootly
This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

The trends in this article are:

  • AIOps and self-healing platforms
  • Service Meshes
  • Low-code DevOps
  • GitOps
  • DevSecOps

  Biju Chacko — Squadcast

I can’t get enough of these. Please write one about your company!

  Ash Patel

My favorite part is the discussion of Kyle Kingsbury’s work on Jepsen. Would distributed systems have even more problems if Kingsbury hadn’t shed light on them?

  Dan Luu

PagerDuty analyzed usage data for their platform in order to draw inferences about how the pandemic has affected incident response.

  PagerDuty

There’s a ton of interesting stuff in here about confirmation bias and fear in adopting a new, objectively less risky process.

  Robert Poston, MD

SRE Weekly Issue #301

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channels, Jira tickets, and Zoom meetings, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:

https://rootly.com/demo/?utm_source=sreweekly

Articles

This one perhaps belongs in a security newsletter, but the failure mode is just so fascinating. A CDN bug led to the loss of millions of dollars’ worth of Bitcoin.

  Badger

Google posted a report for the Google Calendar outage last week.

  Google

Jeli, authors of the Howie post-incident guide, have published their own “howie”. It’s a great example of a thorough incident report.

  Vanessa Huerta Granda — Jeli

Hopefully not too late, here are some tips as we head into the thick of it.

  JJ Tang — Rootly
This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

Using their own incident retrospective template, Blameless shows us how to write an incident retrospective.

  Emily Arnott — Blameless

Meta has its own in-house tool for tracking and reporting on SLIs.

  A Posten, Dávid Bartók, Filip Klepo, and Vatika Harlalka — Meta

These folks put everyone on call by default, and they automatically pay extra for each shift and even for covering for coworkers.

  Chris Evans — incident.io

Code that was deployed under a feature flag inadvertently affected all traffic, even with the flag disabled.

  Steve Lewis — Honeycomb
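
As an aside, here’s a minimal Python sketch of the general failure pattern (not Honeycomb’s actual code; the flag name and handlers are invented): a refactor places new logic on the shared request path ahead of the flag check, so every request runs it even while the flag is off.

    # Hypothetical illustration only: flag name and handlers are invented.
    FLAGS = {"new_sampling_logic": False}  # flag shipped but disabled

    def expensive_new_preprocessing(event):
        # New code introduced alongside the flagged feature.
        return {**event, "enriched": True}

    def old_sampler(event):
        return event

    def new_sampler(event):
        return {**event, "sampled_by": "new"}

    def handle_event(event):
        # Bug: the new preprocessing sits on the shared path, before the
        # flag check, so it runs for ALL traffic even with the flag off.
        event = expensive_new_preprocessing(event)
        if FLAGS["new_sampling_logic"]:
            return new_sampler(event)
        return old_sampler(event)

    print(handle_event({"id": 1}))  # {'id': 1, 'enriched': True}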

By creating SLOs for microservices at various levels of the request tree, they ended up with a morass of arbitrary targets that didn’t relate clearly to the user experience.

  Ben Sigelman — Lightstep

Outages

  • AWS us-west-1 and us-west-2
    • Hot on the heels of last week’s us-east-1 outage, AWS had a shorter outage in us-west-1 and us-west-2.

  • PagerDuty
    • PagerDuty alert notifications were affected by the AWS us-west-2 outage, and the impact lasted about twice as long as AWS’s.

  • Slack
  • Cloudflare
  • Solana

SRE Weekly Issue #300

300 issues. 6 years. Wow! I couldn’t have done it without all of you wonderful people, writing articles and reading issues. Thanks, you make curating this newsletter fun!

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channels, Jira tickets, and Zoom meetings, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.com/demo/?utm_source=sreweekly

Articles

This is the best thing to hit incident analysis since the Etsy Debriefing Facilitation Guide and the PagerDuty retrospective guide! This one’s even better because it’s not just about the retrospective, but the whole incident analysis process.

BONUS CONTENT: A preview/introduction by Lorin Hochstein.

  Jeli

SysAdvent is back!

When teams only consult briefly on reliability or operational concerns, often the final output doesn’t adequately reflect customer or engineering expectations of reliability of the product or operability of the internals.

  Martin Smith (edited by Jennifer Davis) — SysAdvent

What can Dungeons and Dragons teach us about SRE?

  Jennifer Davis — SysAdvent

It’s so true. Don’t forget to read the alt text.

  Randall Munroe

This talk (with transcript) includes three stories about how incident analysis can be super effective.

  Nora Jones — InfoQ

I know this is SRE Weekly and not Security Weekly, but this vulnerability is so big that I’m sure many of us triggered our incident response processes, and some of us may even have had to take services down temporarily.

  John Graham-Cumming — Cloudflare

What a colorful metaphor. This article discusses an effective technique for breaking up a monolith, one piece at a time.

  Alex Yates — Octopus Deploy

This article proposes a method of eliminating the need for a central team of architects, and it strikes me as very similar to the practice of SRE itself.

  Andrew Harmel-Law

More from the VOID, this piece is about the importance of analyzing “near miss” events.

  Courtney Nash — Verica

If you load-test in production, don’t include your load-test traffic in your SLO calculation.

  Liz Fong-Jones — Honeycomb
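
A minimal sketch of the idea, with invented field names: tag synthetic requests at the source, then filter them out before computing the SLI that feeds the SLO.

    # Hypothetical illustration: the request records and "is_load_test"
    # tag are invented; the point is to exclude synthetic traffic from
    # the availability SLI.
    requests = [
        {"status": 200, "is_load_test": False},
        {"status": 500, "is_load_test": True},   # synthetic failure
        {"status": 200, "is_load_test": False},
        {"status": 500, "is_load_test": False},  # real failure
    ]

    real = [r for r in requests if not r["is_load_test"]]
    good = sum(1 for r in real if r["status"] < 500)
    print(f"SLI excluding load tests: {good / len(real):.2%}")  # 66.67%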

Outages

  • AWS us-east-1 region (and half the web)
    • Between the AWS outage and log4j, it’s been a busy week. Amazon has already posted a write-up about the incident, which includes the notable tidbit that their circuit-breaker/back-off code failed.

SRE Weekly Issue #299

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channels, Jira tickets, and Zoom meetings, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.com/?utm_source=sreweekly

Articles

Lacking enough incidents to learn from, NASA “borrowed” incidents from outside of their organization and wrote case studies of their own!

  John Egan — InfoQ

In this interview, they hit hard on the importance of setting and adhering to clear work hours when working remotely as an SRE.

  Ben Linders (interviewing James McNeil) — InfoQ

Here’s a clever way to put a price on how much an outage cost the company.

  Lorin Hochstein

This article introduces error budgets through an analogy to feedback loops in electrical engineering.

  Sjuul Janssen — Cloud Legends
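
For anyone new to error budgets, the underlying arithmetic is small enough to show inline (illustrative numbers, not taken from the article): the budget is simply the allowed unreliability multiplied by the measurement window.

    # Worked example of error-budget arithmetic (illustrative numbers).
    slo = 0.999                      # 99.9% availability target
    window_minutes = 30 * 24 * 60    # 30-day rolling window

    budget_minutes = (1 - slo) * window_minutes
    print(f"Allowed downtime: {budget_minutes:.1f} min")   # ~43.2 min

    downtime_so_far = 10             # minutes of downtime this window
    print(f"Budget remaining: {budget_minutes - downtime_so_far:.1f} min")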

[…] saturation SLOs have always been a point of discussion in the SRE community. Today, we attempt to clarify that.

  Last9

Here’s how the GitHub Actions engineering team uses ChatOps. I love the examples!

  Yaswanth Anantharaju — GitHub

This contains some pretty interesting details on their major outage last month.

  GitHub

In the last few weeks, I’ve been working on an extendible general purpose shard coordinator, Shardz. In this article, I will explain the main concepts and the future work.

Lots of deep technical detail here.

  Jaana Dogan

They constructed a set of git commits, one for each environment variable, then used git bisect to figure out which variable was causing the failure. Neat trick!

  Diomidis Spinellis
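
The article’s trick maps each variable onto its own git commit so that git bisect drives the search. As a rough illustration of the underlying binary search (the variable names and failing test are stand-ins, not from the article), here is the same idea expressed directly in Python:

    # Hypothetical stand-ins: env_vars and run_test are invented.
    env_vars = ["VAR_A", "VAR_B", "VAR_C", "VAR_D", "VAR_E", "VAR_F"]

    def run_test(active_vars):
        # Stand-in for "export these variables and run the failing command".
        return "VAR_D" not in active_vars       # True = command succeeds

    lo, hi = 0, len(env_vars)   # invariant: prefix of length lo passes, length hi fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if run_test(env_vars[:mid]):
            lo = mid                            # still passes: culprit is later
        else:
            hi = mid                            # fails: culprit is at or before mid
    print("Culprit variable:", env_vars[hi - 1])   # VAR_D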

Outages

SRE Weekly Issue #298

Email subscribers, my apologies for the double-send last week. I upgraded WordPress and subsequently further cemented my distrust of all version upgrades ever.

I carefully tested a fix in staging before rolling it out gradually in preparation for this week’s issue. Just kidding, I hacked on it live until I got it fixed. Sorry about all those testing tweets. #testinproduction #yolo #SREWeeklydoesnotpracticeSRE

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating incident channels, Jira tickets, and Zoom meetings, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:

https://rootly.com/?utm_source=sreweekly

Articles

This is Google’s detailed report from their outage last week. This one’s really worth a read; I promise you won’t be disappointed!

  Google

I really like this guide and template for writing incident reports. Each section comes with an explanation of what goes there, along with examples.

  Lorin Hochstein

Booking.com developed their Reliability Collaboration Model to guide the engagement between SRE and product development teams and the responsibilities assigned to each.

  Emmanuel Goossaert — Booking.com

Especially timely now, in the thick of the holiday on-call period.

  James Frost — Ably

Great tips. I hope your Black Friday / Cyber Monday is going well!

  Quentin Rousseau — Rootly

This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

I thought it might be better to try a new approach: defining what SRE was by looking at what it’s not. Or to put it another way, what can you remove from SRE and have it still be SRE?

  Niall Murphy

Instead of asking that question, this article urges understanding what happened.

Another reason that imagining future scenarios is better than counterfactuals about past scenarios is that our system in the future is different from the one in the past.

  Lorin Hochstein

Outages

A production of Tinker Tinker Tinker, LLC