SRE Weekly Issue #303

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom meeting, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.com/demo/?utm_source=sreweekly

Articles

There are way too many gorgeous, mind-blowing ways for incidents to occur without a single change to code being deployed.

That last hot take is the kicker: even if you don’t do a code freeze in December (in the US), you’ll still see a lot of the same pitfalls as you would have if you did.

  Emily Ruppe — Jeli

Ah, IaC, the tool we use to machine-gun our feet in a highly available manner at scale. This analysis of an incident from back in August covers what happened and what they learned.

  Stuart Davidson — Skyscanner

By establishing a set of core principles (Response, Observability, Availability, and Delivery), aka our “ROAD to SRE”, we now have clarity on which areas we expect our SRE team to focus on, while avoiding the common pitfall of becoming another platform or Ops team.

  Bruce Dominguez

In this blog post, we’ll look at:

  • The advantages of an SRE team where each member is a specialist.
  • Some SRE specialist roles and how they help.

  Emily Arnott — The New Stack

I love these “predictions for $YEAR” posts. What are your predictions?

  Emily Arnott — Blameless

Deployment Decision-Making during the holidays amid the COVID-19 Pandemic

A sneak peek into my forthcoming MSc. thesis in Human Factors and Systems Safety, Lund University.

  Jessica DeVita (edited by Jennifer Davis) — SysAdvent

This article covers what to do as an incident commander, how to handle long-running incidents, and how to do a post-incident review.

  Joshua Timberman — SysAdvent

So in this post I’m going to go over what makes a good metric, why data aggregation on its own loses resolution and the messy details that are often critical to improvement, and how good uses of metrics are visible through their ability to assist changes and adjustments.

  Fred Hebert
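
As a toy illustration of the aggregation point (my example, not Hebert’s): a single aggregate like the mean can look healthy while hiding a slow mode that percentiles would expose.

  # Toy data, not from the article: 90% of requests near 50 ms, 10% near 900 ms.
  import statistics

  latencies_ms = [50] * 900 + [900] * 100

  mean = statistics.mean(latencies_ms)
  percentiles = statistics.quantiles(latencies_ms, n=100)
  p50, p95, p99 = percentiles[49], percentiles[94], percentiles[98]

  print(f"mean={mean:.0f} ms  p50={p50:.0f} ms  p95={p95:.0f} ms  p99={p99:.0f} ms")
  # The mean (~135 ms) hides the fact that 1 request in 10 takes ~900 ms.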

Here’s a great tutorial to get started with eBPF through a (somewhat convoluted) “Hello World” exercise.

  Ania Kapuścińska (edited by Shaun Mouton) — SysAdvent
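
If you want a taste before diving into the tutorial, here’s a minimal sketch of my own (not from the article) using the bcc Python bindings; it assumes bcc is installed and needs to run as root.

  # Minimal eBPF "Hello World" sketch using bcc (run as root).
  from bcc import BPF

  prog = """
  int hello(void *ctx) {
      bpf_trace_printk("Hello, World!\\n");
      return 0;
  }
  """

  b = BPF(text=prog)
  # Attach to the clone() syscall, so the probe fires whenever a process forks.
  b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
  b.trace_print()   # stream messages from the kernel trace pipe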

The concept of engineering work being about resolving ambiguity really resonates with me.

  Lorin Hochstein

This appears to have caused a problem with Microsoft Exchange servers. Maybe this belongs in the Outages section…

  rachelbythebay

Outages

SRE Weekly Issue #302

Happy holidays, for those who celebrate! I put this issue together in advance, so there’s no Outages section this week.

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom meeting, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.com/demo/?utm_source=sreweekly

Articles

This is another great deep-dive into strategies for zero-downtime deploys.

  Suresh Mathew — eBay

How do you make sure your incident management process survives the growth of your team? This article has a useful list of things to cover as you train new team members.

  David Caudill — Rootly
This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

The trends in this article are:

  • AIOps and self-healing platforms
  • Service Meshes
  • Low-code DevOps
  • GitOps
  • DevSecOps

  Biju Chacko — Squadcast

I can’t get enough of these. Please write one about your company!

  Ash Patel

My favorite part is the discussion of Kyle Kingsbury’s work on Jepsen. Would distributed systems have even more problems if Kingsbury hadn’t shed light on them?

  Dan Luu

PagerDuty analyzed usage data for their platform in order to draw inferences about how the pandemic has affected incident response.

  PagerDuty

There’s a ton of interesting stuff in here about confirmation bias and fear in adopting a new, objectively less risky process.

  Robert Poston, MD

SRE Weekly Issue #301

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom meeting, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:

https://rootly.com/demo/?utm_source=sreweekly

Articles

This one perhaps belongs in a security newsletter, but the failure mode is just so fascinating. A CDN bug led to the loss of millions of dollars’ worth of Bitcoin.

  Badger

Google posted a report for the Google Calendar outage last week.

  Google

Jeli, authors of the Howie post-incident guide, have published a “howie” of their own. It’s a great example of a thorough incident report.

  Vanessa Huerta Granda — Jeli

Hopefully not too late, here are some tips as we head into the thick of it.

  JJ Tang — Rootly
This article is published by my sponsor, Rootly, but their sponsorship did not influence its inclusion in this issue.

Using their own incident retrospective template, Blameless shows us how to write an incident retrospective.

  Emily Arnott — Blameless

Meta has its own in-house tool for tracking and reporting on SLIs.

  A Posten, Dávid Bartók, Filip Klepo, and Vatika Harlalka — Meta

These folks put everyone on-call by default, and they automatically pay extra for each shift worked, including shifts covered for coworkers.

  Chris Evans — incident.io

Code that was deployed under a feature flag inadvertently affected all traffic, even with the flag disabled.

  Steve Lewis — Honeycomb
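
For a sense of how that failure mode can happen in general (a hypothetical sketch, not Honeycomb’s actual code), one classic version is that the new work runs unconditionally and only the use of its result is gated by the flag:

  # Hypothetical example; names and the flag store are made up.
  FLAGS = {"new-enrichment": False}   # flag is off, yet behavior still changes

  def expensive_new_enrichment(request):
      # Imagine a call to a new downstream dependency, extra latency, etc.
      return {**request, "enriched": True}

  def handle_request(request):
      # Bug: the new code path runs for every request; only the *use* of
      # its result is behind the flag, so all traffic pays the cost.
      enriched = expensive_new_enrichment(request)
      if FLAGS["new-enrichment"]:
          return enriched
      return request

  def handle_request_fixed(request):
      # Safer version: nothing new happens unless the flag is enabled.
      if FLAGS["new-enrichment"]:
          return expensive_new_enrichment(request)
      return request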

By creating SLOs for microservices at various levels of the request tree, they ended up with a morass of arbitrary targets that didn’t relate clearly to the user experience.

  Ben Sigelman — Lightstep

Outages

  • AWS us-west-1 and us-west-2
    • Hot on the heels of last week’s us-east-1 outage, AWS had a shorter outage in us-west-1 and us-west-2.

  • PagerDuty
    • PagerDuty alert notifications were affected by the AWS us-west-2 outage, and the impact lasted about twice as long as AWS’s.

  • Slack
  • Cloudflare
  • Solana

SRE Weekly Issue #300

300 issues. 6 years. Wow! I couldn’t have done it without all of you wonderful people, writing articles and reading issues. Thanks, you make curating this newsletter fun!

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom meeting, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.com/demo/?utm_source=sreweekly

Articles

This is the best thing to hit incident analysis since the Etsy Debriefing Facilitation Guide and the PagerDuty retrospective guide! This one’s even better because it’s not just about the retrospective, but about the whole incident analysis process.

BONUS CONTENT: A preview/introduction by Lorin Hochstein.

  Jeli

SysAdvent is back!

When teams only consult briefly on reliability or operational concerns, often the final output doesn’t adequately reflect customer or engineering expectations of reliability of the product or operability of the internals.

  Martin Smith (edited by Jennifer Davis) — SysAdvent

What can Dungeons and Dragons teach us about SRE?

  Jennifer Davis — SysAdvent

It’s so true. Don’t forget to read the alt text.

  Randall Munroe

This talk (with transcript) includes three stories about how incident analysis can be super effective.

  Nora Jones — InfoQ

I know this is SRE Weekly and not Security Weekly, but this vulnerability is so big that I’m sure it triggered incident response processes for many of us, and some of us may even have had to take services down temporarily.

  John Graham-Cumming — Cloudflare

What a colorful metaphor. This article discusses an effective technique for breaking up a monolith, one piece at a time.

  Alex Yates — Octopus Deploy

This article proposes a method of eliminating the need for a central team of architects, and it strikes me as very similar to the practice of SRE itself.

  Andrew Harmel-Law

More from the VOID, this piece is about the importance of analyzing “near miss” events.

  Courtney Nash — Verica

If you load-test in production, don’t include your load-test traffic in your SLO calculation.

  Liz Fong-Jones — Honeycomb
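
The mechanics are simple enough to sketch (my example, not Honeycomb’s implementation): tag load-test requests at the source, then filter them out before computing the SLI.

  # Hypothetical request records; the "load_test" field stands in for whatever
  # marker your load generator sets (a header, a synthetic-user ID, etc.).
  requests = [
      {"ok": True,  "load_test": False},
      {"ok": False, "load_test": True},    # a failed load-test request
      {"ok": True,  "load_test": False},
      {"ok": False, "load_test": False},   # a real user-facing failure
  ]

  real_traffic = [r for r in requests if not r["load_test"]]
  sli = sum(r["ok"] for r in real_traffic) / len(real_traffic)
  print(f"Availability SLI, load-test traffic excluded: {sli:.1%}")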

Outages

  • AWS us-east-1 region (and half the web)
    • Between the AWS outage and log4j, it’s been a busy week. Amazon has already posted a write-up about the incident, which includes the notable tidbit that their circuit-breaker/back-off code failed.

SRE Weekly Issue #299

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom meeting, paging the right team, building the postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.com/?utm_source=sreweekly

Articles

Lacking enough incidents to learn from, NASA “borrowed” incidents from outside of their organization and wrote case studies of their own!

  John Egan — InfoQ

In this interview, they hit hard on the importance of setting and adhering to clear work hours when working remotely as an SRE.

  Ben Linders (interviewing James McNeil) — InfoQ

Here’s a clever way to put a price on how much an outage cost the company.

  Lorin Hochstein

This article introduces error budgets through an analogy to feedback loops in electrical engineering.

  Sjuul Janssen — Cloud Legends
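
For readers new to the term, the underlying arithmetic is small enough to show inline (a generic sketch with made-up numbers, not taken from the article):

  # Made-up numbers for a request-based SLO over one period.
  slo_target = 0.999                  # 99.9% of requests should succeed
  total_requests = 10_000_000
  failed_requests = 4_200

  error_budget = total_requests * (1 - slo_target)   # failures we can afford
  budget_consumed = failed_requests / error_budget

  print(f"Error budget this period: {error_budget:.0f} failed requests")
  print(f"Budget consumed so far:   {budget_consumed:.1%}")   # 42.0% here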

[…] saturation SLOs have always been a point of discussion in the SRE community. Today, we attempt to clarify that.

  Last9

Here’s how the GitHub Actions engineering team uses ChatOps. I love the examples!

  Yaswanth Anantharaju — GitHub

This contains some pretty interesting details on their major outage last month.

  GitHub

In the last few weeks, I’ve been working on an extendible general purpose shard coordinator, Shardz. In this article, I will explain the main concepts and the future work.

Lots of deep technical detail here.

  Jaana Dogan

They constructed a set of git commits, one for each environment variable, then used git bisect to figure out which variable was causing the failure. Neat trick!

  Diomidis Spinellis
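
Here’s a rough sketch of the trick (my reconstruction, with made-up variable names and a hypothetical test script), using Python to build the commits and plain git bisect to hunt down the culprit:

  # Hypothetical reconstruction: one commit per environment variable,
  # then `git bisect run` finds the first commit (variable) that breaks things.
  import subprocess

  env_vars = {                 # made-up candidates to test
      "FOO_MODE": "legacy",
      "BAR_TIMEOUT": "30",
      "BAZ_FEATURE": "on",
  }

  def git(*args):
      subprocess.run(["git", *args], check=True)

  git("checkout", "-b", "bisect-env-vars")
  open(".env", "w").close()                 # start from an empty, known-good .env
  git("add", ".env")
  git("commit", "-m", "empty .env (known good)")

  for name, value in env_vars.items():
      with open(".env", "a") as f:
          f.write(f"{name}={value}\n")
      git("commit", "-am", f"add {name}")

  # Then, by hand (test.sh sources .env and exits non-zero on failure):
  #   git bisect start HEAD <known-good-commit>
  #   git bisect run ./test.sh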

Outages

A production of Tinker Tinker Tinker, LLC