SRE Weekly Issue #400

A message from our sponsor, FireHydrant:

How is FireHydrant building its alerting tool, Signals, to be robust, lightning-fast, and configurable to how YOU work? In this edition of their Captain’s Log, they dive into CEL and how they’re using it to handle routing and logic.

The network is not reliable. What are the implications and what can we do about it?

  Anadi Misra

Beyond a run-of-the-mill severity levels article, this one goes into a couple of common pitfalls.

  Jonathan Word

Some good tips in here, esp. the one about brevity.

  Ashley Sawatsky — Rootly


Or, Eleven things we have learned as Site Reliability Engineers at Google

   Adrienne Walcer, Kavita Guliani, Mikel Ward, Sunny Hsiao, and Vrai Stacey — Google

Good lessons to learn here that apply more broadly than just EKS.

  Christian Alexánder Polanco Valdez — Adevinta

This article is about project management, but a lot of the skills discussed apply to aspects of SRE at Staff+ levels.

  Sannie Lee — Thoughtworks (via

Now this is more like it: there’s a healthy dose of skepticism woven through this article, including things genAI probably won’t be good for, and potential pitfalls.

  Jesse Robbins — Heavybit

There are two different ways of alerting on SLOs, for two very different audiences, as explained in this article. Ostensibly this is a product feature announcement, but you don’t need to be using the product to get a lot out of this.

  Fred Hebert — Honeycomb
  Full disclosure: Honeycomb is my employer.
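One widely used style of SLO alerting is burn-rate alerting, which pages on how fast the error budget is being consumed rather than on raw error counts. Here’s a minimal sketch of the idea — my own illustration, not Honeycomb’s implementation; all names and thresholds are made up:

```python
# Sketch of burn-rate alerting on an SLO error budget.
# Names and thresholds here are illustrative.

def burn_rate(errors: int, total: int, slo_target: float) -> float:
    """How fast the error budget is being consumed.

    A burn rate of 1.0 exhausts the budget in exactly one SLO window;
    14.4 over a 30-day window exhausts it in roughly two days.
    """
    if total == 0:
        return 0.0
    error_rate = errors / total
    budget = 1.0 - slo_target  # e.g. 0.001 for a 99.9% SLO
    return error_rate / budget

def should_page(errors: int, total: int, slo_target: float = 0.999,
                threshold: float = 14.4) -> bool:
    """Page a human only on fast burn; slower burn can open a ticket
    for the other, less urgent audience."""
    return burn_rate(errors, total, slo_target) >= threshold
```

With a 99.9% target, 50 errors in 10,000 requests is a burn rate of 5 (ticket territory), while 200 errors in 10,000 burns at 20 and would page.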

SRE Weekly Issue #399

A message from our sponsor, FireHydrant:

Severity levels help responders and stakeholders understand the incident impact and set expectations for the level of response. This can mean jumping into action faster. But first, you have to ensure severity is actually being set. Here’s one way.

This research paper summary goes into Mode Error and the dangers of adding more features to a system in the form of modes, especially if the system can change modes on its own.

  Fred Hebert (summary)
  Dr. Nadine B. Sarter (original paper)

Cloudflare suffered a power outage in one of the datacenters housing their control and data planes. The outage itself is intriguing, and in its aftermath, Cloudflare learned that their system wasn’t as HA as they thought.

Lots of great lessons here, and if you want more, they posted another incident writeup recently.

   Matthew Prince — Cloudflare

Separating write from read workloads can increase complexity but also open the door to greater scalability, as this article explains.

  Pier-Jean Malandrino
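To make the trade-off concrete, here’s a toy sketch of the pattern (often called CQRS): commands go to a write model that emits events, and a separate read model builds a query-optimized view from those events. The class and method names are mine, not the article’s; in production the read store would be updated asynchronously, which is exactly where the added complexity (eventual consistency) comes from.

```python
# Minimal sketch of separating write and read workloads.
# Illustrative names; not taken from the article.

class WriteModel:
    """Accepts commands and emits events; never serves queries."""
    def __init__(self):
        self.events = []

    def place_order(self, order_id: str, amount: float):
        self.events.append({"type": "order_placed",
                            "order_id": order_id, "amount": amount})

class ReadModel:
    """A denormalized view built from events, optimized for queries."""
    def __init__(self):
        self.orders = {}

    def apply(self, event):
        if event["type"] == "order_placed":
            self.orders[event["order_id"]] = event["amount"]

    def total_spend(self) -> float:
        return sum(self.orders.values())

writes, reads = WriteModel(), ReadModel()
writes.place_order("o-1", 30.0)
writes.place_order("o-2", 12.5)
for ev in writes.events:  # in reality: an asynchronous event pipeline
    reads.apply(ev)
```

Because the read side is fed by events rather than sharing the write store, each side can be scaled and tuned independently.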

Covers four strategies for load shedding, with code examples:

  • Random Shedding
  • Priority-Based Shedding
  • Resource-Based Shedding
  • Node Isolation

  Code Reliant
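As a rough flavor of the first three strategies above (this is my own sketch; the article’s code will differ, and the thresholds are invented):

```python
import random

def random_shed(drop_fraction: float, rng=random.random) -> bool:
    """Random shedding: drop a uniform fraction of requests under overload."""
    return rng() < drop_fraction

def priority_shed(request_priority: int, min_priority: int) -> bool:
    """Priority-based shedding: drop anything below the current priority
    floor; raise the floor as load grows so low-value traffic goes first."""
    return request_priority < min_priority

def resource_shed(cpu_utilization: float, threshold: float = 0.85) -> bool:
    """Resource-based shedding: shed when a measured resource (here CPU)
    crosses a threshold."""
    return cpu_utilization > threshold
```

Node isolation, the fourth strategy, operates at the infrastructure layer rather than per-request, so it doesn’t reduce to a predicate like these.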

Lots of juicy details about the three outages, including a link to AWS’s write-up of their Lambda outage in June.

  Gergely Orosz

The diagrams in this article are especially useful for understanding how the circuit-breaker pattern works.

  Pier-Jean Malandrino
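The diagrams center on the pattern’s three classic states — closed, open, and half-open — which can be sketched in a few dozen lines. This is a generic illustration of the state machine, not code from the article; the threshold and timeout values are arbitrary.

```python
import time

class CircuitBreaker:
    """Closed: calls pass through. Open: fail fast without calling.
    Half-open: after a cooldown, let one probe through to test recovery."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0,
                 clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, fn):
        if self.state == "open":
            if self.clock() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"  # allow one probe request
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if (self.failures >= self.failure_threshold
                    or self.state == "half-open"):
                self.state = "open"  # trip: start the cooldown
                self.opened_at = self.clock()
            raise
        else:
            self.failures = 0
            self.state = "closed"  # success closes the circuit again
            return result
```

The injectable `clock` makes the cooldown testable without real sleeps.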

This one’s about how on-call can go bad, and how to structure your team’s on-call so that it’s livable and sustainable.

  Michael Hart

Execs cast a big shadow in an incident, so it’s important to have a plan for how to communicate with them, as this article explains.

  Ashley Sawatsky — Rootly

SRE Weekly Issue #398

A message from our sponsor, FireHydrant:

“Change is the essential process of all existence.” – Spock
It’s time for alerting to evolve. Get a first look at how incident management platform FireHydrant is architecting Signals, its native alerting tool, for resilience in the Signals Captain’s Log.

A cardiac surgeon draws lessons from the Tenerife commercial airline disaster and applies them to communication in the operating room.

  Dr. Rob Poston

Creating an incident write-up is an expensive investment. This article will tell you why it’s worthwhile.

  Emily Ruppe — Jeli

The optimism and pessimism in this article are about the likelihood of contention and conflicts between actors in a distributed system, and it’s a fascinating way of looking at things.

  Marc Brooker

Here is a guide for how to be an effective Incident Commander and get things fixed as quickly as possible as part of an efficient Incident Management process.

  Jonathan Word

The four concepts are Rebound, Robustness, Graceful Extensibility, and Sustained Adaptability, and this research paper summary explains each concept.

  Fred Hebert (summary)
  Dr. David Woods (original paper)

Apache Beam played a pivotal role in revolutionizing and scaling LinkedIn’s data infrastructure. Beam’s powerful streaming capabilities enable real-time processing for critical business use cases, at a scale of over 4 trillion events daily through more than 3,000 pipelines.

  Bingfeng Xia and Xinyu Liu — LinkedIn

Meta’s SCARF tool automatically scans for unused (dead) code and opens pull requests to remove it, on a daily basis.

  Will Shackleton, Andy Pincombe, and Katriel Cohn-Gordon — Meta

Netflix built a system that detects kernel panics in k8s nodes and annotates the resulting orphaned pods so that it’s clear what happened to them.

  Kyle Anderson — Netflix

This upcoming webinar will cover a range of topics around resilience engineering and incident response, with two big names we’ve seen in many past issues: Chris Evans and Courtney Nash (Verica).

SRE Weekly Issue #397

A message from our sponsor, FireHydrant:

Incident management platform FireHydrant is combining alerting and incident response in one ring-to-retro tool. Sign up for the early access waitlist and be the first to experience the power of alerting + incident response in one platform at last.

The length and complexity of this article hint at the theme that runs throughout: there’s no easy, universal, perfect rollback strategy. Instead, they present a couple of rollback strategies you can choose from and implement.

  Bob Walker — Octopus Deploy

This article looks at improving error management in batch processing programs by implementing automatic safety switches, which play a critical role in safeguarding data integrity when technical errors occur.

  Bertrand Florat — DZone

Part of their observability strategy, which they call “shadowing”, is especially nifty.

  Lev Neiman and Jason Fan — DoorDash

It’s interesting that the DB failed in a way that GitHub’s Orchestrator deployment was unable to detect.

  Jakub Oleksy — GitHub

What exactly is a Senior Staff Engineer? While this article is not specifically about Senior Staff SREs, it’s directly applicable, especially as I’ve seen more Staff+ SRE job postings in the past couple years.

  Alex Ewerlöf

“Blameless” doesn’t mean no names allowed!

Remember: if discussing the actions of a specific person is done for the sake of better learning, don’t shy away from it.

This series is shaping up to be a great study guide for new SREs.

Each day of this week brings you one step closer to not only acing your SRE interviews but also becoming the SRE who can leverage code & infrastructure to perfect systems reliability.

  Code Reliant

A fascinating and scary concept: a tool for automatically identifying and performing all the changes involved in deprecating an entire product.

  Will Shackleton, Andy Pincombe, and Katriel Cohn-Gordon — Meta

A production of Tinker Tinker Tinker, LLC