SRE Weekly Issue #464

A message from our sponsor, incident.io:

For years, on-call has felt more like a burden than a solution. But modern teams are making a change. On Feb 26 at 1 PM EST, hear why—and how—they’re moving from PagerDuty to incident.io On-call. Register now.

https://go.incident.io/events/migrating-from-pagerduty

These folks decided that Google Cloud wasn’t for them, and they built and migrated to their own datacenter in 9 months. This article goes over the physical buildout.

  Charith Amarasinghe — Railway

I remember when this incident happened in 2017. It was a huge one, and GitLab was very open with information about what happened. Here’s a look back at what happened.

  Byte-Sized Design

When your distributed system deals in nanosecond precision, an extra second is a big deal.

  Oleg Obleukhov and Patrick Cullen — Meta

Learn how AWS uses formal verification and other techniques.

Alongside industry-standard testing methods (such as unit and integration testing), AWS has adopted model checking, fuzzing, property-based testing, fault-injection testing, deterministic simulation, event-based simulation, and runtime validation of execution traces.

  Marc Brooker and Ankush Desai — ACM Queue
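
To make one of those techniques concrete, here’s a minimal property-based test in Python using the hypothesis library. The toy run-length codec is my own illustration, not anything from AWS’s codebase; the point is the shape of the technique: assert an invariant over generated inputs rather than hand-picked cases.

    from hypothesis import given, strategies as st

    def encode(s: str) -> list[tuple[str, int]]:
        # toy run-length encoder: "aab" -> [("a", 2), ("b", 1)]
        runs: list[tuple[str, int]] = []
        for ch in s:
            if runs and runs[-1][0] == ch:
                runs[-1] = (ch, runs[-1][1] + 1)
            else:
                runs.append((ch, 1))
        return runs

    def decode(runs: list[tuple[str, int]]) -> str:
        return "".join(ch * n for ch, n in runs)

    @given(st.text())
    def test_round_trip(s: str):
        # hypothesis generates hundreds of inputs, including edge cases
        # like "" and odd unicode, and shrinks any counterexample it finds
        assert decode(encode(s)) == s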

Normally, we rely on the thoughts, decisions, and actions of individuals to create resilience in our sociotechnical systems, but in some time-critical situations, it can be best for one expert to call the shots.

  Robert Poston, MD

You do not have to choose between gold-plating dressed as craftsmanship or perfectionism and corner-cutting framed as pragmatism or realism. You can have the quality of the former at the speed and focus of the latter. I call this the Best Simple System for Now.

  Dan North & Associates

This is the first I’ve heard of I-PASS, and I like it!

  u/devoopseng — r/sre

This article is a roundup of schools of thought on how systems fail, with a pretty excellent list of links to related articles at the end.

  Evan Smith

SRE Weekly Issue #463

A message from our sponsor, incident.io:

Incidents move fast—so should your response. That’s why we’re building an AI responder that thinks like your team, not a machine. See how we’re doing it, the challenges faced, and what else is on the AI roadmap.

https://www.youtube.com/watch?v=rNpwZPOUhuE

Sometimes, we can harness randomness to improve throughput and reliability.

  Teiva Harsanyi — The Coder Cafe
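
One classic example of the idea (my illustration; the article may cover different techniques) is adding jitter to retry backoff, so clients that failed at the same moment don’t all retry at the same moment:

    import random

    def backoff_with_jitter(attempt: int, base: float = 0.1, cap: float = 30.0) -> float:
        # "full jitter": sleep a random duration in [0, min(cap, base * 2^attempt)],
        # spreading out clients that would otherwise retry in lockstep
        return random.uniform(0, min(cap, base * (2 ** attempt)))

    for attempt in range(5):
        print(f"attempt {attempt}: sleeping {backoff_with_jitter(attempt):.3f}s before retry")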

Not just the “how”, but also the “why”, along with the challenges they found along the way.

  Daniel Paulus and Umut Uzgur — Checkly

It’s a classic problem: how do you detect problems that badly impact a specific set of customers, when the overall percentage affected is tiny?

  Lakshmi Narayan and Joshua Delman — Stripe
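
As a rough sketch of the general shape of a solution (my own toy illustration, not Stripe’s system): slice error rates by customer, so a fully broken segment stands out even when it barely moves the global rate.

    from collections import defaultdict

    # (customer_id, succeeded) pairs from a monitoring window
    events = [("acme", False)] * 50 + [("other", True)] * 9950

    totals = defaultdict(lambda: [0, 0])  # customer -> [errors, requests]
    for customer, ok in events:
        totals[customer][1] += 1
        if not ok:
            totals[customer][0] += 1

    global_rate = sum(e for e, _ in totals.values()) / sum(n for _, n in totals.values())
    print(f"global error rate: {global_rate:.2%}")  # 0.50%: below a typical alert threshold

    for customer, (errors, requests) in totals.items():
        if requests >= 20 and errors / requests > 0.5:
            print(f"{customer}: {errors}/{requests} requests failing")  # acme is 100% broken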

This is the clearest and most concise explanation of the Byzantine Generals Problem that I’ve read.

  Sid — The Scalable Thread

Th[is] article describes some different methods and tools that engineers can use to simulate their clusters and what knowledge they can gain from it, and it presents a case study using SimKube, the Kubernetes simulator developed by Applied Computing Research Labs in 2024.

  David R. Morrison — ACM Queue

An IaC nightmare: when a list went from having IPs to being empty, suddenly the IP block rule was interpreted as “block everything” rather than “block nothing”.

  Jake Cooper — Railway
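
The underlying trap generalizes beyond any one tool: a rule defined over a list can flip meaning entirely when the list becomes empty, depending on how the consumer interprets emptiness. A contrived Python sketch of the two readings (not Railway’s actual config system):

    blocked_ips: list[str] = []  # the list that unexpectedly became empty

    def is_blocked_strict(ip: str) -> bool:
        # "block nothing": an empty match list matches no traffic
        return ip in blocked_ips

    def is_blocked_wildcard(ip: str) -> bool:
        # "block everything": empty is treated as match-all,
        # the dangerous reading from the incident
        return not blocked_ips or ip in blocked_ips

    print(is_blocked_strict("203.0.113.7"))    # False: traffic flows
    print(is_blocked_wildcard("203.0.113.7"))  # True: all traffic blocked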

The incident occurred due to human error and insufficient validation safeguards during a routine abuse remediation for a report about a phishing site hosted on R2.

  Matt Silverlock and Javier Castro — Cloudflare

Along with being blatantly illegal, DOGE’s actions are incredibly risky from a reliability perspective. Thanks, Liz, for putting into words concerns that I also share.

  Liz Fong-Jones — Bulletin of the Atomic Scientists

SRE Weekly Issue #462

A message from our sponsor, incident.io:

On-call shouldn’t feel like a nightmare. With incident.io, you get clear ownership, seamless escalations, and insights that actually help—so you can fix issues fast and get back to what matters. No chaos, just smooth operations.

https://go.incident.io/on-call-as-it-should-be

This article series asks, do you really need ACID consistency?

Well, of course ACID consistency exists – and it is a good thing that it exists. Thus, feel free to call the post title clickbait … ;)

My point here is that it should not exist as a functional requirement.

  Uwe Friedrichsen

OpenAI posted this mini report on their outage on January 30.

  OpenAI

It’s never DNS, except when it’s definitely DNS, such as in the case of this probable DNSSEC misconfiguration.

  Wilson Chua — Manila Bulletin

Do you want to prioritize availability or control?

  Teiva Harsanyi — The Coder Cafe

The amount of attention an incident gets is proportional to the severity of the incident: the greater the impact to the organization, the more attention that post-incident activities will get.

The problem is that the severity of a near-miss incident is zero, but it can still have significant value for learning.

  Lorin Hochstein

This article urges caution in creating alerts that recommend a specific course of action when they fire. It explains why this can be dangerous and suggests alternative methods.

  Fred Hebert — Honeycomb

In this post, I will highlight some crucial Kubernetes best practices. They are from my years of experience with Kubernetes in production. Think of this as the curated “Kubernetes cheat sheet” you wish you had from Day 1.

  Engin Diri — Pulumi

Meta’s profiling system has helped them save thousands of servers’ worth of computing resources, through continuous profiling and centralized symbolization.

  Jordan Rome — Meta

SRE Weekly Issue #461

A message from our sponsor, incident.io:

Effective incident management demands coordination and collaboration to minimize disruptions. This guide by incident.io covers the full incident lifecycle—from preparation to improvement—emphasizing teamwork beyond engineering. By engineers, for engineers.

https://incident.io/guide

Written in 2020 after an AWS outage, this article analyzes dependence on third-party services and the responsibility to understand their reliability.

  Uwe Friedrichsen

When a cache expired, these folks found that their application stampeded the database with expensive queries, so they searched for a solution.

  Punit Sethi
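
One common mitigation, shown here as a sketch of my own rather than the solution from the article, is single-flight request coalescing: on a cache miss, exactly one caller recomputes the value while the rest wait for it.

    import threading

    cache: dict[str, object] = {}
    lock = threading.Lock()
    inflight: dict[str, threading.Event] = {}

    def get(key: str, compute):
        # Single-flight cache read: on a miss, exactly one thread runs the
        # expensive compute(); concurrent callers block until it finishes,
        # instead of all stampeding the database at once.
        while True:
            with lock:
                if key in cache:
                    return cache[key]
                event = inflight.get(key)
                if event is None:
                    # we are the designated computer for this key
                    event = threading.Event()
                    inflight[key] = event
                    break
            event.wait()  # another thread is computing; wait and re-check
        try:
            value = compute()
            with lock:
                cache[key] = value
        finally:
            with lock:
                del inflight[key]
                event.set()
        return value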

When a high-severity incident happens, its associated risks become salient: the incident looms large in our mind, and the fact that it just happened leads us to believe that the risk of a similar incident is very high.

  Lorin Hochstein

These folks landed on a hybrid approach using two vendors, allowing them to avoid sending their entire trace volume to an expensive observability vendor.

  Jakub Sokół — monday

Under heavy load, requests are handled in LIFO order to maximize the chance of successfully completing fresh requests.

LIFO = Last In, First Out

  Teiva Harsanyi
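
Here’s a minimal sketch of the intuition (mine, not Teiva’s code): under sustained overload, the oldest request has often outlived its client’s timeout, so serving newest-first avoids doing work nobody is waiting for.

    from collections import deque

    backlog = deque()

    def next_request_fifo():
        # oldest first: under overload, the client may have timed out
        # by the time we get to it, so the work is wasted
        return backlog.popleft()

    def next_request_lifo():
        # newest first: the freshest requests complete while their
        # clients are still waiting
        return backlog.pop()

    for i in range(3):
        backlog.append(f"req-{i}")
    print(next_request_lifo())  # req-2, the most recent arrival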

More than just a simple feature comparison, this article also presents two use cases and analyzes which tool is best in each case.

  Josson Paul Kalapparambath — DZone

These folks explain why they use Go for everything: application code, infrastructure as code, tooling, and even as a wrapper around Helm charts for Kubernetes.

  Akhilesh Krishnan — Oodle AI

SRE Weekly Issue #460

A message from our sponsor, incident.io:

See how Netflix scaled their incident management with incident.io. By leveraging intuitive tools like Catalog and Workflows, they built a streamlined, scalable process that empowers teams to handle incidents with ease and consistency—even at Netflix’s scale.

https://incident.io/customers/netflix

So I bombed an incident review this week. More specifically, the facilitating.

I love how candid this article is. This kind of story is invaluable for leveling up our own retrospective facilitation skills.

  Will Gallego

It turns out that Google Cloud has a distributed tracing offering, and here’s an example of how to set it up.

  Punit Sethi

This article explains how 8 popular database systems use synchronized clocks. The systems covered include Spanner, DynamoDB, CockroachDB, and others.

  Murat

This article introduces the concept of a hot shard in a distributed system and outlines several strategies for alleviating it.

  Sid
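
One of the standard mitigations, sketched below as my own illustration (the article may present different ones), is salting: split the hot key across N sub-shards on write and fan out on read.

    import random

    NUM_SALTS = 8  # how many sub-shards a hot key is spread across

    def write_key(hot_key: str) -> str:
        # writes scatter across 8 shard keys instead of hammering one
        return f"{hot_key}#{random.randrange(NUM_SALTS)}"

    def read_keys(hot_key: str) -> list[str]:
        # reads fan out to all sub-shards and merge the results
        return [f"{hot_key}#{i}" for i in range(NUM_SALTS)]

    print(write_key("popular-item"))  # e.g. popular-item#3
    print(read_keys("popular-item"))

The trade-off: write hotspots shrink by a factor of N, but every read now costs N lookups.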

Leap seconds can be really dangerous for IT systems! This article explains how the author eased their infrastructure through a leap second by smearing its effect across the preceding day.

  rachelbythebay
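
The arithmetic of a linear 24-hour smear (the general technique; the exact schedule in the article may differ): stretch each second of the preceding day by 1/86400, so no clock ever sees a discontinuity.

    SMEAR_WINDOW = 86_400  # seconds: spread the leap second over a full day

    def smear_offset(seconds_into_window: float) -> float:
        # Fraction of the leap second already applied: ramps linearly
        # from 0.0 at the start of the window to 1.0 at the moment the
        # leap second officially occurs, so no clock ever jumps.
        return min(seconds_into_window, SMEAR_WINDOW) / SMEAR_WINDOW

    # Halfway through the day, clocks are half a second behind true UTC.
    print(smear_offset(43_200))  # 0.5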

This article series revisits the underpinnings of the shift toward microservices, with a critical eye. My favorite bit is the analogy for microservice complexity in part 3.

  Uwe Friedrichsen

Catchpoint is back with their seventh annual SRE report, and you can download the PDF directly without having to register. There are some real gems in here, including my favorite, death by yes.

  Catchpoint
