SRE Weekly Issue #475

I haven’t seen this level of detail in an article on anomaly detection in quite a while. Still, the math is very approachable even if you slept through stats class.

  Ivan Shubin — Booking.com

TL;DR: The Power of Knowledge Overlap in Incident Response

There’s an anecdote in this one that’s really making me think.

  Hamed Silatani — Uptime Labs

One of the criticisms leveled at resilience engineering is that the insights that the field generates aren’t actionable […]

This article argues that we still need the unactionable but good models, otherwise we’ll get actionable but wrong models.

  Lorin Hochstein

Datadog has put a lot of thought and effort into managing their massive Kafka workload. My favorite part of this article was the bit about accidentally zip-bombing themselves with highly compressible data.

  Guillaume Bort — Datadog

This one covers four techniques for rerouting customer traffic after a region failure using AWS’s Route 53… themed after the TV show The Good Place. It’s been quite a while since I watched the show, but I still found the article pretty useful.

  Seth Elliot — Arpio

This article asks what we’re really looking to get by defining an incident severity scale, and proposes an alternative scale based on incident complexity.

  Dan Slimmon

I love this idea of tracking configuration changes as observability data. I’ve been through plenty of incidents in which I wish I had it.

  Yevgeny Pats — CloudQuery

A short and sweet article packed with some useful nuggets. My favorite is the section near the end on timeouts.

  Hemant Burman — Insights

SRE Weekly Issue #474

A message from our sponsor, incident.io:

We’ve just raised $62M at incident.io to build AI agents that resolve incidents with you. See how we’re pioneering a new era of incident management.

https://go.incident.io/blog/incident.io-raises-62m

This is a truly outstanding article about blameless incident analysis! Beyond just “why”, it covers many of the pitfalls that trip people up when they try to enact a blameless culture, including questions about accountability.

  fgj

Here’s a good reminder that resilience in our systems is all about the humans.

  Stuart Rimell

This article outlines WarpStream’s solution to a common problem in systems based on shared storage (like S3): cleaning up objects that are no longer needed, at scale.

  Richard Artoul — WarpStream

I love learning how companies structure their on-call rota. My favorite part of this one is the emphasis on keeping the manager in the rota as a feedback mechanism.

  Laura de Vesine and David Lentz — Datadog

These folks continuously detect drift by running terraform plan and alerting on changes that have no corresponding commit in git.

  Yugandhar Suthari
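
In case it’s handy, here’s a minimal sketch of that kind of drift check, assuming it runs as a cron or CI job. The paths, the grace window, and the shortcut of treating any recent commit touching the Terraform directory as “corresponding” are my own simplifications, not the author’s actual pipeline.

  # Hypothetical drift check: alert when terraform plan reports pending changes
  # and there is no recent commit that could explain them.
  import subprocess

  TERRAFORM_DIR = "infra/"        # hypothetical path to the Terraform config
  COMMIT_WINDOW = "1 day"         # hypothetical grace period for in-flight changes

  def terraform_has_drift() -> bool:
      # -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes pending
      result = subprocess.run(
          ["terraform", "plan", "-detailed-exitcode", "-no-color"],
          cwd=TERRAFORM_DIR, capture_output=True, text=True,
      )
      if result.returncode == 1:
          raise RuntimeError(f"terraform plan failed: {result.stderr}")
      return result.returncode == 2

  def has_recent_commit() -> bool:
      # Any commit touching the Terraform directory within the window?
      log = subprocess.run(
          ["git", "log", f"--since={COMMIT_WINDOW}", "--oneline", "--", TERRAFORM_DIR],
          capture_output=True, text=True, check=True,
      )
      return bool(log.stdout.strip())

  if __name__ == "__main__":
      if terraform_has_drift() and not has_recent_commit():
          # Swap in your real alerting here (pager, chat webhook, etc.)
          print("ALERT: Terraform drift detected with no corresponding commit")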

It’s a troubleshooting story having nothing to do with tech, but the technique used can easily apply to your next incident.

  Paige Cruz

Some examples you may not have thought of that can lead to Terraform drift, along with an exploration of the problems drift can bring.

  Saijal Shrivastava — Razorpay

Railway had an outage this week related to their control plane database, and they shared this write-up.

  Ray Chen — Railway

SRE Weekly Issue #473

A message from our sponsor, incident.io:

We’ve just raised $62M at incident.io to build AI agents that resolve incidents with you. See how we’re pioneering a new era of incident management.

https://go.incident.io/blog/incident.io-raises-62m

In this final installment of the Scaling Nextdoor’s Datastores blog series, we detail how the Core-Services team at Nextdoor solved cache consistency challenges as part of a holistic approach to improve our database and cache scalability and usability.

I really enjoyed this whole series. Thanks, Nextdoor folks!

  Slava Markeyev — Nextdoor

These folks analyzed a non-production incident like it was production, including retrospective analysis and lessons learned. Best part: they share the juicy details with us!

  Joe Mckevitt — Uptime Labs

This one goes over several different models you can use to implement on-call compensation, with pros and cons for each.

  Constant Fischer — PagerDuty

This article shows that MySQL’s CATS algorithm offers only a small performance gain over FIFO once deadlock logging interference is removed.

My jaw dropped involuntarily when I saw the graph after they commented out the logging print statements.

  Bin Wang — DZone

In this article, I’ll walk you through how we implemented chaos engineering across our stack using Chaos Toolkit, Chaos Monkey, and Istio — with hands-on examples for Java and Node.js. If you’re exploring ways to strengthen system resilience, this guide is packed with practical insights you can apply today.

The author does not appear to have a tie to Istio. This article has a ton of code snippets to help you get started.

  Prabhu Chinnasamy — DZone

In this blog, we’ll look at three important facts about serverless reliability that teams often overlook. We’ll explain what they are, what the risks are of not addressing them, and how you can make your serverless applications more fault-tolerant.

  1. Serverless architectures don’t guarantee reliability.
  2. You do have control over serverless reliability.
  3. Serverless reliability practices can benefit all platforms, not just serverless platforms.

  Andre Newman — Gremlin

This Golang debugging story is a really satisfying read.

The heap profiles were very effective at telling us the allocation sites of live objects, but provided no insights into why specific objects were being retained.

  Ella Chao — WarpStream

Zoom had an outage this week when its domain zoom.us was temporarily blocked at the TLD level due to a miscommunication between its registrar and the TLD registry.

  Zoom

SRE Weekly Issue #472

A message from our sponsor, incident.io:

We’ve just raised $62M at incident.io to build AI agents that resolve incidents with you. See how we’re pioneering a new era of incident management.

https://go.incident.io/blog/incident.io-raises-62m

In this part of the Scaling Nextdoor’s Datastores blog series, we will see how the Core-Services team at Nextdoor keeps its cache consistent with database updates and avoids stale writes to the cache.

  Ronak Shah — Nextdoor

Okay, if we’re not supposed to use MTTR, what metrics in incident response are better?

  Chris Evans — incident.io

  This article is published by my sponsor, incident.io, but their sponsorship did not influence its inclusion in this issue.

This reminds me of the Fallacies of Distributed Computing, and it’s equally important to internalize. Disk I/O isn’t guaranteed.

  Phil Eaton

Here’s a great example of how we can learn a ton from near misses. In this airplane incident, a slight change in the normal takeoff sequence resulted in missing a critical step. Even though it was only a near miss, the aviation industry instituted changes to make this kind of problem less likely.

  Mentour Pilot — YouTube

In this second and final post of this little blog series, we will discuss the redundancy fallacy and the 3rd type of coupling we need to consider in the context of remote communication, which is temporal coupling.

  Uwe Friedrichsen

All of our systems have embedded models of the world. What happens when these models are wrong?

  Lorin Hochstein

This article answers this question:

“If we had to choose just three things to sustain a resilient, healthy reliability culture, what would they be?”

with these three things:

  1. Know what matters to your users, and make it really visible
  2. Create Psychological Safety Around Failure
  3. Let incidents update your mental models

  Busra Koken

Execs intruding on incidents can have a disruptive effect, which this article acknowledges with specific examples. It goes on to list some concrete and useful things execs can do to support incident response.

By the way, massive props to the Uptime Labs folks. They created an RSS feed for their blog at my request with a super-fast turnaround. Incredible!

  Hamed Silatani — Uptime Labs

SRE Weekly Issue #471

A message from our sponsor, incident.io:

We’re building an AI agent that investigates incidents with you—diagnosing the problem and even fixing it. Go behind the scenes with the incident.io engineers rethinking what’s possible with AI, one ambitious idea (and bug) at a time.

https://go.incident.io/building-with-ai

The author of this one draws a connection between their two interests of formal methods and resilience engineering, and I’m so here for it.

  Lorin Hochstein

In this part of the Scaling Nextdoor’s Datastores blog series, we’ll explore how the Core-Services team at Nextdoor serializes database data for caching while ensuring forward and backward compatibility between the cache and application code.

  Ronak Shah — Nextdoor

MySQL’s ALTER TABLE INPLACE has limitations and downsides, and INSTANT does too, as explained in this article.

  Shlomi Noach — Planetscale

If you have multiple different types of work in your system, a queue per type of work may be a good choice.

Bonus(?): includes a bathroom-based analogy.

  Marc Brooker
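
To illustrate the point (my own toy example, not from the article): with a dedicated queue and worker per type of work, slow jobs can’t hold up fast ones, and each queue’s depth can be watched and scaled on its own.

  # Toy illustration: one queue (and worker) per type of work, so slow jobs
  # don't block fast ones the way they would in a single shared queue.
  import queue
  import threading
  import time

  def worker(name: str, q: queue.Queue) -> None:
      while True:
          duration, label = q.get()
          time.sleep(duration)          # stand-in for real work
          print(f"[{name}] finished {label}")
          q.task_done()

  fast_queue: queue.Queue = queue.Queue()
  slow_queue: queue.Queue = queue.Queue()
  threading.Thread(target=worker, args=("fast", fast_queue), daemon=True).start()
  threading.Thread(target=worker, args=("slow", slow_queue), daemon=True).start()

  slow_queue.put((2.0, "report generation"))
  fast_queue.put((0.01, "health check"))   # finishes right away, not stuck behind the report
  fast_queue.put((0.01, "another health check"))

  fast_queue.join()
  slow_queue.join()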

One Lambda function per URL path? Or a monolithic function that handles multiple paths? There are benefits and drawbacks to each.

  Yan Cui

Published on April 1.

The truth is, many incidents move faster when there’s executive oversight — a sense of urgency, pressure, and someone repeatedly asking, “What’s the ETA?”

  Chris Evans — incident.io

  This article is published by my sponsor, incident.io, but their sponsorship did not influence its inclusion in this issue.

I’m seeing a lot of echoes of Bainbridge’s Ironies of Automation in this article about AIOps and AI tooling. If AI handles most coding and incidents, how will humans handle the outliers?

  Hamed Silatani — Uptime Labs

I wasn’t able to make it, so I really appreciate this recap. Sounds like SRECon was, unsurprisingly, heavily focused on AI this time around.

  Niall Murphy

A production of Tinker Tinker Tinker, LLC