SRE Weekly Issue #509

SRE Weekly is back! My partner is doing well, and thanks for all the kind words and well-wishes.

A message from our sponsor, Costory:

Tracking cloud and AI costs across AWS, GCP, and Datadog shouldn’t require three dashboards and a spreadsheet.

Costory correlates cost, usage, and deployment data. Explains what changed and why. Straight to Slack. Terraform setup.

Try it free → https://www.costory.io/lp/no-time-4-finops?utm_source=sre-weekly&utm_medium=newsletter&utm_campaign=&utm_id=no-time

There’s a lot you miss out on if you get an LLM to write your incident review.

incident reviews are fundamentally a socio-technical process, and they do not provide benefit if people don’t engage with them.

  Fischer

I love this concept of reliability debt.

  Spiros Economakis

This one starts with an insightful comparison of two commercial aviation incidents and the crew’s actions. It goes on to draw broader lessons that we can use as SREs.

  Hamed Silatani — Uptime Labs

What happens now that SQL is being written by LLMs? I love the analogy to the advent of ORMs that abstracted away the generation of SQL.

  Tanmay Sinha — Readyset

What specific kind of bugs is AI more likely to generate? Do some categories of bugs show up more often? How severe are they? How is this impacting production environments?

They surveyed 470 codebases and share numbers on the rate of bugs in LLM-generated versus human-written code.

  David Loker — CodeRabbit

This post looks at ten real status page examples from teams that have dealt with outages at scale. Each example highlights what they communicate well, where they set expectations clearly, and how small details reduce confusion during incidents.

  Laura Clayton — UptimeRobot

If you don’t explicitly state your expected level of reliability, your customers will infer one and hold you to it anyway. “Disappoint” them early by telling them what to expect.

  Dave O’Connor

Humans exhibit variation in how we respond to a given situation, and this article argues that this variability is one of our strengths. LLMs also exhibit variability, by design.

  Lorin Hochstein

SRE Weekly Issue #508

SRE Weekly will be going on hiatus for 6 weeks, while I’m on leave caring for my partner after her kidney transplant surgery this week. It’s incredible that the National Kidney Registry’s Paired Exchange program allowed me to donate a kidney to help her even though we don’t have matching blood types!

A message from our sponsor, Costory:

Tired of manually explaining your cloud & LLM bills?
Check our live preview to see how Costory links every cost spike to deployments, infra changes, and usage patterns, and delivers a clean summary straight to Slack.

Explore the demo

What do we miss when we have LLMs write our code for us? This article explains that one thing we can miss out on is building a mental model.

  Shayon Mukherjee

I really love this explanation of the concept of compensation.

Compensation is a very interesting mechanism in software systems because it can keep complex systems alive, but also because it can be a factor in how they quickly and unexpectedly collapse.

  Fred Hebert — Resilience in Software Foundation
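The quoted idea can be sketched in miniature. In this hypothetical Python example (the `FallbackCache` class and backend functions are my own illustration, not from the article), a stale-cache fallback compensates for backend failures, so callers keep seeing success while the backend degrades; when a request arrives with no stale copy to serve, the masked failure surfaces all at once.

```python
class FallbackCache:
    """Hypothetical compensation layer: serves stale results when the
    primary backend fails, masking its degradation from callers."""

    def __init__(self):
        self.cache = {}

    def fetch(self, key, backend):
        try:
            value = backend(key)     # primary path
            self.cache[key] = value  # refresh the stale copy
            return value
        except RuntimeError:
            # Compensation: serve the last known value. Callers see
            # success, so the backend's trouble stays invisible...
            if key in self.cache:
                return self.cache[key]
            # ...until there is nothing left to compensate with, and
            # the failure finally surfaces, abruptly.
            raise

def healthy_backend(key):
    return f"value-for-{key}"

def failing_backend(key):
    raise RuntimeError("backend unavailable")

store = FallbackCache()
store.fetch("a", healthy_backend)  # populates the stale copy
store.fetch("a", failing_backend)  # compensated: stale hit, caller sees success
# store.fetch("b", failing_backend) would raise: nothing left to serve
```

The abrupt collapse is the point: nothing in the caller's experience degrades gradually, which is exactly what makes compensation both valuable and dangerous.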

When you investigate an incident and tell the story about what you found, but no one believes you because there’s no smoking gun or bad actor…

  Lorin Hochstein

To build and maintain reliable systems, organizations must align responsibility with control. This is where the Ownership Trio—Mandate, Knowledge, and Accountability—comes in.

  Spiros Economakis

I love when an article goes through the designs they passed over (and why) before reaching their final design, as in this one.

  Julianne Walker — Tines

If you’re unfamiliar with Docker image lazy loading like I was, this is a great primer on two options, eStargz and SOCI.

  Huong Vuong and Joseph Sahayaraj — Grab

But don’t let MTTR become the thing you’re optimising for. The goal is to build systems and processes where you’re constantly learning and improving, not systems where you’re just really efficient at fighting the same fires over and over.

  Dave O’Connor

I watched a supposedly “resilient” Multi-Region setup completely implode recently. The architecture diagram looked great – active workloads in US-East, cold standby in US-West. But when the provider had a global IAM service degradation, the whole thing became a brick.

  u/NTCTech on Reddit

SRE Weekly Issue #507

A message from our sponsor, incident.io:

incident.io lives inside Slack and Microsoft Teams, breaking down emergencies into actionable steps to resolution. Alerts auto-create channels, ping the right people, log actions in real time, and generate postmortems with full context.
Move fast when you break things and stay organized inside the tool your team already uses every day.

https://fandf.co/4pRFm4d

There’s a lot you can get out of this one even if you don’t happen to be using one of the Helm charts they evaluated. Their evaluation criteria are useful and easy to apply to other charts — and also a great study guide for those new to Kubernetes.

  Prequel

This is the best explanation I’ve seen yet of exactly why SSL certificates are so difficult to get right in production.

  Lorin Hochstein

An article on the importance of incident simulation for training, drawing on experience with simulation-based training from outside our industry.

  Stuart Rimell — Uptime Labs

I especially like the discussion of checklists, since they are often touted as a solution to the attention problem.

  Chris Siebenmann

This is a new product/feature announcement, but it also has a ton of detail on their implementation, and it’s really neat to see how they built cloud provider region failure tolerance into WarpStream.

  Dani Torramilans — WarpStream

It’s interesting to think of money spent on improving reliability as offsetting the cost of responding to incidents. It’s not one-to-one, but there’s an argument to be made here.

  Florian Hoeppner

An explanation of the Nemawashi principle for driving buy-in for your initiatives. This is not specifically SRE-targeted, but we so often find ourselves seeking buy-in for our reliability initiatives.

  Matt Hodgkins

The next time you’re flooded with alerts, ask yourself: Does this metric reflect customer pain, or is it just noise? The answer could change how you approach reliability forever.

  Spiros Economakis

SRE Weekly Issue #506

A message from our sponsor, Costory:

You didn’t sign up to do FinOps.
Costory automatically explains why your cloud costs change, and reports it straight to Slack.
Built for SREs who want to code, not wrestle with spreadsheets.
Now on AWS & GCP Marketplaces.

Start your free trial at costory.io

I didn’t know that some resolvers care about the order of some DNS records in a response, but I’m not surprised. The DNS spec, despite its age and multiple revisions, has a number of ambiguities like this.

  Sebastiaan Neuteboom — Cloudflare

Severity isn’t always the best indicator of the incidents we can learn the most from. What if we rate our incidents on their potential for learning?

  Lorin Hochstein

This one discusses three ways you can lose time in incidents and ideas for what you can do about it.

  Hrishikesh Barua — Uptime Labs

An interesting discussion of a bias: we tend to solve problems by adding things to our systems, and that increases complexity. AI can amplify this bias.

  Uwe Friedrichsen

Ever wondered how OTel auto-instrumentation works? This article explains it in detail (with code examples) for Python, Java, and Go.

  Elizabeth — Observability Real Talk

This article stands out from others about AI SRE agents because it goes into some detail on their method for evaluating whether their agent works. I’d love to see more of the actual evaluation results, and examples of it getting things right vs wrong.

  Daniel Shan and Tristan Ratchford — Datadog

I recently got an error from GitHub saying I’d exceeded a rate limit (when I definitely didn’t), and this article explains why.


  Thomas Kjær Aabo — GitHub

Poor telemetry makes us want to add more telemetry, which can decrease our telemetry quality and make us add more, yikes! How can we fix the feedback loop?

Note for blind or low-vision readers: there’s a pretty important diagram in this one without a caption or alt text.

  Ash Patel

SRE Weekly Issue #505

A message from our sponsor, Hopp:

Paging at 2am? 🚨 Make incident triage feel like you’re at the same keyboard with Hopp.

  • crisp, readable screen-sharing
  • no more “can you zoom in?”
  • click + type together
  • bring the incident bridge into one session

Start pair programming: https://www.gethopp.app/?via=sreweekly

An incident write-up from the archives, and it’s a juicy one. An update to their code caused a crash only after some time had passed, so their automated testing didn’t catch it before they deployed it worldwide.

  Xandr

This article covers an independent review of the Optus outage.

I personally find it astounding that somebody conducting an incident investigation would not delve deeper into how a decision that seems so baffling in hindsight would have made sense in the moment.

  Lorin Hochstein

Cloudflare needed a tool to look for overlapping impact across their many maintenance events in order to avoid unintentionally impairing redundancy.

  Kevin Deems and Michael Hoffmann — Cloudflare

Another great piece on expiration dates. I especially like the discussion of abrupt cliffs as a design choice.

  Chris Siebenmann — University of Toronto

It’s not always easy to see how to automate a given bit of toil, especially when cross-team interactions are involved.

  Thomas A. Limoncelli and Christian Pearce — ACM Queue

How do resilience and fault tolerance relate? Are they synonyms, do they overlap, or does one contain the other?

  Uwe Friedrichsen

After unexpectedly losing their observability vendor, these folks were able to migrate to a new solution within a couple days.

  Karan Abrol, Yating Zhou, Pratyush Verma, Aditya Bhandari, and Sameer Agarwal — Deductive.ai

A great dive into what blameless incident analysis really means.

Blameless also doesn’t mean you stop talking about what people did.

  Busra Koken

A production of Tinker Tinker Tinker, LLC