SRE Weekly Issue #397

A message from our sponsor, FireHydrant:

Incident management platform FireHydrant is combining alerting and incident response in one ring-to-retro tool. Sign up for the early access waitlist and be the first to experience the power of alerting + incident response in one platform at last.
https://firehydrant.com/signals/

The length and complexity of this article hint at the theme that runs throughout: there’s no easy, universal, perfect rollback strategy. Instead, it presents a couple of rollback strategies you can choose from and implement.

  Bob Walker — Octopus Deploy

This article looks at strengthening error management in batch processing programs with automatic safety switches, and at the critical role they play in safeguarding data integrity when technical errors occur.

  Bertrand Florat — DZone
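
As a rough illustration of the kind of safety switch described above (my own sketch, not code from the article), a batch job might track failures and abort before a systemic error can damage the rest of the run:

    # Illustrative only: a simple "safety switch" that halts a batch run
    # once too many records fail, so one systemic problem can't quietly
    # corrupt the whole dataset. The threshold is a placeholder.

    class SafetySwitchTripped(Exception):
        """Raised when the batch exceeds its allowed failure budget."""

    def run_batch(records, process, max_failures=10):
        failures = 0
        for record in records:
            try:
                process(record)
            except Exception:
                failures += 1
                if failures >= max_failures:
                    # Stop immediately instead of plowing ahead with a
                    # likely-systemic error.
                    raise SafetySwitchTripped(
                        f"aborting after {failures} failed records"
                    )
        return failures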

Part of their observability strategy, which they call “shadowing”, is especially nifty.

  Lev Neiman and Jason Fan — DoorDash

It’s interesting that the DB failed in a way that GitHub’s Orchestrator deployment was unable to detect.

  Jakub Oleksy — GitHub

What exactly is a Senior Staff Engineer? While this article is not specifically about Senior Staff SREs, it’s directly applicable, especially as I’ve seen more Staff+ SRE job postings in the past couple of years.

  Alex Ewerlöf

“Blameless” doesn’t mean no names allowed!

Remember—if discussing the actions of a specific person is being done for the sake of better learning, don’t shy away from it.

  incident.io

This series is shaping up to be a great study guide for new SREs.

Each day of this week brings you one step closer to not only acing your SRE interviews but also becoming the SRE who can leverage code & infrastructure to perfect systems reliability.

  Code Reliant

A fascinating and scary concept: a tool for automatically identifying and performing all the changes involved in deprecating an entire product.

  Will Shackleton, Andy Pincombe, and Katriel Cohn-Gordon — Meta

SRE Weekly Issue #396

A message from our sponsor, FireHydrant:

DevOps keeps evolving but alerting tools are stuck in the past. Any modern alerting tool should be built on these four principles: cost-efficiency, service catalog empowerment, easier scheduling and substitutions, and clear distinctions between incidents and alerts.
https://firehydrant.com/blog/the-new-principles-of-incident-alerting-its-time-to-evolve/

Using 3 high-profile incidents from the past year, this article explores how to define SLOs that might catch similar problems, with a special focus on keeping the SLI close to the user experience.

   Adriana Villela and Ana Margarita Medina — The New Stack
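
To make “keeping the SLI close to the user experience” concrete, here’s a minimal, hypothetical request-based availability SLI computed from what users actually saw, rather than from host-level metrics (the thresholds and the 99.9% target are placeholders, not the article’s recommendations):

    # Hypothetical example: an availability SLI derived from user-facing
    # request outcomes, checked against a 99.9% SLO target.

    SLO_TARGET = 0.999

    def availability_sli(requests):
        """requests: list of dicts like {"status": 200, "latency_ms": 87}."""
        good = sum(
            1 for r in requests
            if r["status"] < 500 and r["latency_ms"] < 1000  # "good" as a user sees it
        )
        return good / len(requests)

    def slo_met(requests):
        return availability_sli(requests) >= SLO_TARGET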

Microservices can have some great benefits, but if you want to build with them, you’re going to have to solve a whole pile of new problems.

  Roberto Vitillo

To protect your application against failures, you first need to know what can go wrong. […] the most common failures you will encounter are caused by single points of failure, the network being unreliable, slow processes, and unexpected load.

  Roberto Vitillo

I love how this article keeps things interesting by starting with a fictional (but realistic) story about the dangers of over-alerting before continuing on to give direct advice.

  Adso

I especially enjoy the section on the potential pitfalls and challenges with retries and how you can avoid them.

  CodeReliant
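
One classic pitfall is the synchronized retry storm; the usual mitigation, sketched here in Python as my own illustration rather than the article’s code, is exponential backoff with jitter and a hard cap on attempts:

    import random
    import time

    # Illustrative retry helper: exponential backoff with full jitter,
    # capped attempts, and a ceiling on the sleep time. The parameters
    # are placeholders, not recommendations from the article.

    def retry(operation, max_attempts=5, base_delay=0.1, max_delay=5.0):
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except Exception:
                if attempt == max_attempts:
                    raise  # give up and surface the error
                # Full jitter: sleep a random amount up to the backoff cap,
                # so many clients don't retry in lockstep.
                delay = min(max_delay, base_delay * 2 ** (attempt - 1))
                time.sleep(random.uniform(0, delay))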

This reddit thread is a goldmine, including this gem:

I actively avoid getting involved with software subject matter expertise, because it robs the engineering team of self-reliance, which is itself a reliability issue.

  u/bv8z and others — reddit

There’s a pretty cool “Five Whys”-style analysis that goes past “dev pushed unreviewed code with incomplete tests to production” to the sociotechnical challenges underlying it.

  Tobias Bieniek — crates.io

SRE Weekly Issue #395

A message from our sponsor, FireHydrant:

Incident management platform FireHydrant is combining alerting and incident response in one ring-to-retro tool. Sign up for the early access waitlist and be the first to experience the power of alerting + incident response in one platform at last.
https://firehydrant.com/signals/

This article gives an overview of database consistency models and introduces the PACELC Theorem.

  Roberto Vitillo

A primer on memory and resource leaks, including some lesser-known causes.

  Code Reliant
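
As one small example of the sort of thing that bites people (my own illustration, not taken from the article): an unbounded module-level cache keeps every entry it has ever seen alive for the life of the process.

    # Illustrative leak: a "cache" with no eviction policy. Every distinct
    # key ever looked up stays referenced forever, so memory grows without
    # bound even though no single call site looks wrong.

    _cache = {}

    def expensive_compute(key):
        return key * 2  # stand-in for real work

    def lookup(key):
        if key not in _cache:
            _cache[key] = expensive_compute(key)
        return _cache[key]

    # A bounded alternative: functools.lru_cache evicts the least recently
    # used entries once maxsize is reached.
    from functools import lru_cache

    @lru_cache(maxsize=1024)
    def lookup_bounded(key):
        return expensive_compute(key)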

How can you troubleshoot a broken pod when it’s built FROM scratch and you can’t even run a shell in it?

  Mike Terhar
  Full disclosure: Honeycomb is my employer.

This article explains why reliability isn’t just a one-off project that you can bolt on and move on.

  Gavin Cahill — Gremlin

DoorDash wanted consistent observability across their infrastructure that didn’t depend on instrumenting each application. To solve this, they developed BPFAgent, and this article explains how.

  Patrick Rogers — DoorDash

Mean time to innocence is the average elapsed time between when a system problem is detected and any given team’s ability to say the team or part of its system is not the root cause of the problem.

This article, of course, is about not having a culture like that.

  John Burke — TechTarget

It was the DB — more specifically, it was a DB migration with unintended locking.

  Casey Huang — Pulumi

The incident stemmed from a control plane change that worked in some regions but caused OOMs in others.

  Google

SRE Weekly Issue #394

A warm welcome to my new sponsor, FireHydrant!

A message from our sponsor, FireHydrant:

The 2023 DORA report has two conclusions with big impacts on incident management: incremental steps matter, and good culture contributes to performance. Dig into both topics and explore ideas for how to start making incremental improvements of your own.
https://firehydrant.com/ebook/dora-2023-incident-management/

This article gives an example checklist for a database version upgrade in RDS and explains why checklists can be so useful for changes like this.

  Nick Janetakis

The distinction in this article is between responding at all and responding correctly. Different techniques solve for availability vs reliability.

  incident.io

Latency and throughput are inextricably linked in TCP, and this article explains why with a primer on congestion windows and handshakes.

  Roberto Vitillo
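
The core relationship: a connection can only have about one window of unacknowledged data in flight per round trip, so throughput is roughly bounded by window size divided by RTT. A quick back-of-the-envelope calculation (mine, not the article’s):

    # Rough upper bound on single-connection TCP throughput:
    #   throughput <= window_size / round_trip_time
    # Real connections are also shaped by slow start, loss, and buffering,
    # so this is only an illustrative ceiling.

    def max_throughput_mbps(window_bytes, rtt_ms):
        return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

    # A 64 KiB window over a 50 ms round trip tops out around 10 Mbps,
    # no matter how fat the underlying pipe is.
    print(max_throughput_mbps(64 * 1024, 50))   # ~10.5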

Tail latency has a huge impact on throughput and on the overall user experience. Measuring average latency just won’t cut it.

  Roberto Vitillo
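
To see why, compare the mean with a high percentile on a skewed sample (an illustrative calculation, not from the article): a handful of slow requests barely nudge the average while completely defining the tail.

    # Illustrative: 1% of requests being very slow barely moves the mean
    # but dominates the p99, which is what a meaningful chunk of users feel.

    def percentile(samples, p):
        ordered = sorted(samples)
        index = min(len(ordered) - 1, int(len(ordered) * p))
        return ordered[index]

    latencies_ms = [20] * 990 + [2000] * 10

    mean = sum(latencies_ms) / len(latencies_ms)
    p99 = percentile(latencies_ms, 0.99)

    print(f"mean: {mean:.0f} ms, p99: {p99} ms")   # mean ~40 ms, p99 = 2000 ms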

Is it really wrong though? Is it?

  Adam Gordon Bell — Earthly

I’ve shared the FAA’s infographic of the Dirty Dozen here previously, but here’s a more in-depth look at the first six items.

  Dr. Omar Memon — Simple Flying

It’s often necessary to go through far more than five whys to understand what’s really going on in a sociotechnical system.

  rachelbythebay

I found the bit about the AWS Incident/Communication Manager on-call role pretty interesting.

  Prathamesh Sonpatki — SRE Stories

SRE Weekly Issue #393

A message from our sponsor, Rootly:

Rootly is proud to have been recognized by G2 as a High Performer and Enterprise Leader in Incident Management for the sixth consecutive quarter! In total, we received nine G2 awards in the Summer Report. As a thank-you to our community, we’re giving away some awesome Rootly swag. Read our CEO’s blog post and pick up some free swag here:
https://rootly.com/blog/celebrating-our-nine-new-g2-awards

This repo contains a path to learn SRE, in the form of a list of concepts to familiarize oneself with.

  Teiva Harsanyi

How can we justify the (sometimes significant) expense of building observability into our systems?

  Nočnica Mellifera — SigNoz

It was DNS. Cloudflare’s 1.1.1.1 recursive DNS service failed this week, stemming from a failure to parse the new ZONEMD record type.

  Ólafur Guðmundsson — Cloudflare

Rather than just dry theory, this article helps you understand what the CAP theorem means in practice as you choose a data store.

Note: this link was 504ing at time of publishing, so here’s the archive.org copy.

  Bala Kalavala — Open Source For U

A “blameless” culture can get in the way if it means you’re not allowed to make any mention of who was at the pointy-end of your system when things blew up.

  incident.io

In this post, we will share how we formalized the LinkedIn Business Continuity & Resilience Program, how this new program helped increase our customers’ confidence in our operations, and the lessons that we learned as we attained ISO 22301 certification.

  Chau Vu — LinkedIn

This is the start of a 6-article series, with each article covering one week along a path to prepare for SRE interviews.

We’ll spend each week focusing on building up your expertise in the key areas SREs need to know, like automation, monitoring, incident response, etc.

  Code Reliant

Beyond the CAP theorem, what actually happens during a partition?

“if there is a partition (P), how does the system trade off availability and consistency (A and C); else (E), when the system is running normally in the absence of partitions, how does the system trade off latency and consistency (L and C)?” [Daniel J. Abadi]

  Lohith Chittineni
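
If it helps to see the branch structure spelled out, here’s a trivial Python restatement of that trade-off (my own paraphrase of the quote, not anything from the article):

    # A toy restatement of PACELC: which trade-off is in play depends on
    # whether the system is currently partitioned. Illustrative only.

    def pacelc_tradeoff(partitioned: bool) -> str:
        if partitioned:
            # P -> choose between Availability and Consistency
            return "availability vs consistency"
        # E(lse) -> choose between Latency and Consistency
        return "latency vs consistency"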

A production of Tinker Tinker Tinker, LLC