SRE Weekly Issue #357

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒.

Rootly automates manual tasks like creating incident channels, Jira tickets, and Zoom rooms; inviting responders; creating status page updates and postmortem timelines; and more. Want to see why companies like Canva and Grammarly love us?

https://rootly.com/demo/

Articles

Panic takes time and energy away from swift incident response, leading to second-guessing, a higher likelihood of mistakes, and analysis paralysis. Here are three tips to minimize it.

  Malcolm Preston — incident.io

A great explanation of why we need to wait for more details on the FAA NOTAM outage. My favorite part is the list of clues as to whether an incident report might be useful: Time, Artifacts, Jargon, and Narrative.

  Thai Wood — Resilience Roundup

Lots of juicy details about a large SRE organization and how they work.

  Ash Patel — SREPath

A deploy accidentally wiped authentication tokens for some internal Cloudflare services, causing an outage for those services.

   Kenny Johnson and Sam Rhea — Cloudflare

eBay thought about adopting “test in production” and eliminating staging, but they determined that their use case really does require a staging environment. They carefully selected and anonymized real production data to use as test cases in staging.

   Senthil Padmanabhan — eBay

This article has a really great section explaining the pitfalls of full system dashboards.

  Boris Cherkasky

The first one is my favorite:

Economic factors will force companies to look for more efficient ways of managing reliability

I’m not sure if that will happen, but it’s an interesting theory.

  Emily Arnott

This author shares what they learned in adapting to running incidents remotely once the pandemic hit.

  Emily Ruppe — Jeli

SRE Weekly Issue #356

Thanks to all of you who took the time to share your ideas about choosing incidents to investigate! I got some great answers, and I’m looking forward to pulling them together into an article.

I decided to give this GPT-3 thing a spin. It turns out that it absolutely can assemble a newsletter with links to the week’s top SRE stories, each with a short description. It even includes authors. The authors are even real people. The URLs, though… well, they look real, but they’re mysteriously all 404s, and the articles don’t actually exist. Guess you’re stuck with me for now!


Articles

This article takes the idea of “internal customers” to its logical conclusion by running the platform team as though it were a startup company.

  Adam Buggia — Sym

This article uses nifty probability formulas to show that blaming an engineer for an incident may well result in diminished reliability and efficiency.

  Dan Slimmon

Here’s a report on the CircleCI security incident at the start of the year. There’s some good stuff in there about not blaming the specific engineer whose device was attacked.

  Rob Zuber — CircleCI

A hot take on how not to measure your incident response process.

  Fred Hebert — Honeycomb
  Full disclosure: Honeycomb is my employer.

eBay’s notification platform team built a fault-tolerant, resilient system by injecting faults at the application level.

  Wei Chen — eBay

This one succinctly sums up why I haven’t covered the NOTAM outage much yet.

If a small mistake was sufficient to take down a complex system, then our systems would be crashing all of the time.

  Lorin Hochstein

Don’t you love when merely running strace fixes the problem?

  Oren Eini

This air accident seems on its face to be a clear-cut story of negligence. There’s far more to it, and the author goes into detail on why blaming the captain can damage air safety industry-wide.

  Admiral Cloudberg

SRE Weekly Issue #355


Articles

I’m trying something new: I’m looking for input from you, dear readers!

This link is a Google Form where I’m asking for ideas that I might turn into a blog post or conference talk. If you’re game, I’d love to hear what you think.

Here’s the panel for this webinar:

  • Vanessa Huerta Granda (Jeli)
  • Emily Ruppe (Jeli)
  • Liz Fong-Jones (Honeycomb)
  • Fred Hebert (Honeycomb)

Honestly, with that set of names, I’d listen even if they were just discussing the weather.
  Full disclosure: Honeycomb, my employer, is mentioned.

This week saw an outage of the NOTAM system, which disseminates important information to aircraft pilots in the US. As a result, all flights in the US were grounded.

There’s not much in the way of interesting detail available yet, but I did see a mention of this air incident, in which NOTAMs played a significant part. Mentour Pilot also covered this one.

  Admiral Cloudberg

In essence, this new reliability is:

  1. The health of your system
  2. Weighted by customer expectations and happiness
  3. Prioritized based on your current capabilities

This article focuses on the sociotechnical aspects of reliability.

  Jim Gochee — The New Stack

Here are some guidelines for what kind of alerting works best for services at various stages of maturity.

  Ali Sattari

The actions we take to avert a potential problem can introduce their own risks.

  Will Gallego

This one’s from the incident.io folks.

  incident.io

I often meet with skepticism when I say that server monitoring systems should page only when a service stops doing its work.

Read on to find out why.

  Dan Slimmon

SRE Weekly Issue #354


Articles

This episode of DisasterCast discusses what happens when attempts to make things safer backfire.

by trying to suppress small problems, we create a reservoir of danger waiting to burst out

  Drew Rae

These images offer a glimpse into the visual patterns that appear in our variables and time-series, and the beauty that emerges from chaos. Some of the images in these galleries appeared during difficult rollouts, and some even during production incidents. All come from graphs generated by Google’s monitoring systems.

  Google

The popular slogan says “test in production”, but what if your business simply doesn’t allow it?

For any scenario where I expect to be causing client impact, I’d rather test in non-production than not test at all, since production is clearly off the table.

  Christina Yakomin — InfoQ

There’s been a trend toward narrating our engineering work on company blogs, without which this newsletter probably wouldn’t exist.

  Jordan Teicher — New York Times

My team recently moved databases from local files in the codebase to an online database.

It didn’t go quite as planned, but they got there in the end.

  Kaustubh Hiware — Mercari

In Product Analytics we wanted to support our colleagues in SRE, so we created a model to predict the monetary costs of incidents affecting our conversion funnel.

  Enrique Hernani Ros — HelloFresh

There’s some interesting detail here about multiple failed UPSes and an accidental voltage mismatch exacerbating the situation.

  Laura Dobberstein — The Register
