
SRE Weekly Issue #290

A message from our sponsor, Rootly:

Manage incidents directly from Slack with Rootly 🚒. Automate manual admin tasks like creating the incident channel, Jira ticket, and Zoom call, paging the right team, assembling the postmortem timeline, setting up reminders, and more. Book a demo:
https://rootly.io/?utm_source=sreweekly

Articles

Despite carefully testing how they would handle this week’s expiration of the root CA that cross-signed Let’s Encrypt’s CA certificate, the RavenDB folks had an outage. The reason? Poor behavior in OpenSSL. See the next article for a deeper explanation of what went wrong with OpenSSL.

Oren Eini — RavenDB

This article explains why some versions of OpenSSL are unable to validate certificates issued by Let’s Encrypt now, even though the certificates should be considered valid.
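
If you want a quick way to check whether your own stack is affected, here’s a small sketch (mine, not from the article) that attempts a TLS handshake against Let’s Encrypt’s ISRG Root X1 test endpoint (assuming that endpoint is still up):

import socket
import ssl

HOST = "valid-isrgrootx1.letsencrypt.org"  # Let's Encrypt's own test site

ctx = ssl.create_default_context()
try:
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("chain verified OK over", tls.version())
except ssl.SSLCertVerificationError as err:
    # Affected OpenSSL builds stop at the expired cross-signed root instead of
    # building the alternate path to ISRG Root X1.
    print("verification failed:", err)

If this fails on a machine with an otherwise up-to-date trust store, the article explains which OpenSSL versions are to blame and why.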

Ryan Sleevi

This says it all:

It turns out that the path to safety isn’t increased complexity.

Matt Asay — TechRepublic

The thrust of this article is that reliability applies to and should matter to the entire company, not just engineering. I really like the term “pitchfork alerting”.

Robert Ross — FireHydrant

Lesson learned: always make your application server’s timeout longer than your reverse proxy’s.
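
To make the failure mode concrete, here’s a toy sketch (mine, not from the article): the “reverse proxy” below gives up after half a second while the “app server” takes two, so the proxy can answer with a clean 504 instead of a connection severed mid-request:

import socket
import threading
import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(2)  # the app server's own timeout budget is longer than this
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"done")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

try:
    # the "proxy" times out first, well before the app server finishes
    urllib.request.urlopen(
        f"http://127.0.0.1:{server.server_address[1]}/", timeout=0.5
    )
except (socket.timeout, urllib.error.URLError):
    print("proxy timed out first -> return a clean 504 to the client")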

Ivan Velichko

Who deploys the deploy tool? The deploy tool, obviously — unless it’s down.

Lorin Hochstein

Their approach: group tables into “schema domains”, make sure that queries don’t span schema domains, and then move a schema domain to its own separate database cluster.
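
As a toy illustration of the idea (not GitHub’s actual tooling), a check like the one below could map each table to a hypothetical schema domain and flag queries that straddle more than one:

import re

# hypothetical table-to-domain assignments
SCHEMA_DOMAINS = {
    "repositories": "repos",
    "issues": "repos",
    "users": "identity",
    "oauth_tokens": "identity",
}

def domains_touched(sql: str) -> set:
    """Return the set of schema domains referenced by a query."""
    tables = re.findall(r"(?:from|join)\s+`?(\w+)`?", sql, flags=re.IGNORECASE)
    return {SCHEMA_DOMAINS[t] for t in tables if t in SCHEMA_DOMAINS}

query = "SELECT * FROM issues JOIN users ON issues.author_id = users.id"
if len(domains_touched(query)) > 1:
    print("cross-domain query: these tables can't be split onto separate clusters")

Once no query spans two domains, each domain’s tables can move to their own cluster independently.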

Thomas Maurer — GitHub

Groot is about helping figure out what’s wrong during an incident, not about analyzing an incident after the fact. I totally get why they need this tool, since they have over 5000 microservices!

Hanzhang Wang — eBay

SRE is a broad, overarching responsibility that needs a multitude of role considerations to pull off properly.

Ash P — Cruform

Outages

  • Heroku
    • Heroku had a major outage (also this one) that coincided with an Amazon EBS failure in a single availability zone in us-east-1. Customers of Heroku such as Dead Man’s Snitch were impacted.
  • Slack
    • Slack had a big disruption related to DNSSEC. Here’s an interesting analysis of what may have gone wrong (link).
  • Let’s Encrypt
    • Let’s Encrypt saw heavy traffic as everyone clamored to renew their certificates, causing certificate issuance to slow down.
  • Microsoft 365
  • Apple’s “Find My” service
  • Signal
  • Xero
    • This one coincided with the same Amazon EBS outage mentioned above. Xero also had another outage on October 1.

SRE Weekly Issue #289

A message from our sponsor, StackHawk:

Semgrep and StackHawk are showing you what’s new with automated security testing on September 30. Grab your spot:
https://sthwk.com/whats-new-webinar

Articles

Here are some things that make SREs a unique breed in software work:

The one about Scrum caught my eye, and I followed the links through to the Stack Overflow post about SRE and Scrum.

Ash P — Cruform

An in-depth explainer on the Linux page cache, full of details and experiments.
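
If you want to poke at it yourself while reading, here’s a tiny sketch (mine, not from the article) that reads the current page cache size from /proc/meminfo on a Linux box:

# how much of RAM is currently acting as page cache
cached_kb = 0
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("Cached:"):
            cached_kb = int(line.split()[1])  # reported in kB
            break

print(f"page cache currently holds about {cached_kb / 1024:.0f} MiB")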

Viacheslav Biriukov

There’s some great advice in this reddit thread… and maybe some tongue-in-cheek advice too.

Take production down the first day they give access — then it’s nothing but up from there!

Various — reddit

Using two real-world case studies, this article explains how developer self-service can go wrong, and then discusses how to avoid these pitfalls.

Kaspar von Grünberg — humanitec

What a great idea! I found it especially interesting that only 34% of SRE job postings mention defining SLIs/SLOs/error budgets.

Pruthvi — Spike.sh

For the first time, we’ve created the State of Digital Operations Report which is based on PagerDuty platform data.
[…]
we will walk through some of these findings and share 10 questions teams can ask themselves to improve their incident response.

Hannah Culver — PagerDuty

Incident response so often gets mired in assumptions that need to be re-evaluated. This article uses an incident as a case study.

Lawrence Jones — incident.io

This one lays out clear definitions of SRE and DevOps and compares and contrasts them.

Mateus Gurgel — Rootly

This week, Salesforce released Merlion, a Python library for time series machine learning and anomaly detection. Linked is an in-depth research paper on Merlion, explaining its theory of operation and experimental results.
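
Here’s a quick taste, loosely paraphrased from the project’s quick-start (class and method names are from memory and may have shifted between versions):

import numpy as np
import pandas as pd
from merlion.models.defaults import DefaultDetector, DefaultDetectorConfig
from merlion.utils import TimeSeries

# a synthetic hourly metric with one injected spike
idx = pd.date_range("2021-01-01", periods=500, freq="H")
values = np.random.default_rng(0).normal(100, 5, len(idx))
values[400] = 250
df = pd.DataFrame({"latency_ms": values}, index=idx)

train = TimeSeries.from_pd(df[:350])
test = TimeSeries.from_pd(df[350:])

model = DefaultDetector(DefaultDetectorConfig())
model.train(train_data=train)
anomaly_scores = model.get_anomaly_label(time_series=test)
print(anomaly_scores.to_pd().head())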

Bhatnagar et al. — Salesforce

Outages

SRE Weekly Issue #288

A message from our sponsor, StackHawk:

Want to see what’s new with automated security tooling? Tune in on September 30 to see how StackHawk and Semgrep are making it possible to embed security testing in CI/CD.
https://sthwk.com/whats-new-webinar

Articles

Faced with a difficult hiring market for SREs, they embarked on a well-designed, carefully thought out program to hire and train entry-level folks as SREs — and it worked!

Thomas Betts — InfoQ

No matter how good your tooling is, how experienced you are, or how much you’ve prepared, incidents can still be hard.

Five people share about what they find hardest during incident response.

Chris Evans — incident.io

This one has a lot of ideas about how to guide developers toward full ownership of their services in production.

Ambassador

In this post, I will cover the following modes of system resilience:

  • Adaptive Response
  • Superior Monitoring
  • Coordinated Resilience
  • Heterogenous Systems
  • Dynamic Repositioning
  • Requisite Availability

Ash P — Cruform

Root cause of success: unpatched security vulnerability

TMW a security vulnerability allows you to break into your infrastructure, averting disaster during an incident.

Lorin Hochstein, with incident story by Eric Dobbs

A migration didn’t go as planned, and customer traffic lost its way.

Heroku

I’m a big believer in human-in-the-loop automation. My favorite part of this article was this:

A further problem is that full automation — which aims to take the human out of the picture — requires a complete, nuanced understanding of a system and all potential outcomes, paradoxically resulting in heightened system complexity.

Tina Huang — Transposit

Outages

SRE Weekly Issue #287

A message from our sponsor, StackHawk:

Trying to figure out how to keep your APIs secure? You’re not the only one. See how DataRobot is automating API security testing with StackHawk.
https://sthwk.com/DataRobot

Articles

Lots of details about how Slack does incident response in this one.

Stephen Whitworth — incident.io

This list also gives an interesting insight into the way this company does SRE.

Mayank Gupta and Merlyn Shelley — Squadcast

Oh BGP, you rascally little routing protocol.

Alessandro Improta and Luca Sani — Catchpoint

A comprehensive definition of SREs and Site Reliability Engineering, including what SREs do and what makes SREs different from other roles.

The article covers various facets of SRE and acknowledges that SREs can perform many roles.

JJ Tang — Rootly

Another really excellent air accident story with lots of great talk about mental models and confirmation bias. The crew saw lots of disparate indications that each didn’t point to anything in particular and each wasn’t a huge problem on its own. That, coupled with confirmation bias, helped them miss what might seem obvious in hindsight.

Mentour Pilot

Outages

SRE Weekly Issue #286

A message from our sponsor, StackHawk:

Trying to scale AppSec across engineering is no joke. Check out the 3 main reasons developers struggle with AppSec and how to make it better.
https://sthwk.com/3-reasons

Articles

This is a review of Marianne Bellotti’s Kill It With Fire, a book about modernizing legacy systems. It focuses heavily on operational concepts and “the system around the system”, with a strong SRE influence.

Laura Nolan — ;login:

Originally drafted in 2016, this blog post is even more relevant now. Beyond just the “why”, it has several ideas for interview questions to get you started.

Charity Majors

Tell a good story, and you can make things happen.

As SREs, we often know what needs to be done, but convincing others is a hard-won skill.

Lorin Hochstein

In this video report of a commercial aviation accident, there’s a neat discussion of resiliency toward the end. There were several other layers of protection that (probably) would have caught and prevented this incident if the A320 captain hadn’t intervened. And even though no accident occurred, there was still a “near miss” investigation.

Mentour Pilot

Although conversation about observability often ignores SREs, SREs have a central role to play in observability success.

Quentin Rousseau — Rootly

In a microservice architecture, having retries several levels deep can be a recipe for nastiness.
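
A toy sketch of why (mine, not from the article): with three tries at each of four layers, a single user request can hammer the failing backend 81 times:

def call(layer: int, depth: int, counter: list, retries: int = 3) -> None:
    """Each layer retries its downstream call; only the deepest layer does real work."""
    for _ in range(retries):
        if layer == depth:
            counter[0] += 1  # the request reaches the failing backend and fails again
        else:
            call(layer + 1, depth, counter, retries)

counter = [0]
call(1, 4, counter)  # 4 layers deep, 3 attempts per layer
print(counter[0])    # 81 attempts against the backend for a single user request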

Oren Eini — RavenDB

This report has some detail on two major incidents experienced by GitHub last month.

Scott Sanders — GitHub

Outages

A production of Tinker Tinker Tinker, LLC