General

SRE Weekly Issue #287

A message from our sponsor, StackHawk:

Trying to figure out how to keep your APIs secure? You’re not the only one. See how DataRobot is automating API security testing with StackHawk.
https://sthwk.com/DataRobot

Articles

Lots of details about how Slack does incident response in this one.

Stephen Whitworth — incident.io

This list also gives an interesting insight into the way this company does SRE.

Mayank Gupta and Merlyn Shelley — Squadcast

Oh BGP, you rascally little routing protocol.

Alessandro Improta and Luca Sani — Catchpoint

A comprehensive definition of SREs and Site Reliability Engineering, including what SREs do and what makes SREs different from other roles.

The article covers various facets of SRE and acknowledges that SREs can perform many roles.

JJ Tang — Rootly

Another really excellent air accident story with lots of great talk about mental models and confirmation bias. The crew saw lots of disparate indications that each didn’t point to anything in particular and each wasn’t a huge problem on its own. That, coupled with confirmation bias, helped them miss what might seem obvious in hindsight.

Mentour Pilot

Outages

SRE Weekly Issue #286

A message from our sponsor, StackHawk:

Trying to scale AppSec across engineering is no joke. Check out the 3 main reasons developers struggle with AppSec and how to make it better.
https://sthwk.com/3-reasons

Articles

This is a review of Marianne Bellotti’s Kill It With Fire, a book about modernizing legacy systems. It focuses heavily on operational concepts and “the system around the system”, with a heavy SRE influence.

Laura Nolan — ;login:

Originally drafted in 2016, this blog post is even more relevant now. Beyond just the “why”, it has several ideas for interview questions to get you started.

Charity Majors

Tell a good story, and you can make things happen.

As SREs, we often know what needs to be done, but convincing others is a hard-won skill.

Lorin Hochstein

In this video report of a commercial aviation accident, there’s a neat discussion of resiliency toward the end. There were several other layers of protection that (probably) would have caught and prevented this incident if the A320 captain hadn’t intervened. And even though no accident occurred, there was still a “near miss” investigation.

Mentour Pilot

Although conversation about observability often ignores SREs, SREs have a central role to play in observability success.

Quentin Rousseau — Rootly

In a microservice architecture, having retries several levels deep can be a recipe for nastiness.

Oren Eini — RavenDB
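
A minimal sketch (not from the article) of why stacked retries get nasty: if every hop makes r attempts and the call chain is d services deep, one user request can fan out into r^d calls against the bottom service when everything is failing.

```python
def calls_at_bottom(attempts_per_hop: int, depth: int) -> int:
    """Worst-case requests hitting the deepest service when every
    hop retries independently and all attempts are failing."""
    return attempts_per_hop ** depth

# 3 attempts per hop (1 try + 2 retries) across 4 service hops:
# a single user request becomes up to 81 calls to the backing store.
print(calls_at_bottom(3, 4))  # 81
```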

This report has some detail on two major incidents experienced by GitHub last month.

Scott Sanders — GitHub

Outages

SRE Weekly Issue #285

A message from our sponsor, StackHawk:

Check out the latest from StackHawk’s Chief Security Officer, Scott Gerlach, on why security should be part of building software, and how StackHawk helps teams catch vulns before prod.
https://sthwk.com/cloudnative

Articles

What’s so great about this incident write-up is the way that entrenched mental models hampered the incident response. There’s so much to learn here.

Ray Ashman — Mailchimp

The parallels between this and the Mailchimp article are striking.

Will Gallego

This includes a review of the four golden signals and presents three areas to go further.

JJ Tang — Rootly
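
As background (not from the article itself): the four golden signals are latency, traffic, errors, and saturation. A toy sketch of computing them over a window of request records; the field names and capacity figures here are hypothetical.

```python
# Toy window of request records; field names are assumptions for illustration.
requests = [
    {"latency_ms": 12, "status": 200},
    {"latency_ms": 250, "status": 500},
    {"latency_ms": 30, "status": 200},
]
window_seconds = 60
in_flight, capacity = 40, 100  # saturation inputs, assumed

traffic = len(requests) / window_seconds                          # req/s
errors = sum(r["status"] >= 500 for r in requests) / len(requests)
latency_avg = sum(r["latency_ms"] for r in requests) / len(requests)
saturation = in_flight / capacity

print(traffic, errors, round(latency_avg, 1), saturation)
```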

This one thoughtfully discusses why “root cause” is a flawed concept, approaching the idea from multiple directions.

Lorin Hochstein

Check it out, a new SRE conference! This one’s virtual and the CFP is open until October 1.

Robert Barron — IBM

To be clear, this article is about static dashboards that just contain pre-set graphs of specific metrics.

every dashboard is an answer to some long-forgotten question

Charity Majors

Public incident posts give us useful insight into how companies analyze their incidents, but it’s important to remember that they’re almost never the same as internal incident write-ups.

John Allspaw — Adaptive Capacity Labs

In this incident from July 7, front-line routing hosts exceeded their file descriptor limits, causing requests to be delayed and dropped.

Heroku
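
A hedged sketch of the failure mode, not Heroku's actual tooling: on Unix systems you can read a process's file descriptor limits with the standard-library resource module, and alerting well below the soft limit buys response time before accept()/open() calls start failing with EMFILE. The threshold here is an assumption.

```python
import resource

# Current soft/hard limits on open file descriptors for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"fd soft limit: {soft}, hard limit: {hard}")

WARN_THRESHOLD = 0.8  # assumed alerting threshold, not from the incident report

def near_fd_limit(open_fds: int, soft_limit: int = soft) -> bool:
    """True when fd usage is close enough to the soft limit to alert on."""
    return open_fds >= WARN_THRESHOLD * soft_limit
```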

.io, assigned to the British Indian Ocean Territory, is almost exclusively used by annoying startups for content completely unrelated to the islands.

Remember, it’s all fun and games until the random country you’ve attached your business to has an outage in their TLD DNS infrastructure.

Jan Schaumann

If you’re curious, like I was, about just what a columnar data store is, this article is a good introduction.

Alex Vondrak — Honeycomb
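
A toy illustration of the basic idea (the article goes much deeper): a row store keeps each record together, while a column store keeps each field contiguous, so an aggregate over one field scans only that field's values.

```python
# Row-oriented: each record stored together; reading one field touches all.
rows = [
    {"ts": 1, "service": "api", "duration_ms": 12},
    {"ts": 2, "service": "web", "duration_ms": 48},
    {"ts": 3, "service": "api", "duration_ms": 7},
]

# Column-oriented: each field stored contiguously; an aggregate over one
# column only scans that column's values.
columns = {
    "ts": [1, 2, 3],
    "service": ["api", "web", "api"],
    "duration_ms": [12, 48, 7],
}

avg_duration = sum(columns["duration_ms"]) / len(columns["duration_ms"])
print(avg_duration)
```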

Outages

SRE Weekly Issue #284

Like last week, I prepared this week’s issue in advance, so no Outages section.  Have a great week!

A message from our sponsor, StackHawk:

Trying to automate application and API security testing? See how StackHawk and Burp Suite Enterprise stack up:
https://sthwk.com/burp-enterprise

Articles

SoundCloud is very clear on the fact that they are not at Google scale. It’s interesting to see how they apply SRE principles at their scale.

Björn “Beorn” Rabenstein — SoundCloud

Here’s why Target set up their ELK stack, and how they used it to troubleshoot a problem in Elasticsearch itself.

Dan Getzke — Target

A key point in this article is that calculating your error budget as just “100% – SLO” goes about things backward.

Adam Hammond — Squadcast
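
Whatever direction you approach it from, the basic arithmetic of an error budget looks like this; a quick sketch (mine, not the article's) converting an availability SLO into allowed bad minutes over a window.

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed 'bad' minutes in the window for a given availability SLO."""
    total_minutes = window_days * 24 * 60
    return (1 - slo) * total_minutes

# A 99.9% SLO over 30 days leaves roughly 43.2 minutes of budget.
print(round(error_budget_minutes(0.999), 1))
```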

They periodically scale up their systems just to test and be sure they’ll be ready for big events like Black Friday / Cyber Monday.

Kathryn Tang — Shopify

In this post, we’ll focus on service ownership. Why is service ownership important? How should teams self-organize to achieve it? Where’s the best place to start?

Cortex

This fun troubleshooting story hinges around the internal details of how PostgreSQL’s sequences work.

Pete Hamilton — incident.io
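
The article's specifics aren't reproduced here, but one relevant piece of background is that nextval() on a PostgreSQL sequence is never rolled back, so aborted transactions leave gaps in the IDs. A toy Python model of that behavior (the class and names are mine, purely illustrative):

```python
class Sequence:
    """Toy model of a PostgreSQL sequence: nextval() advances even if
    the surrounding transaction later aborts, so IDs can have gaps."""

    def __init__(self) -> None:
        self._value = 0

    def nextval(self) -> int:
        self._value += 1
        return self._value


seq = Sequence()
committed = []
for attempt in range(1, 6):
    val = seq.nextval()
    if attempt == 3:   # this "transaction" aborts...
        continue       # ...but the sequence does not roll back
    committed.append(val)

print(committed)  # a gap where the aborted transaction was
```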

SRE Weekly Issue #283

I’m on vacation enjoying the sunny beaches in Maine with my family, so I prepared this week’s issue in advance.  No outages section, save for one big one I noticed due to direct personal experience.  See you all next week!

A message from our sponsor, StackHawk:

StackHawk is now integrated with GitHub Code Scanning! Engineers can run automated dynamic application and API security testing when they check in code, with results available directly in GitHub.
https://sthwk.com/GitHub-CodeScanning

Articles

We needed a way to deploy our new service seamlessly, and to roll back that deploy should something go wrong. Ultimately, many, many things did go wrong, and every bit of failure tolerance put into the system proved to be worth its weight in gold because none of this was visible to customers.

Geoffrey Plouviez — Cloudflare

I especially like the idea of tailoring retrospective documents to disparate audiences — you may have more than you realize.

Emily Arnott — Blameless

An analysis of two incidents from the venerable John Allspaw.  These are from 2012 back when he was at Etsy, and yet there’s still a ton we can learn now by reading them.

John Allspaw — Etsy

An account of how Gojek responds to production issues, and why the RCA is a critical part of the process.

Sooraj Rajmohan — Gojek

Type carefully… or rather, design resilient systems.

JJ Tang — Rootly

Requiring development teams to fully own their services can lead to siloing and redundancy. Heroku works to ameliorate that by embedding SREs in development teams.

Johnny Boursiquot — Salesforce (presented at QCon)

I’ve shared some articles here suggesting doing away with incident metrics like MTTR entirely. This author says that they are useful, but the numbers must be properly contextualized.

Vanessa Huerta Granda — Learning From Incidents

Everything could be fine, or we could be failing to report or missing problems altogether — we’re flying blind.

Chris Evans — incident.io

Outages

A production of Tinker Tinker Tinker, LLC