SRE Weekly Issue #113


Grafana and VictorOps help teams visualize time series metrics alongside incident management. Here’s what you need to know:


The best kind of engineer is one who understands not only their own specialty, but also something about the fields adjacent to it. The empathy this confers lets them work incredibly effectively across the company. For SREs, this is even more important.

[…] many of us are finding that the most valuable skill sets sit at the intersection of two or more disciplines.

Charity Majors — Honeycomb

GitLab held a session about recognizing and preventing burnout at their recent employee summit. They share the best tips in this article, and true to their radically open culture, they also added what they learned to their employee handbook, which is publicly available.

Clement Ho — GitLab

Here’s a post-incident analysis of a Travis CI incident from early last year. Despite a couple of easy targets that could have been labelled the “root cause”, they instead skillfully laid out all of the contributing factors and left it at that.

Travis CI

What indeed? The same thing, just organized differently. There’s a lot of great analysis here about how ops roles can adapt to a serverless infrastructure, and how teams can best make use of ops folks.

Tom McLaughlin — ServerlessOps

Charity Majors wants you to look forward to on-call. This superb write-up of her recent conference talk explains why folks should think of on-call as an enjoyable privilege and how to shape your on-call to get there.

Jennifer Riggins

The Canary Analysis Service is Google’s internal tool that automatically analyzes canary runs and decides whether performance has been negatively impacted. My favorite section is the Lessons Learned.

Štěpán Davidovič with Betsy Beyer — ACM Queue
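The paper describes a far more sophisticated system, but the core question (did the canary's metrics regress relative to the baseline?) can be sketched in a few lines. The function name and tolerance below are illustrative, not taken from the paper:

```python
from statistics import mean

def canary_regressed(baseline_latencies, canary_latencies, tolerance=0.10):
    """Naive canary check: flag the canary if its mean latency exceeds
    the baseline's mean by more than `tolerance` (10% by default).

    A production system like Google's CAS evaluates many metrics with
    proper statistical tests, not a single mean comparison.
    """
    return mean(canary_latencies) > mean(baseline_latencies) * (1 + tolerance)
```

Even this toy version captures the shape of the decision: compare the canary population against a control, with an explicit threshold for "negatively impacted".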


  • Snapchat
  • 123 Reg (hosting provider)
  • Customers lost files added since 123 Reg’s last valid backup, taken in August 2017.
  • partypoker
  • eBay
  • Signal and Telegram (messenger apps)
  • Alexa
    • I missed this one last week — it was apparently due to the AWS outage I reported on.
  • TD Bank
  • Oculus Rift
    • A code-signing certificate expired, rendering some existing VR headsets non-functional. Oculus is issuing a $15 store credit to affected customers.

      Because of the particulars of what expired and how it happened, the company wasn’t able to simply push an update out to users because the expired certificate was blocking Oculus’ standard software update system.

SRE Weekly Issue #112


Are your monitoring and incident management tools integrated? You shouldn’t be monitoring your infrastructure and code in an old-school fashion.


an outage of a provider that we don’t use, directly or indirectly, resulted in our service becoming unavailable.

I don’t think I even need to add anything to that to make you want to read this article.

Fran Garcia — Hosted Graphite

The big story this week is the memcached UDP amplification DDoS method, used to send 1.3 Tbps (!) toward our friends at GitHub. Their description is linked above.

Sam Kottler — GitHub

The internet was alight with related discussions:

An excellent template that you can use as a basis for writing runbooks.

Catie McCaffrey

This author of an upcoming O’Reilly book is looking for small contributions for a crowd-sourced chapter:

In two paragraphs or less, what do you think is the relationship between DevOps and SRE? How are they similar? How are they different? Can both be implemented at every organization? Can the two exist in the same org at the same time? And so on…

David Blank-Edelman

Bandaid started as a reverse proxy that compensated for inefficiencies in our server-side services.

I’m intrigued by the way it handles its queue in last-in first-out order, on the theory that a request that’s been waiting for a long time is likely to be cancelled by its requester.

Dmitry Kopytkov and Patrick Lee — Dropbox
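The LIFO idea is easy to sketch: serve the newest request first, and discard requests that have waited past the point where the client has probably given up. This is a minimal illustration, not Bandaid's implementation; the class name and timeout are made up:

```python
import time
from collections import deque

class LIFORequestQueue:
    """Serve the newest request first: the longer a request has waited,
    the more likely its client has already cancelled it."""

    def __init__(self, timeout_seconds=5.0):
        self.timeout = timeout_seconds
        self._stack = deque()  # append/pop at the right end = LIFO

    def enqueue(self, request):
        self._stack.append((time.monotonic(), request))

    def dequeue(self):
        # Take the most recently enqueued request first.
        while self._stack:
            enqueued_at, request = self._stack.pop()
            # Skip requests that have waited past the client timeout;
            # answering them would likely be wasted work.
            if time.monotonic() - enqueued_at < self.timeout:
                return request
        return None
```

The tradeoff is fairness: under sustained overload, old requests starve, but they were probably doomed anyway, which is exactly the theory behind Bandaid's choice.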

This is an amusing recap of five major outages from the past few years. If you’ve been subscribed for a while, it’ll be review, but I still enjoyed the reminder.

Michael Rabinowitz

This article summarizes a new research paper on “fail-slow” hardware failures. When hardware only kind of fails, it can often have more disastrous consequences than when it stops working outright.

Robin Harris — Storage Bits

This is an awe-inspiring way to make a point about designing systems to be resilient to human error.

If it’s possible for a human to hit the wrong button and set off an entire fireworks display by accident, then maybe the problem isn’t with the human; it’s with that button.

If it’s possible to mix up minutes and fractions of a second like we’ve done deliberately, then maybe the system isn’t clear, or maybe the pre-launch checklist isn’t thorough enough.

Tom Scott

There are some really great ideas in this article about preventing and ameliorating the technical debt that can be inherent in the use of feature flags. Ostensibly the article is about one vendor’s product, but the ideas are broadly applicable.

Adil Aijaz — Split


SRE Weekly Issue #111

I’m trying an experiment this week: I’ve included authors at the bottom of each article.  I feel like it’s only fair to increase exposure for the folks that put in the significant effort necessary to write articles.  It also saves me having to mention names and companies, hopefully leaving more room for useful summaries.

If you like it, great!  If not, please let me know why — reply by email or tweet @SREWeekly.  I feel like this is the right thing to do from the perspective of crediting authors, but I’d like to know if a significant number of you disagree.

Hat-tip to Developer Tools Weekly for the idea.


Gain visibility throughout your entire organization. Visualize time series metrics with VictorOps and Grafana.


Conversations around compensation for on-call. What has worked or not for you? $$ vs PTO. Alerts vs Scheduled vs Actual Time? 1x, 1.5x, or 2x?

The replies to her tweet are pretty interesting and varied.

Lisa Phillips, VP at Fastly
Full disclosure: Fastly is my employer.

This thread is incredibly well phrased, explaining exactly why it’s important for developers to be on call and how to make that not terrible. Bonus content: the thread also branches out into on-call compensation.

if you aren’t supporting your own services, your services are qualitatively worse **and** you are pushing the burden of your own fuckups onto other people, who also have lives and sleep schedules.

Charity Majors — Honeycomb

This week, Blackrock3 Partners posted an excerpt from their book, Incident Management for Operations, that you can read free of charge. If you enjoy it, I highly recommend you sign up for their first-ever open enrollment IMS training course. I know I keep pushing this, but I truly believe that incident response in our industry as a whole will be significantly improved if more people train with these folks.

“On-call doesn’t have to suck” has been a big theme lately, with articles and comments on both sides. Here’s a pile of great advice from my favorite ops heroine.

Charity Majors — Honeycomb

An interesting little debugging story involving unexpected SSL server-side behavior.

Ayende Rahien — RavenDB

In this post, I’m going to take a look at a sample application that uses the Couchbase Server Multi-Cluster Aware (MCA) Java client. This client goes hand-in-hand with Couchbase’s Cross-Data Center Replication (XDCR) capabilities.

Hod Greeley — Couchbase

Tips for how to go about scaling your on-call policy and procedures in order to be fair and humane to engineers.

Emel Dogrusoz — OpsGenie


SRE Weekly Issue #110


Learn how to accelerate your path to full-stack monitoring and alerting in this webinar. Register now:


Facebook goes in-depth on their preparations for New Year’s Day 2018 in their live streaming infrastructure. They used forecasting based on last year and various kinds of load testing to develop the right kind of scaling strategy to meet demand.

Cindy Sridharan went and blew up the internet with an excellent and controversial tweet about on-call. She took to Medium to address all of the discussion that followed, and the result is a pretty excellent article about on-call and work/life balance.

A discussion about how RavenDB handles resource exhaustion, and just how resource exhaustion can be defined and detected.

Honeycomb on using observability tooling to precisely analyze how a change actually affects your users. Did the new feature/bugfix have the effect you expected?

Pusher is obsessed with low latency, and for good reason. When they saw high long-tail latency, they discovered that Haskell’s garbage collector is optimized for throughput, rather than latency.

Facebook’s Project Waterbear seeks to improve resiliency across many of their services through a combination of chaos engineering, cultural changes, and improvements to their common REST framework.

As SREs, we measure, analyze, and provide best practices to help improve the resilience of each application for the application owners and engineering teams.

The tradeoff for more resilient, soft-failing software systems is more complex debugging when things go wrong. As these problems are now more likely to reside deep in application code — which wasn’t the case not long ago — observability tooling is playing catch-up.

OpsGenie analyzes AWS’s new DynamoDB Global Tables, a cross-region multi-master NoSQL datastore. They share the upsides and the pitfalls and include a discussion of how to transition to a global table.

A Netflix manager shares his reasons for still being on-call even though he’s a manager, and they’re pretty great. A lot of it has to do with keeping in tune with what it’s like being a developer on his team, especially with regard to on-call burden and operability.


SRE Weekly Issue #109


Asking five (or more) whys is outdated. So is hunting for a single root cause. Take a look at the case against RCA.


Pusher had a problem: their service was being bombarded by connections from rogue clients, and they needed to enforce limits. This article is highly polished, with beautiful diagrams and well-constructed explanations.

This is the story of how we quelled the biggest threat to our service uptime for several years.
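Pusher's article describes their actual mechanism; as a generic illustration of enforcing per-client limits, the textbook approach is a token bucket (this sketch is not Pusher's code):

```python
import time

class TokenBucket:
    """Simple per-client rate limiter: allow bursts up to `capacity`,
    but hold sustained traffic to `rate` requests per second."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Each rogue client would get its own bucket, so one client's connection storm can't crowd out everyone else.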

Structured logging can bring a lot of uniformity to your infrastructure, as lovingly explained in this article. Snyk explains how that uniformity allows for a standardized troubleshooting methodology that helps them get to the bottom of most problems in minutes.

Instead of focusing on the individual intricacies of each part of our system, we train on the common tools to be used for almost every kind of problem.
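As a minimal sketch of what that uniformity looks like in practice (Python's stdlib here; Snyk's actual stack will differ), every log line becomes one JSON object with named fields instead of interpolated prose:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as one JSON object so every service's
    logs share the same machine-parseable shape."""

    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Merge structured context attached via the `extra` argument.
        payload.update(getattr(record, "context", {}))
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Fields are keys, not prose, so one query works across all services.
logger.info("payment failed", extra={"context": {"order_id": "o-123", "retry": 2}})
```

Because every field is a key, the same query ("show all lines where order_id = X") works across every service, which is what enables a standardized troubleshooting methodology.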

Feature flags are awesome! But there’s a downside: adding lots of conditional handling to your code can significantly increase code complexity, which can in turn decrease maintainability and increase risk.
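To make the complexity concrete, here's a toy example (the flag names are invented): with just two independent flags there are already four code paths to test, and the dead branches linger until someone deletes them.

```python
# Hypothetical flags; names are illustrative, not from the article.
FLAGS = {"new_checkout": True, "fast_search": False}

def flag_on(name):
    return FLAGS.get(name, False)

def render_page(user):
    # Each scattered check doubles the number of code paths.
    if flag_on("new_checkout"):
        checkout = "checkout_v2"
    else:
        checkout = "checkout_v1"  # stale branch once rollout completes
    if flag_on("fast_search"):
        search = "search_optimized"
    else:
        search = "search_classic"
    return checkout, search
```

The debt accrues in the `else` branches: once a flag is fully rolled out, nothing forces anyone to delete the old path.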

Following up on her appearance in the New York Times last week, Charity Majors posted this excellent Twitter thread about the importance of vendor relationship management and generating business value, as any kind of engineer. I’d argue especially as an SRE.

Here’s the latest in Google’s CRE Life Lessons series. Previously, they explained how to build an Escalation Policy, and in this article, they analyze how it would be applied to several fictitious scenarios.

LinkedIn needed a way to test their HDFS cluster against real-world traffic patterns. The existing solutions didn’t meet their needs (for reasons they explain toward the end), so they created Dynamometer.

PagerDuty released a report this week entitled, “The State of IT Work-Life Balance”, which contains the results of their recent survey. This article is an overview, along with some related tidbits about alert fatigue.

Through an anecdote, Baron Schwartz cautions against the use of counter-factuals (“you should have…”) in analyzing the decisions leading up to an outage.

What it says on the tin. This article would make for a great checklist for deploys.


  • Singpass (Singapore ID system)
  • Uber
  • Fortnite
    • Fortnite hit a new peak of 3.4 million concurrent players last Sunday… and that didn’t come without issues!

      They suffered 6 different outages over two days, and they posted this highly-detailed incident analysis just 5 days later. Normally I tend not to include outages for MMO games because they have so many and rarely post in-depth analyses, but this one is worth a read.

  • Binance (cryptocurrency exchange)
  • Google App Engine
  • US stock brokerages
    • The US stock market had a rough week, and so did several brokerage websites as they dealt with the high trading volume.
  • Super Bowl Advertisers
  • Several companies that purchased expensive commercial slots during the Super Bowl (an American sportsball thing, for you folks outside the US) were unable to handle the web traffic they brought in.
  • Super Bowl
    • NBC had a 45-second blackout in their broadcast of the Superb Owl.