
SRE Weekly Issue #53

SPONSOR MESSAGE

The “2016/17 State of On-Call” report from VictorOps is now available to download. Learn what 800+ respondents have to say about life on-call, and steps they’re taking to make it better. Get your free copy here: https://victorops.com/state-of-on-call

Articles

Without explicit limits, things fail in unexpected and unpredictable ways. Remember, the limits exist; they’re just hidden.

AWS gives us this in-depth explanation of their use of shuffle sharding in the Route 53 service. This is especially interesting given the Dyn DDoS attack a couple of months ago.

How does container networking work? Julia Evans points her curious mind toward this question and shares what she learned.

[…] it’s important to understand what’s going on behind the scenes, so that if something goes wrong I can debug it and fix it.

More on the subject of percentiles and incorrect math this week from Circonus. The SLA calculation stuff is especially on point.

And speaking of SLAs, here’s an excellent article on how to design and adopt an SLA in your product or service.
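
As a quick back-of-the-envelope illustration (my own numbers, not from either article), here’s the arithmetic that turns an availability target into a concrete downtime budget:

# Rough downtime budgets for a 30-day month at a few common availability targets.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

for target in (0.99, 0.999, 0.9999):
    budget = (1 - target) * MINUTES_PER_MONTH
    print(f"{target:.2%} uptime allows about {budget:.1f} minutes of downtime per month")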

A summary of a few notable Systems We Love talks. I’m so jealous of all of you folks that got to go!

PagerDuty added #OnCallSelfie support to their app. Amusingly, that first picture is of my (awesome) boss.  Hi, Joy!

A post-analysis of an Azure outage from 2012. The especially interesting thing to me is the secondary outage caused by eagerness to quickly deploy a fix to the first outage. There’s a cognitive trap here: we become overconfident when we think we’ve found The Root Cause and we rush to deploy a patch.

Outages

SRE Weekly Issue #52

Merry Decemberween, all!  Much like trash pickup service, SRE Weekly comes one day late when it falls on a holiday.

SPONSOR MESSAGE

The “2016/17 State of On-Call” report from VictorOps is now available to download. Learn what 800+ respondents have to say about life on-call, and steps they’re taking to make it better. Get your free copy here: https://victorops.com/state-of-on-call

Articles

Percentiles are tricky beasts. Does that graph really mean what you think it means?

The math is just broken. An average of a percentile is meaningless.

Thanks to DevOps Weekly for this one.
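
If you want to convince yourself (or a skeptical colleague), here’s a tiny sketch of my own showing how an average of per-host p99s can hide a slow host entirely:

# A minimal sketch (not from the linked article) of why averaging percentiles
# gives the wrong answer: the p99 of the combined data is not the mean of the
# per-host p99s.
import random

random.seed(42)

def p99(samples):
    """Nearest-rank 99th percentile."""
    ordered = sorted(samples)
    return ordered[int(0.99 * (len(ordered) - 1))]

# Host A is healthy; host B has a slow tail.
host_a = [random.uniform(10, 50) for _ in range(10_000)]    # ms
host_b = [random.uniform(10, 50) for _ in range(9_000)] + \
         [random.uniform(900, 1000) for _ in range(1_000)]  # 10% slow requests

avg_of_p99s = (p99(host_a) + p99(host_b)) / 2
true_p99    = p99(host_a + host_b)

print(f"average of per-host p99s: {avg_of_p99s:.0f} ms")  # misleadingly low
print(f"p99 of all requests:      {true_p99:.0f} ms")     # reveals the slow tail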

There’s that magical “human error” again.

ChangeIP suffered a major outage two weeks ago and they posted this analysis of the incident. Thanks, folks! Does this sound familiar?

We learned that when we started providing this service to the world, we made design and data layout decisions that made sense at the time but no longer do.

Shuffle sharding is a nifty technique for preventing impact from spreading to multiple users of your service. A great example is the way Route 53 assigns nameservers for hosted DNS zones:

sreweekly.com. 172800 IN NS ns-442.awsdns-55.com.
sreweekly.com. 172800 IN NS ns-894.awsdns-47.net.
sreweekly.com. 172800 IN NS ns-1048.awsdns-03.org.
sreweekly.com. 172800 IN NS ns-1678.awsdns-17.co.uk.
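
Here’s a toy sketch of the basic idea (mine, not AWS’s actual implementation): deterministically deal each zone a small hand of nameservers from a big pool, so an attack that saturates one customer’s shard leaves most other customers’ nameservers untouched.

# A minimal sketch of shuffle sharding: each zone gets a small, pseudo-random
# subset of a large nameserver pool. The pool and shard size are made up.
import hashlib
import random

POOL = [f"ns-{i}.example-dns.net" for i in range(1, 513)]  # hypothetical pool
SHARD_SIZE = 4

def shard_for(zone: str) -> list:
    """Deterministically pick SHARD_SIZE nameservers for a zone."""
    seed = int.from_bytes(hashlib.sha256(zone.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return rng.sample(POOL, SHARD_SIZE)

print(shard_for("sreweekly.com."))
print(shard_for("victim-of-ddos.example."))
# Two zones share all four nameservers only if their shards collide exactly,
# which is vanishingly unlikely with a pool this large.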

Fastly has a brilliantly simple solution to load balancing and connection draining that uses a switch ignorant of layer 4.

Incurring connection resets on upgrades has ramifications far beyond disrupting production traffic: it provides a disincentive for continuous software deployment.

Heroku shared a post-analysis of their major outage on December 15.

Full disclosure: Heroku is my employer.

Outages

  • NTP server pool
    • Load on the worldwide NTP server pool increased significantly due to a “buggy Snapchat app update”. What was Snapchat doing with NTP? (more details)
  • Zappos
    • Zappos had a cross-promotion with T-Mobile, and the traffic overloaded them. Thanks to Amanda Gilmore for this one.
  • Slack
    • Among other forms of impairment, /-commands were repeated numerous times. At $JOB, this meant that people accidentally paged their coworkers over and over until we disabled PagerDuty.
  • Librato
    • “What went well” is an important part of any post-analysis.
  • Tumblr
  • Southwest Airlines

SRE Weekly Issue #51

SPONSOR MESSAGE

The “2016/17 State of On-Call” report from VictorOps is now available to download. Learn what 800+ respondents have to say about life on-call, and steps they’re taking to make it better. Get your free copy here: https://victorops.com/state-of-on-call

Articles

This is a big moment for the SRE field. Etsy has distilled the internal training materials they use to teach employees how to facilitate retrospectives (“debriefings” in Etsy parlance). They’ve released a guide and posted this introduction that really stands firmly on its own. I love the real-world story they share.

And here’s the guide itself. This is essential reading for any SRE interested in understanding incidents in their organization.

Slicer is a general-purpose sharding service. I normally think of sharding as something that happens within a (typically data) service, not as a general-purpose infrastructure service. What exactly is Slicer, then?

Click through to find out. It’ll be interesting to see what open source projects this paper inspires.

The second in a series, this article delves into the pitfalls of aggregating metrics. Aggregation means you have to choose between bloating your time-series datastore or leaving out crucial stats that you may need during an investigation.

I thought this was going to be primarily an argument for reducing burnout to improve reliability. That’s in there, but the bulk of this article is a bunch of tips and techniques for improving your monitoring and alerting to reduce the likelihood that you’ll be pulled away from your vacation.

The title says it all. Losing the only person with the knowledge of how to keep your infrastructure running is a huge reliability risk. In this article, Heidi Waterhouse (who I coincidentally just met at LISA16!) makes it brilliantly clear why you need good documentation and how to get there.

Here’s another overview of implementing a secondary DNS provider. I like that they cover the difficulties that can arise when you use a provider’s proprietary non-RFC DNS extensions such as weighted round-robin record sets.

Outages

SRE Weekly Issue #50

I’m back! The death plague was pretty terrible. A–, would not buy from again. I’m still catching up on articles from the past couple of weeks, so if I missed something important, please send a link my way!

I’m going to start paring down the Outages section a bit. In the past year, I’ve learned that telecom providers have outages all the time, and people complain loudly about them. They also generally don’t share useful postmortems that we can learn from. If I see a big one, I may still report it here, but for the rest, I’m going to omit them.

SPONSOR MESSAGE

The “2016/17 State of On-Call” report from VictorOps is now available to download. Learn what 800+ respondents have to say about life on-call, and steps they’re taking to make it better. Get your free copy here: https://victorops.com/state-of-on-call

Articles

Gabe Abinante has been featured here previously for his contributions to the Operations Incident Board: Postmortem Report Reviews project. To kick off this year’s sysadvent, here’s his excellent argument for having a defined postmortem process.

Having a change management process is useful, even if it’s just a deploy/rollback plan. I knew all that, but this article had a really great idea that I hadn’t thought of before (but should have): your rollback plan should have a set of steps to verify that the rollback was successful.
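
To make that concrete, here’s a toy sketch (my own, with a made-up deploy command and health endpoint) of what “verify the rollback” might look like in practice:

# A toy rollback step that verifies itself: after rolling back, confirm the
# service reports the expected version and is healthy, rather than assuming
# the rollback worked. The deploy command and /health endpoint are hypothetical.
import json
import subprocess
import urllib.request

EXPECTED_VERSION = "1.4.2"                   # hypothetical last-known-good version
HEALTH_URL = "http://localhost:8080/health"  # hypothetical health endpoint

def rollback():
    # Substitute your own deploy tooling here.
    subprocess.run(["./deploy", "--version", EXPECTED_VERSION], check=True)

def verify_rollback() -> bool:
    with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
        body = json.load(resp)
    return body.get("version") == EXPECTED_VERSION and body.get("status") == "ok"

rollback()
if not verify_rollback():
    raise SystemExit("rollback did not verify: escalate, don't assume success")
print("rollback verified")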

Let’s be honest: being on-call is kind of an ego boost. It makes me feel important. But not getting paged is way better than getting paged, and we should remember that. #oncallselfie

It’s that time of year again! In a long-standing (1-year-long) tradition here at SRE Weekly, I present to you this year’s State of On-Call report from my kind sponsor, VictorOps.

Storing 99th and 95th percentile latency in your time-series DB is great, but what if you need a different percentile? Or if you need to see why those 1% of requests are taking forever? Perhaps they’re all to the same resource?

Orchestrator is a tool for managing a (possibly complex) tree of replicating MySQL servers. This includes master failure detection and automatic failover, akin to MHA4Mysql and other tools. GitHub has adopted Orchestrator and shares some details on how they use it.

A few notable brands suffered impaired availability on and around Black Friday this year. Hats off to AppDynamics for giving us some hard numbers.

Looks like I missed this “Zero Outage Framework” announcement the first time around. I love the idea of information-sharing and it’ll be interesting to see what they come up with. We can all benefit from this, especially if the giants like Microsoft join up.

All IT managers would do well to heed this advice. Remember, burnout very often directly and indirectly impacts reliability.

“If you’re a manager and you are replying to email in the evening, you are setting the expectation to your team – whether you like it or not – that this is normal and expected behaviour”

Signifai has this nice write-up about setting up redundant DNS providers. My favorite bit is how they polled major domains to see who had added a redundant provider since October 21, and they even shared the source for their polling tool!
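
For the curious, here’s a rough sketch (not their actual tool) of how you might poll a domain’s NS records and guess whether it has more than one provider, using the dnspython package:

# A rough sketch of checking whether a domain appears to use more than one DNS
# provider. Requires dnspython. Caveat: grouping nameservers by their last two
# labels is a crude heuristic; e.g. Route 53's ns-*.awsdns-NN.com hosts will
# look like several different "providers".
import dns.resolver

DOMAINS = ["example.com", "sreweekly.com"]  # hypothetical sample list

def ns_groups(domain):
    """Group a domain's nameservers by their last two labels."""
    answers = dns.resolver.resolve(domain, "NS")
    return {".".join(str(rr.target).rstrip(".").split(".")[-2:]) for rr in answers}

for domain in DOMAINS:
    groups = ns_groups(domain)
    verdict = "redundant" if len(groups) > 1 else "single provider"
    print(f"{domain}: {sorted(groups)} -> {verdict}")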

I’ve featured a lot of articles lately about reducing the amount of code you write. But does that mean that it’s always better to contract with a SaaS provider? This week’s Production Ready delves into the tradeoffs.

Outages

SRE Weekly Issue #49

My vacation visiting the Mouse was awesome!  I had a lot of time to work on a little project of mine.  And now I’m back just in time for Black Friday and Cyber-whatever. Good luck out there, everyone!

SPONSOR MESSAGE

2016/17 State of On-Call Webinar (with DevOps.com): Register to learn what 800+ survey respondents have to say about life on-call. http://try.victorops.com/2016_17_stateofoncall

Articles

Another issue of BWH’s Safety Matters, this time about a prescribing accident. The system seems to have been almost designed to cause this kind of error, so it’s good to hear that a fix was already about to be deployed.

This is a great article on identifying the true root cause(s) of an incident, as opposed to stopping with just a “direct cause”. I only wish it were titled, Use These Five Weird Tricks to Run Your RCA!

Etsy describes how they do change management during the holidays:

[…] at Etsy, we still push code during the holidays, just more carefully, it’s not a true code freeze, but a cold, melty mixture of water and ice. Hence, Slush.

This issue of Production Ready is about battling code rot through incrementally refactoring an area of a codebase while you’re doing development work that touches it.

Shutterstock shares some tips they’ve learned from writing postmortems. My favorite part is about recording a timeline of events in an incident. I’ve found that reading an entire chat transcript for an incident can be tedious, so it can be useful to tag items of interest using a chat-bot command or a search keyword so that you can easily find them later.

The “Outage!” AMA happened while I was on vacation, and I still haven’t had a chance to listen to it. Here’s a link in case you’d like to.

My favorite:

If something breaks in production, how do you know about it?

Weaver is a tool to help you identify problems in your microservice consumers by doing “bad” things like responding slowly to a fraction of requests.
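
As an illustration of the technique (a toy of my own, not Weaver itself), a dependency that responds slowly to a fraction of requests is only a few lines of Python:

# A toy HTTP server that delays a configurable fraction of responses, so you
# can watch how a client behaves when one of its dependencies gets slow.
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

SLOW_FRACTION = 0.2   # 20% of requests get a delay
DELAY_SECONDS = 5.0

class FlakyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if random.random() < SLOW_FRACTION:
            time.sleep(DELAY_SECONDS)  # simulate a slow dependency
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), FlakyHandler).serve_forever()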

Barclays reduced load on their mainframe by adding MongoDB as a caching layer to handle read requests. What the heck does “mainframe” mean in this decade, anyway?

We’d do well to remember during this holiday season that several seconds of latency in web requests is tantamount to an outage.

Tyler Treat gives us an eloquently presented argument for avoiding writing code as much as possible, for the sake of stability.

Outages

So far, no big-name Black Friday outages. We’ll see what Cyber Monday has in store.

  • Everest (datacenter)
    • Everest suffered a cringe-worthy network outage following a power failure. Power cycled off and on a couple of times, prompting their stacked Juniper routers to conclude they had failed to boot and drop into failure recovery mode. Unfortunately, the secondary OS partitions on the two devices contained different JunOS versions, so they wouldn’t stack properly.

      I’d really like to read the RFO on the power outage itself, but I can’t find it. If anyone has a link, could you please send it my way?

  • Argos