SRE Weekly Issue #82

SPONSOR MESSAGE

The definitive guide for DevOps Post-Incident Reviews (AKA – Postmortems). Learn why traditional methods don’t work – and why fast incident response isn’t enough. Download your free copy of the 90+ page eBook from O’Reilly Media and VictorOps.
http://try.victorops.com/post_incident_review/SREWeekly

Articles

Increment issue #2 is out! Want to hear what it was like for these three big companies to move to the cloud? Read on.

This article covers a lot of ground, from general strategy to specific methods for estimating capacity needs. I love this:

Perhaps surprisingly for engineers who work in mission-critical business applications, occasional spikes of 90%+ of our users being entirely unable to use the sole application of our company was an entirely acceptable engineering tradeoff versus sizing our capacity against our peak loads.

I love the insight this article gives me into the huge networks of big CDNs.

Key point: don’t count your chickens before they’ve recovered.

The MTTR time should be stopped when there is verification that all systems are once again operating as expected and end users are no longer negatively affected

Scalyr explains how to move beyond specific playbooks and create a more general incident response plan.

Here’s a nice little how-to; I’ve sketched one possible implementation just after the excerpt below:

A recent challenge for one of the teams I am currently involved with was to find a way in AWS CloudWatch:

  1. To alert if the metric breaches a specified threshold.
  2. To alert if a particular metric has not been sent to CloudWatch within a specified interval.
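
Both requirements can be expressed as CloudWatch alarms without any custom polling. Here’s a minimal sketch using boto3; the metric names, namespace, thresholds, and SNS topic are all invented for illustration:

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    ALERT_TOPIC = "arn:aws:sns:us-east-1:123456789012:alerts"  # placeholder SNS topic

    # 1. Alert if the metric breaches a specified threshold.
    cloudwatch.put_metric_alarm(
        AlarmName="error-rate-high",            # invented alarm/metric names
        Namespace="MyApp",
        MetricName="ErrorRate",
        Statistic="Average",
        Period=60,
        EvaluationPeriods=5,
        Threshold=5.0,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=[ALERT_TOPIC],
    )

    # 2. Alert if the metric has not been sent within a specified interval:
    #    treat missing data as breaching, so silence itself trips the alarm.
    cloudwatch.put_metric_alarm(
        AlarmName="heartbeat-missing",
        Namespace="MyApp",
        MetricName="Heartbeat",
        Statistic="SampleCount",
        Period=300,                             # expected reporting interval, in seconds
        EvaluationPeriods=1,
        Threshold=1,
        ComparisonOperator="LessThanThreshold",
        TreatMissingData="breaching",
        AlarmActions=[ALERT_TOPIC],
    )

The second alarm hinges on TreatMissingData: by default, a metric that simply stops reporting leaves the alarm in INSUFFICIENT_DATA rather than firing.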

And another short how-to, this one on deploying Prometheus for high availability.

Self-care is critical in tech, not only for us as individuals, but for the health and reliability of the entire organization. Overstretched engineers make mistakes. This article introduces a new resource: selfcare.tech, which is a curated, open-source repository of self-care resources.

Outages

SRE Weekly Issue #81

SPONSOR MESSAGE

The definitive guide for DevOps Post-Incident Reviews (AKA – Postmortems). Learn why traditional methods don’t work – and why fast incident response isn’t enough. Download your free copy of the 90+ page eBook from O’Reilly Media and VictorOps.
http://try.victorops.com/post_incident_review/SREWeekly

Articles

PagerDuty shared this timeline of their progress in adopting Chaos Engineering through their Failure Friday program. This is brilliant:

We realized that Failure Fridays were a great opportunity to exercise our Incident Response process, so we started using it as a training ground for our newest Incident Commanders before they graduated.

I’m a big proponent of having developers own their code in production. This article posits that SRE’s job is to provide a platform that enables developers to do that more easily. I like the idea that containers and serverless are ways of getting developers closer to operations.

These platforms and the CI/CD pipelines they enable make it easier than ever for teams to own their code from desktop to production.

This reads less like an interview and more like a description of Amazon’s incident response procedure. I started paying close attention at step 3, “Learn from it”:

Vogels places the blame not on the engineer directly responsible, but on Amazon itself, for not having failsafes that could have protected its systems or prevented the incorrect input.

Jonathan is a platform engineer at VictorOps, responsible for system scalability and performance. This is Part 1 in a 3-part series on system visibility, the detection part of incident management.

This article is published by my sponsor, VictorOps, but their sponsorship did not influence its inclusion in this issue.

This article is about a different kind of human factor than articles I often link to: cognitive bias. The author presents a case for SREs as working to limit the effects of cognitive bias in making operational decisions.

Outages

  • OVH
    • OVH suffered a major outage in a datacenter, taking down 50,000 websites that they host. The outage was caused by a leak in their custom water-cooling system and resulted in a painfully long 24-hour recovery from an offsite backup. The Register’s report (linked) is based on OVH’s incident log and is the most interesting datacenter outage description I’ve read this year.
  • Google Cloud Storage
    • Google posted this followup for an outage that occurred on July 6th. As usual, it’s an excellent read filled with lots of juicy details. This caught my eye:

      […] attempts to mitigate the problem caused the error rate to increase to 97%.

      Apparently this was caused by a “configuration issue” and was quickly reverted. It’s notable that they didn’t include anything about this error in the remediations section.

  • Melbourne, AU’s Metro rail network
    • A network outage stranded travelers, and switching to the DR site “wasn’t an option”.
  • Somalia

SRE Weekly Issue #80

SPONSOR MESSAGE

New eBook for DevOps pros: The Dev and Ops Guide to Incident Management offers 25+ pages of essential insight into building teams and improving your response to downtime.
http://try.victorops.com/SREWeekly/IM_eBook

Articles

I had no idea there were so many tracing systems in Linux! Fortunately Julia Evans did, and she learned all about them so that she could explain them to us.

There’s strace, and ltrace, kprobes, and tracepoints, and uprobes, and ftrace, and perf, and eBPF, and how does it all fit together and what does it all MEAN?

What do you get when a high school teacher switches careers, goes to boot camp, and becomes an SRE? In this case, we get Krishelle Hardson-Hurley, who wrote this really great intro to the SRE field. She also included a set of links to other SRE materials. Thanks for the link to SRE Weekly, Krishelle!

This issue of Production Ready is a transcript (with slides) of Mathias’s talk at ContainerDays on doing chaos engineering in a container-based infrastructure. I really like the idea of attaching a side-car container to inject latency using tc.
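
For context, the latency-injection side of that idea boils down to running something like the following from a sidecar that shares the target container’s network namespace. This is a hedged sketch of the technique, not Mathias’s actual tooling; the interface name and delay are placeholders:

    import subprocess

    def inject_latency(interface: str = "eth0", delay: str = "100ms") -> None:
        # Add an artificial delay to all egress traffic on the shared interface.
        subprocess.run(
            ["tc", "qdisc", "add", "dev", interface, "root", "netem", "delay", delay],
            check=True,
        )

    def clear_latency(interface: str = "eth0") -> None:
        # Remove the netem qdisc once the experiment is over.
        subprocess.run(
            ["tc", "qdisc", "del", "dev", interface, "root", "netem"],
            check=True,
        )

    inject_latency()
    # ... observe how the service behaves under added latency ...
    clear_latency()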

Here’s an interesting side-effect from an IPO: Redfin was obliged to mention the fact that its website runs out of a single datacenter.

This article, part of a series from Honeycomb.io on structured event logging, contains some tips on structuring your events well to get the most out of your logs.
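
The gist, as I read it: emit one wide, structured event per unit of work rather than scattered free-form lines. A toy illustration (all field names invented):

    import json
    import logging
    import time

    logging.basicConfig(format="%(message)s", level=logging.INFO)

    # Unstructured: easy to write, hard to query later.
    logging.info("payment failed for user 42 after 3 retries (gateway timeout)")

    # Structured: one event per unit of work, carrying every field you might
    # later want to filter, group, or aggregate on.
    logging.info(json.dumps({
        "event": "payment_failed",
        "user_id": 42,
        "retries": 3,
        "error": "gateway_timeout",
        "duration_ms": 5120,
        "timestamp": time.time(),
    }))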

I’d never thought about what IT systems must exist on a cruise ship before. This article left me wanting to know more, so I found this ZDNet article with pictures and descriptions of another cruise ship datacenter layout.

Outages

SRE Weekly Issue #79

SPONSOR MESSAGE

New eBook for DevOps pros: The Dev and Ops Guide to Incident Management offers 25+ pages of essential insight into building teams and improving your response to downtime.
http://try.victorops.com/SREWeekly/IM_eBook

Articles

Asking “what failed?” can point an investigation in an entirely different and more productive direction.

[…] the power you have is not in the answer to your question; it’s in the question […]

If you’re planning to write reliable, well-performing server code in Linux, you’ll need to know how to use epoll. Here’s Julia Evans to tell you what she learned about epoll and related syscalls.
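
As a taste of the register/wait/handle pattern the post explains, here’s a stripped-down epoll echo server using Python’s select module, which wraps the same syscalls (Linux only; error handling omitted):

    import select
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 8080))
    server.listen(128)
    server.setblocking(False)

    epoll = select.epoll()
    epoll.register(server.fileno(), select.EPOLLIN)
    connections = {}

    while True:
        for fd, event in epoll.poll(timeout=1):
            if fd == server.fileno():
                # New client: accept and watch it for readability.
                conn, _ = server.accept()
                conn.setblocking(False)
                epoll.register(conn.fileno(), select.EPOLLIN)
                connections[conn.fileno()] = conn
            elif event & select.EPOLLIN:
                conn = connections[fd]
                data = conn.recv(4096)
                if data:
                    conn.send(data)        # echo it back
                else:                      # peer closed the connection
                    epoll.unregister(fd)
                    conn.close()
                    del connections[fd]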

Tyler Treat reconciles Kafka 0.11’s exactly-once semantics with his classic article, “You Cannot Have Exactly-Once Delivery”.

A “refcard” from Dzone covering a wide range of SRE basics, including load balancing, caching, clustering, redundancy, and fault tolerance.

A PagerDuty engineer applies on-the-job expertise to labor, delivery, and parenting. Lots of concepts translate pretty well. Some… not so much.

As an SRE, I want “quality” code to be shipped so that our system is reliable. But what am I really after? Sam Stokes says we should avoid using the term “quality” in favor of finding common ground and understanding the whole situation.

The reality is that doing anything in the real world involves difficult decisions in the face of constraints.

The value of logs is in what questions you can answer with them.

A sample rate of 20 means “there were 19 other events just like this one”. A sample rate of 1 means “this event is interesting on its own”.
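
Put another way, each kept event stands in for sample_rate original events, so anything you count from sampled data has to be weighted by that rate. A tiny illustration with invented fields:

    # Each stored event carries the rate it was sampled at; to estimate the
    # true counts, weight each kept event by its sample rate.
    events = [
        {"status": 500, "sample_rate": 1},   # errors kept unsampled: interesting on their own
        {"status": 200, "sample_rate": 20},  # routine success: 1 kept out of every 20
        {"status": 200, "sample_rate": 20},
    ]

    estimated_total = sum(e["sample_rate"] for e in events)                        # ~41 events
    estimated_errors = sum(e["sample_rate"] for e in events if e["status"] >= 500) # ~1 event

    print(estimated_total, estimated_errors)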

The Signiant team previously had no dedicated solution for incident communication. As a result, any hiccup in service resulted in a flooded queue for service agents and a stuffed inbox of “what’s going on here” notes from internal team members.

In practice, a message broker is a service that transforms network errors and machine failures into filled disks.

Queues inevitably run in two states: full, or empty.

You can use a message broker to glue systems together, but never use one to cut systems apart.

Outages

  • Fastly
  • Rackspace
  • Pinboard.in
    • Pinboard.in experienced a bit of feature degradation as its admin replaced a disk. I’m only including this because it meant that I couldn’t post this issue on time. ;)

      Pinboard‘s really awesome, and I wouldn’t be able to put together this newsletter without it. The API is super-simple to use, and I’m able to save and classify links right on my phone. A+, would socially bookmark with again.

SRE Weekly Issue #78

SPONSOR MESSAGE

New eBook for DevOps pros: The Dev and Ops Guide to Incident Management offers 25+ pages of essential insight into building teams and improving your response to downtime.
http://try.victorops.com/SREWeekly/IM_eBook

Articles

This Master’s thesis by Crista Vesel seeks to answer the question, “How does the language used in the U.S. Forest Service’s Serious Accident Investigation Guide bias accident investigation analysis?” It’s an awe-inspiring analysis, drawing on Dekker, Woods, Cook, and other authors I’ve linked here repeatedly.

The most exciting part for me was the confirmation of some vague thoughts I’ve had around the use of passive versus active voice in retrospectives. By using passive voice, we can seek to reduce the kind of blaming that is inherent in active/agentive language.

It’s by Julia Evans. Just read it.

Being responsible for my programs’ operations makes me a better developer

PagerDuty again draws on ITIL, this time to outline an example system for classifying incident impact and urgency in order to determine priority.

PagerDuty’s take on automating chaos includes a chat-bot that lets folks trigger one-off host failures on demand, in addition to regularly scheduled runs, of course.

Unfortunately, ChaosCat is significantly tied into our internal infrastructure tooling. For the moment this means we won’t be open-sourcing it.

This article is an overview of Microsoft’s DRaaS offering, Azure Site Recovery. Protip: you can just scroll past the signup-gate if you don’t feel like entering your email address.

Grab evaluated a couple of existing solutions but went with a simple custom sharding layer as a method to scale out their Redis usage.
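
The article doesn’t reduce to a few lines of code, but the heart of any client-side sharding layer is a stable key-to-shard mapping. Here’s a minimal sketch using the redis-py client, with invented host names and a fixed shard count:

    import zlib
    import redis

    # Hypothetical shard pool; in practice these would be your Redis instances.
    SHARDS = [
        redis.Redis(host="redis-shard-0", port=6379),
        redis.Redis(host="redis-shard-1", port=6379),
        redis.Redis(host="redis-shard-2", port=6379),
    ]

    def shard_for(key: str) -> redis.Redis:
        # Stable hash so the same key always lands on the same instance.
        return SHARDS[zlib.crc32(key.encode()) % len(SHARDS)]

    # Usage: reads and writes for a key go to its shard.
    shard_for("driver:1234:location").set("driver:1234:location", "1.3521,103.8198")
    value = shard_for("driver:1234:location").get("driver:1234:location")

Plain modulo sharding like this remaps most keys whenever a shard is added; consistent hashing is the usual refinement if resharding needs to be cheap.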

Outages

  • Rollbar
  • LinkedIn
  • Skype
    • Suspected DDoS.
  • ATO (Australian Tax Office)
  • Dyn
    • Dyn suffered a long outage, and they posted an amazing 28 detailed updates to their status site before all was said and done. That’s something to aspire to.
  • Heroku
    • Heroku posted a followup for their series of incidents early this month. Sorry for not posting those outages when they happened!

      Full disclosure: Heroku is my employer.