General

SRE Weekly Issue #74

This is the first issue sent to over 2000 email subscribers (not to mention the 500+ Twitter followers and an unknown number of RSS subscribers!).  Wow!  Thank you all so much for reading and for all the great feedback you’ve sent over the past year and a half.  You make this fun.

SPONSOR MESSAGE

Upcoming devops.com webinar: Top 10 Practices of Highly Successful DevOps Incident Management Teams. Learn more and register: http://try.victorops.com/SRE_Weekly/IncidentMgmtWebinar

Articles

The holy grail of high availability is a multi-datacenter (or cloud) active/active architecture. This article goes into why, including examples of common pitfalls of traditional disaster recovery solutions.

Neat idea: here’s a Stack Overflow question asking for critique of a proposed outline for a post-incident analysis. It’s a great start already, and the answers include some pretty top-notch suggestions.

A tutorial on setting up multi-region failover for an S3-hosted website, written in response to February’s major S3 outage in us-east.

Last week, I linked to an article about debugging an overloaded ELB node. This week we have the sequel, a deep dive into the intricate details behind the problem, complete with a trip into the glibc source code.

Netflix uses data science to figure out how to fill the limited space on their edge content delivery nodes with the videos that people will request, all while (hopefully) avoiding hot nodes.

Zayna Shahzad, a PagerDuty software engineer, did customer support for a day, and she learned a ton. As SREs, we have the customer experience directly in our sights, so this kind of thing sounds like a really great idea.

Charity Majors does not want to be an SRE. Find out why by watching this 5-minute video interview between her and Rob Hirschfeld. I don’t often link to videos, because who has time to watch stuff? But this one is pretty intriguing.

Server Density originated the term “humanops”, and now they share 12 parts of how they practice it.

A Malaysian doctor writes about how to ensure that the national health system’s on-call policy is safe for doctors.

The recent passing of a paediatrician-to-be in a road traffic accident (motor-vehicle accident) is heart-breaking news for the whole medical fraternity. With the incident, a persistent, recurring issue has resurfaced – work-related commuting accidents, i.e. road traffic accidents involving exhausted doctors after on-call shifts.

Do what better? Prevent and end illegal and unethical actions like discrimination, harassment, and retaliation. This article is by Susan Fowler, featured here a bunch, and while it’s not directly related to SRE, it’s so important that I urge you to read it.

Outages

  • Monitorama 2017 PDX
    • Monitorama (and a swathe of Portland) suffered a power outage last week. The organizers created a status site post (linked) and quickly organized a disaster recovery site: an entirely separate conference venue. Seriously amazing work, and oddly appropriate given the conference subject matter.

      If you didn’t make it to Monitorama, here’s a summary from LinkedIn SRE Michael Kehoe.

  • Sacramento Airport (CA, USA)
  • British Airways

SRE Weekly Issue #73

SPONSOR MESSAGE

Concerned about downtime? VictorOps helps you prepare, respond, and recover from IT and DevOps Incidents. Swing by our product center to learn how and start your trial. http://try.victorops.com/SREWeekly/ProductCenter

Articles

ELBs (Amazon’s Elastic Load Balancers) depend on clients properly respecting DNS round-robin record sets. This article follows a debugging session in excellent detail as they try to answer the question: why are our clients preferring (and overloading) just one ELB IP?
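
To illustrate what “properly respecting DNS round-robin” means on the client side (this is my own minimal Python sketch, not code from the article; the hostname is a placeholder): resolve all of the A records for the ELB’s name, re-resolve regularly, and spread connections across the addresses instead of pinning to whichever IP came back first.

    import random
    import socket

    def resolve_all_ips(hostname):
        """Return every A record for the hostname, not just the first one."""
        infos = socket.getaddrinfo(hostname, 443, socket.AF_INET, socket.SOCK_STREAM)
        return sorted({info[4][0] for info in infos})

    # Hypothetical ELB hostname.  A well-behaved client re-resolves periodically
    # (ELB nodes come and go) and picks among all of the returned addresses.
    ips = resolve_all_ips("my-elb-1234567890.us-east-1.elb.amazonaws.com")
    print("connecting to", random.choice(ips))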

Sarah Schieffer Riehl shares her take on ServerlessConf Austin 2017. She’s got a healthy dose of skepticism that I like, concluding that “serverful and serverless architectures don’t do the same things.” I like this bit:

For processes that require polling or any kind of server wakefulness, converting to a serverless architecture can be an exercise in “serverless for serverless’ sake”.

Wow, this dovetails so well into Todd Conklin’s “Safety Moment” from last week, on imagining all the possible things that could go wrong.  I’d love to hear more thoughts along these lines: is it possible to design a reliable system without envisioning the majority of things that could go wrong?

PagerDuty outlines an incident lifecycle management policy based on ITIL.

Dropbox created Cape for “asynchronous processing of billions of events a day, powering many Dropbox features”. Example: you upload a text file, and a Cape job indexes it immediately for full-text searching. I’d love to hear more on why existing solutions didn’t fit the bill, although they do cover their requirements in depth.

When I signed on for my first SRE position, I had no idea how huge a part vendor relations would play in ensuring reliability.

Initially, LinkedIn’s SRE team hired engineers only based on technical skill. As they’ve grown, they’ve discovered the importance of collaboration skills as well.

StatusPage.io explains the reasons for having a solid incident communication policy and guides you through setting one up.

As the title suggests, this ACM Queue article goes into some depth on the kinds of calculations one might make when designing a reliable system. Specifically, they focus on service dependencies and introduce Google’s “rule of the extra 9”: a dependency should have one more nine of reliability than the thing that critically depends on it.
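
To make the arithmetic concrete (my own numbers, not the article’s): if a four-nines service leans critically on a four-nines dependency, the dependency alone eats roughly half the error budget; give it the extra nine and its contribution becomes negligible.

    # Availability of a chain of critical dependencies is (at best) the
    # product of the individual availabilities.
    service   = 0.9999    # four nines for the service itself
    dep_same  = 0.9999    # dependency with the same number of nines
    dep_extra = 0.99999   # dependency with "the extra 9"

    print(service * dep_same)    # ~0.99980 -- the dependency eats half the budget
    print(service * dep_extra)   # ~0.99989 -- the dependency is now a rounding error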

At the next conference, when somebody tries to sell you a circuit breaker talk, tell them that this is only the starter and ask for the main course.

Outages

SRE Weekly Issue #72

SPONSOR MESSAGE

Concerned about downtime? VictorOps helps you prepare, respond, and recover from IT and DevOps Incidents. Swing by our product center to learn how and start your trial. http://try.victorops.com/SREWeekly/ProductCenter

Articles

Idempotence is a critically important tool in building a reliable system. Stripe explains the concept and shows how they wrap theoretically non-idempotent actions like charging a credit card into safely idempotent API calls.
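
The gist of the pattern, as a minimal sketch (illustrative only, not Stripe’s implementation; the store and charge function here are hypothetical, and a real version needs a durable store with an atomic check-and-set):

    import uuid

    # Idempotency key -> result of the first successful attempt.
    _results = {}

    def charge_card(card, amount_cents):
        """Stand-in for a real, non-idempotent side effect."""
        return {"charge_id": str(uuid.uuid4()), "amount": amount_cents}

    def charge_idempotently(idempotency_key, card, amount_cents):
        """Replay-safe wrapper: a retry with the same key returns the stored
        result instead of charging the card a second time."""
        if idempotency_key in _results:
            return _results[idempotency_key]
        result = charge_card(card, amount_cents)
        _results[idempotency_key] = result
        return result

    # A client that times out and retries reuses the same key, so the card
    # is charged exactly once.
    key = str(uuid.uuid4())
    first = charge_idempotently(key, card="tok_visa", amount_cents=500)
    retry = charge_idempotently(key, card="tok_visa", amount_cents=500)
    assert first == retry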

Here’s an account of an effort to move from server-based paging (this server is down) to functional-based alerting (this user action isn’t working), with a resulting impressive reduction in out-of-hours paging.

It pays to study up and deeply understand what a simple metric like “CPU utilization” really means.
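
Case in point: on Linux the number typically comes from /proc/stat, which splits CPU time into several buckets, and whether “utilization” counts iowait or steal depends on who is doing the math. A quick, Linux-only sketch of one possible interpretation:

    import time

    def cpu_times():
        """Read the aggregate 'cpu' line from /proc/stat (Linux only)."""
        with open("/proc/stat") as f:
            fields = f.readline().split()[1:9]
        names = ["user", "nice", "system", "idle", "iowait", "irq", "softirq", "steal"]
        return dict(zip(names, map(int, fields)))

    before = cpu_times()
    time.sleep(1)
    after = cpu_times()
    delta = {k: after[k] - before[k] for k in before}
    total = sum(delta.values())
    busy = total - delta["idle"] - delta["iowait"]
    # This treats steal as busy and iowait as idle; other tools make different
    # choices, which is exactly why the number deserves a closer look.
    print("cpu utilization: %.1f%%" % (100.0 * busy / total))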

Why am I linking to AWS’s status site? Look closely, and you’ll see that the “green checkmark i” symbol has been replaced with a far more noticeable blue circle with a white diamond. Check out the old icon here for comparison. End of an era, or just another way of presenting the same information?

The author introduces a new Ruby gem, grpc-commons, that makes it easy to add circuit breaker and statsd support to a gRPC client.
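
I haven’t tried the gem, so here’s just a generic illustration of the circuit-breaker idea it packages, in Python rather than Ruby: after enough consecutive failures, stop calling the backend for a cool-down period and fail fast instead.

    import time

    class CircuitBreaker:
        """Toy circuit breaker: opens after `threshold` consecutive failures,
        then rejects calls until `reset_after` seconds have passed."""

        def __init__(self, threshold=5, reset_after=30):
            self.threshold = threshold
            self.reset_after = reset_after
            self.failures = 0
            self.opened_at = None

        def call(self, fn, *args, **kwargs):
            if self.opened_at is not None:
                if time.time() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open; failing fast")
                self.opened_at = None  # half-open: let one call through
            try:
                result = fn(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.threshold:
                    self.opened_at = time.time()
                raise
            self.failures = 0
            return result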

Along with being a tutorial on setting up Zipkin with Python, this article also explains some basic Zipkin concepts.

PagerDuty is apparently trying to position itself as more than just a paging service, with a few new features around the entire incident lifecycle. I’m especially interested in checking out the new postmortem tooling.

I included this article last week, but my link was outdated and returned a 404. Here’s the corrected link — sorry about that!

I put a call out for a review of Elastic’s new beta anomaly detection feature last week, and here one is! Thanks to an Elastic employee for forwarding this link to me.

This article cautions you to look past an obvious root cause, because a deeper systemic or policy problem may be lurking behind it.

Serverless/FaaS platforms abstract away traditional provisioning, and they make it really easy to neglect planning for resource usage.

Wow, what a concept:

you can think of […] reliable systems […] as successfully imagining all of the potential things that could go wrong

This 2.5-minute podcast from Todd Conklin has a really great question: to achieve reliability, do we have to try to imagine in advance all of the possible ways our systems could fail?

A patient was given an incorrect syringe resulting in a 5x insulin overdose. Brigham and Women’s Hospital reports on the accident and what they’re doing to prevent mistakes of this sort in the future.

Consumers today have increasingly high expectations for digital applications and service performance, but do IT personnel feel equipped to rise to the occasion? In this survey, we uncover the extent of the digital services expectation gap between consumers and IT teams as well as top strategies teams are using to solve digital disruption challenges.

Outages

  • Our First Kubernetes Outage – Saltside Engineering
    • Kudos to the Saltside folks for sharing a public postmortem for an internal, non-customer-impacting outage!

      This is a public postmortem for a complete shutdown of our internal Kubernetes cluster. It’s shared with you all so everyone may learn.

  • “Re-experience the fun of customizing your Place Page!” A Tale of Oops from Ops
    • Ouch. Linden Lab’s ops team discovered the hard way that they didn’t have a working backup copy of some customer data. The best part of this article is the discussion of the “Shrek Ears” tradition at Linden. It’s one of the things I remember most fondly from my time there, and having worn the ears a few times in my day, I can attest to the fact that it’s a great way to handle the psychological impact of making a mistake.
  • Chase (bank)
  • Facebook

SRE Weekly Issue #71

SPONSOR MESSAGE

Resolving DevOps and IT incidents is not enough. Download the eBook: “Blameless Post Mortems (and how to do them)”, and start learning from them. http://try.victorops.com/BlamelessPostMortems/SREWeekly

Articles

The interesting bit in this story is that upgrading to MySQL 5.7 requires a full table rewrite (ALTER TABLE) for any table that has time-related columns. Their initial test run ran for months and still hadn’t finished.
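
If I’m reading the issue right, the culprit is the pre-5.6.4 temporal column format, which has to be rebuilt before or during the 5.7 upgrade. A hedged sketch of how one might enumerate candidate tables and generate the rebuild statements (connection details are placeholders; this only prints the ALTERs rather than running them):

    import mysql.connector  # MySQL Connector/Python

    conn = mysql.connector.connect(host="db-replica", user="admin",
                                   password="...", database="information_schema")
    cur = conn.cursor()

    # Tables with time-related columns are the candidates for the old
    # (pre-5.6.4) temporal storage format and hence for a full rewrite.
    cur.execute("""
        SELECT DISTINCT table_schema, table_name
        FROM columns
        WHERE data_type IN ('datetime', 'timestamp', 'time')
          AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
    """)

    for schema, table in cur.fetchall():
        # ALTER TABLE ... FORCE rebuilds the table, upgrading the column format.
        # On big tables this is the months-long operation described above.
        print("ALTER TABLE `%s`.`%s` FORCE;" % (schema, table))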

AdStage made the move from Heroku to running their service directly on EC2, and in this article they explain why and how.

We were officially only getting about 2 ECUs per dyno, but the reality was that we were getting something closer to 6 since our neighbors on Heroku were not using their full share. This meant that our fleet of AWS instances was 3 times too small, […]

Language Warning: contains the word “sexy” used to describe new or interesting technology.

Full disclosure: Heroku, my employer, is mentioned.

I’ve featured many articles from Mathias Lafeldt as part of his series, Production Ready. Now that he’s moved to Gremlin Inc (a SaaS helping customers run chaos experiments), Mathias reintroduces the history and theory of Chaos Engineering.

The folks behind Mail.ru implemented their own master-master replication system on top of Tarantool, a DBMS I’d never heard of. Their implementation is based on some details of their use-case that may not apply more broadly, but the design discussion is interesting nonetheless.

Facebook rewrote their tool, OnlineSchemaChange, in Python (from the original PHP). OSC is a tool for doing DDL in MySQL without downtime.

The original open sourced OSC was more like an engine than a tool. Users needed to write PHP code wrapping to run the schema change, and, with PHP becoming less popular in the operations world, OSC.php wasn’t widely adopted by the community.

From PagerDuty, an article on the incident management data to gather, how to gather it, and how to analyze it.

A basic introduction to structured logging, including rationale on why you’d want to use it. With infrastructures growing more and more complicated, I find structured logging indispensable in keeping everything up and running and debugging difficult problems.
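
If you haven’t seen it before, “structured” just means emitting key/value records (usually JSON) instead of free-form strings, so your log pipeline can filter and aggregate on fields. A bare-bones sketch using only the Python standard library (the field names are my own):

    import json
    import logging
    import sys

    class JSONFormatter(logging.Formatter):
        def format(self, record):
            entry = {
                "ts": self.formatTime(record),
                "level": record.levelname,
                "logger": record.name,
                "message": record.getMessage(),
            }
            # Structured fields passed via `extra=` end up on the record.
            for key in ("request_id", "user_id", "duration_ms"):
                if key in record.__dict__:
                    entry[key] = record.__dict__[key]
            return json.dumps(entry)

    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JSONFormatter())
    log = logging.getLogger("checkout")
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.info("payment captured", extra={"request_id": "abc123", "duration_ms": 87})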

For the network nerds, Facebook details their new inter-datacenter network topology.

New in the latest version of the Elastic Stack (think Elasticsearch, Logstash, Kibana, etc.) is built-in anomaly detection using machine learning, based on technology from Prelert (acquired by Elastic in 2016). “Machine Learning” — they might as well say it’s powered by “Lasers™”. If you try this out and have any success, please write up your results and send me a link!

Outages

SRE Weekly Issue #70

SPONSOR MESSAGE

Resolving DevOps and IT incidents is not enough. Download the eBook: “Blameless Post Mortems (and how to do them)”, and start learning from them. http://try.victorops.com/BlamelessPostMortems/SREWeekly

Articles

GitHub has released OctoDNS, their tool for synchronizing DNS across multiple providers. Shortly after the Dyn outage last fall (covered here), they still had only one DNS provider (source: direct observation). I suspected that this may have had to do with the complication of keeping records in sync across two providers – perhaps that’s why they created OctoDNS.

Bolt is Netflix’s “event driven diagnostic and remediation platform”, although it actually seems like a full-blown remote execution system for large fleets of servers.

A Google SRE takes us through their first on-call shift including running incident command for a production incident. I like the emphasis on a blameless postmortem.

Pete Shima received some questions about onboarding SREs, and lucky us, he decided to answer them publicly. My favorite section is the one about connecting a new SRE to people across the company. I find that solid connections to folks in various positions are vital to getting my job done well. Thanks to Pete for the SRE Weekly mention!

Salesforce has a humongous infrastructure, and they needed a tool to help visualize data from lots of monitoring systems. They created Refocus to serve that need, and they open sourced it. They had three goals: gather data from all of the monitoring systems, on-board new services quickly, and visualize data in a way that makes sense for each service.

Full disclosure: Salesforce (parent company of my employer, Heroku) is mentioned.

Tcpdump is a critical tool for debugging thorny network issues. Julia Evans created a new zine to help you learn the basics, although if her other zines are any indication, even a pro may learn a new trick or two. The zine is $10 now and will be available for free at some point in the future.

Turns out that sharks are a reliability risk. And not just those WFLB.

From their Global Developer Survey, GitLab learned that it’s common for developers to release code before it’s production-ready in response to organizational pressures.

Code released before it’s ready might be good for meeting deadlines, but that’s about all it’s good for.

Here’s a pretty excellent analysis of why adopting the cloud can be difficult for banks. Just skip past the bit with the incorrect uptime calculation, since four nines of uptime actually equates to about 53 minutes’ downtime per year, not 9 hours.
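
If you want to sanity-check other SLA figures yourself, the arithmetic is just the unavailability fraction times the minutes in a year:

    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    for availability in (0.999, 0.9999, 0.99999):
        downtime = (1 - availability) * MINUTES_PER_YEAR
        print("%.3f%% -> %.1f minutes of downtime per year" % (availability * 100, downtime))
    # Three nines is ~8.8 hours/year; four nines is ~52.6 minutes.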

Outages

  • London Marathon Donations
    • eBay and Virgin Money Giving both went down under the load as many flocked to place donations before the London Marathon.
  • CARLI
    • CARLI is the Consortium of Academic Research Libraries in Illinois. I included this outage because of the short but sweetly personal postmortem from their network engineer.
  • Instagram
  • Reddit
    • Sorry for the extended outage there. We failed back the maintenance performed earlier tonight. We’ll provide a post-mortem at a later date.
