3 Asymptotes of Engineer Happiness
An asymptote, mathematically, is a line that a curve approaches but never actually reaches for any finite input. The sigmoid function, for instance, has a horizontal asymptote at y=1, but no value of x, however large, will ever give you a value of exactly 1. Make the input 5, or 100, or 100,000,000,000 and you will get ever closer, but you will never arrive at that ultimate value of 1.
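For the mathematically inclined, here is the same statement written out. This is just the standard logistic sigmoid, nothing specific to this post:

```latex
% The logistic sigmoid and its horizontal asymptote at y = 1
\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
0 < \sigma(x) < 1 \;\text{for every finite } x, \qquad
\lim_{x \to \infty} \sigma(x) = 1
```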
As a software developer, there are things that seem to me to be “golden end-states”. Things that, if I could achieve them in my projects, in my day-to-day productive hours, continually as my company grows, would make everything just fine. Kind of like how brushing, flossing and using mouthwash perfectly twice a day will guarantee your teeth are perfect forever.
Except it won’t, because nothing is perfect. Nothing will keep your teeth healthy forever. And none of the golden end-states, the asymptotes of engineering, are ever completely achievable. But accepting them as goals you never quite reach means you will end up challenging yourself, your code and your company to be better all the time. As far as I can tell, that’s how great company cultures are formed - never accepting the status quo.
Push for Continuous Deployment
The concept of continuous deployment is pretty fantastic. Developers can release any kind of change to the production system, at any time, limited only by how long it takes to run through your automated test suite. Such a system is engineered to have tremendous resilience in the face of bugs. It is the logical next step after continuous integration - if your code is fully tested by the time it hits your trunk, why are you waiting? So it goes. I applaud efforts to give developers more control over their code for many reasons; there are so many happy side effects to just letting a developer do a (safe) `git push` to the live environment.
- If you’re pushing your own code, you *will* be paying attention to what happens to your product before and after the release. Seeing something you’ve built go live is one of the great joys of being an engineer.
- Less time waiting for someone else to release your code keeps your brain attuned to the risks and caveats of the release.
- If you have access to an internal Feature Flipping framework, the fine-grained control over a particular feature allows for tons of flexibility and - again - results in even more attention paid to the state of your product (see the sketch after this list).
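To make that last point concrete, here is a minimal sketch of what flag-guarded code tends to look like. The `FeatureFlags` class, the `new_upload_flow` flag and the two pipeline functions are hypothetical stand-ins for illustration, not the API of any particular framework:

```python
# Hypothetical feature-flipping sketch: which code path runs is a matter of
# configuration, so turning a feature off is a config change, not a rollback.

class FeatureFlags:
    """In-memory flag store; a real framework reads from a config service or DB."""

    def __init__(self, flags):
        # Map of flag name -> audience: either "all" or a set of user ids.
        self._flags = flags

    def is_enabled(self, name, user_id):
        audience = self._flags.get(name, set())
        return audience == "all" or user_id in audience


def legacy_upload_pipeline(payload):
    return f"legacy:{payload}"   # the safe, battle-tested path


def new_upload_pipeline(payload):
    return f"new:{payload}"      # the code that just shipped


flags = FeatureFlags({"new_upload_flow": {42, 99}})  # dark-launched to two users


def handle_upload(user_id, payload):
    if flags.is_enabled("new_upload_flow", user_id):
        return new_upload_pipeline(payload)
    return legacy_upload_pipeline(payload)
```

The shape matters less than the property it buys you: rolling a feature back is flipping a flag, not cutting a new release.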
But for all the happiness you get from being able to push quickly, the task of creating a completely resilient system is, to put it mildly, a daunting one. Even with extremely sophisticated flagging, daily deployments and a karma system, Facebook still has challenges with its release quality. That quality problem means that, realistically, you can never reduce the window of terrible things happening in your production environment to zilch.
The best approach towards continuous deployment I’ve seen holds a healthy tension with the need for extraordinarily high quality. In a model that acknowledges CD as an end state for releasing code, it is understood that we might never get all the way to “purely continuous” deployment, but that there is an ongoing forcing function to try and get there anyway.
In other words, plan for continuous deployment, but be flexible about the issues that come up along the way. This works for Facebook as well as for more conservative companies, like Box, where I work now. Even in enterprise software, the guiding values of CD hold up, and we are constantly trying to work more of them in - unless we can’t.
Strive for 100% Code Coverage
Nothing ruins my day like a serious bug (or stream of bugs) caused by a change someone else made in code that I wrote. If pure continuous deployment makes software engineers happy during the release process now, then creating test coverage makes them happy when others release later. In the same vein as CD, it’s about creating confidence and removing anxiety when making a change.
I have never, ever actually seen a serious project with 100% code coverage.
In practice, getting to 100% code coverage is insanely hard with any significant amount of code or a serious product backlog. Actually, when 100% code coverage is explicitly made mandatory, I’ve seen it be one of the first requirements to get the boot. In large part this has to do with how hard it is, but also with how perceptibly the returns diminish over time. You find yourself testing functions for testing’s sake, even when the code carries a seemingly negligible level of risk (think testing a function that just returns a constant or blindly delegates to some library function).
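For illustration, this is the kind of test the last few percent of coverage tends to demand; both functions are invented for this post:

```python
# Coverage for coverage's sake: the code under test carries almost no risk,
# but it still counts against your percentage until a test touches it.
from datetime import datetime

MAX_UPLOAD_MB = 50


def max_upload_mb():
    return MAX_UPLOAD_MB                      # just returns a constant


def parse_timestamp(value):
    return datetime.fromisoformat(value)      # blindly delegates to the stdlib


def test_max_upload_mb():
    assert max_upload_mb() == 50              # restates the constant verbatim


def test_parse_timestamp():
    # effectively re-tests datetime.fromisoformat, which the stdlib already covers
    assert parse_timestamp("2023-01-01T00:00:00").year == 2023
```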
However, getting 80-90% code coverage is far from impossible. In fact, it’s quite doable most of the time, especially if you can identify dead code and have the freedom to remove it. Even getting just a fraction of my active code hooked into a regression testing framework gives me tremendous confidence that basic aspects of my product won’t flake out.
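If you do want to hold a line somewhere short of 100%, one way to keep yourself honest is to fail the build when coverage dips below a floor. This sketch uses coverage.py’s Python API with pytest; the 85% threshold and the `tests/` path are assumptions, and most teams would use the equivalent command-line flags instead:

```python
# Sketch: run the regression suite under coverage.py and fail the build if
# total coverage drops below an agreed floor (85% is an assumed number).
import sys

import coverage
import pytest

COVERAGE_FLOOR = 85.0

cov = coverage.Coverage()
cov.start()
exit_code = pytest.main(["tests/"])   # run the test suite
cov.stop()
cov.save()

total = cov.report()                  # prints a per-file table, returns total %
if exit_code != 0 or total < COVERAGE_FLOOR:
    sys.exit(1)
```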
Code coverage really drives home the asymptote metaphor - you pay exponentially more effort to get marginally higher value as you get closer to the end. So should we be striving for it?
In fact, there is an argument for always trying to get there. In medicine, aviation fault analysis, and general risk management, catastrophic failures are caused by lots of tiny issues and the stars aligning in exactly the worst way possible. This is commonly called the Swiss Cheese Model of errors, a term I learned from the brilliant Noah Sussman. The argument goes: every increase in code coverage, however minuscule, reduces the likelihood of a Swiss Cheese failure.
Again, the methodology that has succeeded the most in projects I’ve worked on strives for 100% code coverage and balances that goal against other realistic needs. It acknowledges that there is some level of coverage that’s fine for now, but never enough.
Know everything, all the time
This one is probably the least measurable, and the most obvious, but it’s worth pointing out. Whether you’re reading HN or building out A/B analytics for a new feature, having concrete knowledge is fundamental to any kind of decision making.
It probably seems trite to even bring it up, but I do want to highlight a very real, related problem. I’ve encountered people who think they’ve hit some acceptable amount of knowledge, as a result of past experience or otherwise. Those folks have been incredibly hard to talk to about experiments, next steps in a project or even basic strategies, let alone to get to agree with you. In the worst case, it becomes a battle of intuitions and reasoning without any basis in ground truth.
The short of it is: given some amount of knowledge, I can produce a valid solution to a problem. Therefore, if I keep learning, all the time, day in and day out, it becomes much more likely that I can solve not just some problems but *many* problems. It also means I can come up with some mechanism for proving that my solutions are correct.
Consequently, I have become a far happier engineer and coworker by constantly striving for new information. Perfect information is generally impossible, but to the extent that I can help it, I do not want to be unable to answer a question, or to be handed a problem I can’t form a reasonable solution to, simply for lack of information. Being knowledgeable means debates don’t devolve into meaningless arguments, especially if everyone is playing the knowledge game.
To be sure, there are other asymptotes besides these. Having long-range goals like these gives a more concrete set of values than morale-boosting company values (like “Honesty” or “Innovation”) generally do.