Overconfidence and future-proofing
Kevin Kelly points out that trying to future-proof your technology stack is often a bad investment. Since it’s hard to predict how technology will evolve, you’re better off buying just-in-time. Similarly, since it’s hard to predict how your requirements will change, you might be better off building just-in-time. It’s an interesting question, then, why future-proofing is so pervasive, and whether we indeed do too much of it.
Future-proofing makes sense if you can predict the future in some way. If you have an idea of how your needs will change, it makes sense to invest today to reduce or avoid the cost of making changes later. Future-proofing can also make sense as a hedge. If the impact of a change in requirements is catastrophic, it’s sensible to take measures now, even if you can’t estimate the probability of that change happening. For example, turning customers away because of stock-outs (→ manage your inventory more conservatively), losing user trust by turning off a service when it’s most needed (→ provision more compute than you need), or trading on stale information due to a data feed outage (→ have multiple redundant sources).
In writing software, future-proofing takes many forms:
Make an interface more general than it needs to be
Decouple components more than you need to
Provision more storage and compute than you need
Make scalability the primary design goal, even if you’re nowhere near scale
Make performance a primary design goal, even if it’s not important to your value prop
Implement speculative features instead of improving the core of your product
“Just to future-proof things” is perhaps the most common argument I hear to defend questionable design decisions. And conversely, “over-engineering” is the most common counterargument.
I think there are two reasons why we future-proof too much:
We’re overconfident in our ability to predict the future — Even with a lot of relevant experience and information, people routinely fail to anticipate future needs correctly. One aspect of this is over-optimism. We tend to overweight positive scenarios (e.g. we will grow and scale rapidly), and underweight negative ones (e.g. we will fail to find product-market fit and run out of money). Future-proofing tends to be concerned only with the former.
We’re ignorant of the tradeoffs — Arguments in favor of future-proofing align with our inherent bias for risk aversion (it’s the safe thing to do), so they’re easy to get on board with. Understanding the costs and tradeoffs is much more subtle and harder to get intuition for.
The second error — ignoring the tradeoffs — is especially detrimental in early stage ventures, where opportunity cost can be infinite. If things don’t ship in time, the whole venture can fail. Not only is the effort that went into future-proofing wasted; everything else built alongside it is wasted too. Shipping something that works reasonably well today is often a far better outcome than shipping something late that works in all future scenarios. While this seems to be well-understood, it’s amazing how few founding teams manage to internalize it. A huge amount of great technology has been built that never made it into customers’ hands.
This is not to say we should never future-proof. The key is to appreciate the tradeoffs of doing so — and our biases. Great founders — great engineers — understand the constraints the business as a whole is under. A rough expected ROI analysis can go a long way. While it’s impossible (and pointless) to try to get an exact answer, it forces you to think about costs and the parts of the distribution you’d typically ignore. How much is doing X going to cost us in engineering time (NB: multiply by 3)? If we don’t do X, will the product still work? What’s the impact on our value prop? If we don’t do X, what else could we do? These are the kinds of questions you need to ask. Don’t future-proof because it’s the right thing to do in isolation. Understand the context.
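To make the idea of a rough expected ROI analysis concrete, here is a minimal sketch in Python. The scenario names, probabilities, and payoffs are entirely hypothetical, invented for illustration; the point is only the shape of the reasoning, including the pessimistic branch that future-proofing arguments usually ignore and the "multiply by 3" fudge factor on engineering estimates.

```python
# Hypothetical back-of-the-envelope comparison for a future-proofing
# decision: spend engineering time now on feature X, or ship without it.
# All numbers are made up for illustration, in arbitrary value units.

scenarios = {
    # Probability of each scenario, and the payoff with / without X.
    "rapid growth, X needed":      {"p": 0.15, "with_x": 100, "without_x": 70},
    "modest growth, X irrelevant": {"p": 0.45, "with_x": 40,  "without_x": 50},
    "no product-market fit":       {"p": 0.40, "with_x": 0,   "without_x": 0},
}

# Engineering cost of X, with the "multiply by 3" estimation fudge factor.
estimated_cost = 5
cost_of_x = estimated_cost * 3

# Expected value of each choice across all scenarios.
ev_with_x = sum(s["p"] * s["with_x"] for s in scenarios.values()) - cost_of_x
ev_without_x = sum(s["p"] * s["without_x"] for s in scenarios.values())

print(f"EV with X:    {ev_with_x:.1f}")
print(f"EV without X: {ev_without_x:.1f}")
```

With these particular numbers, skipping X comes out ahead: the optimistic scenario where X pays off is too unlikely to cover its cost once the failure branch is counted. The exact answer doesn’t matter; what matters is that writing it down forces you to state probabilities, costs, and the negative scenarios explicitly.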