In defense of the community distribution
Ever since the mass adoption of Agile development techniques and devops philosophies that attempt to eradicate organizational silos, there’s been a welcome discussion on how to optimize development for continuous delivery on a massive scale. Some of the better-known adages that have taken root as a result of this shift include “deploy in production after checking in code” (feasible due to the rigorous upfront testing required in this model), “infrastructure as code”, and a host of others that, taken out of context, would lead one down the path of chaos and mayhem. Indeed, the shift towards devops and agile methodologies and away from “waterfall” has led to a much-needed evaluation of all the processes around product and service delivery that were taken as a given in the very recent past.
In a cloud native world, where workloads and infrastructure are all geared towards applications that spend their entire life cycle in a cloud environment, one of the first shifts was towards lightning-fast release cycles. No longer would dev and ops negotiate 6-month chunks of time to ensure safe deployment of major application upgrades in production. No, in a cloud native world, you deploy incremental changes in production whenever needed. And because the dev and test environments have been automated to the extreme, the pipeline for application delivery in production is much shorter and can be triggered by the development team, without needing to wait for a team of ops specialists to clear away obstacles and build out infrastructure – that’s already done.
Let me be clear: this is all good stuff. The tension between dev and ops has been a source of frustration for ages, and it has left significant organizational scar tissue in the form of burdensome regulations enforced by ops teams and rigid hierarchies that serve to stifle innovation and prevent rapid change. This is anathema, of course, to the whole point of agile, and it directly conflicts with the demands of modern organizations to move quickly. As a result, a typical legacy development pathway may have looked like this:

In the eyes of agile adherents, this is heretical. Why would you waste effort creating release branches solely for the purpose of branching again and going through another round of testing? This smacks of inefficiency. In a cloud native world, developers would rather cut out the middle step entirely, create a better, comprehensive testing procedure, and optimize the development pipeline for fast delivery of updated code. Or as Donnie Berkholz put it: this model implies waterfall development. What a cloud native practitioner strives for is a shortened release cycle more akin to this:

Of course, if you’ve read my series about building a business on open source products and services, you’ll note that I’m a big advocate for the 3-step process identified in figure 1. So what gives? Is it hopelessly inefficient, a casualty of the past, consigned to the ash heap of history? I’ll introduce a term here to describe why I firmly believe in the middle step: orthogonal innovation.
Orthogonal Innovation
In a perfect world, innovation could be perfectly described before it happens, and the process for creating it would take place within well-defined constructs. The problem is that innovation happens all the time, thanks to the psychological phenomenon of mental incubation, where ideas fester inside the brain for some indeterminate period of time until they find their way into a conscious state, producing an “Aha!” moment. Innovation is very much conjoined with happenstance and timing. People spawn innovative ideas all the time, but the vast majority of them never take hold.
As I wrote in It Was Never About Innovation, the purpose of communities created around software freedom and open source was never to create the most innovative ecosystems in human history – that was just an accidental byproduct. By creating rules that mandated that all parties in an ecosystem were relative equals, the stage was set for massively scalable innovation. If one were to look at product lifecycles solely from the point of view of engineering efficiency, then yes, the middle piece of the 3-stage pathway looks extraneous. However, that view assumes a core development team is responsible for all necessary innovation, and that none more is required. It also assumes that a given code base has a single purpose and a single customer set. I would argue that the purpose of the middle stage is to expose software to new use cases and to people who bring a different perspective from the primary users or developers of a single product. Furthermore, once you expose this middle step to more people, they need a way to iterate on further developments for that code base – developments that may run contrary to the goals of the core development team and its customers. Let’s revisit the 3-stage process:
In this diagram, each stage is important for different reasons. The components on the left represent the raw open source supply chain that forms the basis for the middle stage. The middle stage serves multiple entities in the ecosystem that springs up around the code base and is a “safe space” where lots of new things can be tried without introducing risk into the various downstream products. You can see echoes of this in many popular open source-based products and services. Consider the Pivotal Cloud Foundry process, as explained by James Watters in this podcast with Andreessen Horowitz: raw open source components -> CloudFoundry.org -> Pivotal Cloud Foundry, with multiple derivatives from CloudFoundry.org, including IBM’s.
As I’ve mentioned elsewhere, this also describes the RHEL process: raw components -> Fedora -> RHEL. And it’s the basis on which Docker is spinning up the Moby community. Once you’ve defined that middle space, there are many other fun things you can do, including building an identity for that middle distribution, which is what many open source communities have formed around. This process works just as well from an InnerSource perspective, except in that case the downstream products’ customers are internal, and there are multiple groups within your organization deriving products and services from the core code base in the middle stage. Opening up the middle stage to inspection and modification increases the surface area for innovation and gives breathing room for the crazier ideas to take shape, possibly making them slightly less crazy and more useful to the other downstream participants.
For more on this, come to our meetup at the Microsoft NERD center in Cambridge, MA on May 23, where I’ll present on this subject.
Addendum: none of the above necessitates a specific development methodology. It could be agile, waterfall, pair programming or any other methodology du jour – it’s immaterial. What matters is constructing a process that allows for multiple points of innovation and iterative construction, even – or especially – where it doesn’t serve the aims of a specific downstream product. You want a fresh perspective, and to get that, you have to allow those with different goals to participate in the process.