Why Linux isn’t yet ready for synchronized release cycles

Ubuntu founder Mark Shuttleworth has again called for the developers of major open-source software projects and Linux distributions to synchronize their development and release cycles. He argues that consistent, universal adherence to a time-based release model would promote more collaboration between projects, ensure that users have access to the latest improvements to popular applications, and make the Linux platform a steadier and more predictable target for commercial software vendors.

Shuttleworth wants to organize major releases into three separate “waves,” each covering a different layer of the desktop stack. The first wave would include fundamental components such as the Linux kernel, the GCC compiler, graphical toolkits like GTK+, and development platforms like Python and Java. The second wave would include the desktop environments and desktop applications, and the third would be the distributions themselves.

Although a unified release cycle would eliminate much of the complexity associated with building a Linux distribution, the concept poses significant challenges and offers few rewards for software developers. Achieving synchronization on the scale that Shuttleworth envisions would require some open-source projects to radically change their current development models and adopt an approach that simply isn’t viable for many of them.
Understanding time-based release cycles

A time-based release cycle means issuing releases consistently at a specified interval. For projects that employ this model, development generally involves establishing a roadmap of planned features and implementing as many of them as possible before the code freeze near the end of the interval; any features that aren’t finished by then are deferred to a later release. The focus then shifts to debugging and quality assurance until the end of the interval, when the software is officially released.
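To make the timeline concrete, here is a minimal sketch in Python of how the milestones of a hypothetical six-month cycle might be laid out. The specific freeze offsets are illustrative assumptions, not any particular project’s actual schedule.

```python
from datetime import date, timedelta

# Hypothetical 26-week (six-month) cycle; the freeze offsets below are
# illustrative assumptions, not any real project's published schedule.
CYCLE_WEEKS = 26
FEATURE_FREEZE_WEEKS_BEFORE_RELEASE = 8   # stop merging new features
CODE_FREEZE_WEEKS_BEFORE_RELEASE = 3      # bug fixes and QA only from here on


def milestones(cycle_start: date) -> dict[str, date]:
    """Derive the key dates of one time-based release cycle."""
    release = cycle_start + timedelta(weeks=CYCLE_WEEKS)
    return {
        "feature freeze": release - timedelta(weeks=FEATURE_FREEZE_WEEKS_BEFORE_RELEASE),
        "code freeze": release - timedelta(weeks=CODE_FREEZE_WEEKS_BEFORE_RELEASE),
        "release": release,
    }


if __name__ == "__main__":
    # Print the schedule for a cycle starting on an arbitrary example date.
    for name, day in milestones(date(2008, 3, 1)).items():
        print(f"{name:>14}: {day.isoformat()}")
```

The point of the fixed offsets is that the dates never move: whatever isn’t merged by the feature freeze waits for the next cycle.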

This model works well for many projects, particularly the GNOME desktop environment. One consequence, however, is that it forces developers to work incrementally and discourages large-scale modifications that can’t fit within a single cycle. Sometimes the window simply isn’t large enough to merge and test major architectural changes that were incubated in parallel outside of the main code tree.

When that happens, developers have to ask themselves whether the benefits of the new features outweigh the detrimental impact of the regressions (as with the adoption of GVFS in GNOME 2.22, for example). Sometimes they have to pull features at the last minute or push back the release date to allow for more debugging. These are hard choices, and, as Shuttleworth himself notes, making them requires a lot of discipline.

Although time-based cycles can work well for some projects, attempting to force every project to adopt this approach and then synchronize their schedules universally could seriously degrade the development process. If projects come to depend on synchronization, a delay at any level of the stack would disrupt every other layer. That could put enormous pressure on individual projects to stick to the plan, even when doing so would be detrimental to the program and its end users.