halting problem :: Episode 2.0: Retrospective

:: ~10 min read

A retrospective on GNOME 1.x before we launch into the main narrative of GNOME 2: we’re going to look back at what GNOME 1 did right, what it did wrong, and what it meant in the larger context of the history of the project.

Hello, everyone, and welcome back to the History of GNOME! If you’re listening to this in real time, I hope you had a nice break over the end of 2018, and are now ready to begin the second chapter of our main narrative. I had a lovely time over the holidays, and I’m now back working on both the GNOME platform and on the History of GNOME, which means reading lots of code by day, and reading lots of rants and mailing list archives in the evening—plus, the occasional hour or so spent playing videogames in order to decompress.

Before we plunge right back into the history of GNOME 2, though, I wanted to take a bit of your time to recap the first chapter, and to set the stage for the second one. This is a slightly opinionated episode… well, slightly more opinionated than usual, at least… as I’m trying to establish the theme of the first chapter as a starting point for the main narrative going forward. Yes, this is an historical perspective on the GNOME project, but history doesn’t serve any practical purpose if we don’t glean trends, conflicts, and resolutions of past issues that can apply to the current period. If we don’t learn anything from history then we might as well not have any history at all.

The first chapter of this history of the GNOME project covered roughly the four years that go from Miguel de Icaza’s announcement in August 1997 to the release of GNOME 1.4 in April 2001—and we did so in about 2 hours, if you count the three side forays on GTK, language bindings, and applications that closed the first block of episodes.

In comparison, the second chapter of the main narrative will cover the 9 years of the 2.x release cycles that went from 2001 to 2010. The second major cycle of GNOME is not just more than twice as long as the first: it’s also more complicated, as a result of the increased complexity facing any project that deals with creating a modern user experience—one not just for the desktop but also for the mobile platforms that were suddenly becoming more important in the consumer products industry, as we’re going to see during this chapter. The current rough episode count is, at the moment I’m reading this, about 12, but as I’m striving to keep each episode in the 15 to 20 minute range, I’m not entirely sure how many actual episodes will make up the second chapter.

Looking back at the beginning of the project we can say with relative certainty that GNOME started as a desktop environment in a time when desktops were simpler than they are now; at the time of its inception, the bar to clear was represented by Windows 95, and while that was admittedly a fairly high bar for any volunteer-driven effort, by the time GNOME 1.4 was released to the general public of Linux enthusiasts and Unix professionals, it was increasingly clear that a new point of comparison was needed, mostly courtesy of Apple’s OS X and Microsoft’s Windows XP. Similarly, the hardware platforms started off as simple iterations on the PC compatible space, but vendors quickly moved the complexity further and further into the software stack—as anybody with a WinModem in the late ‘90s could tell you. Since Linux was barely a blip on the radar of even the largest hardware vendors, new hardware targeted Windows first and foremost, and support for Linux appeared only whenever some enterprising volunteer managed to reverse engineer the chipset du jour, if it appeared at all.

As we’ve seen in the first episode of the first chapter, the precursors to what would become a “desktop environment” in the modern sense of the term were made of smaller components, bolted on top of each other according to the needs, and whims, of each user. A collection of LEGO bricks, if you will, if only the bricks were made by a bunch of different vendors and you had to glue them together to build something. KDE was the very first environment for Linux that tried to mandate a stricter integration between its parts, by developing and releasing all of its building blocks as comprehensive archives. GNOME initially followed the same approach, with libraries, utilities, and core components sharing the same CVS repositories, and released inside shared distribution archives. Then, something changed inside GNOME; and figuring out what changed is central to understanding the various tensions inside a growing free and open source software project.

If desktop environments are the result of a push towards centralisation, and towards comprehensive, integrated functionality exposed to the people using them, but not necessarily contributing to them, then splitting off modules into their own repositories, with their own release schedules, their own idiosyncrasies in build systems, options, coding styles, and contribution policies, ought to run counter to that centralising effort. The decentralisation creates strife between projects, and between maintainers; it creates modularisation and API barriers; it generates dependencies, which in turn engender the possibility of conflict, and barriers not just to contribution, but to distribution and upgrade.

Why, then, does this happen?

The mainstream analytical framework of free and open source software tells us that communities consciously end up splitting off components, instead of centralising functionality, once a project reaches critical mass; community members prefer delegation and the composition of components with well-defined edges and interactions, instead of piling functionality and API on top of a hierarchy of poorly defined abstractions. They like small components because maintainers value a design philosophy that allows them to provide choice to the people using their software, and gives discerning users the ability to compose an operating system tailored to their needs, via loosely connected interfaces.

Of course, all I said above is a complete and utter fabrication.

You have no idea how many takes I needed to get through all of that without laughing.

The actual answer would be Conway’s Law:

organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations

We have multiple contributors, typically highly opinionated, and typically young or, at least, without much real world experience. Worst case, the only experience available comes from years of computer science lessons, where object orientation reigns supreme, and is still considered a good idea despite all the evidence to the contrary.

These multiple contributors end up carving out their own spaces, because the required functionality is large, and the number of people working on it is never enough to cover all of it. New functionality is added; older modules are dropped because they are “broken”, or “badly designed”; new dependencies are created to provide shared functionality, or introduced as abstraction layers to paper over multiple modules offering slightly different takes on how some functionality ought to be implemented, or what kind of dependencies they require, or what kind of language or licensing terms ought to be used.

Complex free software projects with multiple contributors working on multiple components favour smaller modules because they make it easier for each maintainer to keep stuff in their head without going stark raving mad. Smaller modules make it easier to insulate a project against strongly opinionated maintainers, and let other, equally opinionated, maintainers route around the things they don’t like. Self-contained modules make niche problems tractable, or at least they contain the damage.

Of course, if we declared this upfront, it would make everybody’s life easier as it would communicate a clear set of expectations; it would, on the other hand, have the side effect of revealing the wardrobe malfunction of the emperor, which means we have to dress up this unintended side effect of Conway’s Law as “being about choice”, or “mechanism, not policy”, or “network object model”.

The first chapter in the history of the GNOME project can be at least partially interpreted within this framework; the idea that you can take a complex problem space and partition it until each issue becomes individually tractable, and then build up the solution out of the various bits and pieces you managed to solve, letting them combine and recombine as best they can to suit the requirements of the moment, platform, or use case. Throw in CORBA as an object model for good measure, and you end up with a big box of components that solve arbitrarily small issues on their own, and that can theoretically scale upwards in complexity. This, of course, ignores the fact that combinatorial explosions of interactions make things very interesting for anybody developing, testing, and using these components—and I use “interesting” in the “oh god oh god we’re all going to die” sense of the word.
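To put a rough number on that combinatorial explosion, here’s a back-of-the-envelope sketch in Python; the module and version counts are hypothetical, picked only to illustrate the growth rates, not taken from any actual GNOME release:

```python
# Purely illustrative: the counts below are made up, not GNOME's
# actual module numbers.
from math import comb

components = 20  # independently released modules in a hypothetical desktop
versions = 3     # versions of each module a distribution might ship

# Every pair of modules is a potential interface that can misbehave.
pairwise_interfaces = comb(components, 2)

# Every combination of module versions is a distinct stack someone may run.
configurations = versions ** components

print(f"{pairwise_interfaces} pairwise interactions to keep working")
print(f"{configurations:,} possible version combinations to test")
# 190 pairwise interactions to keep working
# 3,486,784,401 possible version combinations to test
```

Even these modest, made-up numbers produce billions of distinct stacks; nobody gets to test that matrix exhaustively, so the breakage surfaces on the machine of whoever happens to combine the wrong bricks.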

More importantly, and on a social level, this framework allows project maintainers to avoid having to make a decision on what should work and what shouldn’t; what is supported and what isn’t; and even what is part of the project and what falls outside of it. If there is some part of the stack that is misbehaving, wrap it up; even better, if there are multiple competing implementations, you can always paper over them with an abstraction layer. As long as the API surface is well defined, functionality is somebody else’s problem; and if something breaks, or mysteriously doesn’t work, then I’m sure the people using it are going to be able to fix it.

Well, it turns out that all the free software geeks capable of working on a desktop environment are already working on one, which by definition means that they are the only ones that can fix the issues they introduced.

Additionally, and this is a very important bit that many users of free and open source software fail to grapple with: volunteer work is not fungible—that is, you cannot tell people doing things in their spare time, and out of the goodness of their hearts, to stop doing what they are doing and volunteer on something else. People just don’t work that way.

So, if “being about choice” is on the one end of the spectrum, what’s at the other? Maybe a corporate-like structure, with a project driven by the vision of a handful of individuals, and implemented by everyone else who subscribes to that vision—or, at least, that gets paid to implement it.

Of course, the moment somebody decides to propose their vision, or work to implement it, or convince people to follow it, is the moment when they open themselves up to criticism. If you don’t have a foundational framework for your project, nobody can accuse you of doing something wrong; if you do have it, though, then the possibilities fade away, and what’s left is something tangible for people to grapple with—for good or ill.

At the beginning of the GNOME project we had very few individuals with a vision for the desktop; while it was a vision made of components interoperating to create something flexible and adaptable to various needs, it still adhered to specific design goals, instead of just putting things together from disparate sources regardless of how well the interaction went. This led to a foundational period, in which protocols and interfaces were written to ensure that the components could actually interoperate, and whose output was somewhat lacklustre: out of three 1.x minor releases all we got was a panel, a bunch of clock applets, and a control centre. All the action happened in the lower layers of the stack. GTK became a reasonably usable free software GUI toolkit for Linux and other Unix-like operating systems; the X11 world got a new set of properties and protocols to deal with modern workflows, in the form of the EWMH; applications and desktop modules got shared UI components, using CORBA to communicate between them.

On a meta level, the GNOME project gave itself a formal structure, with the formation of a release team and a non-profit foundation that would act as a common place to settle the internal friction between maintainers, and to manage the external contributions from companies and the larger free and open source software world.

Going back to our frame of reference for interpreting the development of GNOME as a community of contributors, we can see this as an attempt to rein in the splintering and partitioning of the various components of the project, and as a push towards its new chapter. This tension between the two efforts—one to create an environment with a singular vision, even if driven by multiple people; and the other, to create a flexible environment that respected the various domains of each individual maintainer, if not each individual user—defined the first major cycle, as it would (spoiler alert) every other major cycle.

Now that the foundational period was over, though, and the challenge posed by commercial platforms like Windows and OS X had been renewed, the effort to make GNOME evolve further was not limited to releasing version 2.0: it also meant establishing a roadmap for the future beyond it.


Next week we’re going to dive right back into the development of GNOME, starting with the interregnum period between 1.4 and 2.0, in which our plucky underdogs had finally become mainstream enough to get onto Sun’s and IBM’s radars, and had to deal with the fact that GNOME was not just a hobby any more, in the episode titled: “On Brand”.
