The Trouble with Linux

Linux is a fantastic idea: a highly-reliable, free operating system where we all have access to the source code, allowing us to review, enhance, or tailor it to our personal needs.

In practice, however, Linux is a clusterfuck.

I abandoned the platform circa 2003 after I got my first Mac, but Apple’s loss of interest in power users since the invention of the iPhone has had my panties in a knot. Since the Linux-based Raspberry Pi is a common platform for my open-source project, pianod2, I eventually broke down and bought one. The goal was to reduce cross-platform problems through additional testing and resolve user-reported issues more easily, but I also wanted to explore escape strategies from Apple’s ecosystem.

What I found is the same fucked-up mess as when I left 12 years ago.

Where Linux Succeeds

To understand why Linux sucks, we need to look at where it wins.

Linux’s technical underpinnings tend to be very advanced. There is a continual evolution of technology and, unlike some of the vendors, it does not hold back in the name of compatibility or ease-of-use.

Apple, for example, is running HFS+, a filesystem going back to the Mac’s pre-UNIX days with enhancements to support symbolic links, UNIX permissions, extended attributes, compressed files/executables, journaling and other cruft for which Apple has added support over the years. On the one hand, it’s impressive it works and is as reliable as it is (though there were quirks, especially around symbolic links, for several OS iterations in the early OS X days). On the other hand, it’s a bastardization with scary kludges that would be better off replaced with something modern and clean.

Compare that to Linux: ext2 gave way to ext3, which was upgradable in place to ext4. The new technologies offer improved reliability, efficiency and performance, and Linux takes advantage of them.

Embracing new technology and being willing to lose the old could let Linux be free of the hassles of legacy support.

Linux Users

Despite Linux being potentially awesome and modern, actual Linux users fall into several different groups, each with different needs.

Server Farms
Because Linux can be super-efficient, it’s great for server farms. But the catch here is that a given company doesn’t use crazy varieties of Linux or hardware. Instead, they usually pick a particular model of server and a Linux distribution, pair them up and resolve the troubles, then use that as a template for hundreds of thousands of duplicate systems. Sure, there are hassles, but when scaled over enough hardware, the cost savings justify it.
Embedded Systems
Because Linux is composed from lots of discrete projects, it can be stripped down and tuned to custom needs without a lot of waste. If you’re designing a NAS (network storage device), for example, Linux is a reasonable choice: it supports all sorts of network filesystems, works efficiently, and is equipped with excellent local filesystems. With some work, Linux can be stripped of everything unneeded. Again, there are hassles, but after the problems are solved once, you have a template that can be duplicated to lots of identical little NAS or other embedded boxes you’re building.
Tinkerers
Tinkerers are looking for parts for their projects. They aren’t worried about reproducing their work or whether something is easy to set up or use. Resolving the snags that Linux presents is part of the tinkering process, along with building small circuits and writing software to interact with them, or solving whatever problems the tinkerer has invented for him or herself. They will re-use old components they own from earlier projects, meaning older hardware and obscure peripherals.

Where Linux Loses

Linux lacks overall integration. Using Linux is like driving a car grafted together from all the best components selected from Ferrari, Lamborghini, Porsche, Audi, Mercedes, Volvo, Jaguar, Lexus and Aston Martin. After going through the effort to get it all together, you’d have an engine that runs great but is a beast to work on. It would ride on a bare chassis, or perhaps a boxy body put together as an afterthought.

The problem is one of motivations:

  • The projects that build each component are very concerned with performance, efficiency and reliability. They build something excellent for their needs, but not necessarily easy to use. They build what they want, not what everyone else wants.

  • The integrators have to support the different user groups. Server rooms have the latest-and-greatest hardware and lots of compute power. Embedded systems have modern devices, unless using unique hardware; either way, there are usually relatively few computing resources. Tinkerers, depending on their project, may use great new hardware, some obscure development kit, or some obsolete piece of junk they had lying around.

  • To accommodate the different needs, integrators (the folks managing the distros) have packages divvied up into tiny components. I don’t install ffmpeg, or even the half-dozen libraries that make up ffmpeg: I install each library’s development headers, documentation, runtime library, debug support and utilities as separate packages.

The problem then compounds itself. Linux is a hodgepodge of projects, and there are hundreds of different flavors (distributions) of Linux built from different projects. Some distributions embrace a change, while others remain skeptical and retain the older version. Yet others may go a different direction, swapping out a package for an alternate solution. Source-code compatibility is a goal; binary compatibility is not.

The result is huge messes such as system startup. There were System V startup scripts (/etc/init.d), which were then formalized in the LSB (Linux Standard Base). Then init was replaced with systemd(1) and other attempts to improve on old-style init. But rather than replace LSB wholesale, compatibility layers were put in place to allow old-school startup to keep working.
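To make the duplication concrete, here’s what a daemon’s startup looks like under the new scheme. The service name and paths here are invented for illustration:

```ini
# /etc/systemd/system/mydaemon.service (hypothetical)
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The old /etc/init.d script for the same daemon would need hand-written start/stop/restart/status cases, PID-file bookkeeping, and LSB header comments. Meanwhile, on a mixed system, systemd’s sysv-generator still scans /etc/init.d and wraps leftover scripts in synthesized units — exactly the compatibility layering described above.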

The result is a confusing, poorly and contradictorily documented disaster. To be fair, Apple made a similar transition from init to launchd in 10.4. But by 10.6 (maybe even 10.5), support for the old mechanism remained only for add-on compatibility; the OS itself had been fully adapted to the new solution. (And fair again: the state of startup in Linux is improving in some distributions as they move beyond the transition.)

I’m finally ready to enumerate the problems with Linux:

  1. Because everything comes piecemeal, there are perpetual snags getting it to all work together.

  2. Because each project is solving its own problems, there’s no overall coherence to management. Sometimes there’s a GUI configurator. Other times there’s a command-line setup tool. But most often, you’re down to entering obscure command-line incantations or editing mysterious configuration files with your favorite text editor.

  3. Trying to get anything to work is an exercise in man pages, Google searches and cookbooking on the command line.

  4. In the Mac and Windows world, paid software is available; profit drives the availability of multiple reliable, complete, easy-to-install, easy-to-use office packages, checkbook programs, calendars, mail programs, and photo managers. For-profit software essentially doesn’t exist for Linux because the hardware architecture, installed libraries and OS services aren’t consistent. While there are open-source versions of common software, they come with installation hassles, quirks and/or bugs, and awkward user interfaces, and are often incomplete.

Perhaps some tools could be built to ease setup, but since configuration files are often free-form text, that becomes a difficult task. I’m certainly guilty of that one with pianod’s startscript, which is where one specifies audio output devices, crossfade parameters, inputs/sources, and starting state. It’s easy to code, easier to read and manipulate than XML, and very flexible. But it’s not easy to write a management tool for it: statement order matters, it’s unclear how to preserve comments, and how does the tool deal with unknown content? (With XML, unexpected items are easy: just put them back the way you found them.)
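The XML round-trip point can be sketched in a few lines. The configuration below is a hypothetical pianod-like file invented for illustration; the tool edits the one element it understands and writes everything else back untouched:

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration; <mystery> stands in for content
# the management tool knows nothing about.
doc = """<config>
  <audio device="default"/>
  <crossfade seconds="4"/>
  <mystery knob="42"/>
</config>"""

root = ET.fromstring(doc)

# Edit the one element the tool understands...
root.find("crossfade").set("seconds", "6")

# ...and serialize everything, unknown elements included, back out.
out = ET.tostring(root, encoding="unicode")
print(out)
```

A free-form, order-sensitive startscript offers no such guarantee: a naive rewrite has to guess where unknown statements belong and whether the comments around them survive.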

Back to my Mac

As much as I dislike the walls going up at the same time there’s a general dumbing down within Apple’s ecosystem, a month of dealing with Linux has reminded me why I left in the first place: I want to use my computer, not administrate it.

At least 95% of Mac administration is easy GUI controls. There is minimal futzing. Most applications install by downloading and dropping them in the Applications folder, and uninstall by dragging to the trash. Everything looks pretty (Well, that’s because I’m still running Mavericks and not ClownOS—I mean, El Capitan) and, a few third-party apps aside, everything behaves consistently.

I wish Linux had the ease of use of my Mac, because if Apple’s attempts to lock me in had me nervous, then increasingly quirky apps (especially ones that used to work perfectly fine), bugs in Korn Shell’s sleep command, and a CERN-quality security issue where the app sandbox breaks PAM authentication wide open have trashed my confidence in Apple outright (even if ksh has since been fixed and the PAM issue was patched in Sierra). Microsoft’s addition of a Linux compatibility layer has me seriously contemplating a move to Windows.

I can see Linux is brilliant for special uses, especially server-type tasks. But for day-to-day, desktop-style needs, Linux still doesn’t meet mine.