The Sad State of Modern Computing

The modern computer is like a poorly built house: it's bad enough that the design was terrible in the first place, but instead of taking the time to fix it we simply place a bucket under every ceiling leak, and when that proves ineffective we just swap in a larger bucket.

Computing in general is no different: we have grown so used to the initial, suboptimal designs that instead of redesigning them into something suitable for the long term, we have bolted on small hacks here and there to keep things running, to the point where changing anything now means breaking everything written against those "temporary" solutions.

In this article I aim to show prime examples, across many aspects of modern computing, where we failed to create a viable long-term solution, and worse, where we simply "patched a problem up" until the patch became so ingrained in the platform that removing it would break everything built on top of it.

For the casual reader, here are the topics I cover:

Not-so Backwards Compatible
The Internet
Unix/Linux Sound
CPUs
Unix Issues
Windows Issues
API Hell
User Agents
Apple’s Weird Transition
Java
Conclusion

Not-so Backwards Compatible

The term "backwards compatibility" has grown into a heavily abused euphemism for "let everything built on the old, initially terrible design keep working while we try to remedy the situation".

A prime example of this is Microsoft Windows Vista: the next generation of Microsoft programmers realized the primitive mess that the Windows NT codebase had become as a result of application-specific hacks, so they decided to improve it (although some "improvements" were still no better).

The result was the combined moans of hundreds of users as their XP-friendly applications crashed under Vista. Many of those applications relied on undocumented features of the Windows libraries, and when the Microsoft programmers removed or modified those undocumented features (and some documented ones as well), the fit hit the shan, so to speak.

The same applies across all platforms, as you’ll see in other examples below, but because “backwards compatibility” means “keeping the same piss-poor design” in most cases, you can see why other problems begin to rear their ugly heads as our computational needs increase.

The Internet

The Internet as we know it (largely packet-switched, peer-to-peer, TCP/IP based) traces back to 1983, when the United States switched the ARPANET over to TCP/IP; backbone speeds would later climb from "slower" 56Kbps lines to 1.5Mbps T1 links. This precursor to the publicly available Internet was designed with only a few thousand hosts in mind.

Fast forward to the 1990s: everyone and their brother is getting Internet access, and that Internet is built on IP version 4, which uses 32 bits to store a globally unique node address. Applications and operating system libraries are being written daily atop this Internet and its addressing scheme, and everything is fine.

Fast forward to the 2000s: uh-oh, we're starting to realize that 32 bits might not be enough to keep up with the Internet's exponential growth. Subnetting, NAT, and other little hacks are anything but long-term solutions. So we write IP version 6, this time allowing a whole 128 bits for addressing.

It’s like putting a bigger bucket under a ceiling leak.

But herein lies the problem: remember all of those applications and libraries written to use the strained IPv4 protocol? Well, some of them are "legacy applications" ("we have the code, but don't feel like improving it and recompiling it", or, "Yes, this program IS ancient, but we've been using it forever").

While there are some IPv6 enthusiasts who realize the problem, most people either keep using IPv4 because it is universally supported across platforms, or run both stacks side by side for the same reason, which at best buys a little forwards compatibility.
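For new code, at least, the portable answer has existed for years. Here is a minimal sketch of a protocol-agnostic client built on the standard getaddrinfo() interface (the host and port are whatever the caller passes in; the helper name is my own): it works with whatever address family the name resolves to, and nothing in it hard-codes a 32-bit address. The real pain is all the legacy code that stuffed addresses into 32-bit integers and struct sockaddr_in.

```c
/* Sketch of a protocol-agnostic TCP connect: AF_UNSPEC lets the
 * resolver hand back IPv6 and/or IPv4 addresses, and we try each one
 * until something connects. Helper name is hypothetical. */
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int connect_any(const char *host, const char *port)
{
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;     /* IPv4 or IPv6, whichever works */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    int fd = -1;
    for (p = res; p != NULL; p = p->ai_next) {
        fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
            break;                     /* connected */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;                         /* -1 if every address failed */
}
```

Code written like this survives the IPv4-to-IPv6 shift untouched; code that baked 32-bit addresses into its data structures does not.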

Although not a computer itself, the Internet (more specifically, its addressing scheme) is a prime example of where we should have anticipated the growth potential and allowed for it. In our defense, nobody in 1989 could have said with a straight face that the Internet would grow the way it has, but that doesn't change the fact that better design decisions could have prevented the problem we are facing now, even if they had meant more bandwidth overhead.

Unix/Linux Sound

The current state of sound in Unix and Linux can only be described as a disaster, and is best summed up by the following diagram:

[Diagram: the Unix / Linux sound stack (OSS, ALSA and PulseAudio)]

What you see here are the three Unix / Linux sound APIs: OSS, ALSA and PulseAudio (the latter two being Linux-only). First, a little history behind the madness:

In the beginning, there was OSS. OSS provided POSIX-style access to the sound hardware, both Linux and Unix implemented it for cross-platform sound, and everybody was happy. Everybody, that is, until the developers of OSS sold out to a company, changed their license to a proprietary one, and royally pissed off the GNU folk behind Linux development.

Well, most Unix distributions continued to use OSS, as their license permitted, while the Linux community scrambled to find or create an OSS alternative for their platform, in the meantime living with the old, free version of OSS and all of its problems (such as not allowing more than one application to play sound through the card at a time).
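To give a sense of what all of that software was coded against, here is a minimal sketch of the OSS model (device path and parameters are illustrative): the sound card is just a file you open, configure with a few ioctl()s, and write() raw samples to.

```c
/* Minimal OSS-style playback sketch: POSIX file semantics all the way. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/dsp", O_WRONLY);       /* one application at a time! */
    if (fd < 0)
        return 1;

    int fmt = AFMT_S16_LE, channels = 2, rate = 44100;
    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);        /* 16-bit little-endian samples */
    ioctl(fd, SNDCTL_DSP_CHANNELS, &channels); /* stereo */
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);        /* 44.1 kHz */

    static short silence[44100 * 2];           /* one second of silence */
    write(fd, silence, sizeof silence);        /* blocks while it plays */

    close(fd);
    return 0;
}
```

That simplicity is exactly why so much software depended on OSS, and also why a single open() of /dev/dsp could lock every other program out of the card.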

So the Advanced Linux Sound Architecture (ALSA) was born, providing both a free software alternative to OSS and fixes for the problems OSS never solved. Every problem, that is, except the fact that at ALSA's birth most (if not all) sound applications were written against OSS. Herein lies the fundamental problem.
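For comparison, here is a hedged sketch of the same one second of silence rewritten against alsa-lib, the library API applications had to be ported to (the "default" device name and the parameters are illustrative; error handling is omitted):

```c
/* Minimal ALSA playback sketch: a library API instead of a device file.
 * Build with -lasound. */
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    /* 16-bit little-endian, interleaved stereo, 44.1 kHz, 500 ms latency */
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED,
                       2, 44100, 1, 500000);

    static short silence[44100 * 2];           /* one second of silence */
    snd_pcm_writei(pcm, silence, 44100);       /* counted in frames, not bytes */

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}
```

Same card, same samples, a completely different programming model: that is the porting burden every OSS-era application was suddenly asked to carry.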

So ALSA continued to hack away, providing an OSS compatibility layer, while the Unix developers laughed and kept supporting OSS, which would later become free again and grow its own ALSA compatibility layer just to add to the madness. So if an application was proprietary or unmaintained and used OSS for sound, you had to make sure it was the only program touching the OSS (or OSS-emulated) sound device.

After confused application developers rewrote their programs for the new ALSA API, and confused users learned to kill every sound application just to let an OSS program run (with Windows and Mac users laughing all the way), PulseAudio came along.

PulseAudio addressed the issues ALSA didn't: per-application volume control (for applications that could finally play sound at the same time), better OSS emulation (the "padsp" wrapper), ALSA compatibility, and the "need" to route sound over the network to a remote host for playback. Major Linux distributions quickly switched their default sound system to PulseAudio, breaking most sound applications in the process.

So developers were encouraged, once again, to target the new PulseAudio system for playback, even though existing ALSA and OSS applications were supposedly compatible with it. Meanwhile, Ubuntu and Fedora users were left even more confused as to why sound no longer came out of their speakers, paying the price for this long history of sound madness on Linux.

As a result of this madness, most Linux systems have inconsistent sound, if any at all; application writers (especially those working on Flash for Linux) are thoroughly confused as to which API to target; and OSS programs (such as VMware Player, Flash, and everything else that matters) are all but broken unless you tax your CPU with the "padsp" wrapper, which in turn relies on PulseAudio working, and so on.

In this case, OSS is the leaking ceiling, ALSA is the little bucket someone put there in a hurry, and PulseAudio is the larger bucket somebody else placed the first one inside. All along, somebody should have just fixed the ceiling once and for all.

CPUs

When Intel launched the 8086/8088 line in the late 1970s (itself descended from 1974's 8080), they probably had no idea that, more than thirty years later, we would still be using the same core instruction set, interrupt architecture, and operating system designs built around those chips.

Instead of moving to a more optimal design to meet today’s computing needs, Intel and others have simply added more instructions to the same old architecture, more cores, and more CPUs per board. And we wonder why Moore’s law is starting to become inaccurate.

[Photo: an old PDP-11 CPU board, standing in as a representation of the current state of microprocessors as a whole]

Intel made a valiant effort to replace ye olde IA-32 with the Itanium processors, but we all know how the "Itanic" fared. Other alternatives included SPARC chips, PPC/POWER, and more, but all failed because of a single common denominator:

Operating system authors.

The hardware was out there, yet OS authors, particularly those at Microsoft, never committed to the transition from the insufficient x86 architecture to a better alternative. So in response, AMD bolted 32 more bits onto the architecture, Intel crams in more cores and instruction set extensions, and motherboard manufacturers pack more CPUs onto their boards, all to keep x86 chugging along as the standard architecture.

The main reason for this remains backwards compatibility. I don't know about you, dear reader, but I don't have the urge to run Windows 3.1 through 2000 on my hardware outside of an emulator/VM anytime soon, yet apparently the operating system developers, compiler writers, and even CPU manufacturers blatantly disagree. IBM's AS/400 (later iSeries, then System i) midrange systems can run the same programs across completely different CPU generations, because programs are stored in an architecture-neutral intermediate form and translated to native code on the machine that runs them; so why can't we?

And look at Apple: for years they used PPC as their architecture before they joined the x86 bandwagon, whose lineage dates back to the 1970s, and moved to Intel. And how did they manage this "two steps backwards" transition? Universal binaries and the Rosetta translation layer (and they had pulled off the same trick years earlier, moving from 68k to PowerPC). See? It IS possible; it's just that nobody will wake up and attempt the transition (the past failures notwithstanding). Anything to keep x86 chugging along.

Multiple Cores

Another thing about CPUs: despite how long the underlying technology has been around, operating system authors have yet to master effective use of multiple CPU cores, which is partly the fault of the CPU manufacturers.

It is NOT the application developer's job to take advantage of multiple cores: that is something the operating system scheduler should handle, regardless of what is running in user space. Since when is it the program's job to exploit the hardware? Hardware abstraction is taught very early in operating systems classes, yet modern kernel developers don't seem to apply the concept when it comes to multiple CPU cores.
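To illustrate the kind of abstraction I mean, here's a minimal sketch (assuming Linux and glibc): the program only creates threads and does work; it never queries the CPU topology or pins anything, and which core each thread lands on is entirely the kernel scheduler's decision (sched_getcpu() is used purely to observe the outcome).

```c
/* Build with: gcc -pthread. The program never asks how many cores
 * exist or where its threads should run; placement is the scheduler's
 * job, which is exactly the abstraction being argued for. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    long id = (long)arg;
    volatile unsigned long spin = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        spin += i;                              /* CPU-bound busywork */
    printf("thread %ld finished on core %d\n", id, sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

On a multi-core box the threads typically report different cores, without the program ever having asked for them.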

Unix Issues

Nothing much has changed since Simson Garfinkel and company put together The UNIX-Haters Handbook in 1994, sadly. Many of the issues discussed in that book also plague Linux, although Linux has done a fairly decent job of fixing many of the others (so much for being truly Unix-like).

And one more thing to note about the above book: its examples of better systems, such as Multics, VMS and ITS, still apply today, although a single system that addresses all of our problems has yet to arrive.

Windows Issues

Ah, now while this post could go on all day with miscellaneous problems associated with the most widely used family of operating systems today, let me first start out with where Microsoft actually made a very good decision: moving from DOS-based Windows to Windows NT.

If Microsoft had kept with the "patch it and go" mentality of other systems, then the current Microsoft operating system would be little more than an updated Windows ME (Vista jokes aside). Instead, Microsoft hired Dave Cutler (architect of VMS; somebody with sense) to work on Windows NT back in the late 1980s, and NT would eventually replace DOS-based Windows completely.

And the best part: except for true DOS programs, most portable executables that ran on Windows 9x/ME still run on Windows NT. Bravo, Microsoft, bravo.

Now for the kicker: to address the security and other criticisms of Windows XP and 2000, Microsoft made drastic changes in its latest OS, Vista, which inevitably broke compatibility with many applications and drew even more criticism.

That’s right – Microsoft once successfully replaced their entire line of operating systems with an entirely new (VMS-like, OS/2-derived) codebase, while remaining backwards compatible, yet broke application functionality in a run-of-the-mill NT to NT update.

So much so, in fact, that Windows 7 actually aims to walk back some of the Vista changes to try to remedy the situation. Trust me, it gets even better!

Portable executables, you know, .exe files, programs, etc., have been around in one form or another since MS-DOS, and some truly live up to their name, running seamlessly across both DOS-based Windows and NT. But behind the scenes, portable executables are a growing mess.

PEs are based on the Unix COFF object format, and if you've read the sections above (or at least the book linked in the Unix section), you know that automatically means trouble. And every PE still carries the old MS-DOS executable layout at its front, across 16-, 32- and 64-bit incarnations alike.

The easiest way to see the most obvious workaround for this "portability" issue is to either try running a Win32 executable on an olde DOS machine or simply open one in a text/hex editor. Either way, you'll see "This program cannot be run in DOS mode."

32-bit Windows skips right over the DOS stub (the bit of 16-bit code that prints that message and exits) and jumps straight to the 32-bit code that makes up the program you meant to run all along. 64-bit PEs carry the same hack.
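Here is a hedged sketch of what that layering looks like on disk (the file path is whatever you pass on the command line, and a little-endian host is assumed): the "MZ" DOS header and stub come first, and a pointer stored at offset 0x3C leads to the real "PE\0\0" header that modern Windows jumps to.

```c
/* Peek at the front of a .exe: MZ header, DOS stub, then the offset
 * (e_lfanew, at 0x3C) of the PE signature the real loader uses. */
#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;

    unsigned char mz[2];
    fread(mz, 1, 2, f);
    if (mz[0] != 'M' || mz[1] != 'Z') { puts("not a DOS/PE file"); return 1; }

    uint32_t e_lfanew = 0;                 /* file offset of the PE header */
    fseek(f, 0x3C, SEEK_SET);
    fread(&e_lfanew, 4, 1, f);             /* assumes little-endian host */

    unsigned char sig[4];
    fseek(f, e_lfanew, SEEK_SET);
    fread(sig, 1, 4, f);
    fclose(f);

    printf("DOS-era header and stub up to offset 0x%X; PE signature %s there\n",
           (unsigned)e_lfanew,
           (sig[0]=='P' && sig[1]=='E' && sig[2]==0 && sig[3]==0)
               ? "found" : "missing");
    return 0;
}
```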

So 32-bit PEs at least fail gracefully on 16-bit systems, and likewise for 64-bit PEs; problem "solved". But wait, there are also .NET applications…

Yes, you may have noticed that .NET (Microsoft's Java) applications are also knighted with the .exe file extension. These use the exact same format as the example above, only the 32-bit (or 64-bit) native stub inside exists merely to locate and load the runtime that executes the CIL (Common Intermediate Language) code the file actually contains.

Let me repeat that in perspective: Microsoft’s Java clone, .NET, relies on the same architecture-dependent portable executable format in order to load up the CIL interpreter/JIT-compiler for execution.

Java, in contrast, distributes compiled programs ONLY as bytecode to achieve true portability. Microsoft's implementation, by comparison, will hurt any transition to another architecture later, and could prevent Microsoft from wisely using .NET to move to ARM and penetrate the ARM netbook market, as I discussed in the article .NET Could Be Key in Windows On ARM Netbooks.

So while Microsoft has proven it can smoothly change designs to better its software, as it did with the Windows 9x-to-NT move, it simultaneously shoots itself in the foot with mistakes such as the continued patching of the portable executable format and botched NT-to-NT updates.

API Hell

At our "disposal" as application programmers, we have frameworks on top of APIs on top of libraries on top of runtimes on top of syscalls on top of the operating system. Most of this exists to retain compatibility with older code, e.g. so a legacy C application can continue to compile and run on your system while you use the newer C++ APIs and runtime libraries.

And they all use the same system call architecture derived from Unix, popularized as software interrupts atop the Intel architecture.

Everybody has their favorite language to program in, and I don’t knock that. But here’s the thing – C was invented during the Nixon administration, and since then has continued to be the standard for both system-level and application software. C is the usual choice for system software, but application software?

It's like building a house with your bare hands when there's a group of workers willing to do the job for you. Why? Because you've gotten "so used to doing things that way"? And you wonder why there are so many bugs and exploits in the wild across such a wide range of software: somebody let a function pointer or array index go unchecked, with no exception to catch it.
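For the non-C folks, here is a minimal sketch of exactly that kind of bug (the struct and names are made up for illustration): C happily copies past the end of an 8-byte buffer, no exception is thrown, and whatever sits next in memory, here a function pointer, gets silently trampled.

```c
/* A deliberately buggy example: an unchecked copy into a fixed-size
 * field. The compiler accepts it; at run time you get a corrupted
 * function pointer and undefined behavior, not a caught exception. */
#include <stdio.h>
#include <string.h>

static void greet(void) { puts("hello"); }

struct handler {
    char name[8];
    void (*callback)(void);     /* lives right after the 8-byte buffer */
};

int main(void)
{
    struct handler h = { "ok", greet };
    strcpy(h.name, "way-too-long-a-name");  /* no bounds check: spills into h.callback */
    h.callback();                           /* undefined behavior from here on */
    return 0;
}
```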

C++ is no whiz kid either, but compared to C it is a much better alternative for application programming, what with an actual runtime for exceptions. It still looks aged, though, next to languages with garbage collection, true dynamic typing, and less-cluttered syntax.

So in the midst of this mess, we have C shared libraries sitting on our systems, C++ runtimes loaded into memory, that huge Java API taking up space on disk, not to mention the system-specific libraries that some applications don't even use or need. It's not so much what the libraries do as how they are arranged: DLL hell, dependency hell, and shared library versioning nightmares.

The way things have been designed since their conception, plus my own corruption from getting too used to that design (curse that it is), leaves me at a loss for a solution.

Many of the other problems detailed in this article stem from library issues, and the library issues stem from some of those same problems, so there is no easy angle of attack: the problem is embedded too deeply in modern computing standards.

User Agents

As another prime example of how developers patch things up to solve a problem that shouldn't exist in the first place, web browser user-agent strings have evolved over time into an indistinguishable mess that plagues webmasters and log readers alike.

This blog post perfectly sums up the history of the situation, using Google Chrome, by far the worst perpetrator of this crime, as the poster child for the hackish user agents browser developers conjure up to make our lives a living hell.
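To see why it's such a mess, here is a minimal sketch of naive user-agent sniffing (the UA string below only illustrates the general shape of a Chrome string; the version numbers are made up): a Chrome UA contains "Mozilla", "AppleWebKit" and "Safari" too, so unless you test for "Chrome" first, you cheerfully mis-detect the browser.

```c
/* Naive substring-based browser sniffing: order of the checks decides
 * the answer, because every browser impersonates its ancestors. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *ua = "Mozilla/5.0 (X11; Linux x86_64) "
                     "AppleWebKit/537.36 (KHTML, like Gecko) "
                     "Chrome/12.0 Safari/537.36";          /* illustrative */

    if (strstr(ua, "Chrome"))        puts("detected: Chrome");
    else if (strstr(ua, "Safari"))   puts("detected: Safari");    /* wrong order = wrong answer */
    else if (strstr(ua, "Mozilla"))  puts("detected: Netscape?"); /* every browser claims this */
    else                             puts("detected: unknown");
    return 0;
}
```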

Apple’s Weird Transition

While Apple has made the design changes and transitions necessary to escape the patch-and-go syndrome other computer systems suffer from, some of those changes have not been as well executed as they should have been. Case in point: the PPC-to-Intel transition.

I can partially understand the Intel transition, which came primarily as a result of IBM's delays in POWER development, but doing it without jumping directly to AMD's x86-64 architecture was, in my opinion, a stupid move. When x86-64 overtakes IA-32 as the more popular architecture, how will they handle that transition? An even-more-universal universal binary? Confuse us more, please!

And retaining full, licensed compatibility with Unix has only made Apple as secure as the open source software lying at Mac OS X’s Darwin core, and if you’ve been following recent exploit news (Milw0rm), you’ll see the downsides I’m alluding to. The time for Macintosh AntiVirus is just around the corner.

While universal binaries have served Apple's transition from PPC well, I await the day when applications no longer carry executable opcodes for both platforms, freeing up hard disk space for, say, iTunes music. But wait, they still have to make the transition to x86-64, remember? We won't be gaining any drive space anytime soon, especially since developers can build a single executable that targets PPC, IA-32 AND x86-64 at once.
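For the curious, here is a hedged sketch of what a universal binary looks like at the byte level (the two header fields are transcribed by hand rather than pulled from <mach-o/fat.h> so the sketch compiles anywhere): a "fat" header holding a magic number and a count of architecture slices, each slice being a complete copy of the program for one CPU type.

```c
/* Peek at a universal (fat) Mach-O file: the fat header is stored
 * big-endian on disk, hence the byte swap on little-endian hosts. */
#include <stdint.h>
#include <stdio.h>

#define FAT_MAGIC 0xcafebabeU

static uint32_t swap32(uint32_t x)
{
    return (x >> 24) | ((x >> 8) & 0xff00) | ((x << 8) & 0xff0000) | (x << 24);
}

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;

    uint32_t header[2];                    /* magic, number of arch slices */
    fread(header, sizeof(uint32_t), 2, f);
    fclose(f);

    if (swap32(header[0]) != FAT_MAGIC) {
        puts("not a universal binary (single-architecture image)");
        return 0;
    }
    printf("universal binary with %u architecture slices "
           "(each a full copy of the code)\n", swap32(header[1]));
    return 0;
}
```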

Combine that with Apple’s possible move to solid state disks, and your hard drive space just got cut into a third of its old size. But look how fast and compatible it is! Now it can play your songs even faster! (what songs?)

Java

Java had it all: it was perfectly cross-platform, supported by all operating systems, and most of all was consistent. But now that Sun is gone, Java’s future is uncertain, and the only alternative is Microsoft’s MSJVM/Java-derived .NET framework (see the complaints about PEs above).

Even the worst of operating systems, which in my opinion include anything from the NetWare line (sharing kernel memory address space with programs is never a good idea), supported Java. This made it one of the best examples of how computing should work; it even extended to embedded technology such as Blu-ray players and other hardware.

Java had its own hacks and flaws as well, such as its sprawling API and bytecode format changes, to name a few (and what exactly is the difference between Java 2.0 and Java 2 version 1.6?). But regardless, Java was an example of something in userspace done right: true portability, a unified library, and an operating system left to actually abstract the hardware the way it was meant to.

This isn't a rant about how I loved Java and now it's gone; it's an example of something in computing history that was done right. Java wasn't meant for everything, and for the cases it couldn't cover it had JNI for C extensibility, but even now it remains a symbol of how most things in computing should have been done, with a few exceptions.
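For completeness, here is a minimal, hypothetical JNI sketch (the class name Native and the method add are made up for illustration): the Java side declares a method as native, and the C side fills in the generated signature, giving you an escape hatch to native code without giving up bytecode portability for the rest of the program.

```c
/* C side of a hypothetical Java method:  class Native { native int add(int a, int b); }
 * Compiled into a shared library and loaded with System.loadLibrary(). */
#include <jni.h>

JNIEXPORT jint JNICALL
Java_Native_add(JNIEnv *env, jobject self, jint a, jint b)
{
    (void)env; (void)self;      /* unused in this trivial example */
    return a + b;
}
```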

Conclusion

While I'm not by a long shot happy with the hole we've dug ourselves into across the fields of computing discussed here, we have to live with it nonetheless. A complete redesign addressing every problem discussed here would require a drastic shift in computing (quantum computing?) and is very unlikely to happen anytime soon.

But gradual changes in how things are done could at least remedy the situation. A move toward more web-centered computing, of the kind netbooks embody, would take the stress off many of the other problems described here. That, of course, can only happen if we first move to IPv6, or some other more scalable Internet, to support the shift in computing focus.

Even a bare BIOS-plus-browser setup would be more efficient with the right combination of web applications in its arsenal. This radically netbook-centered design seems to be the direction we're heading, and I fully support it given the technological state we're living with.

But regardless of the solution, one thing's for certain: we can't keep applying temporary patches to long-term problems the way we have for almost half a century. Backwards compatibility can only carry this development process so far. We need a change, whatever it is, as long as it provides more stability than what we have now.

Anthony

Anthony Cargile is the founder and former editor-in-chief of The Coffee Desk. He is currently employed by a private company as an e-commerce web designer, and has extensive experience in many programming languages, networking technologies and operating system theory and design. He currently develops for several open source projects in his free time from school and work.
