Archive | Desktop UI

Reinventing the desktop (part 2): I heard you like lists… [text version]

3 Aug

I originally posted this as a screencast, but I figure a lot of people want to scan rather than sit through a whole 40-minute presentation, so here’s the same stuff (somewhat abridged) in text form.

In part 1, I made a negative case against the desktop interface as it currently exists, but I promised to make a positive case for my solutions. Because it would take at least a few weeks to put together a complete presentation, I thought it more timely if I instead present the ideas in installments (and hey, more reddit karma whoring this way). Most of the pushback (both constructive and vitriolic) to part 1 concerned my ideas about lists, so I’ll start there.

Lists good, hierarchies bad

Many of the most notable recent innovations in software have revolved around lists:

  • Before Google, people had the idea to organize the web in a catalog, a big hierarchy of everything, e.g. the Yahoo directory. After Google, it became clear that a list of search results is far superior, and now such directories are mostly remembered with head-shaking bemusement (to the extent they’re remembered at all).
  • Gmail greatly deemphasizes the notion of sorting mail into separate folders and instead organizes mail by tagging and search.
  • Before iTunes and its imitators, users would play their music by navigating into folders, e.g. ‘music\artist\album\’. Today, iTunes simply presents everything in one big list that is textually filtered.
  • A blog is basically any site on which new content appears strictly in a chronological list: new stuff comes in the top, old stuff goes out the bottom. So, for instance, on a non-blog like Slate.com, some attempt is made to hand-editorialize the presentation of content on the front page, as in a magazine, but on Boingboing.net, the authors just create new content and post it into the stream.1
  • Link-driven sites, like Slashdot and Reddit, also revolve around lists.
  • So do many social sites, like Twitter and Facebook.

These examples differ mainly in how they order their items. For instance, in Google search, results are ordered by relevance to the query, whereas in Reddit, items are ordered by a combination of chronology and user votes. The key lesson here is that, if you can find the right way to order and filter things, you are probably best off presenting them in just a big, flat list.

My favorite example of this is the AwesomeBar introduced in Firefox 3. The AwesomeBar filters my history and bookmarks as I type and orders items by “frecency”, a combination of the recency and frequency with which I’ve accessed the items. This means that I can type, say, ‘sl’, and my Slashdot.org bookmark will reliably appear at the top of the list. So when I want to visit Slashdot, I just reflexively type <alt+d>, ‘sl’, <down>, and <enter>. I don’t have to navigate a menu of any kind, I just act on reflex. This works so well, in fact, that I don’t use the regular bookmarks menu at all anymore.
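To make “frecency” concrete, here’s a rough sketch in Python. The bucket cutoffs and weights are made up for illustration; Firefox’s real algorithm also weighs visit types and bookmark status.

```python
from time import time

# Illustrative frecency sketch: recent visits count for more than old
# ones, and more visits beat fewer. These cutoffs/weights are invented.
DAY = 86400
BUCKETS = [(4 * DAY, 100), (14 * DAY, 70), (31 * DAY, 50), (90 * DAY, 30)]

def frecency(visit_times, now=None):
    """Score a history/bookmark item from its visit timestamps."""
    now = now or time()
    score = 0
    for t in visit_times:
        age = now - t
        # First bucket the visit's age fits in; very old visits fall
        # through to a small default weight.
        score += next((w for cutoff, w in BUCKETS if age <= cutoff), 10)
    return score
```

Sorting items by this score is what puts my Slashdot bookmark reliably on top when I type ‘sl’.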

The AwesomeBar isn’t without flaws, however. Consider that there are three different basic cases of search:

  • In some cases, I know specifically what I want.
  • In other cases, I only know generally what I want, e.g. I want to play some game, but I haven’t decided on a game, and perhaps I’m not sure about my options.
  • In the remaining cases, I just want to browse. Sometimes this is because I’m just bored and looking for something to do, but often I browse because I just want a refresher on what things exist, e.g. I browse my calendar because I need to see if there’s anything there I’ve forgotten.

While the AwesomeBar is awesome when I specifically know what I want, it’s somewhat less awesome when I only know generally, and it’s not at all awesome when I know not at all. In particular, I want a way to browse the sites which I’ve bookmarked but haven’t returned to, because many of these URLs are things I didn’t have time to consume at the time but bookmarked so as to consume at a later date.

One solution would perhaps be to create a distinct kind of bookmark for sites I intend to consume later rather than visit on a regular basis. Another would be to make the Firefox “library” window (“Bookmarks–>Organize Bookmarks…”) more usable and fix its behavior: currently, when you delete history, the ‘last visit’ date for each bookmark is lost, meaning you can’t afterward browse just the sites which you’ve bookmarked but forgotten about.

Launching programs without hierarchies

The core mechanisms for program launching in Windows and Linux are hierarchical start menus. In Windows, an individual application is generally placed in its own folder in the start menu, but in Gnome, applications are sorted into categories. The problem is that such sorting is largely a fool’s game. Consider:

[Image: Ubuntu’s Sound & Video menu]

Sure, I might think to look under Sound & Video when I want to burn an audio CD, but if I just want to burn data, it’s not going to occur to me to look there. Why put the disc burner there and not under Accessories? Well, in fact, Brasero is listed under Accessories too, but there it’s called CD/DVD Creator.

Why is Sound & Video one category and not two? Well, that would leave us with two categories, each containing just one or two items, which would be silly.

These sorts of dilemmas tend to abound with categorization, leading us to settle for compromise solutions, such as:

  • OpenOffice.org Drawing is the only OpenOffice.org app listed under Graphics and not Office.
  • Evolution is listed under both Office and Internet.
  • We have all this miscellaneous stuff, and, hey, it’s gotta go somewhere, so we stuff it under Accessories.

Combine these faults with the fact that many users find it difficult to mouse through cascading menus, and the end result is that people don’t like using the start menu. So we make up for these deficiencies by piling on other conveniences:

  • Shortcuts on the desktop.
  • The QuickLaunch menu on the taskbar.
  • The system tray.2
  • The recently-opened programs list.3

The one addition I really like, though, is the text search/filtering added to Vista’s start menu. This allows for AwesomeBar-like behavior, e.g. I can type ‘fi’ and hit enter to launch Firefox. Also really nice is that I can type some term and see all relevant Control Panel items whether my term strictly matches those items or not: for example, I can type “reso” and get “Adjust screen resolution” even though there’s no Control Panel item of that name.

[Image: Vista start menu search results for “reso”]
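That kind of lookup amounts to associating extra keywords with each item. A sketch of the idea (the items and keywords here are invented for illustration, not Vista’s actual index):

```python
# Each item carries extra search terms beyond its title, so a query
# like "reso" finds "Adjust screen resolution" by prefix-matching any
# title word or keyword.
ITEMS = {
    "Adjust screen resolution": ["display", "monitor"],
    "Change the theme": ["appearance", "colors", "wallpaper"],
    "Uninstall a program": ["remove", "software"],
}

def search(query):
    q = query.lower()
    return [name for name, keywords in ITEMS.items()
            if any(w.startswith(q) for w in name.lower().split() + keywords)]
```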

Simplify, simplify, simplify

So the question is, might we be better off giving users just one or two mechanisms for launching programs rather than half a dozen? I believe we would, but my solution requires accepting a few somewhat unconventional premises:

  • First, as I’ve already described, organizing things into categories is largely a fool’s game.
  • Second, when mouse-only mechanisms seem too inefficient, designers tend to introduce additional mouse-oriented mechanisms, which not only create redundancy but also often challenge users with poor mousing skills and almost always add new screen elements. If we could somehow make keyboard interactions easier to discover and recall, we could stop trying to get overly clever with the mouse and could clean up some of our messes. I believe this is doable with real-time textual filtering and a few other tricks.
  • Third, if you’re going to present a list, don’t be afraid to let it take up a proper amount of screen space so users can actually read and scan the damn thing. Some designers think big lists scare users, so they scrunch lists into small boxes, requiring the user to scroll a lot and manually resize columns. This is silly: if something is too scary for users to deal with, don’t present it at all. You aren’t helping by making the information hard to view.4

So what’s the solution? Well let’s start by flattening Ubuntu’s Applications menu out into one big list, and while we’re at it, let’s throw in the shutdown and settings items:

[Image: Ubuntu’s Applications menu flattened into one big list, with shutdown and settings items included]

Too long, right? Maybe, but it seems pretty decent to me. It’s only about twice the height of a typical screen, and as long as the most frequently used stuff is in the top half, is it really going to kill the user to occasionally scroll down? Besides, most users who care about efficiency will pick up the habit of opening most applications by filtering:

[Image: the flattened menu filtered by ‘w’]

Here, the user types ‘w’, and so the list only shows the items matching a term starting with ‘w’; the word processor is listed first because that’s the item which the user has most frequently selected in the past when they type ‘w’. Users can also filter on terms that describe a program but aren’t necessarily in its title:

[Image: the flattened menu filtered by ‘ga’]

Here, the user’s query ‘ga’ matches the tag ‘game’, so the user sees all items with that tag.5
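The filter-and-rank behavior in these two examples might be sketched like so (the item structure, tags, and selection counts are assumptions of mine):

```python
def matches(query, item):
    # An item matches if any word of its title or any of its tags
    # starts with the query (case-insensitive prefix match).
    q = query.lower()
    terms = item["title"].lower().split() + [t.lower() for t in item.get("tags", [])]
    return any(t.startswith(q) for t in terms)

def filter_menu(query, items, pick_counts):
    # pick_counts records how often the user has selected each item;
    # the most frequently picked matches sort to the top.
    hits = [i for i in items if matches(query, i)]
    return sorted(hits, key=lambda i: -pick_counts.get(i["title"], 0))

apps = [
    {"title": "Word Processor"},
    {"title": "Web Browser"},
    {"title": "Chess", "tags": ["game"]},
    {"title": "Solitaire", "tags": ["game"]},
]
picks = {"Word Processor": 12, "Web Browser": 3}
```

With this data, ‘w’ puts the word processor first, and ‘ga’ surfaces both games via their tag.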

So that’s basically it. With a big filtered list, I don’t see a need for shortcuts in a QuickLaunch menu, shortcuts on the desktop, shortcuts pinned to the start menu, shortcuts to recently opened programs in the start menu, or shortcuts pinned to a dock/taskbar.6

I should note that this isn’t terribly radical, and in fact, it isn’t all that different from the direction Gnome and KDE have been heading. The Gnome shell prototype, for instance, introduces textual filtering. What I find odd, though, is that both projects seem very attached to the idea of categorized menus. Here, for instance, is a recent KDE screenshot:

[Image: recent KDE menu screenshot]

In this design, the categories slide into view rather than pop out. Sadly, this makes navigation among the categories no less annoying, just annoying in a different way.

Application menus

If we can reduce program launching to just a big filtered list, could we do the same to the traditional menu bars in applications? Well, here’s what you get if you stuff everything from the menu bar of a moderately complicated program, Paint.NET, into one big list:

[Image: Paint.NET’s menu bar contents stuffed into one long list]

This is about the same length as our program menu, but for application controls, it doesn’t seem as acceptable. The fix is to pack things horizontally7:

[Image: Paint.NET’s menu packed horizontally]

The question, then, is how to add textual filtering. We could simply have matching items show up in a one-dimensional list, as usual:

[Image: Paint.NET’s menu filtered by ‘b’ into a one-dimensional list]

Here the user types ‘b’, and so items beginning with ‘b’ show up, with the most frequently used items showing up first. Alternatively, we could simply highlight all items that match the query:

[Image: Paint.NET’s menu with all items matching ‘b’ highlighted]

The solution I like best, though, is to combine these two such that we highlight the matching items but filter out sections without any matching items:

[Image: Paint.NET’s menu with matches highlighted and non-matching sections filtered out]
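In rough Python, the combined behavior looks like this (the menu contents are invented for illustration):

```python
def filter_sections(query, sections):
    # Drop sections with no match, but within surviving sections keep
    # every item and flag the matches for highlighting.
    q = query.lower()
    kept = []
    for name, items in sections:
        flagged = [(item, item.lower().startswith(q)) for item in items]
        if any(hit for _, hit in flagged):
            kept.append((name, flagged))
    return kept

menu = [
    ("File", ["New", "Open", "Save"]),
    ("Adjustments", ["Brightness", "Curves"]),
    ("Effects", ["Blur", "Sharpen"]),
]
```

Filtering on ‘b’ drops the File section entirely while keeping Adjustments and Effects intact, with Brightness and Blur flagged for highlighting.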

It may have occurred to you that this idea bears some resemblance to the Microsoft Office 2007 “ribbon” interface: just take the individual ribbon tabs, array them vertically, and add a text field on top:

[Image: Word 2007 ribbon tabs arrayed vertically with a text field on top]

(For our purposes, ignore that this is an offensively complicated array of controls. Obviously you wouldn’t want to bombard a user with something like this.)

The thing I really like about the ribbon is that, unlike the traditional menu bar, the ribbon directly contains complex controls, so a lot of stuff which would otherwise get punted into annoying dialog boxes can be done directly in the ribbon (or at least in little pop-out overlays, which aren’t nearly as annoying as dialogs). This is something menus going forward should imitate.

On the other hand, the most annoying part of the ribbon is that it’s modal: the user often has to switch the currently-viewed tab to get at a control. In contrast, with a pull-down menu, the user is always oriented at the same place (the top) every time it’s opened. I also believe that a big scroll is easier to scan and better facilitates the user’s spatial memory: more is visible at once, your eyes can track as you scroll, and everything is in clear spatial relation to everything else.

A pull-down menu obviously has a disadvantage, though. In the ribbon, related functionality tends to live together on the same tab, and the last-used tab stays visible; consequently, a lot of tab switching is avoided that otherwise would be required. In a pull-down, while it’s nice that the menu is hidden when not needed, quickly repeated actions annoyingly require opening the menu (and potentially scrolling) for each action. The solution to this—without resorting to toolbars—I’ll discuss in a later installment.

Command filtering

Ubiquity is a Firefox add-on which adds a command line. Unlike a traditional command line, Ubiquity effectively guesses what the user is trying to say rather than requiring the user to precisely recall the full names of commands and their precise syntax, and it does this basically by treating the user-entered text as a query to filter the set of commands. In the next installment, I’ll describe how something very much like Ubiquity would work at the desktop level rather than just confined to the browser.8

One text field to rule them all

So it looks like we’re going to have a bunch of text fields in our desktop for doing different things:

  • entering urls and searching bookmarks and history
  • searching the web
  • searching our filesystems
  • launching programs
  • searching application menus
  • executing commands

Ideally, we could combine these all into just one universal text field such that I can just reflexively hit a keyboard shortcut, start typing, and then decide what kind of action to perform—whether a web search, a command, or whatever. I’ll discuss how this is managed in the next installment, which will primarily cover window management.
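As a very rough sketch (and only a sketch; the providers and scores here are entirely my own invention), such a universal field might let each kind of search score the query and merge everything into one ranked list:

```python
def dispatch(query, providers):
    # Each provider returns (score, label) pairs for the query; merge
    # all results and sort best-first.
    results = []
    for kind, provider in providers.items():
        results += [(score, kind, label) for score, label in provider(query)]
    return [(kind, label) for _, kind, label in sorted(results, reverse=True)]

providers = {
    "app": lambda q: [(0.9, "Firefox")] if "firefox".startswith(q.lower()) else [],
    "web": lambda q: [(0.1, "Search the web for: " + q)],
}
```

Typing ‘fi’ would surface the Firefox launcher above a generic web search, while an unmatched query falls through to the web.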

Continued in part 3 (coming soon).

  1. And notably, Slate has moved in recent years towards a more blog-like front page.
  2. The system tray, of course, is supposed to be for status indicators, but many programs end up abusing it.
  3. Found in the start menu since Windows XP.
  4. Be clear that there’s a distinction between hiding controls and hiding information: a bunch of controls, obviously, can intimidate and overwhelm a user, so it makes sense to be careful about how many controls the user sees at once.
  5. The user needs some sort of indication of how an item matches a query, so here, perhaps the tag ‘game’ should appear highlighted next to each item.
  6. A lot of people like how the OS X dock keeps an application’s icon always in the same place, allowing for reflexive program switching. As I’ll describe in the next installment, my design retains this affordance in a different way.
  7. In a list where the set of items changes, arraying things in two dimensions is generally bad because it means things tend to shift around in a confusing way; when the set is fixed, things aren’t going to move around.
  8. This isn’t original, of course: Ubiquity actually derives from Enso and Quicksilver, which are basically command lines for the desktop.

Reinventing the desktop (part 2): I heard you like lists…

2 Aug

In part 1, I made a negative case against the desktop interface as it currently exists, but I promised to make a positive case for my solutions. Because it would take at least a few weeks to put together a complete presentation, I thought it more timely if I instead present the ideas in installments (and hey, more reddit karma whoring this way). Most of the pushback to part 1 (both constructive and vitriolic) concerned my ideas about lists, so I decided to start there.

Rather than writing this up, I thought a screencast would be more appropriate for presenting these visual ideas. The run time is 37 minutes. (Yes, I know that’s long, and it starts slow, but if I’m not thorough, I just leave myself open to superficial dismissals.)

(I apologize for some parts where my narration gets a bit difficult to follow: I heavily edit the audio and end up removing awkward gaps and pasting together sentences from different takes; this works surprisingly well most of the time, but sometimes the result sounds a bit like Max Headroom.)

Reinventing the desktop (for real this time) – Part 1

20 Jul

Being of a presumptuous nature, I tend to get big ideas, and among those big ideas are notions of how to “reinvent the desktop”, notions which I call collectively Portals (a play on Windows).

Ain’t broke?

Before I explain Portals in detail, we should establish whether anything is really wrong at all with the modern desktop or if desktop “reinvention” is just a chimera of UI-novelty seekers. This is only prudent because, if we can’t clearly identify deficiencies of the status quo, we may fall into the trap of replacing the status quo with something not truly better, just arbitrarily different.

So let’s first consider what functionality comprises a GUI desktop. A desktop consists of:

  • An interface for starting applications, for switching between open applications, and for allotting screen space between open applications.
  • A common set of interface elements for applications, often including guidelines for the use thereof to achieve a cross-application standard look-and-feel.
  • A data-sharing mechanism between apps (copy and paste).
  • A common mechanism for application associations—what applications should be used to open such-and-such file or send a new email, etc.
  • A set of system-wide keys, e.g. ctrl+alt+delete on Windows.

And because most users don’t/can’t/won’t use a command-line, desktops include a minimum set of apps:

  • File management.
  • System configuration and utilities.
  • Program installation/removal.

Since the 1980s, this functionality has been presented to users on most systems with only minor variations upon the standard WIMP (Windows, Icons, Menus, Pointer) model handed down from Xerox PARC and the first Mac, so, obviously, the modern desktop is not really broken: people have been getting by with essentially the same design for decades now. Still, there is a perennial longing for something better, so the question is: what motivates this feeling?

What’s wrong?

Scaling issues

A fundamental difference between the computing experience of 1984 and the computing experience of twenty-five years later is that users simply do a lot more with their computers: more diversity of tasks, more tasks at once, and a lot more data, both on the user’s local machine(s) and out there on the network. In particular, the window management and file management that made sense for 1984’s attention load just don’t hold up in an age of web distractions and half-terabyte hard drives.

Lack of sensory stimulation and tactile interaction

Only librarians want to live in a grey, motionless, silent world of text, but for a long time, that’s what the computing experience was. Then came icons and windows, and they could move! Quickly this novelty wore off, so today our menus slide, our workspaces spin in three dimensions, and our windows cross the event horizon every time we minimize them. And our iPhones fart.

Moreover, we increasingly expect interfaces to entertain our hands. Touch screens! Multi-touch! Surface top! Gestures! I’ll admit that these developments are exciting, but they’re exciting mainly because we don’t really know what will come of them—our hopes at this point remain very vague. As clearly as we can define it, our hope is that computer interaction can be made satisfying in the same way that a good hit on a tennis ball is satisfying or in the same way that closing a well made car door is satisfying.

Sadly, these ideas may turn out to be like virtual reality: worlds of possibilities, none of the possibilities very useful. So we may be in just another cycle of the permutations of fashion. Still, aesthetics and feel really do matter to an extent, for a good layout of information and good use of typography tends to be aesthetically pleasing, and good tactile feel, such as proper mouse sensitivity, definitely facilitates usability.

We should acknowledge, though, that computing is no longer a dull, grey world, mostly thanks to the web, not changes in the desktop. This suggests, then, that the best way forward for an aesthetically pleasing and stimulating desktop is to minimize the interface: the less screen real estate occupied by the interface’s “administrative debris”, the less there is that we need to make look good and therefore the less opportunity that we have to fail.

Administrative debris

Edward Tufte coined “administrative debris” to denote all of the elements of a UI not directly conveying the information the user really cares about. For instance, the menus and toolbars of most apps are almost entirely administrative debris. Such debris is problematic because:

  • Debris takes up precious screen real estate, which would be better used to present information.
  • Debris distracts the user.
  • Debris requires the user to learn its layout and how to navigate in and around it.
  • Debris is aesthetically displeasing and intimidating because it suggests complexity, both in terms of information clutter and conceptual difficulties.
  • Debris often has to be managed by the user, thereby creating more “meta work”.

Meta work

Meta work is any work which the interface imposes on the user in addition to the user’s actual work. Meta work is terribly displeasing, the mental equivalent of janitorial work.

Some meta work is hard to imagine getting rid of, such as scrolling through a list of information, for if we really intend to present more information than fits on screen, the user must scroll or page through it somehow. Most interface meta work, however, comes from two sources:

  • Positioning things and navigating. In particular, moving and resizing windows and navigating through menus and dialogs. This also includes any kind of collapsible or adjustable information display. I find file browsers, for instance, to require constant adjustment because the directory tree view and the columns of the grid view are half the time either too wide or too narrow.
  • Debris. When the debris can’t all fit on screen at once, we require mechanisms for the user to manage the debris. The Office 12 ribbon, for instance, requires the user to manage which strip of controls he is viewing at any moment.

Most disconcertingly, meta work perniciously tends to beget more meta work because the mechanisms introduced to manage information and controls often themselves take up space and require management.

Indirectness

Interactions with information through debris are indirect, so Tufte’s general prescription for minimizing administrative debris and meta work is to make interactions with information direct. For instance, rather than editing properties in a dialog, users should directly edit those values in some screen element directly attached to the affected object or, ideally, directly edit the object itself.

Direct interactions also have the virtue that how to perform them is generally more obvious than with indirect interactions. On the other hand, most users aren’t familiar with direct interactions as a convention, so it may not occur to users to try them.

Hierarchies

Because we must hide a lot of things for the sake of limited screen space, a lot of information and administrative debris gets buried into hierarchical trees, meaning users end up spending a lot of time and mental energy navigating (which is really just another kind of meta work). For instance, to change my mouse settings in Windows, I follow the chain Start->Control Panel->Mouse. Or, say, to open a file, I must recall its drive, its directory path, and then finally its name. This hierarchical recall—and the ensuing navigation action—is mentally taxing and error prone.

The usual justification for using a tree is to avoid stuffing everything into one big flat list, but this is generally a misguided tradeoff. Consider a typical hierarchical menu, first in the usual pull-down/pop-out configuration, second in one big scrolling list with dividers between sections. Which is easier to learn? Which is easier to explore? Which is easier for recall? I believe you’ll find the flat list is better on all measures but perhaps one: a long list may be a bit intimidating at first glance compared to a hierarchy that hides the items in submenus by category.

(Actually, the flat list may be better even on this count because a menu which hides complexity is daunting in its own way: the user browsing such a menu quickly finds lots of complexity which they’ll have to recall how to find again later. Besides, the “first contact” shock of a long list can be mitigated with visual design that appropriately emphasizes the right elements. So flat lists arguably win on all counts.)

Now consider file hierarchies. Rather than having to remember that your Twin Peaks / Doctor Who crossover fan fiction is stored as e:/fanfic/twinpeaks_doctorwho.txt, it would be far better if you could just textually filter down by a query for twin peaks who or any other query terms that occur to you by free association. In fact, it would be nice when creating the file if you didn’t have to decide between twinpeaks_doctorwho.txt and doctorwho_twinpeaks.txt and didn’t have to decide whether to place this file in fanfic or some other directory. The lesson here is that:

  1. Hierarchical recall is mentally taxing and error prone. What we really want is free-associative recall.
  2. Hierarchical naming and placement are mentally taxing and error prone. What we really want are tagging and full-text search.

(See Clay Shirky on hierarchy.)
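Free-associative retrieval over names, tags, and contents might be sketched like this (the file records here are invented for illustration):

```python
def free_search(query, files):
    # A file matches if every query term appears anywhere in its name,
    # tags, or contents -- the order of the terms doesn't matter.
    terms = query.lower().split()
    def haystack(f):
        return " ".join([f["name"]] + f["tags"] + [f["text"]]).lower()
    return [f["name"] for f in files if all(t in haystack(f) for t in terms)]

files = [
    {"name": "crossover.txt",
     "tags": ["fanfic", "twin peaks", "doctor who"],
     "text": "Agent Cooper meets the Doctor."},
    {"name": "notes.txt", "tags": [], "text": "grocery list"},
]
```

Note that twin peaks who and who peaks both find the fan fiction: no single “correct” name or location ever had to be decided on.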

Frustrating discovery and recall

Perhaps the biggest frustration in using software is knowing what you want the software to do and knowing that your software can do it but not being able to figure out how to get the software to do it. These frustrations typically stem from an inability to guess what the developers decided to name a feature and where the developers decided to place the feature in a hierarchical menu or dialog chain. For instance, the user looking for a program’s options dialog has to guess whether to look for File->Preferences, Edit->Preferences, Edit->Options, Help->Options, Tools->Options, or some other path.

The general solution here is, again, a big, flat list filtered by textual query. Like disambiguation pages and redirection in Wikipedia, a single item should be associated with any synonyms so that users need not recall the single precise name favored by the developers, e.g. preferences should show up in a query for options and settings.
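A minimal sketch of such synonym-aware lookup (the synonym table is my own illustration):

```python
# Queries for any synonym resolve to one canonical feature name,
# Wikipedia-redirect style.
SYNONYMS = {"options": "preferences", "settings": "preferences",
            "prefs": "preferences"}

def canonical(term):
    t = term.lower()
    return SYNONYMS.get(t, t)

def find_feature(query, features):
    wanted = {canonical(t) for t in query.split()}
    return [f for f in features if canonical(f) in wanted]
```

Users who type “options” or “settings” land on Preferences without ever learning the developers’ favored name.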

Redundancy

Thinking up features is easy, but thinking up features that obviate other features is hard. Moreover, once a feature is added to a program, it takes a lot of political will to remove it. Consequently, many interfaces are laden with redundancy.

A degree of redundancy often serves a legitimate purpose, for many tasks should be equally doable by either keyboard or mouse, and common tasks often warrant shortcuts that make up in convenience what they lack in discoverability. In many cases, though, designers have simply let redundancy proliferate unchecked. A typical Windows application, for example, presents the user with at least four ways of closing the application using the mouse:

  • Via the X in the top right.
  • Via the right-click menu of the window on the taskbar.
  • Via the icon menu in the top left.
  • Via the menubar.

Additionally, users can close an application using the keyboard:

  • Via alt+F4.
  • Via ctrl+w.
  • Via accelerator keys for the icon menu.
  • Via accelerator keys for the menubar.

That makes at least eight ways to close an application. This particular case of redundancy is maybe not so bad because most users have a favored method which they use by reflex, but the redundancy still clutters the interface, not just in screen space but in documentation space and mental space.

At its worst, redundancy isn’t just clutter, it’s more meta work heaped upon the user. Not only are such choices more management work, the bother of having to make these choices often lingers on the user’s mind. As Barry Schwartz discusses in The Paradox of Choice, choices are often a hidden source of unhappiness: when presented with a choice, people fret because they want to believe that the choice has a correct answer, even when none exists and even when the disparity of outcomes is inconsequential.

Most choices in interfaces impose very small burdens individually, but together they add up, and too often, designers underestimate this burden of choice. When users find themselves making little choices about the best way to do something, it’s quite likely that the interface should be making those choices for them.

Thwarted reflexes

The opposite of making a choice is to act upon reflex. Enabling good reflexes and consistently rewarding them gives users a very satisfying feeling of control.

Ideally, a good reflex action should be context-free, meaning it shouldn’t require a particular desktop or application state. For instance, alt+tab is a desktop-level reflex that is supposed to work in all contexts such that, at any time, the user can hit alt+tab to get back to the window that last had their focus. Unfortunately, this reflex doesn’t work in some contexts, such as in some fullscreen games that either don’t respond to this command or only do so very slowly. Another aggravating example is Flash in the browser, which often steals keyboard focus and thus blocks the alt+d, ctrl+k, and ctrl+t commands.

Some reflexes, though, users pick up like bad habits. In Windows, I’m in the reflexive habit of hitting windows+e every time I wish to browse to a folder even if I already have that folder open as a window, thereby creating more meta work for myself in the form of another folder to close. A better designed reflex action would get me to my desired folder while somehow avoiding this duplication, for well-designed reflex actions don’t lead users down the wrong path.

Virtuality

Because hierarchies suck, designers frequently provide shortcut paths to various nodes in hierarchies. For instance, file dialogs in Windows Vista provide shortcut buttons to standard directories like Documents and Pictures. Or, for example, the display settings in Windows can be accessed via right-clicking the desktop rather than going into the Control Panel, but both paths take you to the same dialog.

The problem is that this virtuality not only introduces redundancy, it presents an inconsistent and disorienting picture to users and burdens them with more arbitrary crap to remember. Virtuality makes hierarchies more confusing, not less, because the same “shape” is presented in many different alternate forms, obscuring the “true” shape and thereby hindering discovery and spatial recall. Furthermore, when the user can’t picture at least the outline shape of the possibilities open to her, she feels surrounded by hidden pitfalls and paralyzed by choice.

Textual search is technically a virtual kind of access, but it doesn’t share these problems. If I access my Doctor Who / Twin Peaks crossover fan fiction by searching for who peaks, this isn’t another bit of arbitrariness for me to have to recall later, it’s just the set of terms that occurred to me at the moment by free association.

Burdened and stolen focus and attention

There’s a word for a person who repeatedly calls your name and taps you on the shoulder: annoying. We also have a word for someone who tries to hand you something when your hands are full already: asshole. So it’s not surprising that the most commonly cited interface annoyances are those obnoxious little pop-up windows that demand your attention and steal your keyboard focus.

Obviously, having your attention actively stolen is bad. Less obviously, meta work in all forms steals attention, but usually passively and in small chunks: after all, attention focused on meta work is attention taken away from actual work.

If there’s something many people feel increasingly short on in the networked world, it’s attention. A well-designed interface enables the user to focus on their own actual work, switching between tasks with little friction.

All your conventions suck

Now let’s get into some concrete criticisms of actual mechanisms commonly used today:

Icons suck

To a large extent, icons exist just as an excuse for designers to introduce eye candy, but the usual justification designers give for using icons is the truism that ‘simply having users point at the very thing they want is the simplest and most intuitive kind of selection.’ This is misguided:

  • Pictographs do not scale as well as text because you can’t alphabetize or do searches on images.
  • As you add more and more icons, the visual distinctiveness of each icon quickly gets murky and ambiguous.
  • Icons are generally not “the very thing” that users are looking for. A pictograph typically provides hints about the thing it represents but is not synonymous with the thing itself.
  • Worst of all, interpreting pictographs is more mentally taxing than reading a word or two, especially when the semantic content is even mildly abstract.

The crux here is that it is far easier for people to recall the general qualities of a picture—its dominant colors and overall shape—than it is to recall its precise details. Also, compared to abstract images, images of recognizable objects are much easier to recall details of because we can mentally fill in the blank spots with our assumptions of what such objects look like. For instance, if shown a picture of a car, a viewer immediately discerns the notion of a car, not because the viewer quickly absorbs all the visual detail but because she immediately registers a few key details and then her mind fills in the missing pieces. This explains why most icons in software are so bad: they are small, indiscernible messes, so users fail to recognize what they depict and learn to think of them as abstract shapes.

Now suppose I know what I want my software to do but don’t remember at all how the interface designers decided to label that function with text or an icon. If I’m looking for a label, I have to figure out what words the designers chose to describe it, which often requires consulting my mental thesaurus. In contrast, if I’m looking for an icon, I have to figure out what words the designers chose to describe the feature and then figure out how the designers chose to represent those words as an image. While the number of synonyms for a particular concept can be frustratingly many and elusive, the number of visual representations for a concept is innumerable: even if you narrow down the concrete object(s) being depicted, there are still the variables of perspective, composition, style, and color.* Moreover, users can always fall back on actually reading a list of words till they find a likely match; this is reasonably doable, in contrast to “reading” a list of icons, which is painful and slow.

* (Sure, many real-life objects only come in one color, but many don’t. In fact, looking over the icons in a few applications, I notice that a strong majority have basically random color assignments, either because of the nature of what they depict or because of the need to make them stand in contrast to their neighbors.)

To the extent you do use icons, follow these guidelines:

  1. All but the most frequently encountered icons should be labeled with text. Many applications omit text labels because small, unlabeled icons allow for buttons that minimize space use (see Photoshop). This is a poor trade-off: first, because of the image-recall problems outlined above, and second, because even the best-designed icons rarely communicate their function as clearly as a word or two of text. In fact, the real virtue of icons is that their shape and color make them noticeable to peripheral vision or visual scanning, so they help users find points of focus and do an initial culling of their possible options. After that initial culling stage, however, users have only narrowed their options and so prefer the relative precision of words to help them make their final selection.
  2. Icons should be simple in shape, distinct in silhouette, have contrasting interior lines, and almost never use more than two dominant colors.
  3. Icons should be as big as necessary to make them conform to rule 2.
  4. The number of icons that it is acceptable to use is proportional to how large and distinct they are, vis-a-vis rules 2 and 3. The array of icons found in today’s typical complex apps, like word processors and Photoshop, is too many by a factor of about three.

Icon view sucks

Compared to the detailed-list view of files, the icon view is a paragon of form over function. Not only should icon view not be the default folder view, icon view should not exist. It’s flat out stupid. Not only is the browse-ability of a list in one dimension far superior to a list in two dimensions, a two-dimensional listing must be rearranged when the view width changes, meaning icons end up changing their horizontal positions, thereby disorienting the user and thwarting his spatial recall.

(A thumbnail view of pictures is a special exception to this rule.)

Thumbnail previews suck

Continuing with the theme of pictures being a false cure-all, thumbnail previews of windows and tabs rarely justify their use:

  • First, most such previews are triggered by a delayed reaction to a mouse hover, which tends to mean they pop up too soon half the time and too slowly the other half.
  • Second, even with great anti-aliasing, a two or three square inch representation of a full window or tab is often just too small to make out clearly.
  • Third, most documents and tabs consist mainly of text and so very often look pretty much the same, especially when shrunk down to a small preview.
  • Fourth, the user may expect to see one scroll position of a document and so not quickly recognize the document if the preview shows another portion.

For previews to be worth the mental burden, they need to be instant and large, perhaps even full-sized.

Animations suck

Currently, much work is going into GUI toolkits to make it easy to add UI animations, such as having elements that slide around. The inevitable problem with animations, though, is that they introduce action delays and so must be kept very short, and yet the shorter the animation, the more the animation defeats its original intent, which is to convey to users where elements go to and come from. (See Philip Haine’s critique of Apple FrontRow)

Settings management sucks

Desktop settings management exhibits virtuality gone mad. On the one hand, Windows has Control Panel and Gnome has a Settings menu—central places to do configuration—but centrality is deemed too inconvenient for some cases, so we sprinkle special access mechanisms ad hoc throughout the desktop. In Windows 7, for instance, the start menu includes both Control Panel and Devices and Printers even though Devices and Printers is just an item in the Control Panel. Or, for instance, the Network and Sharing Center is an item in the Control Panel, but it’s also accessible via Network in the left panel of the file browser. Worse, some settings are not found in the Control Panel at all, e.g. folder options are in Tools → Folder Options of the file browser but not in the Control Panel. Most ridiculous and aggravating, though, is how these ad hocisms change with each release such that the user’s hard-learned arbitrary nonsense becomes useless. In the end, the path to every setting becomes an ad hoc incantation, a little piece of version-specific arcana to document in user manuals with a dozen screen shots.

The Desktop itself sucks

Interface design is largely about rationing precious screen real estate, and…

…hey, everyone! Here’s this big blank surface going unused! Let’s give it a random assortment of redundant functionality to make up for the inadequacy of our main controls! Sure, the start menu already has a frequently-used program list, but it’s too orderly. And users already have a home directory, but they can’t see its contents at the random moments that their un-maximized windows are positioned just so. Users love messes! Hmm, now we just need umpteen different special mechanisms for hiding all these windows that obscure this precious space.

*Ahem*…yeah. Put another way:

  • The desktop creates clutter by encouraging people to use it as a dumping ground for files.
  • The desktop contains ‘My Computer’ but itself is contained by ‘My Computer’. Well done, Microsoft, for helping make the concept of files and directories clear, and so much for the metaphor of files as physical objects (which isn’t a good metaphor to begin with, but if you’re trying to go with a metaphor, stick with it).
  • The desktop as a working surface necessitates mechanisms to get at it easily from behind all of these damn windows.
  • The desktop compensates for inadequacies of the start menu and file browser by duplicating some of their functionality, so users are presented with the silly choice of whether to put an application shortcut or file on their desktop and/or in the start-menu/dock, and then later they have to remember where they put it and possibly make an arbitrary choice of which to use.

Menu bars suck

The drop-down, pop-out style of menus found in application menu bars is optimized for minimal obtrusiveness (both in terms of visible space and visibility time) and for minimal mousing (both in terms of motion and clicking). Unfortunately, these optimizations are ultimately inadequate:

  • First, as most applications have conceded, users simply don’t like using the menu bar for frequent accesses, so applications add redundant shortcuts, such as toolbars, for frequently used items.
  • Second, many users find mousing through these menus frustrating despite refined mousing affordances.
  • Third, these standard menus have an artificially limited vocabulary—both visual and functional (e.g. sliders and textfields can’t be menu items*)—so all but the simplest features get shunted into pop-up dialogs.

* (Clicking an item is supposed to dismiss the menu overlay every time, which wouldn’t work for textfields or sliders as items.)

Worst of all, menu bars are not only hierarchical, they present their hierarchy confusingly: their various menus and submenus overlap and flash in and out as the user mouses, and because floating dialogs are untethered from the items which open them, users quickly forget how to get back to dialogs.

Context menus suck

Pop-up context menus suffer most of the same ills as menu bars, and they introduce redundancy. In Firefox, for example, the context menu of the page includes back, forward, reload, stop, and several other items also found in the menu bar.

On the plus side, a context menu doesn’t suffer from the same hierarchical recall problems as menu bars (unless the context menu includes many submenus). However, each context menu effectively presents a virtual view into the menu bar: the menu bar is where all my controls live, but right-clicking different things shows me different mixes of those controls, and sometimes it even shows me things not in the menu bar. This virtuality is bad for all the reasons discussed above.

Dialogs suck

Developers love dialogs because dialogs allow developers to avoid hard decisions of positioning and sizing. Don’t know where to place a feature? When in doubt, stuff it into a dialog.

Yet most users hate dialogs:

  • First, navigating to dialogs is often a frustrating discovery, recall, and mousing process.
  • Second, dialogs not only steal focus, they often block interactions with their parent windows.
  • Third, dialogs have a tendency to get lost behind other windows because they’re generally small and don’t show up in the taskbar list.
  • Fourth, it’s often unclear how users should close a dialog. For instance, clicking X in the top-right is sometimes effectively the same as clicking cancel but sometimes effectively the same as clicking OK.

If there’s anything worse than a dialog, it’s a dialog spawned from another dialog. Thankfully, most of today’s applications have learned to avoid that particular sin.

Toolbars suck

Application developers resort to redundantly placing menu bar items in toolbars mainly because menu bars suck. The redundancy this introduces is aggravating enough, but on top of this, toolbars usually consist mainly of icons (which, recall, also suck), and just like menu bars, most toolbars artificially restrict themselves to simple buttons and thereby end up punting complexity into dialogs. Triple suck score.

In simple applications, like web browsers, the redundancy is not so bad, but as applications get more complex, the number of convenience icons tends to grow (think Word or Photoshop) until the redundancy becomes a nuisance to newbie and experienced users alike: newbies find the preponderance of overlapping choices confusing and distracting; experienced users find repeatedly making the arbitrary choice of whether to look in the menu bar or the toolbars bothersome and distracting.

The taskbar sucks

Like the web browser tab bar, the taskbar suffers from an intractable dilemma: in the horizontal configuration, it scales poorly past 7-9 items; in the vertical configuration, more items fit naturally, but each item has less space for its title unless you’re willing to make the bar a few hundred pixels wide. Widescreen monitors alleviate the space problem in both configurations, but not sufficiently to dissolve the problem.

The start menu sucks

Since Windows 95, the start menu has been arranged in a hierarchy of aggravating pull-out menus, with each program typically getting its own folder. Vista has sensibly moved towards textual query over a flat list, but the flat list is only flat-ish because folders remain. Not only do the folders mean that most items in the list have unhelpfully identical folder icons, virtually all folders have no reason for being: I don’t need a folder that contains X and Uninstall X, for if I want to uninstall X, I’ll use Programs and Features in the Control Panel like I’m supposed to; if a folder contains items other than the program itself, they can simply be their own standalone items or can simply be moved into the application menu or application splash dialog (World of Warcraft does this).

So if I had control of the Windows 7 start menu, I would simply:

  • Put every item in one big scrolling list, getting rid of All Programs.
  • Get rid of folders.
  • Add section dividers.
  • Make the whole menu taller, if not the whole height of the screen, and make the program list section wider so that long names are more presentable.
  • Move the items on the right side of the menu into the left side or simply get rid of them, e.g. Shut Down and Control Panel get put in the program list. (If users really need to access these features so quickly—which I don’t think is the case—just add shortcut keys.)

You might object that getting rid of categorical hierarchy means programs can’t be browsed by type, but this is not really the case. First, programs should be arranged into appropriate sections with titles. Second, when menu items are textually filtered, they can be filtered on tags as well as names, e.g. filtering on game should show any game program whether or not it’s in the section games or has game in its title.
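This kind of name-or-tag filtering is simple to implement. Here is a minimal sketch; the program list, its tags, and the `filter_programs` helper are made-up examples, not any real start menu’s code:

```python
# Minimal sketch of filtering a flat program list on names and tags.
# The programs and their tag sets below are hypothetical examples.
programs = {
    "World of Warcraft": {"game", "mmo"},
    "Solitaire": {"game", "cards"},
    "Photoshop": {"graphics", "editor"},
}

def filter_programs(query, programs):
    """Return program names whose name or tags match the query."""
    q = query.lower()
    return sorted(
        name for name, tags in programs.items()
        if q in name.lower() or any(q in tag for tag in tags)
    )

# Filtering on "game" matches by tag even though neither name contains it.
print(filter_programs("game", programs))  # ['Solitaire', 'World of Warcraft']
```

The point of the sketch is that the tag match costs nothing extra at query time, so browse-by-type survives the removal of folders.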

Application windows suck

The primary reason to put applications in free-floating windows is so that users will be able to put applications side-by-side, even though doing so is, in truth, at best a niche use case. The problem is that positioning and sizing windows takes a lot of bothersome meta work, especially when maximizing a window’s space usage.

Furthermore, window overlap requires the user to make annoying random choices of how to get at a particular window. Shall the user move or minimize other windows to get at the window underneath? Or should the user alt-tab directly to the window? Or use the taskbar/dock?

In the end, windows burden users with meta work and unnecessary choices for virtually no real benefit. Of course we should have the capability to see applications side-by-side, but we shouldn’t build the whole desktop around the idea.

Drag-and-drop sucks

For drag-and-drop to work efficiently, the drag source and drop target must both be in view, but this is very rarely the case without burdensome pre-planning on the user’s part, especially when dragging from one application to another. Nearly as bad, users often mess up drags because drop targets are unclear or finicky, resulting in unintended actions that must be undone. Users also sometimes simply change their mind mid-drag but are given no obvious way to safely abort the action. Finally, drag-and-drop actions are often poorly discoverable. In iTunes, for instance, the only way to move individual tracks to a device is by drag-and-drop, which many users fail to figure out on their own.

Virtual desktops suck

Floating application windows suck, hierarchies suck, and the desktop itself sucks, ergo virtual desktops suck. (And note how virtual desktops make drag-and-drop suck even more than it already does.)

Gadgets/Widgets/Gizmos/Plazmoids/Desklets/Applets all suck

Application windows suck and the desktop itself sucks, but applets are fucking ridiculous.

OK, I’ll walk that back a bit. Little status/info panel thingies? Fine, but let’s neatly organize them into some proper window rather than dump them onto the desktop surface (which, recall, needs to die).

If an applet is something the user actually interacts with at length, such as a game, there’s no reason whatsoever not to make it a proper application.

Wrong track

Before finally laying out Portals, let’s examine the good and bad interface reform ideas currently in circulation. First, the bad ideas follow four general themes:

Eye candy

Elitism is an essential part of human aesthetics. For instance, while we normally think of the criteria that make a good-looking person good-looking as objective, much of the attraction towards that person hinges on the rarity of their looks, not the looks themselves, per se. Similarly, gold is shiny, but an essential part of its worth is its rarity.

We see this in graphic design as well: what we consider stylish design hinges a lot on what is simply hard to duplicate. In the 60’s, this meant curved plastic furniture; in the 80’s, this meant cheesy computer video effects; today, this means web pages with rounded corners and glossy effects.

On the desktop, today, elite style means using hardware graphics acceleration because, five years ago, no desktop had it. As it stands right now, none of the major desktops have totally sorted out the infrastructure to make acceleration work ubiquitously, nor has the software caught up to make use of the new toy.

The trouble is that the set of new possibilities which acceleration opens up includes a lot of distracting, silly ideas which actually detract from usability. The obvious example of falling into this trap is Compiz and similar projects. Even aside from the purely aesthetic toys in these projects (such as drawing flames on the desktop), many of the features clearly exist purely for the sake of ooh…shiny.

Virtual physicality

Graphics acceleration has also led designers to create physical-simulation abominations like 3D desktops, Real Desktop and Grape among them.

This review of Real Desktop sums up the problem:

We can’t count the number of times we wished our Windows desktop was as messy as a regular desk. You know, because we’ve never really wished for that. But that’s exactly what Real Desktop lets you do. Oh yeah, it also turns your desktop into a 3D workspace.

While the 3D desktop is certainly pretty, we’re not sure it’s particularly useful. You can move icons around the screen with a left click. Click both of your mouse buttons to “pick up” an icon, or click the edge to rotate it. Probably the most fun you can have is when you highlight a bunch of icons and then drag them into another group of icons and watch them scatter like bowling pins.

Of these desktops, Grape is the least offensive because it mainly sticks to two dimensions, but it still exhibits everything bad about icons and drag-and-drop and imposes a heap of meta work upon the user in the form of innumerable icons, boxes, and text labels to create, position, and manage.

After a little thought and experimentation, it should be evident that treating virtual things as if they are like physical things is satisfying only up to the point where it becomes maddening, for the physical world simply does not scale the way the virtual world can. Sure, these desktops look neat and manageable when you have a couple dozen files, but who has just a couple dozen files anymore?

Manual, transitory organization

When people work in a physical space, they develop organization habits and strategies to cope with the mess of things before them. On your desk, for example, you might keep your personal stuff segregated from your business stuff, which makes sense because, as you work in one domain, you don’t want interference from another domain.

In the virtual world, however, such interference is not a problem: if I don’t have personal documents open at the moment, they don’t in any sense get in the way of the business documents I’m working on. If I do have a personal document open, presumably it’s because I’m switching my attention back and forth to that document. If I were to segregate my current items of attention, I wouldn’t solve the problem that I simply have only one focus of attention to give.

Interfaces that allow users to group or order items for the sake of coping with their number are imposing meta work on the user. Worse, grouping introduces hierarchy such that, to select an item, the user first must recall what group it’s in.

These burdens on the user often make sense when the user is organizing persistent state (e.g. files), but not transitory state. So, for instance, users shouldn’t order their browser tabs and group them into separate browser windows. Rather, the interface should automatically help users cope with dozens of open tabs in a way that obviates this manual work.

Half of the new interface design proposals I see assume that users would like doing manual, transitory organization, I think because the idea seems like it reflects the “natural” way people think and work. This probably stems from a sort of grass-is-greener fallacy: having worked on computers for so long, people begin to feel they’ve lost the virtues of physical paper work, forgetting why they moved away from paper in the first place.

Special pleading

In many desktop and web browser proposals, certain often-used applications and often-used sites are given special priority, usually in the form of convenient-access mechanisms. For instance, a number of design proposals for GNOME and netbook Linuxes elevate personal contacts—IM, email, address book, etc.—to first-level status on par with applications and file directories. Such proposals may have a proper motivation, for perhaps our current general mechanisms really don’t suit a particular common task or workflow. However, we should always try to rethink our general mechanisms before introducing special cases. For one thing, special exceptions tend to please one set of users to the great annoyance of others. For another, each exception is a design complication that all users must learn (or at least learn to ignore) and which inevitably becomes a barrier to change.

Steal from the best

Despite what the previous six-thousand words might convey, I don’t actually hate everything. In fact, Portals largely synthesizes a number of ideas from existing stuff, the most notable being:

  • The Firefox AwesomeBar
  • Quicksilver/Enso/Ubiquity
  • Wikipedia, Google, and various other sites

The things these examples do right fall under a few general themes:

  • Responsive, text-based navigation and action (e.g. search, text links, and commands)
  • Tags, not hierarchies
  • Lists sorted by recency and frequency
  • Chrome-minimal design
  • Typography-focused design
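The “recency and frequency” idea can be made concrete with an exponentially decaying visit score. This is my own illustrative scheme, not the actual formula behind the AwesomeBar’s frecency ranking; the 30-day half-life is an arbitrary choice:

```python
import math

# Sketch of a recency-and-frequency ("frecency") score: each past visit
# contributes weight that decays exponentially with its age in days.
# The half-life of 30 days is an arbitrary illustrative choice.
HALF_LIFE_DAYS = 30.0

def frecency(visit_ages_days):
    """Score an item from the ages (in days) of its past visits."""
    return sum(math.pow(0.5, age / HALF_LIFE_DAYS) for age in visit_ages_days)

# An item visited often and recently outranks one visited long ago,
# so sorting by this score surfaces what the user probably wants next.
recent = frecency([0, 1, 2])       # three fresh visits
stale = frecency([300, 400, 500])  # three old visits
print(recent > stale)  # True
```

Sorting any list of candidates by such a score gives the behavior described above: new and frequently used items float to the top without the user doing any manual organization.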

Having already trashed the alternatives, I won’t give these ideas detailed justifications, but “typography-focused” requires some explanation:

Whether you like the term Web 2.0 or not, we definitely did see a quiet revolution in web design somewhere around 2002. This new style is associated superficially with rounded corners and shiny gloss, but there’s more substance to it.

In the web’s first decade, designers strove to imitate magazine layout, wherein eye candy is stuffed into an asymmetric grid of boxes surrounded by cluttered, omnipresent headers and navbars. This style was motivated mainly by:

  • An aversion to simple flow layouts. No self-respecting designer wants their stuff to look like a Geocities page. By fighting the natural bias of HTML/CSS for flow layout, you get a look that’s hard to reproduce and therefore “professional”.
  • An inability to decide what’s really important. Business people in particular have a hard time coming to terms with the fact that, for some things to stand out, other things must be deemphasized. Of course you want visitors to partake of all your wares, but what do visitors want?

Today, good web design is typified by generously spaced and well-formatted text in one, two, or, occasionally, three columns that are allowed to flow down the page rather than divided into unnecessary widget boxes.

To be clear, “typography-focused” doesn’t always mean ditching images and widgetry in favor of more text. Take for example Amazon.com, which is not an exemplar of the new style but exhibits subtle improvements when you compare Amazon.com of 2009 to Amazon.com of 2000. Like many shopping and portal sites, Amazon still retains much of a cluttered magazine layout, but you can see how the site today better uses images, colors, boxes, and spacing to avoid a ‘mass-of-text’ look.

The point here is that typography is about the complete presentation of the text—its context—not just the text itself. When text is presented well, you can do more with it, as many web designs in this decade have shown.

First principles

Before finally getting into the actual design of Portals, I’ll summarize the design philosophy in four slogans:

Don’t make me think

The title of Steve Krug’s book, Don’t Make Me Think, works as a great design mantra because it succinctly states that:

  1. The most important thing in interface design is the user’s thought process.
  2. Users would rather not have a thought process.

Obvious, perhaps, but easy to lose sight of when caught up in design details.

The explanation is the design

Is your design hard for users to understand? Does user proficiency hinge upon hours of practice and study? The best way to answer these questions is to start by writing the manual. Sometimes this will lead to changes in design, but often all that’s required are some changes in wording or terminology. In any case, your first concern should be how to explain the design to its users, not other designers and programmers.

The right features and only the right features

As I stated above in passing, it’s easy to devise new features, but it’s hard to devise features that make other features unnecessary.

It’s not worth it

Lastly, ‘it’s not worth it’ is a handy, all-purpose way for me to shout down anything I don’t like:

Me: It’s not worth it!
You: What’s not worth it?
Me: It!

But the mantra has a non-abusive purpose as well. Ask yourself, say, ‘Why have we stuck with menu bars for so long?’ Well, when anyone argues that menu bars suck, the perfectly correct reply comes back that a menu bar is the optimal way to minimize mousing over a hierarchy of things. The problem is that this argument hinges upon the hidden assumption that efficiently mousing over hierarchies is of primary importance. Such hidden assumptions are the “it” to which I refer. What you think is so important perhaps isn’t.

‘It’s not worth it’ also works for cases where users themselves lose track of what’s really important. For instance, I advocate getting rid of the desktop surface, but I just know some people will object. ‘Users love wallpapers,’ they’ll say, never mind that wallpapers exist solely to (literally) paper over an unnecessary problem. The proper reply here is that good design requires balancing users’ desires to give them what they really want, and sometimes that means disregarding some desires for the sake of others.

Continued in part 3.

Gutsy Gibbon: the first real desktop Linux

15 Nov

Ubuntu 7.10 (codename “Gutsy Gibbon”), released last month, is for me the first really usable Linux, which is saying a lot considering I’ve made a serious attempt to switch to Linux about once a year for the last six years. The last 3 of those attempts have been with Ubuntu, and while I could always get my system dual-booting into Ubuntu, there was always some essential functionality I couldn’t get working such that, when the GRUB menu came up, I would always choose to boot into Windows. With Gutsy, I can finally say that the only compelling reason I have for booting into Windows is to play games.

What follows is a rundown of various issues hindering Linux desktop adoption, most of which I can happily report are solved, or on their way to being solved, in Ubuntu.

For reference, my desktop specs are:

  • MB: Asus P5NSLI (nvidia nforce Intel Sli)
  • CPU: Core 2 E6400 (LGA775)
  • RAM: 2 gigs
  • HD: Western Digital 7200 rpm 250gb SATA
  • Sound: Creative Labs X-Fi Gamer and ADI AD1986A onboard audio
  • Video: geforce 8800 GTX 768mb
  • Monitors: Gateway FPD2485W (24″ 1920×1200) and Samsung 215TW (22″ 1680×1050)

My laptop is a Gateway M-6816, which has Intel PRO/Wireless 3945 and Intel Graphics Media Accelerator X3100 (up to 384MB shared).

Installation media

I start with the issue of installation disc integrity because, in my experience, it is not a rare problem. My first torrent of the Gutsy amd64 DVD was faulty, apparently, and this caused the live CD boot to hang early in its process. I’ve experienced similar problems with previous releases. I suggest always running the media check (an option in the boot CD splash menu) before installation.
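Besides the built-in media check, a bad torrent can be caught before burning by comparing the downloaded image’s checksum against the one Ubuntu publishes in its MD5SUMS files. Here’s a minimal sketch; the file name, contents, and hash below are stand-ins for the real ISO and its published checksum:

```python
import hashlib

# Sketch: verify a downloaded image against its published MD5 checksum
# before burning. In practice the file would be the ISO and the expected
# hash would come from the release's MD5SUMS file.
def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Demonstration on a small stand-in file (a real check targets the ISO).
with open("demo.img", "wb") as f:
    f.write(b"not really an ISO")

expected = hashlib.md5(b"not really an ISO").hexdigest()  # stand-in hash
print("checksum OK" if md5_of_file("demo.img") == expected else "checksum MISMATCH")
```

A mismatch here means re-downloading the image, which is far cheaper than discovering the corruption halfway through an install.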

i386 vs amd64

The i386 DVD version installed without a hitch, but sadly, I’ve yet to get the amd64 version to work. The trouble seems to be graphics related, as the boot CD stops before it gets to the graphical login screen. I successfully installed in text mode, but booting from that install exhibits the same problem.

Partitioner

A non-destructive partitioner is essential for Linux’s success on the desktop because most users coming from Windows wish to dual-boot and don’t want to reinstall Windows or lose any Windows partitions. Also really important is a smart installer that helps users pick the right set of partitions to create for Linux. With previous releases, I’ve had to turn to external solutions, like GParted and the Ultimate Boot CD, with mixed results. With Feisty, Ubuntu started to integrate GParted into the install process, but the result was still sketchy. When installing Gutsy, I already had free space on hand for a new primary partition and so basically bypassed the issue, but I’d be very interested to hear how others fared with setting up partitions for Gutsy where they had to resize NTFS. One thing I’d like is a manual/auto mode so I can have the installer recommend a partition layout but still see exactly what it’s doing and modify its plan.

Reading and writing NTFS

Just as important as easy non-destructive partitioning, users coming from Windows want to read and write their NTFS partitions. (Reading and writing Linux partitions from Windows is less pressing, but can be done with explore2fs.) With Feisty, users had to know they needed to install a package to get NTFS support, and I myself could only get reading, not writing, to work. With Gutsy, my 3 NTFS partitions are readable and writable out of the box in Nautilus just by double-clicking them and entering my password. (This just mounts the partition: you must click again to view its root, a behavior which should be changed or made clearer with user feedback.)

Boot loader

On the laptop I bought a few months ago, the first thing I did was make room for Linux by clean re-installing Vista from the included install disc into a smaller partition. Gutsy installed with no hassle, and GRUB allowed me to boot into Vista. On my desktop with XP and Vista already installed, installing Gutsy replaced the XP loader with GRUB, but strangely this left XP bootable from GRUB but not Vista. My guess is that GRUB tried starting Vista as if it were XP, because that’s how it listed the partition in the menu; this may have arisen from the unusual case of a Vista partition existing on a drive with the XP loader installed (it got that way because I installed XP after Vista). Only after reinstalling Vista and then a second install of Gutsy (amd64) could I boot into any OS from GRUB (even though, as mentioned, Gutsy amd64 itself won’t boot). Hopefully my unusual case will be accounted for in future releases.

GRUB could stand a few basic improvements. First, the default names given to the Ubuntu boot options should be simplified, as they are currently quite scary. Second, there should be an obvious GUI way to edit the boot menu, especially for changing the default partition.
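Until such a GUI exists, the default can be changed by hand in /boot/grub/menu.lst (GRUB legacy, as shipped with Gutsy); the values below are illustrative:

```
# /boot/grub/menu.lst (illustrative fragment)
# "default" is a 0-based index into the "title" entries further down;
# to boot, say, Windows by default, count its position in the menu.
default  0
timeout  10
```

A friendlier first step would be for the installer to write human-readable titles here in the first place.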

Boot time

From hitting the power button, it takes Windows XP 60 seconds before the desktop appears. From that point, it takes another 90 seconds before Firefox will open and I can interact with it.

On the same machine, booting Gutsy cold also takes 60 seconds before I see the desktop. However, at that point, it only takes 5-7 seconds to fully load Firefox.

(I’m sure a few programs I have installed on XP hinder it compared to a clean XP install’s baseline performance, but I don’t have anything that major loading with XP; the only significant startup daemons I have are nvidia’s ntune panel, dtools daemon, and Valve’s Steam.)

Ethernet and Internet connectivity

In previous Ubuntu releases, I had issues getting wired and wireless internet connectivity, and I don’t think I have to tell you how useless a system without an internet connection is. Thankfully, with Gutsy, I haven’t had a single problem, wired or wireless. I’ve yet to try networking to other Ubuntu installs or to Windows, so I can’t speak to those issues.

Pointer feel

If your mouse doesn’t feel right, your whole user experience is severely degraded. For instance, I probably wouldn’t dislike Macs so much if it weren’t for the fact that every time I’ve ever used one, the mouse motion was way off (and let’s not even mention Apple’s “innovations” in mouse body shape and clicking mechanisms). Even when configuring my MacMini (the first and last Mac I’ll ever own) with mice of my own choosing, I’ve never come close to the quick, accurate control I’m used to in Windows, where I have a high-DPI mouse (the Logitech G3) set to high sensitivity and low acceleration with “enhanced mouse precision” enabled. (Jeff Atwood elaborates on mouse acceleration in Windows vs. Mac.)

The mousing in previous Ubuntu releases was similarly unsatisfying (though Kubuntu was considerably better). Compounding the problem in Ubuntu, the sliders for acceleration and sensitivity in the GNOME mouse control panel never seemed to do anything, as if they were included only for placebo effect. Well, in Gutsy, I still can’t tell if the sliders are doing anything, but happily the mousing feel out of the box is nearly up to par with Windows. My laptop’s touchpad is similarly satisfactory. While still not perfect, the mousing in Ubuntu is good enough that I rarely notice it (unlike in OS X). Still, the fact that the motion adjustments don’t seem to work worries me: I may have just gotten lucky this time with my choice of hardware, while other people might not be.

[To be fair, after a few minutes spent playing with a new-model iMac, I must confess the mousing was quite good, even with the gimmicky mouse (the fact that I was using a fast, responsive system rather than the under-powered MacMini probably made the difference here). Also, Apple’s new ultra-thin keyboards work surprisingly well considering they appear horridly anti-functional.]

Graphics driver, 3D acceleration, and multi-monitor

Unlike in previous Ubuntu releases, installing working 3D drivers in Gutsy for my latest and greatest nvidia card was effortless. On the downside, the “new” nvidia driver is proprietary, but I’m OK with this, as I don’t think the strategy of boycotting proprietary hardware will pressure the graphics chip makers to release open specs or drivers. Features like Compiz need to get in front of users for proper attention to be paid to 3D among the developer community, let alone the chip makers, and as long as no one is actually using 3D hardware on Linux, nvidia, ati, and intel will feel little pressure to advance the platform. Hopefully, ati will fulfill their promises of open source drivers and specs for R500 and later hardware, and hopefully success there will prompt nvidia to do the same. (Like with Samba and proprietary multimedia codecs, the basic strategy here needs to be the FOSS version of “embrace and extend”: supporting proprietary technologies gets Linux out of the gutter while interoperability increases use of free technologies. FOSS needs market power first before it can make demands. But that’s a whole post unto itself.)

Getting my second display configured was frustrating at first, but running ‘nvidia-settings’ got everything in order (run it as root so that it can overwrite xorg.conf). The Linux multi-monitor situation is a bit confused at the moment because of limitations in the underlying X window system that have yet to be corrected or worked around. This results in some unsatisfactory behavior: currently, I have a 3600×1200 virtual desktop which extends off the top and bottom of my 1680×1050 display; while maximization on that display thankfully works correctly, some oddities occur, such as desktop icons hiding off screen.
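For reference, the relevant piece nvidia-settings writes into the “Screen” section of /etc/X11/xorg.conf looks roughly like this; the numbers are illustrative, taken from my 3600×1200 case:

```
# Fragment of /etc/X11/xorg.conf (illustrative values)
SubSection "Display"
    Depth    24
    Virtual  3600 1200    # the combined virtual desktop size
EndSubSection
```

It’s this oversized Virtual resolution that produces the off-screen desktop behavior described above.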

Sound

My Audigy X-Fi is not supported in Linux except by proprietary beta drivers from Creative Labs for 64-bit Linux only. As I have yet to get 64-bit Gutsy working, I can’t report on that support. Rather than swap back in my Audigy 2 ZS, I turned on the onboard audio in my BIOS. This got sound working, though I did have to fiddle a bit in the sound panel. Also, I can’t get anything beyond 2-channel sound to work, likely because the onboard audio’s jack sensing is not supported by the drivers, so the 3 jacks from my motherboard are stuck as line-in, line-out, and microphone (hmm, except then you would think I could get sound to record, though I can’t).

These driver issues are understandable considering I’m using a relatively new proprietary sound card and a not-so-common generic onboard solution. Really though, the biggest failing of Linux sound-wise at this point is not any lack of hardware or lack of auto-configuration but simply that the sound situation is so damn confusing. In trying (and failing) to get 6-channel sound working and in (successfully) fixing the lack of sound in Flash on my father’s system, I was confronted with a mess of OSS, ALSA, ESD, OpenAL, PulseAudio, and a whole array of opaque options I didn’t understand. Hopefully these projects will coalesce or at least learn to better exist side-by-side so they can be configured to work simultaneously on the same system. The situation feels like GNOME vs. KDE of several years ago when getting KDE apps to work in GNOME and vice versa was tricky.

Font rendering

In previous releases, smoothed font rendering was tricky to get working, but now it is turned on in GNOME by default. You can even select between “best shape rendering” (OS X-like rendering) and “subpixel rendering” (Windows-like rendering). (Here’s the difference between the two.)

On the downside, the selection of fonts out of the box is a bit lacking: too many almost identical sans fonts, not enough quality serifs, not enough monospaced fonts. On the upside, most of the included fonts are quite good, especially the great Monospace, which is very similar to Microsoft’s new Consolas font but with a few changes that make code even more readable.

Multimedia codecs

On all Windows installs, I install mplayer, vlc, and the K-lite mega pack because doing all that seems to cover all bases, codecs-wise. In previous Ubuntu releases, I’ve been lucky to get MP3 support working, but in Gutsy, I finally don’t have to worry about codecs. I simply open any media file in mplayer or vlc, and GStreamer detects which codec I need and downloads it, and then the file plays (though sometimes only after restarting the player). Dubious legality aside (OK, not so dubious illegality—I’m in the US, and it definitely ain’t legal), the only way for codecs to work better is to simply have the whole lot installed by default.

Portable music players

My Creative Labs Zen Microphoto (8GB) used to work fine on Windows XP before Creative discontinued its original drivers (which, pig-headedly, they don’t offer for download) and upgraded its firmware to use the Media Transfer Protocol (MTP). Now, Zen users on XP must use Media Player 10 to interface with their device, but it has never worked for me. Since then, I have had to use my laptop to charge and interface with the device.

In Linux, applications can use libmtp to interface with MTP devices. Ubuntu installs Rhythmbox by default, a stripped-down but slick iTunes-like player, which supports libmtp via a plug-in. I just wish it were clearer that you must enable the MTP plug-in (in the plug-in preferences) before your device will be recognized; it was days before I noticed the option.

So here’s a case where something works for me in Linux but not Windows (XP).

(I should also remark how much I like Rhythmbox, which is surprising considering I dislike iTunes. The key thing that makes Rhythmbox acceptable to me is that it keeps your music files in place rather than presumptuously duplicating and re-encoding your entire media collection the way iTunes is wont to do.)

Package management and application installation

In the previous Ubuntu, I had trouble with Apt (the package manager) getting buggered right out of the box: its database got locked, so neither Apt nor Synaptic (the GUI front-end) would work at all. This happened consistently immediately after a clean install, rendering Ubuntu basically unusable.

It’s annoying that many packages in the repository are old versions, even for major programs like Eclipse and Azureus. The version of Azureus I got from Apt, for instance, wouldn’t work; only the latest version, fetched manually from the Azureus site, would. So there are many cases where you must go fetch apps by means other than Apt.

Java

Now that Java is free, I assumed it would be included by default, but that didn’t seem to be the case (or at least, I had to manually install Sun’s Java 6 to get Eclipse and Azureus working). It’ll be nice when this situation gets resolved so users can simply have Java apps working out of the box.

Web browsing

Firefox in Ubuntu is much slower at rendering than in Windows. In Windows, my habit is to ctrl-mousewheel to change font size, and Firefox resizes in real time as I scroll the wheel on all but the most complex pages. In Ubuntu, even simple pages can take a moment to resize their text, so I had to train myself not to reflexively resize pages. You can also see Firefox’s slow rendering on pages like Google Maps, where dragging the map is far from smooth. It’s likely this slowness is tied to inefficiencies in GNOME, X, or perhaps the new font rendering, as I can’t imagine that Firefox’s rendering path changes much between Linux and Windows; whatever the culprit, I see the problem on every machine I’ve installed Gutsy on. Hopefully Firefox 3’s new rendering engine (with a Cairo backend) will bring this back up to par.

(Sadly, my favorite Compiz plugin, Enhanced Zoom, causes animated cursors to disappear after the first time you zoom in, and this is most aggravating in Firefox, where the cursor animates as pages load; the only fix is to restart X (by ctrl-alt-backspace), but this closes all your programs, so the only real solution for now is to just disable Enhanced Zoom.)

User switching

A panel widget included by default is “fast user switching”. I’ve experienced debilitating bugs with this widget, so I suggest you not use it.

Gnome desktop

Finally, I have some assorted thoughts on GNOME:

Thankfully, the application menu has been cleaned up since Ubuntu’s last release: rather than listing apps by their project names—names which are meaningless to most people—most apps now are just given simple descriptive names that reflect their functionality. The menu editor, however, could stand some more work, as it’s a bit hard to discover and awkward to use.

In previous GNOME releases, the icons ranged from acceptable to ugly turd. Now, the default set of icons in the menus and file browser not only do not look like crap, they are actually very attractive and exemplary models of clean design, better even than many icons seen in OS X. And speaking of bling, I do wish there were more color themes and bundled wallpapers in the stock system, in the vein of Windows. It would be neat if they included some of the great photos from Wikipedia’s photo of the day, which includes some very neat panorama shots.

The default panel config of GNOME is a silly attempt to split the difference between the Windows and Mac desktops, and too many unnecessary panel widgets are included by default. At the very least, a quick panel-config wizard should let me choose the standard Windows taskbar layout.

Unfortunately, my biggest irritations with GNOME can’t be fixed with a lick of paint. First, windows with scroll panes inside open too small to show the content of the scroll pane, forcing me to resize or maximize these windows to see the scrolled content properly. This problem is exacerbated by my other grievance: grabbing window corners is too tricky, and I often end up clicking with the horizontal or vertical resize cursor when I wanted the 2-axis resize cursor.

User interface semantics

27 Oct

One of the largest deficiencies in most software interfaces lies in their poor use of semantic categorization. This is most clearly seen in windowed applications’ menu bars, where organization is done in several different ways (often inconsistently within the same application):

  • organization by hierarchy of action, e.g. “Paste” goes under “Edit” because pasting is a kind of editing
  • organization by hierarchy of things, e.g. “Layers” goes under “Window” because the layers dialog is a window
  • organization by phrase completion, e.g. “Grid” goes under “View” because one views the grid
  • organization by inverted/scrambled phrase completion, e.g. “Rotate 90 degrees” goes under “Image” because one rotates the image 90 degrees
  • organization by free association, e.g. “Back” and “Forward” go under “History” because they are navigation links like a previous-site link is a navigation link
  • organization by fuck all, e.g. “Preferences” go under “Help”, “Tools”, “Edit”, “File”, et al.

The hopeful thing about this problem is it suggests much software could be greatly improved simply by re-examining a few word choices and moving some menu items around. The discouraging thing about the problem is that getting semantics and categorization “right” is very, very hard and may in fact be an intractable problem (see Clay Shirky here and here).

Portals: window management for those who hate window management (mockups in Javascript)

17 Jul

Portal

Jeff Atwood discusses the way Mac OS X windows don’t really have ‘maximize’ buttons, and he comes to the right conclusion: better to have overly large windows than to make users futz with the dimensions of their windows. He says:

Apple’s method of forcing users to deal with more windows by preventing maximization is not good user interface design. It is fundamentally and deeply flawed. Users don’t want to deal with the mental overhead of juggling multiple windows, and I can’t blame them: neither do I. Designers should be coming up with alternative user interfaces that minimize windowing, instead of enforcing arbitrary window size limits on the user for their own good.

As it happens, minimizing the hassle of windows—both main application menus and pop-up dialogues—is the major design goal of my desktop UI design, which I’m calling ‘Portals’. Back in this post in March, I promised to present the Portals design, but I never quite finished the mockup demos in Javascript. Still, there’s enough there to convey the biggest ideas. Eventually I’ll fill in the notes and the rest of these demos and perhaps also finish the screencast about Portals which I started.

The mockups come with lots of (rambling) notes, but one thing they oddly fail to make clear is that Portals has no desktop, i.e. no flat empty surface on which to dump icons and files.

Just what Aunt Tillie needs: Vi?!?

21 May

This last week I’ve been getting familiar with Vim. I’ve dabbled a few times in the past, but this time I’m finally feeling comfortable enough to stick with it. I was quite annoyed with having to hit ESC all the time, but a neat tip is to set this in your config file:

imap jkl <esc>
imap jlk <esc>
imap kjl <esc>
imap klj <esc>
imap ljk <esc>
imap lkj <esc>
set timeoutlen=500   " cuts down the pause time you'll see after typing j, k, and l

…so now you can get out of insert mode by smashing ‘j’, ‘k’, and ‘l’ simultaneously. This will, of course, conflict if you ever need to type one of these sequences, but those are surely quite rare and work-around-able. I’m even going to experiment with reducing this just to ‘j’ and ‘k’, which is itself a really rare sequence in English. (Maybe ‘s’, ‘d’, and ‘f’ or just ‘d’ and ‘f’ can be set to ESC as well.)
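The two-key experiment would just be:

```
" the two-key variant; 'jk' is a vanishingly rare sequence in English
imap jk <esc>
imap kj <esc>
set timeoutlen=500
```

The same caveat applies: any time you genuinely need to type the sequence, you’ll have to pause between the keys.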

If you don’t like that solution, other good choices are:

imap <S-space> <esc> " shift-space to get out of insert mode

…and:

imap <S-enter> <esc>  " shift-enter to get out of insert mode

I mention Vim because I recently played with the alpha of Archy, which is a project to implement the ideas of the late interface theorist Jef Raskin, and its text-editing mode feels much like Vi stripped down to the essentials for the common user. Here’s a quick summary of Archy’s interface:

  • The primary part of the interface is one big text-editing window; there’s only one sequence of text, but the text is separated into ‘documents’ by special divider lines (which are inserted by the ` key).
  • Navigation up and down the big glob of text is done primarily by holding down an alt key and typing a string of text to “leap” to. Using left-alt does a back search for that string while right-alt does a forward search. Releasing alt takes you back to typing mode at the location you just leapt to. To repeat your last search, hold capslock and tap alt (right-alt for forward searches, left-alt for back searches). Having to hold down alt while typing is pretty awkward and uncomfortable after a while.
  • This ‘leaping’ feature with the alt-key searching is also how you select text: the selected text portion is always the section between the place you last leapt from and leapt to.
  • The user issues commands by holding down the capslock key, then typing the command. E.g. [capslock]-C-O-P-Y to copy the highlighted text. Even with auto-completion, having to hold down capslock while typing is pretty awkward, especially if the command contains an ‘a’, ‘q’, or ‘z’.
  • Text can be formatted (coloring, fonts, style, size, alignment, etc.) by highlighting text and issuing a command.
  • And for some reason, to get a key to autorepeat, you must triple-tap it before holding it down. (This is really annoying, especially with the navigation and backspace keys.)

Text-editing is the only part of the alpha, but Archy’s other key component will be what they call the Zooming User Interface (ZUI), which I take it is where non-textual elements will reside. How these components will fit together, I’m really not sure.

What to make of this? First, I’ll say there are some things I admire about this approach—mainly, Archy bravely recognizes average users can cope with a few non-discoverable elements: as long as the set of basic concepts to learn is small and worth the payoff, it’s OK—contra Steve Jobs—to expect users to learn something. Archy strikes me as an attempt to bring the Unix philosophy to the masses: rather than relying upon packaged solutions to problems, users are encouraged to build upon a base of simple yet powerful mechanisms; Archy simply starts from a clean slate, dispensing with the accumulated detritus of 40 years of terminals and programmer conventions, keeping things tidy and sacrificing power for approachability.

This said, there are glaring faults with Archy as currently available. First, having users hold down alt and capslock while typing is clearly not going to fly, as it’s difficult even for expert users, let alone the majority of users who have trouble touch typing. Yes, the purpose is to eliminate modality, but this is clearly not a realistic solution. (Maybe we need special keys below our space bar which we can hold while typing more naturally, or maybe the spacebar itself could be used.)

Really, modality is not as bad as UI orthodoxy claims. Keep the number of modes few and try to eliminate unnecessary ones, surely, but modality is extremely powerful, and to do it right, you really just have to give the user unmistakable visual and auditory cues conveying what state the interface is in. (A good test is to see what happens when users step away from the computer then come back later; if they mistake the mode they’re in, you have a problem.) Archy’s cues are just too damn subtle. For instance, I found myself confused when I started typing because sometimes Archy would move my cursor to some other document; it turns out I was trying to type in a ‘locked’ document, but it took me a number of tries to figure this out because the message Archy briefly flashes is black text superimposed over very similar black text at the top of the screen; it’s great they didn’t annoy me with an ‘OK’ pop-up, but there’s unobtrusive and then there’s covert. In general, Archy has far too few elements of on-screen guidance, likely because the developers are too enamored of the idea of an interface-less interface. At the very least, Archy should have a training mode in which at least some screen real estate is devoted to guiding new users.

Also problematic is how Archy messes with the user’s expectations about character keys, giving the keys surprising behavior in certain contexts such that they don’t always produce their usual characters. Now, there are obvious cases where average users quickly adapt to character keys not producing their respective characters on screen—games, for instance, often make use of the alphanumeric keys for non-text purposes—but such programs usually clearly delineate between typing mode and non-typing modes. In contrast, Vi, Emacs, and now Archy violate this barrier, messing with character keys in the context of typing text. In Archy, you find yourself in a few annoying moments like you do when first using Vi(m) where it’s not responding like you expect and you just can’t understand why.

Fortunately for Archy, both of these faults—inadequate cues and poor key assignments—can be fixed, but I wonder if the project will be willing to do so, as it requires compromising on its ideals of a totally modality-less and interface-less interface.

dtm-ufth_dif2grok-pmdw3

4 May

Title translation: ‘Don’t tell me you find this difficult to understand purple monkey dishwasher (version 3)’.

Preston Gralla on O’Reilly Net complains that Linux package names are preventing wider Linux desktop adoption. While I find his claim that Linux will never get there extreme, I do agree this is a significant hindrance.

Such package names simply shouldn’t be presented to regular users at all, even in the context of browsing packages. Sure, people can just ignore them, but don’t underestimate the psychological toll of being confronted with a stream of information you can’t even categorize let alone understand.

More generally, I oppose the Unix/old-style-programming practice of privileging ease-of-typing and compactness in names over descriptiveness. Even for those attempting to verse themselves in the lingo, the level of contextual familiarity presumed by this preponderance of abbreviations makes the learning curve very steep.

Thine desktop runneth over

26 Mar

I don’t have to be Aunt Tillie to crave a simpler desktop computing experience. Whether I’m using Windows, OS X, GNOME, or KDE, my current work flow gets tangled as I juggle several open folder windows, half a dozen instances of Firefox with 30 tabs between them, a text editor, Google Talk, Eclipse, terminal windows, a media player, and sometimes more; on top of this is the ever expanding mess that is my hard drive. This basically sums up my two main computing problems: my desktop is an unstructured mess of windows, and my hard drive is (between major cleaning jaunts) a mess of files.

The second problem, the files problem, is being partially addressed by desktop search (Google Desktop, Beagle, etc.), but that solution doesn’t really help keep my drives clean—it just lets me cope better with the mess I create. I think the real solution will be to add a tag-based directory structure onto our current hierarchical directory structure (yes, we would have to meld the two), but that notion will have to wait for a later post.
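Until then, a crude way to graft tags onto today’s hierarchy is directories of symlinks. A sketch, with made-up paths:

```shell
# Sketch: emulate a tag as a directory of symlinks (paths are made up).
# The file stays where it is; the "tag" just points at it.
mkdir -p ~/tags/receipts
touch ~/dell-invoice.pdf                    # stand-in for an existing file
ln -sf ~/dell-invoice.pdf ~/tags/receipts/
ls ~/tags/receipts                          # the file now also appears under the tag
```

This gives you many-tags-per-file for free, though without the search and metadata a real tagging layer would bring.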

As I argued in the previous post, the root of the first problem (window management) lies in the desktop metaphor itself and the dominant GUI conventions of the last twenty years. To deal with this problem, we should first step back and analyze what purpose the desktop serves. The GUI desktop is, at a minimum:

  • An interface for starting applications, for switching between open applications, and for allotting screen space between open applications.
  • A common set of interface elements for applications, often including guidelines for the use thereof to achieve a cross-application standard look-and-feel.
  • A data sharing mechanism between apps (copy and paste).
  • A common mechanism for application associations—what applications should be used to open such-and-such a file or send a new email, etc.
  • A set of system-wide keys, e.g. ctrl-alt-delete on Windows.

And because most users don’t/can’t/won’t use a command-line, desktops include a minimum set of apps:

  • File management.
  • System/hardware configuration and system utilities (e.g., a GUI non-destructive partition resizer—which has been too long in coming to Linux live CDs, frankly).
  • Program installation/removal.

So what should the desktop not do? The standard Linux distros all come with an additional assortment of basic apps (web browser, office suite, mail, etc.), which is great, but I think integrating such things into the desktop (or giving naive users the illusion of integration) is very risky and of dubious benefit. The problem with such notions is that they smell of ‘ad hoc-ist design’, i.e. design which tries to meet needs by making exceptions to the rules rather than by applying the rules. Ad hoc-ist design points to an unfortunate fact about design: it’s easy to come up with features; the hard part is devising features which make other features unnecessary. At a minimum, ad hoc-ism fails to minimize complexity; at its worst, ad hoc-ism introduces complexity, and that goes for all parties—for designers, for implementers, and for users. If your design exhibits ad hoc-ism, it’s likely because your core mechanisms don’t fit your needs well enough, so rather than piling on more features, it may be time to rethink those mechanisms. Any proposal to expand the desktop should heed this advice, but most such proposals I’ve seen don’t (e.g. see here).

For instance, writing two years ago on O’Reilly Net, Jono Bacon (of LugRadio fame) suggested that project management should be promoted to first-class status as part of the desktop [link]. Jono’s suggestion basically requires three things: 1) integrating features of a personal information management (PIM) application (he calls it ‘project’ management) into the desktop; 2) implementing an automated personal-information data-sharing mechanism for applications; 3) making applications work with this mechanism.

Now, Jono may actually be on to something here: his core complaint is that some applications should be able to automatically share certain kinds of data. Problem is that automated data sharing between desktop apps in general is the proper problem to tackle—after all, what’s so special about PIM? Now, lack of a general desktop data-sharing mechanism—let alone an automated one—is actually a long standing problem. Arguably the time for a solution has come, but narrowing the problem and tying the solution to programs of a specific domain would be hurtful in the long run, both for the sake of solving the general problem and the domain-specific problem: the mechanism that would result would likely be too tightly coupled to particular apps, discouraging the formation of an ecosystem of competing experimentation and natural selection.

I can understand Jono’s frustration: the Unix people tend to see every data sharing problem in terms of the mechanisms already existing in Unix (there are quite a few of them, after all), and so they don’t exactly rush to fill in gaps that hinder certain problem domains. Who knows, maybe something already exists to meet desktop data sharing: copy and paste via web services, anyone?

Anyway, I’ll finally present my desktop UI design later this week.

I hate Macs

19 Mar

Continuing a discussion of desktop UI from the previous post. Be clear that most of what follows applies equally to Windows and the Linux desktops; my point is that Apple popularized these ideas and the others—misguidedly—still follow Apple’s lead.

Compared to Microsoft, I don’t especially begrudge Macs their existence, and I do recommend them over Windows to naive users who don’t have anyone to maintain and set up their boxes for them. Still, it annoys me how many people have drunk the Apple koolaid and believe that OS X has anything on the basic Windows experience beyond gloss (and an ‘it just works’ quality earned only by a controlled, minute hardware and software landscape); just about everything beyond a few features of the OS X interface is just arbitrarily different from other desktops. Macs wouldn’t annoy me so much except that their influence, through fashion, perpetuates stupid graphical interface design ideas which both Microsoft and the Unix desktops have slavishly followed. Contrary to popular opinion, Macs are not the end-all/be-all of usability; in fact, Macs have long perpetuated some erroneous thinking about usability. There are several things seriously wrong with the desktop/windows metaphor, and Apple is responsible for most of them.

[...]