Archive | July, 2007

Warts on a snake: Ugly bits in Python syntax

27 Jul

I point out this post not to comment on its subject but just as an example of Python code and to remark that the otherwise pretty and compact Python syntax is blighted by a few things:

  1. The ‘self’ as the first parameter of every class method is cluttery, and it exists only as an artifact of Python’s conceptual distinction between bound and unbound methods. I think Javascript got this right: when a function bar is invoked as foo.bar, foo gets passed to the function as the keyword ‘this’ (though I would choose the keyword ‘me’ instead for brevity).
  2. Another thing Javascript got right is not requiring quote marks around identifier-like names used as keys in object literals. Where Python requires {‘foo’:bar}, Javascript allows {foo:bar}. Of course, in Javascript, keys can only be strings while Python allows any kind of hashable (in practice, immutable) object, so some syntax would be needed to distinguish between foo to mean the string ‘foo’ and foo to mean the object referenced by foo. I suggest something like @foo to mean ‘the object held by foo’.
  3. The pseudo-special names beginning and ending in underscores are just ugly and annoying to read and type because it can be difficult to tell the difference between one underscore and two adjacent underscores.
  4. The colons after if, elif, else, etc. look fine, but they’re annoying to type and easy to forget to include. A delimiter is needed for one-liners, e.g. if foo: print bar, but it should be omitted for multi-liners (and this omission should be compulsory). (A short snippet illustrating these warts follows this list.)
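To make these concrete, here is a small illustrative snippet (the class and names are invented for the example) showing how the warts look in ordinary Python:

    # Wart 3: the constructor is spelled __init__, with easy-to-misread double underscores.
    class Greeter:
        def __init__(self, name):       # Wart 1: 'self' clutters every signature...
            self.name = name            # ...and every attribute access.

        def greet(self, greeting):
            # Wart 2: identifier-like keys still need quote marks.
            return {'greeting': greeting, 'name': self.name}

    g = Greeter('world')
    print(g.greet('hello'))             # prints {'greeting': 'hello', 'name': 'world'} (key order may vary)

Under the wishlist above, greet would drop the explicit self, and the dictionary keys could be written unquoted.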

Portals: window management for those who hate window management (mockups in Javascript)

17 Jul

Portal

Jeff Atwood discusses the way Mac OS X windows don’t really have ‘maximize’ buttons, and he comes to the right conclusion: better to have overly large windows than to make users futz with the dimensions of their windows. He says:

Apple’s method of forcing users to deal with more windows by preventing maximization is not good user interface design. It is fundamentally and deeply flawed. Users don’t want to deal with the mental overhead of juggling multiple windows, and I can’t blame them: neither do I. Designers should be coming up with alternative user interfaces that minimize windowing, instead of enforcing arbitrary window size limits on the user for their own good.

As it happens, minimizing the hassle of windows—both main application windows and pop-up dialogues—is the major design goal of my desktop UI design, which I’m calling ‘Portals’. Back in this post in March, I promised to present the Portals design, but I never quite finished the mockup demos in Javascript. Still, there’s enough there to convey the biggest ideas. Eventually I’ll fill in the notes and the rest of these demos and perhaps also finish the screencast about Portals which I started.

The mockups come with lots of (rambling) notes, but one thing they oddly fail to make clear is that Portals has no desktop, i.e. no flat empty surface on which to dump icons and files.

Better tabbing in Firefox (mockup in Javascript)

16 Jul

In response to a challenge by Aza Raskin to come up with a better way of tabbing in Firefox—in particular a solution that scales better the more tabs you have—I produced this mockup in Javascript. Be clear that, because of the way the tab previews are done, the performance is creaky and not representative of what a proper implementation would be like. Please, use your imagination and pretend the previews pop up instantly. Also understand, it ONLY WORKS IN FIREFOX. (While it won’t work at all in IE, it should mostly work in other non-Firefox browsers, though I haven’t tested any.)

The rationale is given on the page, so I’ll simply discuss here why I rejected some ideas proposed by others and also consider some variants on my design which might be even better.

In the comments of Aza’s post, a number of people expressed a desire to introduce alternative ways of conceptually ordering the tabs other than the default order in which you opened them, e.g. some wished to be able to group their tabs (which you can kind of do already by reordering), some wished to see their tabs listed by chronology of the pages (as opposed to of the tabs), and some wished to see their tabs in a web displaying heritage (which page was opened from which other page). While there might be something to these ideas, I avoided them as there seemed to be easier gains to be made that didn’t involve conceptual changes for the user. I wanted to improve the tactile experience of dealing with many tabs.

Others have proposed some kind of zooming UI. Again, there may be something to this idea, but until hardware support can be implemented consistently across platforms, I don’t see this happening. Besides, it’s a tricky thing to get right, as many users are easily disoriented.

Others mention multiple tab rows. This idea is problematic for the same reasons displaying any one-dimensional information in rows is problematic: things move around in unexpected ways when the bounds get resized and tabs are added and removed, messing with the user’s spatial memory of where their tabs are. (Of course, text has the same problem, but text has paragraph breaks that help a large section of text mostly retain some recognizable shape as it is edited and its bounds resized.)

As for variants on my design:

A major flaw of the current Firefox tabbing which my design doesn’t really conquer is that I often find myself having to do multiple searches to find a tab: by reflex, I first search through those tabs I can see, then I’ll mousewheel back and forth, then occasionally I’ll go into the full list on the right if I still haven’t found the tab. The problem here is that the worst-case searches are very expensive and distracting, but they wouldn’t be if I simply went to the full list to begin with. Nearly as good would be if I couldn’t scroll the tab bar at all, forcing me to go into the full list of tabs sooner; in this scenario, we’re actually better off keeping the number of tabs visible in the main bar rather low, say 7-9 at most.

If we embrace the idea that the full tab list should be used more often, it then makes sense to put the main bar to better use. Rather than show tabs which occur consecutively in the full tab list, the main tab bar could display the last viewed tabs in the order they were viewed. In this design, you actually wouldn’t ever see the current tab in the main tab bar, as you don’t need to click on it, but you’d still find it in its proper place in the full tab list.
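To be concrete about what ‘the last viewed tabs in the order they were viewed’ means, the bookkeeping is just a most-recently-used list; here is a rough sketch in Python (the names and the slot count are invented for illustration):

    # Rough sketch of most-recently-used ordering for the main tab bar.
    class TabBar:
        def __init__(self, visible_slots=7):
            self.visible_slots = visible_slots
            self.history = []               # most recently viewed tab first

        def view(self, tab):
            # Viewing a tab moves it to the front of the history.
            if tab in self.history:
                self.history.remove(tab)
            self.history.insert(0, tab)

        def main_bar(self):
            # The current tab (history[0]) is omitted, since you never need to
            # click on it; the full tab list would still show it in its place.
            return self.history[1:1 + self.visible_slots]

    bar = TabBar()
    for t in ['news', 'mail', 'docs', 'news']:
        bar.view(t)
    print(bar.main_bar())                   # ['docs', 'mail'] -- 'news' is current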

If this is too confusing, perhaps get rid of the main tab bar altogether and have the full tab list button sit by itself to the right of the search box.

I’ve also considered simply having a single vertical sidebar for tabs. This would be like having the full tab list always open. While you might object to the loss of screen space, I’m not sure it would be so bad, especially for wide screen users, who often have to artificially make their Firefox windows narrower for reading, anyway. The Vertigo extension already offers Firefox users vertical tabs, but it could be improved:

  • The tab bar should not be as wide as the history and bookmarks sidebars are by default, not really for functional reasons (at least on big widescreen monitors) but rather aesthetic ones.
  • Each tab should be two lines high for easier clicking and so that titles can wrap onto two lines if needed.
  • Hovering over a tab should display a preview (as in the demo) and any part of the title cut off should appear extending out of the tab bar into the page. (In fact, I’m thinking that hovering over the vertical tab bar should make all cut off titles appear in this manner; even with the titles extending out of the sidebar, you would still be able to see where the sidebar ends, so when you mouse out of the sidebar, the titles would all go back to normal.)

Player “freedom” in multiplayer games is a design cop-out

15 Jul

For the sake of cleaning random stuff off my hard drive, here’s something I saved which I posted to a forum a couple years back. It mainly concerns the Battlefield series of games, developed by DICE. In Battlefield, two teams fight each other for control of key points on the map; players spawn into the world on foot, but they can enter and exit vehicles found in the world, such as tanks and planes, and this has always been the key appeal of the series.

I’m waiting for Battlefield 2 with much anxiety because the developers keep talking about how the original was great because of its “freedom” and “rock-paper-scissors” balance.

BF1942 (Battlefield 1942, the original game in the series) was a great game for about 6 months, but then, on public servers, the gameplay devolved into mindless deathmatch. Initially, players were excited by the new objective-based gameplay, so interesting public matches were common; when the novelty wore off, though, individuals got frustrated at not being able to coordinate with the strangers on their team, so they started taking the ‘freedom’ of the game too far—meaning they just started goofing around, playing only for personal kill counts, and sabotaging their own teams. (Which is the opposite of what you’d expect: you’d think play would get more focused as the game aged, but that only held for BF1942 clan matches, not public games.)

Furthermore, certain roles got too powerful as players got really skilled, most notably the pilots: remember how everyone fought over the planes all the time? Well, some of those guys won planes and kept winning them. Then they got really, really good at flying them to the point that every server has that one pilot who climbs to a billion feet then dive bombs everything with pin-point precision. Because everyone else never got practice in the planes (and because they never even hear the planes coming from that altitude), they’re mostly helpless.

Perhaps the basic question that needs to be answered is how to design a more strategic action war-game by upping the average stay-alive time while making it much more deadly for players to play in a Quakish manner (jumping around like a chicken with your head cut off) so that players are forced to use cover. In other words, designers need to coerce players into preserving their lives more without going for the Counter Strike (CS) nuclear option of permanent death. The CS scheme has the fatal flaw of punishing less skilled players—punishing them not only with less play time and therefore less fun, but also with less practice time. Being dead most of the time hampers the novices’ ability to learn. (CS suffers from this dynamic more than other games because of its system whereby the best players have the most money and thereby get more practice time with the best weapons, which are very difficult to learn how to shoot accurately.)

My point is that, first, DICE overrates freedom: ‘freedom’ in gameplay of this kind is neat at first, but then, as the balance of the game comes into focus once enough players gain skill, the freedom destroys the coherence. Second, the rock-paper-scissors balance of deadly encounters must be put in a strong teamplay context or else the encounters with the enemy devolve into just random noise: the player on side A has the overwhelming advantage one third of the time, the player on side B has it another third, and the remaining third is a toss-up.

Another reason to coerce players into playing as a team is because of the ‘90/10 rule’: 10% of players far out-class the other 90%. Such disparities make a game fun for no one except the highly skilled players (assuming they don’t care about being challenged), and worse, from a game maker’s perspective, not giving less skilled players a useful role to play discourages new entrants to the game. Forcing teams to really work together would dampen the distorting effect of the outliers upon the game.

To encourage teamplay, the most basic step is to keep teammates near each other. In the Battlefield series (and in fact in most other FPS games), teammates wandering off on their own is a constant problem because the temptation for each player to take their own course of action, without consulting or informing their teammates, is simply natural: even if players could reach agreements on what to do, the pace of most games is too fast and the communication mechanisms too cumbersome to execute coordinated actions, so few players try. Voice communication is not enough because 1) only a quarter of players tend to have mics, 2) there’s a limit to how many people you can effectively talk to at once, and 3) action games tend to move too fast for players to debate a course of action, and no one can decide who should give orders. Instead, what’s really needed is a way to effectively coordinate with players near you without inundating them with useless info spam; this alone will greatly encourage players to actually move in convoy and thereby really play as a team.

To enforce sticking together, the mechanism I concocted is to damage and eventually kill players for straying away from other players. The details of this are a bit tricky, as you must account for the fact that players might respawn far away from their team, a player’s teammates might die around him, and griefers might abuse the system. The system would work something like this:

  • Every few seconds or so, for each player, get the set of teammates within a certain radius. Those with at least n teammates within their radius are ‘in compliance’.
  • Depending upon the value of n and the number of players on a team, a team might have separate clusters of ‘in compliance’ players, so it’s not necessarily the case that the whole team has to travel as one. The center point of each cluster is found by averaging together the positions of its in-compliance players, and HUD arrows direct out-of-compliance players to these clusters.
  • Each player has a compliance rating bar: out-of-compliance players start losing compliance points, and when they get to zero compliance, they start losing health.
  • To account for cases where a player spawns far away from other players, set their compliance bleed rate to something slow enough to reach any of the clusters.
  • A player might wish to move to another cluster, so the bleed rate of a player leaving compliance is slow enough for them to reach the other clusters.
  • Players don’t instantly regain their compliance points when going from out of compliance to in compliance; otherwise, players would abuse the system by hopping in and out of compliance, making the compliance radius less meaningful.
  • If a player’s cluster is dissolved because of his nearby teammates’ deaths or desertions, the player gets full compliance and a new bleed rate slow enough to reach another cluster.

Obviously this is all subject to real-world testing, but I think something like this would go a long way to making gameplay more coherent. Getting the bleed rates right would be tricky, so perhaps it would be more effective to instead give in-compliance players significant artificial advantages, such as more health points and/or more powerful weapons; making out-of-compliance players simply not competitive better avoids annoying death and griefing scenarios.
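To pin the idea down, here is a rough sketch of the per-tick bookkeeping in Python; every number, rate, and field name below is an invented placeholder rather than a tuned value, and the cluster/HUD-arrow logic is omitted:

    import math

    RADIUS = 50.0        # how near a teammate must be to count (placeholder)
    N_REQUIRED = 1       # teammates needed within RADIUS to be 'in compliance'
    MAX_COMPLIANCE = 100.0

    def distance(a, b):
        return math.hypot(a['x'] - b['x'], a['y'] - b['y'])

    def update_compliance(team, dt):
        for p in team:
            near = [q for q in team if q is not p and distance(p, q) <= RADIUS]
            if len(near) >= N_REQUIRED:
                # Regain compliance gradually, not instantly (see the notes above).
                p['compliance'] = min(MAX_COMPLIANCE,
                                      p['compliance'] + p['regain_rate'] * dt)
            else:
                # Out of compliance: bleed points, then health once they hit zero.
                p['compliance'] = max(0.0, p['compliance'] - p['bleed_rate'] * dt)
                if p['compliance'] == 0.0:
                    p['health'] -= p['damage_rate'] * dt

    # One tick: two players near each other stay in compliance; the straggler bleeds.
    def player(x, y):
        return {'x': x, 'y': y, 'compliance': 100.0, 'health': 100.0,
                'bleed_rate': 5.0, 'regain_rate': 2.0, 'damage_rate': 10.0}

    team = [player(0, 0), player(10, 0), player(500, 0)]
    update_compliance(team, dt=1.0)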

Stealing the web’s precious, bodily fluids

14 Jul

Raganwald argues that link-voting sites (Digg, reddit, et al.) hurt the web by locking comment threads into proprietary databases, depriving the web of some of its vital webness. I would agree, except:

  1. I’ve always found that Digg and reddit comment threads function as contests to see who makes the best joke, and this they do well enough. For serious discussion, they are almost always useless, so I turn to the source’s comments.
  2. Slashdot is a pretty strong counterexample. Whereas Digg and reddit comments are largely dominated by immature people, the Slashdot community is dominated by knowledgeable and insightful immature people. The only thing I really dislike about Slashdot threads is all the people complaining about Slashdot dupes, Slashdot story quality, the Slashdot moderation system, and Slashdot comment threads.
  3. Many links don’t point to blogs but rather to ‘heavier’ sites, like newspapers. I never read nor contribute to the comment threads of such sites: if it isn’t a WordPress blog or something near to it, I ain’t going to bother with your crappy commenting registration process and interface. So in those cases, Slashdot, Digg, and reddit provide a forum that in my mind wouldn’t otherwise really exist.

Now as it happens, I’ve had an idea of late of how to create a decentralized Digg-like system using RSS feeds. I leave it as an exercise to you to imagine how this would work (hint: there are no centralized feeds; everyone sees their own personalized collation of received items; while not exactly the same thing as Digg, I believe this actually has some important advantages for the user). Oh, and I call the system Panoptikon (with a ‘k’ because the closest domain I could get was panoptikon.org)*.

* If you’re that one crazy person who actually follows my blog, you remember ‘Panopticon’ is the name I gave to another proposed idea. Yes, I’m re-purposing the name, as I believe it fits this idea better.

Lost in translation

12 Jul

A learner’s guide to the terminology and concepts of software build processes.

What’s the difference between an assembler, a compiler, and an interpreter, and what’s a linker?

Tower of Babel

Assemblers

Let’s start with the clearest case. An assembler is a program which translates ‘assembly language’ code into processor instructions (a.k.a. ‘machine instructions’/’machine code’, a.k.a. ‘native instructions’/’native code’). What’s assembly language? ‘Assembly’, ‘assembler’, or ‘asm’ for short, is the generic name given to all low-level languages. Now what’s a low-level language? Well, whereas in high-level languages, each line of source code typically translates into more than one processor instruction, in an assembly language, each line directly corresponds to one single processor instruction. Assembly offers the programmer exact control: what you write is exactly what gets executed, instruction-by-instruction.

Because different processors understand different sets of instructions, the assembler language you use must be particular to the processor platform you intend to run your program on. For instance, if you are targeting a processor that uses the x86 instruction set (which includes Intel and AMD processors), then you would use an x86 assembler.

So why write assembly? On the downside, writing your code one processor instruction at a time is far more tedious than writing the functionally equivalent code in a high-level language. Moreover, assembly language can’t protect you from even the most basic errors and allows you to do dangerous things like trying to read memory that doesn’t belong to your program (something which the OS and the processor conspire to stop your program from doing by halting your program when it tries to do such things). So not only is programming assembly like using tweezers to move a hill of sand, the tweezers are slippery and sharp. Producing complex, reasonably bug-free programs entirely in assembly is very hard and generally just hasn’t been done since the late ’80s.

On the upside, the exact control provided by assembly allows for optimizations simply not possible in high-level languages. While compilers and interpreters have gotten quite smart, they very, very rarely, if ever, produce the fastest possible code, leaving room for a human to do better. Again, writing a program entirely in assembly is simply too impractical given the size of most modern programs; however, if a key portion of your code is a bottleneck, it might be beneficial to rewrite that piece of code in assembly and then invoke it from your high-level language code.

Assembly retains one other important role. Some important processor instructions will never appear in the compiled output of a high-level language, so it is left to assembly code to provide access to those instructions. For instance, on most processors, system calls can only be invoked using a particular instruction, but there’s nothing you can write in C code which will make the C compiler spit out that instruction—it’s simply something (consciously) missing from the semantics of the language; therefore, to make a system call in C, a piece of assembly code that uses the system-call instruction is written such that, once assembled, it can be invoked from your C code. For example, when you open a file in C with the C standard library’s ‘fopen’ function, depending upon your implementation of C, that function either calls a function written in assembly or is itself written in assembly, and that assembly function contains the instruction to invoke the system call that opens a file.

(A ‘system call’ is a function provided by the operating system that can’t be invoked like a normal function because it exists in the operating system’s protected memory space; the OS and processor conspire to protect this memory space from direct access by ordinary programs because otherwise ordinary programs could bring down the whole system out of incompetence or do malevolent things like read files they aren’t supposed to be able to access. So processors typically provide a system-call-invoking instruction which allows ordinary programs to invoke code at specific, OS-defined addresses in the OS’s protected memory space. By allowing the execution of ordinary programs to enter this memory area only at these points, the OS can prevent any funny business.)

Assemblers used to be a much bigger deal back in the DOS days when most programmers worked in assembly, but those days are gone. Today, assembly work is rarely done except by developers of operating systems and device drivers, and whereas there used to be many assemblers for Intel-compatible processors, today there are only a few real options (on the upside, they are all now free downloads):

  • MASM (Microsoft Macro Assembler)
  • GAS (GNU Assembler)
  • FASM (Flat Assembler)
  • NASM (Netwide Assembler)

Aside from these options, some C compilers feature mechanisms to embed assembly code amongst the C code. For instance, the C compiler in the GCC (GNU Compiler Collection) allows you to embed GAS assembly code using a special directive. (Understand, this and similar mechanisms in other C and C++ compilers are not official parts of either the C or C++ languages.)

Now, whereas high-level languages, such as Java, C, or C++, are typically highly standardized, the assembler languages for a particular processor may diverge significantly in syntax, e.g. while most assemblers on the x86 platform tend to follow the syntax established by Intel in its processor manuals (with the notable exception of GAS), they still have many sizable differences.

A high-level assembler is an assembler with some high-level-language-like conveniences thrown in. MASM arguably fits into this category, but the best example is certainly HLA (High Level Assembly), an assembler language originally conceived as a teaching tool.

Compilers

A compiler is a program which translates high-level language code—called the source—into some other form (usually processor instructions)—called the target. Whereas assemblers do basically a verbatim, one-to-one translation—like a translation from English to Pig-Latin—compilers typically have a considerably more sophisticated task—more like a translation from English to Latin. So whereas the whole point of assembly generally is that the programmer controls the exact sequence of instructions, compilers only guarantee that the code they spit out is functionally equivalent to the semantics expressed in the source. Moreover, compilers generally attempt to optimize the code they produce, making the end result correspond even less directly to the source.

Just as assemblers are particular to the precise assembly syntax they can translate, compilers are specific to the high-level language(s) they can translate, i.e. a compiler for the C language can translate C code but not Pascal code. Also like assemblers, compilers are particular to the processor platform(s) which they can target (except some compilers don’t spit out processor instructions at all but rather some kind of ‘intermediate code’, as I’ll discuss later).

Consider the case of the C language. As with assembly, there used to be a wide variety of C compilers back in the ’80s and ’90s, but today the market has sorted itself out, and there are only a few notable C compilers. The two most important are:

  • GCC (GNU Compiler Collection): Originally called the GNU C Compiler, GCC now supports many languages other than C and C++. GCC can target dozens of processor platforms, including all the most popular ones.
  • Microsoft Visual C++: Despite the name, Visual C++ supports C as well as C++. Visual C++ only targets the Intel-compatible platforms: x86, x64, and Itanium. (Technically, ‘Visual C++’ is actually the name of Microsoft’s IDE (Integrated Development Environment), but there isn’t a more commonly used name for Microsoft’s C or C++ compilers.)

Linkers

The source code of all but the smallest programs is spread across multiple files, and in most languages, these files are treated as separate ‘compilation units’, i.e. they are compiled independently of each other. When a compiler produces processor instructions, the resulting code is called ‘object code’, and the resulting files are called ‘object files’. While some operating systems, including Unix systems, will allow an object file to be run as a program (i.e. it will happily load the file and begin execution of its instructions), this is of limited use because, to make a complete program, the object files need to be ‘linked’ together:

In a program, the code in one source file makes a reference to code in other files and/or is referenced by code in other files: a program is a web of source files which make external references to each other, and so the source files depend upon each other. (If a source file does not reference other files and itself does not get referenced by other files, then it can’t have any effect on or be affected by the rest of the code, so it can’t be said to be a part of the same program.) Still, each source file is compiled separately, meaning that, when processing one source file, the compiler has no knowledge of the files referenced by the source code; consequently, when the compiler encounters an external reference in the source code, all it can do is leave a ‘stub’ in the object code allowing the connection to be patched later. Patching together the external reference stubs of one object file to another is precisely the job of a linker. It is the linker that takes many object files and produces from them an executable file (e.g. an .exe file on Windows).

Interpreters

Whereas assemblers and compilers translate code into other forms of code, an interpreter is a program that translates code into action, i.e. an interpreter reads code and does what it says, right then and there. If you intend your program to be run via an interpreter, then every user must have both your program and the interpreter to run it, and your program is then started by starting the interpreter and telling it to run your program. (This may sound unfriendly to naive users, but the installation and starting of the interpreter can be disguised from users such that they install and run your program like any other.)
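To make ‘translates code into action’ concrete, here is a toy interpreter, written in Python, for a made-up two-command language (purely illustrative and not modelled on any real interpreter):

    # A toy interpreter: it reads each line of the source and acts on it
    # immediately, rather than translating it into another form of code.
    #   say <text>         -- print the text
    #   repeat <n> <text>  -- print the text n times

    def interpret(source):
        for line in source.splitlines():
            parts = line.split(None, 1)
            if not parts:
                continue                          # skip blank lines
            command = parts[0]
            rest = parts[1] if len(parts) > 1 else ''
            if command == 'say':
                print(rest)
            elif command == 'repeat':
                count, text = rest.split(None, 1)
                for _ in range(int(count)):
                    print(text)
            else:
                raise ValueError('unknown command: ' + command)

    program = 'say hello\nrepeat 2 world'
    interpret(program)    # prints: hello, world, world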

Because interpretation happens every time you run the program as you run it, interpretation introduces a significant performance overhead. This cost can be mitigated using what I call the ‘hybrid model’. First, the source code is compiled into some intermediate form (i.e. code which is more like processor instructions than high-level code but which is not executable by the processor), and then, to run the program, an interpreter executes this intermediate code. (In this model, the linking of the compilation units is typically done by the interpreter every time the program is run.)
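Python itself is a familiar example of the hybrid model: the source is first compiled into bytecode (the intermediate code stored in .pyc files), and the Python virtual machine then executes that bytecode. You can peek at the intermediate code with the standard library (the function below is just a throwaway example):

    import dis

    def add(a, b):
        return a + b

    # dis.dis() prints the bytecode -- the intermediate code that the
    # interpreter actually executes when add() is called.
    dis.dis(add)

    # py_compile.compile('mymodule.py') would write such bytecode out to a
    # .pyc file for a hypothetical mymodule.py, which is what gets loaded
    # and run on later invocations of the program.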

A further refinement of the hybrid model is to use a JIT (just-in-time) compiler. You use a JIT compiler as you would an interpreter—you run your program by feeding the JIT compiler some form of code (usually intermediate code)—but the JIT compiler compiles the code into processor instructions and runs those instead of interpreting the code. Despite the time spent on this compilation (typically reflected in a longer program load time), JIT compiling is usually considerably faster than interpretation: using a JIT compiler with the hybrid model is typically only 10-20% less performant than if the code were ‘natively compiled’ (compiled into an executable and run as such), compared to 70-100% slower for interpreting intermediate code. [The term ‘performant’ is used by programmers to mean ‘fast performing’ or ‘acceptably performing’, but you won’t find it in any dictionary—yet.] Some claim that, in a few cases, a sufficiently smart JIT compiler can run code faster than the same program compiled into an executable because the JIT compiler can make optimizations only discoverable at runtime. (The comparative performance of JIT compiling versus native compiling is a hotly debated topic. While most concede native compilation almost always produces better performance, it’s debated how much of a performance hit JIT compiling introduces.)

Understand that, whether using the hybrid model or not, an interpreted program is limited by its interpreter. Just as programs executed by the OS can only do what the OS allows them to do, interpreted programs can only do what their interpreter allows them to do. This has potential security benefits: as the theory goes, users can download programs and run them in an interpreter without having to trust those programs because the interpreter can block its programs from accessing files on the system and/or using the network connection, etc. In such schemes, the interpreter is often called a VM (virtual machine) because, as far as the programs which it runs are concerned, it looks and acts much like a full computer system. In practice, truly secure virtual machines aren’t quite a reality, for real VM’s have bugs which malicious programs they run can exploit to breach the limitations imposed by the VM; consequently, users should still be careful of which programs they download and run, even if the program is run in a VM.

Another often-cited benefit of interpretation is that, as long as an appropriate interpreter for your language exists on all the platforms you wish to run your program on, you only need to write the program once. This is often called ‘write once, run anywhere’. This argument made a bit more sense when computers were slower and so compilation took considerably longer, making compiling your program for all target platforms a bit more bothersome, but aversion to this inconvenience doesn’t really explain why interpreted programs are considered so much more portable. The real reason writing your program for an interpreted environment makes it generally easier to get it working on multiple platforms is that the interpreter acts as a layer of indirection between your program and the OS, so the interpreter can handle the messy particulars of dealing with variances between OS’s, e.g. the process of opening a file often differs from one OS to the other, but your program only has to tell the interpreter to open a file, and the interpreter in turn deals with the particulars of the OS.
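In Python, for example, the following lines work unchanged on Windows, Mac OS X, and Linux, because the interpreter and its standard library make the appropriate OS calls and even pick the right path separator (the file name is just an example):

    import os

    # os.path.join picks '/' or '\' as the host OS requires.
    path = os.path.join(os.getcwd(), 'notes.txt')

    f = open(path, 'w')     # the interpreter issues the OS-specific call to open a file
    f.write('hello\n')
    f.close()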

The portability advantage of interpretation holds out as long as your program uses functionality that is available and works consistently on all of your target platforms. A notorious problem area is GUI’s (Graphical User Interfaces): many GUI widgets (windows, menus, scrollbars, drop-down menus, etc.) simply don’t look and act the same on Windows, Macs, and Linux desktops. Attempts to provide a cross-platform means of writing GUI code have to date only been partially successful.

In principle, any language can be either interpreted or compiled, but in practice, languages are designed with a particular model in mind. For instance, were you to interpret C language code, you would defeat the purposes of using C in the first place (mainly performance and greater machine control), and so this just isn’t done (though I bet someone somewhere has done it—someone somewhere has done everything, no matter how strange or daft). Another language, Java, was conceived and implemented to use the hybrid model; ‘native compilers’ (compilers that spit out processor instructions) for Java exist, but aren’t used very often because the performance benefits generally aren’t significant enough to be worth the downsides.

Thus endeth the lesson.

Singletons considered harmful

4 Jul

Alex Miller, Steve Yegge, and this poster explain.

Among the reasons given:

  • Singletons are most commonly used as excuses to have global variables and functions.
  • As Steve puts it, “using the Singleton is usually just a sign of premature optimization…”.
  • Singletons make it difficult when you later decide you actually need more than one instance of that type, or a subtype of it. (A short example of this follows.)
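As a contrived Python illustration of that last point (the names are invented): code written against a singleton is welded to exactly one instance, whereas passing the object in costs one parameter and leaves room for a second instance, a subtype, or a test double later.

    class Config(object):
        _instance = None                      # the singleton machinery

        def __init__(self, language='en'):
            self.language = language

        @classmethod
        def instance(cls):
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance

    def greeting_via_singleton():
        # Every caller is tied to the one global instance.
        return 'hello' if Config.instance().language == 'en' else 'bonjour'

    def greeting(config):
        # Same one-liner, but the dependency is passed in, so a second
        # Config (or a subclass, or a fake for testing) can be used later.
        return 'hello' if config.language == 'en' else 'bonjour'

    print(greeting_via_singleton())    # always the single global Config
    print(greeting(Config('fr')))      # a second configuration, no rewrite needed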