Other people’s choices matter

13 Feb

PC vs. Mac, XBOX vs. Playstation, iPhone vs. Android, my programming language vs. your programming language—these are not idle disputes. Contrary to popular moralizing, you’re perfectly justified in complaining about what other people in the market choose, because what they choose affects you. No market is infinite, so without a critical mass of consumers who share your preferences, your preferences may not get met.

For example, an ever-growing tide of Apple-fawning consumers may at some point ruin things for the rest of us as the Apple model of locking everything down gets duplicated by its competitors. Who knows what an Apple-dominated world of computing then looks like: does the price-performance ratio improve as steadily as it has in the PC era? Does the cost of assembling a PC from parts go up? Does the ability remain at all?

If it were a good idea, it would exist already

8 Feb

Re my previous post, I should acknowledge that ‘Lisp without parens’ is a very old idea. Old enough that any time it comes around again, long-time Lispers leap out from their parenthesis-girded fortresses to ridicule the idea. This raises a good question: if Lisp without parens is a good idea, why hasn’t it become a reality? I have three explanations:

  1. The Lisp-without-parens solutions of the past made the fatal mistake of trying to infuse Lisp with infix notation. See, for example, Dylan. This is just a bad idea, as it solves the too-many-parens problem but complicates (at best) Lisp’s homoiconicity, making macros much harder to write and thereby defeating Lisp’s one remaining unique feature.
  2. Indentation-sensitive syntax was an old idea before Python, but before Python took off, everyone ‘knew’ it was a bad idea. (And in fact, some still insist that indentation-sensitive syntax doesn’t work.) And it wasn’t until Python was well established that a few people began to suggest using indentation to leave Lisp parens inferred but keeping the S-expression structure intact. So the idea of Lisp-without-parens is maybe 50 years old, but the idea of Lisp-without-parens-but-keeping-S-expressions is less than a decade old. As the Python example illustrates, sometimes good ideas just take time and a few failed starts to become reality.
  3. A more general problem is that having one good idea often isn’t enough: existing technologies and their accompanying ecosystems have a lot of inertia, and the current set of users will resist the pain changes bring as long as the benefits are unclear or seemingly minor. The applicable lesson from this is that the first successful Lisp that gets rid of parentheses will most likely include other compelling features.
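To make point 2 concrete, here’s a rough sketch of what ‘parens inferred from indentation’ can look like while keeping the S-expression tree intact. This is hypothetical surface syntax in the spirit of later proposals like SRFI 110’s ‘sweet expressions’—it is not Animvs itself, and real proposals differ on details like how a lone atom on its own line is read:

```lisp
;; Conventional S-expressions:
(define (fact n)
  (if (= n 0)
      1
      (* n (fact (- n 1)))))

;; The same tree with the outer parens inferred from indentation:
;; each line becomes a list whose children are the lines indented under it.
define (fact n)
  if (= n 0)
    1
    * n (fact (- n 1))
```

The point is that nothing about the underlying structure changes—macros still see the same tree—only the surface notation for the outermost nesting does.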

I’ll submit that Animvs avoids these problems. It cleans up the parens and indentation style, but keeps the syntax homoiconic and reductively simple (simpler, in fact, than any existing Lisp, what with their hacky reader macros polluting the nice clean symbols). It also introduces new ideas other than just a new syntax.

Animvs: Lisp for people who don’t like Lisp

7 Feb

I made this video several months ago but just uploaded it today. It describes Animvs, my programming-language project, which I’ve kept on the back burner for the last five years. The selling points are, briefly:

  • based on the semantics of Clojure, but allows mutability and immutability to live side-by-side
  • also allows dynamic and static code to live side-by-side
  • features clean syntax with the virtues of traditional S-expressions but without the ugly flood of parens
  • designed with editing environment conveniences in mind, including some usually only found in traditional static languages, e.g. name refactoring like that found in Java IDEs

The video doesn’t get into the mutable/immutable or dynamic/static business, but it does give you a good taste of the syntax (which is far more important than many give it credit for).

What is fun?

6 Feb

The definition of fun often gets hung up on the distinction between fun and entertainment. Some insist that fun requires interaction rather than just passive consumption, e.g. games are fun, but movies are entertainment. This, however, is clearly prescriptivist semantics, for plenty of passive entertainments have been called “fun”. If there is any real distinction between “fun” and “entertainment”, it’s a matter of degree: both are synonyms for ‘pleasurable engagement’1, and fun is simply a more intense, attention-consuming form. While fun has unique connotations with games and child-like play, these follow from the fact that games generally require greater attention than other entertainments and the fact that children tend to invest themselves more easily in their experiences.

Now, ‘pleasurable engagement’ comprises a very broad swath of experiences, so fun is a multi-varied thing: there is no one type of experience that defines “fun”. Off the top of my head, here’s every category of experience that may qualify:

  • excitement: facing danger, going fast, punching, LOUD NOISES!
  • challenge: overcoming obstacles
  • competition: challenges against other people
  • exploration: discovering an environment
  • accomplishment: a sense of progress
  • socialization: idle chat, social climbing
  • narrative: being told a story by any means (visual, auditory, textual, etc.)
  • informative: learning things of interest, like from a documentary or a lecture
  • aesthetic: pretty pictures, music, etc.
  • auditory and visual stimulation: we often enjoy external stimulation for its own sake, even if random and garish, e.g. the flashing lights and noises of a casino
  • olfactory, gustatory, tactile, sexual stimulation: pleasure from smells, tastes, touch
  • humor: anything amusing or that makes you LOL
  • physical manipulation: affecting the physical world gives us a satisfying sense of control; even virtual action in a simulated world can give us this feeling
  • motor skill action: exercising hand-eye coordination
  • physicality: exercising our full body and/or subjecting it to straining conditions
  • creation: playing with our own ideas
  • vicarious experience: living out experiences we normally don’t have
  • logic, reasoning: puzzles, riddles, strategy, etc.

Most games push buttons in several of these categories rather than just one. Also note that some of these types of pleasurable engagement alone rarely rise to the level of fun. A good lecture, for example, may often be entertainingly informative, but hardly ever fun.

Some people would argue that the definition of fun I’ve just given is circular, that the real question is not ‘what experiences are fun’ but rather ‘what essential quality do fun experiences have in common’. Again, though, my argument is that there isn’t any such common essence. While psychology and neurology may identify some common mental state behind all these experiences, we won’t find some deeper common quality in the experiences themselves. The things we find pleasurably engaging are diverse, end of story.

Another way to define fun is by what it lacks: we can distinguish fun from non-fun experiences by the absence of unpleasant qualities. The most notable of these ‘anti-fun’ qualities are:

  • tedium
  • pain
  • frustration
  • pressure of consequence2

In this version of the story, the hard-wired human impulse to seek out stimulation is driven mainly by an aversion to these unpleasant states of mind. Basically, we get bored, but most of us won’t drive nails into our hands as relief.

A question that immediately follows, now, is whether games are defined by any particular type or types of fun (or aversion to anti-fun). Are there certain boxes every game must check to be sufficiently ‘gamey’? The classic example of insufficiently gamey games are those that really want to be movies, like the FMV (Full-Motion Video) games of the ’90s. The real sin these games committed, though, was not in delivering the wrong kinds of fun but in failing to deliver much fun of any kind. They failed as “interactive stories” because they were barely interactive and were terrible stories (both in conception and execution). Similarly, the cutscenes in most of today’s games grate, not because they’re the wrong kinds of fun but because they’re generally presenting bad stories, poorly written and sloppily directed.3 So the real problem here is that good passively-consumed entertainment is extremely hard to deliver: a game that attempts to be entertaining in the ways movies and TV try to be entertaining probably fails because good movies and TV are by themselves very hard to make. Moreover, even when a game’s canned narrative content is done well, the game can easily frustrate players by thwarting their expectations: just as I probably wouldn’t be happy with a movie consisting mostly of text on the screen, I won’t be happy sitting controller in hand, waiting for overly-long cutscenes to end. So really, a designer needn’t worry about whether their game delivers the right types of fun to qualify as a “game”. That’s an academic question. The important question is whether the game actually delivers the fun it means to, keeping in mind that this success can hinge upon the player’s expectations.

Another common question that arises is whether games are more than just fun-delivery mechanisms. In recent decades, academics have advanced many theories of games concerning their social function, their role in personal development, and their status as cultural artifacts. While all these ideas may be valid and interesting to think about, I don’t think they’re of much use to any practicing game designer. A proper theory of fun, in contrast, is useful, even though it isn’t a secret formula that auto-generates good gameplay. What the theory tells us is, first, what constitutes a game (as far as game designers should be concerned):

Different people favor different kinds of fun. All kinds of fun are valid, though not all necessarily marketable or easy to create. So don’t worry about what a “game” is supposed to be, but do worry about finding an audience with the right expectations.

More significantly, the theory helps us properly analyze games and break down what makes them work (or not work). This exercise yields loads of useful information and allows designers with conflicting values to, at the very least, articulate their differences, e.g. Designer A wants an RPG focused on exploration while Designer B wants an RPG focused on the thrill and action of combat. Moreover, applying this analysis to the best games out there reveals an important pattern: any one type of pleasurable engagement loses its impact if not properly paced, so good games hinge upon the proper mixing and pacing of different modes of fun.

(As for what constitutes “proper pacing”, that’s a whole other post.)

  1. Pleasurable, of course, being an important qualifier. I’m sure torture is quite engaging but neither fun nor entertaining. []
  2. The pressure of consequence is interesting because we sometimes find it pleasurable, e.g. the thrill of gambling. []
  3. Today’s cutscenes certainly look flashy, but the value of flashy imagery has radically diminished in the last decade due to market glut. Neat visuals were a novel, exciting treat up until about the year 2005. Today, not so much. []

“exclusive province” vs. “exclusive provenance”

20 Jan

When wondering which is proper, I found this obnoxious answer:

“Exclusive province” not “provenance.” People shouldn’t use words without knowing what they mean.

I’ll take the answer as given, but how the hell does this person think people learn most of their vocabulary, idiomatic expressions and all?

Besides, while “exclusive province” seemed more right to me, “exclusive provenance” isn’t that much less logical and has the virtue of being less metaphorical: most uses of “exclusive province” analogize land area to an abstract domain of ideas. “Provenance”, on the other hand, applies without metaphor, as readily to ideas and culture as to tangible things.

Where’s the fire?

19 Jan

Too often, opponents of SOPA, PIPA, and their like concede that ‘piracy is a real problem’. While it’s intuitively obvious how illicit copying could be a problem, it’s equally and empirically obvious that illicit copying has had no serious effect on the production and availability of content. The new distribution models of the internet are almost certainly responsible for boosting production far more than illicit copying is responsible for depressing it. Maybe the copying has cost content industries non-trivial money, but until the public actually sees a real reduction in output, why are the rest of us supposed to care? The current levels of illicit copying seem perfectly benign or even beneficial, so why subject the internet to chemotherapy?

Typing: which is the one true faith?

26 Dec

Dynamic typing can be attributed three main virtues over static typing:

  • flexibility: A single function can vary the type of its returned value, and collections can hold heterogeneous values. Consequently, dynamic typing often lets us get away without having to think too far ahead.
  • concision: Functions, variables, and collections needn’t declare their types, and interfaces can be kept informal. (Inferred typing arguably closes this concision gap significantly.)
  • simplicity: Heterogeneous collections mean we don’t need to introduce generics.
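To illustrate the first and third virtues with a minimal Python sketch (all names here are my own invention for illustration): a single function can vary its return type with its input, and a plain list can hold heterogeneous values with no generics machinery in sight.

```python
# Flexibility in a dynamic language: one function, varying return types.
def parse(token):
    """Return an int when the token is all digits, otherwise the string itself."""
    return int(token) if token.isdigit() else token

# Simplicity: a heterogeneous collection needs no generic type parameters.
mixed = [parse("42"), parse("hello"), 3.14, None]  # int, str, float, NoneType
```

In a statically typed language without union types, `parse` would need either two functions or a wrapper type, and `mixed` would need a common supertype.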

Conversely, static typing can be attributed two main virtues over dynamic typing:

  • efficiency: When the compiler knows the variable types for certain, it can make numerous optimizations that otherwise aren’t possible.
  • correctness: The compiler can perform type checks, effectively eliminating a whole class of bugs. (However, any remaining errant null references and incorrect ‘down casts’ arguably constitute type errors, so not all static languages eliminate type errors entirely.)

The interesting question to me is, ‘when and why do programmers actually make type errors?’ In my limited experience, I’ve worked on long-term projects in dynamic languages and hardly ever made any type errors. In a 10k-line Javascript project, for example, I bet I made fewer type errors than I can count on one hand.1 This puzzled me for a while: why the hell do so many people obsess over type errors? After all, a type error is only one kind of bug among many and, in my experience, not a terribly common kind. All the programmer has to do is consult the documentation of the functions and objects they use to avoid making type errors. Right?

Well a little more experience explained this mystery: type errors become easy to make when dealing with ‘alternate-form types’. Such types enter the picture in a few ways:

  • numeric types: int, short, double, float, decimal, complex, etc. These are all numbers and so easy to confuse.
  • strings of different encodings: Again, these are all representations of essentially the same thing and so easy to confuse.
  • strings representing non-textual data: Do we need a number or a string of numeric digits? A boolean or a string reading ‘true’ or ‘false’? A code object or a string of code? A Foo object or its string representation?
  • numbers used for enumerations: Do we need a 0 or false? Does this function expect us to indicate the color blue with ‘blue’, 3, or COLOR.BLUE?
  • wrappers and collections: Do we pass in a Foo object or a FooWrapper object? An int or an Integer? A Bar object or a collection of Bar objects? The row object returned by the ORM or the business object representing the same data?

Without alternate-form types, type errors would almost never occur, for why would anyone ever accidentally mistake an Elephant for a Motorcycle? Mistaking a Scooter for a Motorcycle, on the other hand, is not so hard to imagine.
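A tiny Python sketch of the third category above (the variable names are mine, for illustration): a number and a string of digits represent the same thing in two forms, which is exactly where confusion creeps in.

```python
# An alternate-form type error: digits-as-string vs. an actual number.
count = "3"   # e.g. fresh from a form field or a config file
total = 3

try:
    result = count + total        # the confusion: str + int
except TypeError:
    result = int(count) + total   # the intended arithmetic
```

Python at least fails loudly here; in Javascript the same confusion passes silently, since `"3" + 3` quietly yields the string `"33"`.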

The important takeaway of my experience is this: the fact that alternate-form types arise less commonly in higher-level domains (such as Javascript for a webpage) partly explains why dynamic typing is generally more favored in front-end coding than in back-end and ‘engine’ coding. What seemed like a non-issue to me in one domain became a constant concern when working in another.

Typing peace in our time?

Now, if front-end and back-end code always lived in neat separate boxes, choosing between static and dynamic code wouldn’t be hard: we could use dynamic languages for front-ends and static languages for back-ends. Most projects, though, straddle the line between the two worlds, for a front-end, of course, must ultimately call into a back-end. Often this is done over the network such that the dynamic/static divide doesn’t really matter, but in other cases, we want to invoke a back-end as a library or framework, requiring some bridge. For example, Python can use modules written in C, but only with some significant adaptation.

Could we solve this problem? Could a single language accommodate both static and dynamic code that interoperate without hassle? I believe such a language is possible, and all it would require is for the programmer to declare each function/method as either static or dynamic:

  • In a dynamic function, type declarations would be optional, such that you might leave some or all types undeclared.
  • In a static function, all types must be declared (except those which can be inferred).
  • A dynamic function can invoke static functions with no hassle, though the runtime of course would have to perform a type check on the inputs. In some cases, type errors could be caught by the compiler in the context of a dynamic function, e.g. the return type of an invoked static function is known, so we might detect the improper use of the return value as argument to another invocation of a static function. On the whole, though, no guarantee is made about the type correctness of code in a dynamic function.
  • A static function can invoke dynamic functions but must declare the expected return type (where it cannot be inferred by context) so that the type can be checked at runtime. Of course, while the runtime check preserves the type correctness of the rest of the code, invoking a dynamic function effectively introduces potential for a type error within the context of the static function, in a sense nullifying the type assurance we’re aiming for with static typing (though of course null references and down-casts, if present in the language, already undermine the type safety of our static code.)
  • Homogeneous collections and generic types would retain their typing in dynamic code, e.g. an ArrayList<Foo> would throw an error if you attempt to append a non-Foo object.

The other necessary measure for mixing static and dynamic code is introducing a distinction between static and dynamic classes, for it wouldn’t do for static code to access properties that might get deleted or change their type. Dynamic code would interoperate with static types freely, but static code would have to assert types to use dynamic types.
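Python’s optional annotations already approximate this split, so here is a minimal sketch of the fourth rule above in that idiom (all function names are hypothetical): an unannotated function plays the ‘dynamic’ role, a fully annotated one plays the ‘static’ role under a checker such as mypy, and an explicit runtime check stands in for the declared-and-checked return type at the boundary.

```python
def fetch_setting(key):  # 'dynamic': no annotations, returns whatever the store holds
    store = {"retries": 3, "host": "localhost"}
    return store.get(key)

def checked(value, expected: type):
    """Runtime type assertion at the dynamic-to-static boundary."""
    if not isinstance(value, expected):
        raise TypeError(f"expected {expected.__name__}, got {type(value).__name__}")
    return value

def total_attempts() -> int:  # 'static': fully annotated
    # Declare the expected type of the dynamic call's result and check it at
    # runtime, preserving type correctness for the rest of this function body.
    retries: int = checked(fetch_setting("retries"), int)
    return retries + 1
```

The check either hands back a value of the declared type or fails immediately at the boundary, which is the best a static context can do when calling into dynamic code.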

So the general pattern would be that using static functions and types in a dynamic context would be pain free, but using dynamic functions and types in a static context would require a bit of bother. This cost likely isn’t painful at all: invoking static code from dynamic code is the much more useful case, for front-ends generally call into back-ends rather than the other way around. Programmers would start out writing a project in dynamic code but then gradually evolve their codebase, in whole or in part, into static code.

It can’t be that easy

Surely if the solution were so simple, someone would have done this by now, right? Well, not necessarily. One explanation is that static vs. dynamic is one of those religious debates in programming, and language designers are certainly opinionated, for why else would they create a language? Static typing purists create static languages because they believe efficiency and type safety shouldn’t be compromised, and dynamic typing purists create dynamic languages because they believe static type systems are overly troublesome and complicated. On the static side, especially, most energy seems to go into fixing type system problems by doing static typing ‘the right way’ (see Scala and Haskell).

But the main reason no one has integrated dynamic and static typing into one language is that, until fairly recently, no one could see the point. Until the rise of Javascript, Python, et al., dynamic languages were used almost exclusively for small codebases of high-level code, meaning the alternate-form types problem never became pronounced. Now, however, we’re pushing dynamic languages into domains of larger codebases and infrastructure code, which suggests the need for a dynamic/static hybrid.

I’m certainly not the only person to get this idea. A future version of ECMAScript, for example, may feature optional static typing (though this is very much up in the air). Also, the academic project StaDyn approaches this problem from the direction opposite to my solution, treating static code as the default with optional dynamism. I haven’t looked closely enough at this to form an opinion, but it looks interesting.

  1. This excludes errors from mistyped property names, a mistake I made multiple times a day. I don’t consider these to be type errors, though, but rather ‘name errors’, and as I’ll discuss in another post, static typing isn’t needed to eliminate name errors. []

“A later digression about the characters in a 1988 video game called Lee Trevino’s Fighting Golf is hilariously even less relevant.”

17 Dec

Mark Schmitt gives Third-Way Savior fantasies the generous shit-kicking they deserve. (And yes, Lee Trevino comes up at one point.)

Newt the intellectual

13 Dec


I have to confess that I myself, being of a presumptuous nature, have a proclivity to “‘fundamentally [transform]’ everything as a first step to doing anything.” I like to think, though, that I have an awareness about it. Rather than castigate those who dismiss ‘big ideas’ as small-minded fools, I try to always acknowledge the complexity of real-world problems. Newt? Not so much.

Elections should matter

11 Dec

Given that my recent essay post is so long, I thought it a good idea to distill it into a workable political message, to sloganize it as best I can. Here goes:

  1. The political problems in the United States stem from insufficient democracy, not too much democracy (as many commentators of late have claimed).
  2. In a more democratic United States, elections would actually decide things. If the voters vote for it—conservative or liberal—that shit should actually happen.
  3. Fixing this, changing our politics, requires actually changing our politics—actually modifying the processes of our elections and lawmaking.