Archive | Learn Programming

A beginner’s first programming language

27 May

I’ve finally put together and posted the video of part 1 of my introduction to programming. This first part presents a simple programming language in about 70 minutes.

UPDATE: I’ve also now added the second part, which covers representing numbers as bits.

The remaining eight or so parts will have to wait until I devise a better process for turning my slides and narration into video.

The naturalistic (language) fallacy revisited

5 Jan

The comments to this y.combinator item add to what I said here. In particular, commenter apinstein writes:

I used to do a lot of AppleScript programming. When I initially learned about the “natural” syntax I thought “this is gonna be so easy!” But ultimately it works against you.

Computers are very precise beasts, and they need to know exactly what you want them to do. The looser the “syntax” gets, the more guesses the compiler has to make to come up with a set of precise instructions.

What I initially thought would be easy and liberating turned out to be a total PITA. AppleScript programming is horrible. Ultimately there is an underlying syntax, but it’s harder to remember because it’s less consistent (ie “natural”). I had to spend way too much time trying to understand what goofy “natural” grammar I had to use to get it to do what I wanted.

Even if you assume an “ideal AI,” I still don’t think that a natural language syntax is a good idea, since language itself has a lack of specificity that requires even an “ideal AI” to make guesses that could be logically wrong.

Technical writing is hard

16 Jun

Having spent a lot of time in the last 3 years writing programming education material, I recognize just about everything said here. (The section “Writing clearly” is the interesting one.)

Teach the (other) controversy

31 May

My programming education began when I took a C language course at the local community college. I can still recall how strange I found the language’s rules about when I could and couldn’t use a variable (e.g. variables declared in one function can’t be read or modified in others), for it seemed to me this made writing programs far harder than it needed to be. Combine this confusion with the syntactical cruft of C and the fact that I took my instructor’s prohibition against global variables to mean never use globals (something I later learned real-world C programs of non-trivial size don’t actually do), and the result was that I ended up totally paralyzed, baffled as to how programmers ever got anything to work. For these and a few other reasons, I basically abandoned programming entirely before taking it up again two years later, this time studying independently from books.

Somehow, some very basic ideas in programming just didn’t click upon my first learning attempt even though I now find these ideas very simple and clear. While my C instructor was mostly competent, he failed to focus on the vital ‘why’. Why does the language make me do this? Why is the syntax like this? Etc. Unfortunately, it’s too easy for learners to give up on ‘why’ because so few sources out there—teachers, books, blog posts—provide clear, accurate, complete answers to the ‘why’ questions (and far too many sources aren’t too hot on the ‘what’s or ‘how’s, either). Why do modern computers use 8-bit bytes? Why do we need to allocate memory? Why are exceptions expensive performance-wise? Many decent, working programmers out there simply have no idea how to answer questions like these. The really bad ones wouldn’t understand the answers or care if you tried educating them.

Recently, I’ve realized that the biggest, most common failing of programming education is the tendency to teach a technical matter as a solution in search of a problem—as a mechanism without a ‘why’. A great example is generics in Java, which are so convoluted that their explanation takes up a good quarter of a full treatment of the Java language. Absorbing all the subtle rules and asymmetries of Java generics typically distracts students from a critical understanding of why generics exist in the language in the first place. I would go so far as to say that the fact that Java got on fine for many years without generics is the first and most important thing a student should know about generics. Only after firmly establishing what perceived problems generics were meant to address should students learn what generics are and how they work, and then it’s critical that this be followed up by exposure to dissenting arguments against generics.

You might assume confronting learners with controversy up front will lead to confusion, but on the contrary it makes for a clearer presentation because it is more honest. Lies, hype, and wishful thinking tend to be incoherent and therefore perhaps impossible to understand for anyone not already versed in the truth. Furthermore, teaching controversy has the benefit of putting students in a critical mindset: if the dominant languages of the day may harbor serious mistakes about which it’s OK to have your own opinion, then perhaps the whole basis of programming is neither set in stone nor out of reach, and perhaps you can one day fully understand computing and even have a hand in directing the course of its future development.

Look, ma, no lesson plan

21 May

Just about everything I described in my talk about what goes wrong in education goes wrong at nearly every step in this 10-minute video. I don’t mean to pick on this guy, but he’s the top Google video result for “python tutorial”, and that makes me sad. Sure, sure, I should forgive a high school student who is very likely just passing on his own miseducation, but his video makes a useful example of how tutorial-based programming education so commonly goes so very wrong.

Video of talk on Pigeon

18 May

Last month at LugRadio Live USA 2008 in San Francisco, I gave a talk discussing programming education and Pigeon, my learner’s programming language. Videos of all the talks at LugRadio Live are going up. Below is my talk, which you can also download. (I occasionally mumble a few key words. Sorry.)

The current status of Pigeon is that I still haven’t bothered to put the finishing touches on it for it to be actually usable, as I’m currently working on the material for students to learn Python after learning Pigeon. Until I give learners some plausible place to go after Pigeon, I figure I can put off finishing up Pigeon itself.

Little things add up

8 Apr

A discussion of various little things about Python 3.0 that make it easier to learn than Python 2.x.

Subtlety hinders grokability

17 Mar

Here are some nice paragraphs recycled from an old crappy post no longer worth reading.

In C, the conceptual and syntactical distinction between definitions and declarations is blurred. This is a prime example of a misguided attempt at conceptual unity in design.

I think what goes on in designers’ heads is that they spend their time juggling many parts around, mentally banging the parts together to see which fit with which and which overlay the others, and then occasionally, in moments of revelation, the designer sees how parts they were thinking of as separate can be neatly overlaid, interlocked, or even dissolved into one, greatly simplifying the design. A small minority of the time, these revelatory moments really pan out just like they seem they will in that initial flash of recognition. However, most of these moments come to nothing when it later turns out that, upon further reflection, the idea doesn’t really make sense or fit consistently with the rest of the design. Other times, refactoring the rest of the design to fit the revelatory idea actually makes the whole more complicated. Other times, the idea can be accommodated just fine, but the gain is just an illusion: the designer, unhappy with some trade-off, finds a solution that seems to dissolve that trade-off, but euphoria blinds the designer to some side-effect introduced by his solution; on any other day, this new problem would displease the designer just as much as the problem he’s just solved, but he just isn’t thinking about it at the moment—maybe he’ll notice in a week or two.

I believe the misguided attempts at conceptual unity in C that displease me so are actually partly examples of success: after all, the conceptual unities of C ‘work’ in the sense that (obviously) they comprise a real working language and the sense that the conceptual unities do in fact reduce the syntax (e.g. pointer and array declaration syntax mirrors the syntax for dereferencing and array indexing). Still, these design choices also exhibit the designer euphoria blindness I described: aspects of the design have been simplified, but only by incurring disregarded costs elsewhere. In this case, the costs are to language transparency: these supposed conceptual unities in C are difficult to convey to outsiders because they really only make sense to people who already understand them. Having read many accounts of the C language, I’ve come to the conclusion that many of the traditional stories and vocabulary which C programmers use to talk about the language to each other simply fail to account for what is really going on in the language. Really, this is an unfortunate fact of any area of expertise: the experts are already cognizant of what’s really going on, so it’s fine for communication amongst themselves if their explanations and vocabulary abridge or misrepresent to untrained ears what they’re actually saying, but these inaccurate utterances are profound barriers to the uninitiated.

if !johnny.canRead() then…

4 Feb

The state of educational programming languages.

Ideally, a proper programming education would start with a thorough discussion of how data is represented as bits, followed by a brief tour of encryption, compression, information theory, data structures, search/sort algorithms, and machine architecture. Unfortunately, students are just too impatient to start their education properly, having approached the topic of computing with the desire to get their computers to do something, preferably something neat, preferably something now. This is where educational languages should come in: students should be able to learn a simplified language in a week or two of lessons with no prerequisite material, thereby at least satisfying their curiosity about what programming is like. Learning a quick-and-painless language should make students more receptive to covering some of the theoretical basis of computing while also preparing students for learning their first ‘real’ language, whether it be C, Java, Python, Scheme, Javascript, Ruby, Haskell, whatever.

In the before time…

Today’s learners of programming typically start out with Javascript, PHP, Python, Ruby, Java, C, C++, or C#. Things used to be different. For much of the ’80s and ’90s, newbies to programming were directed to languages typically thought of as learner’s languages, such as BASIC and Pascal, though these languages weren’t necessarily designed explicitly for education. Why did languages like these fall out of favor as educational tools? Most likely for the same reason they fell out of general use: they simply got old and out of date. Practicing programmers begin to look upon superseded languages like they do old code—as distracting, bothersome clutter. In turn, the neophytes take their cues from practicing programmers and insist they be taught a “real” language, and in the end, it’s just very hard for a language to gain and maintain wide adoption as an educational tool when no one wants to do any real programming in it.

While some may mourn BASIC and Pascal, I think people’s estimation of these languages is blinded by nostalgia. Pascal, in particular, overstayed its welcome in the classroom. By the mid-’90s, you might as well have taught C in place of Pascal, for C had sufficiently matured that it was essentially Pascal with pointers, different syntax, and no silly distinction between “procedures” and functions. Moreover, by 2000, the only* easily available implementation of Pascal for students to run on their own computers was (and is) Free Pascal, which actually supports a much-changed dialect of Pascal, the result of cross-pollination with Delphi. The added complexity was bad enough, but then the Pascal textbooks simply didn’t keep up, thereby compounding Pascal’s fate: if new programmers don’t learn the Pascal in actual use, they won’t continue to use Pascal once they’ve learned it, leaving the language stuck in a death spiral.

(*There are actually Pascal implementations other than Free Pascal still available, but they too all significantly diverge from the classic Pascal of classrooms.)

BASIC suffered from an even worse fracturing problem and was eventually wholly overtaken by Visual Basic, which came to be seen as the modern BASIC even though it bore only superficial resemblance to the BASIC of 1980s nostalgia. Visual Basic saw decent uptake in the classroom, but as Microsoft took VB from 3.0 to 6.0, the language got more and more complex until, finally, today’s VB.NET is an almost entirely pointless syntactical variant of C#, a language just as complex as Java and C++.

What now?

Among the languages in use today, the best candidates for starter languages are the dynamic languages: Javascript, PHP, Ruby, and Python. Javascript and PHP, however, can be quickly discounted:

  1. Both Javascript and PHP lack a real read-eval-print interactive prompt.
  2. Non-programmers are most familiar with programs as things which they install and run on their machine, and so they’re going to feel something is missing if they can’t see how the language you’re teaching them is used to write such programs. Unfortunately, Javascript is stuck in the browser. Even if you teach Rhino, you’ll still have to tell neophytes a very strange story they likely won’t understand about how their Javascript program is executed and how this qualifies as general-purpose programming. Similarly, PHP is so heavily skewed towards its niche of server programming, it’s basically useless for client-side programming. So, with either Javascript or PHP, the instructor must make a Solomonic decision between leaving students in the dark about how their programs run or burdening and distracting students with a confusing back story.

So it comes down to Ruby and Python. Many will say the choice between the two is just a matter of taste, but I feel Ruby’s latent Perl-isms make Python obviously better suited for neophytes. However, despite Python’s exemplary cleanliness, it’s still a quite complicated thing from the neophyte’s perspective. There’s a reason O’Reilly’s Learning Python is 746 pages long, and I just don’t think learners should be confronted in any way with subject matter like:

  • multiple inheritance
  • ‘bound’ vs. ‘unbound’ methods
  • old vs. “new” classes
  • functions within functions
  • list comprehensions
  • generators
  • packages
  • operator overloading
  • exception handling
  • creating custom iterators
  • internal dictionaries
  • a complicated hierarchy of scope with (limited) closure

Of course any sensible introductory Python course will hold off on these topics or avoid them altogether, but simply trying to ignore features of a language has costs, as the complexities and corner cases of a system have a way of intruding into the more pristine areas. For one thing, a student might accidentally write syntax which the compiler interprets as a (successful or unsuccessful) attempt to use a feature the student isn’t even aware exists; the result is mysteriously misbehaving code or a mysterious error message from the compiler. Perhaps worse, learners of a language will not be shielded from features they don’t understand when using their most important learning resource, the internet. Lastly, while Python is certainly a one-way-to-do-it language in comparison to Perl, it still offers conveniences for expert users, conveniences that will only paralyze learners with distracting stylistic decisions.

So perhaps we’d do better starting learners with a language explicitly designed for education. Yes, I did start by saying that the now-dead learner’s languages died because no one wanted to do any actual programming in them, but the main problem there was that Pascal and BASIC grew into complicated languages: they didn’t start life as explicitly learners-only languages and so ended up stuck in a no-man’s land between ‘two-week intro to programming’ and ‘full application-programming tool’. However, there still remains the problem of learners wanting to bypass learning a tool they’re going to immediately drop, so an educational language has a tricky balancing act to perform: the language should be simple enough to be fully learned in one or two lessons so that students don’t complain about learning “all this stuff” they aren’t going to use…but not so simple that the language isn’t enough like a real language to help students understand their first real language…but not too much like a real language that anyone would want to actually use the thing for real work.

Dark horses

Outside the mainstream, we have more educational language options. The most notable among them are:

  • Squeak (Smalltalk)
  • Scheme
  • Haskell
  • Alice
  • Scratch
  • Logo
  • Phogrom

And here’s what’s wrong with these languages:

  • Squeak, Scheme, and Haskell are real languages used by real programmers for real work. This is bad. Real means complex. Annoyingly complex in annoying ways. No matter how elegant in conception these languages may be at their core, their corners lurk with ugly realities, and you can’t assume that learners can’t see the corners or can just ignore them. For the learner, more corner cases means more possibilities to consider, more mental dead ends, more detritus to sift through to establish a clear picture. Sure, you could teach a relatively clean subset of Java, but the subset you teach will be scattered here and there in the 1200-page Java book your student picks up. Sure, you could teach Haskell while ignoring its convoluted syntactical conveniences, but the compiler’s warning messages won’t be so friendly to your student. Sure, you could stick to just one dialect of Scheme, but Dr. Scheme will confuse your student with a list of umpteen different dialects from which to choose. For learners, these 1200-page books, cryptic error messages, and fractured sets of dialects are frightening and discouraging.
  • Squeak has a problematic relation with the operating system: Smalltalk has never reconciled itself to the OS, stubbornly refusing to let the programmer create programs that take advantage of the OS’s already existing mechanisms for interfacing with users: the filesystem, the console, and (proper) windowed applications. Whatever the merits of this kind of programming, like with Javascript, Smalltalk presents a very confusing situation to learners, most of whom are hoping one day to write Doom 8; Doom 8 is not going to be written in Smalltalk, and even the greenest newbie programmer will be able to smell that about Smalltalk. Furthermore, I believe that a learner’s language should partly disregard files and entirely disregard consoles and GUI’s; Smalltalk does that, but it does so by replacing those things with its own complications, complications which end up as just more stuff to sift through in the documentation (see above).
  • Alice and Scratch attempt to sidestep the syntax issue in a novel manner: rather than using text for code, the Alice and Scratch editors have users drag code fragments and fit them together like puzzle pieces: the pieces only fit together in syntactically proper ways, so syntax errors aren’t possible. I actually think this might be a great idea for training students in the syntax and semantics of real languages, such as Java. However, this does mean that the Alice and Scratch dev environments feature a lot more buttons than they otherwise would. More importantly, if these languages’ syntaxes were simple enough in the first place, the benefit of drag-and-drop editing would be negligible.
  • Alice, Scratch, and Phogrom all suffer to one degree or another from the “naturalistic language fallacy”: by attempting to have their code read like English, they end up greatly complicating their languages and obscuring the underlying formality, thereby failing in an important regard to prep students for real languages. There’s a reason most English-speakers struggle to learn English grammar: it’s an awkward attempt to fit formal rules onto a complex, organic system. Doing the opposite—massaging a formal system into English—doesn’t clarify anything, either. (The biggest offender in terms of naturalism is actually AppleScript, a language some misguidedly hold up as appropriate for learners. Like Dylan, AppleScript is an example of Apple favoring apparent simplicity over actual simplicity; such bargains are sometimes worth it, and more often than not Apple finds the right balance, but the result with AppleScript is disastrous.)
  • Explicitly educational languages present themselves as intertwined with a particular API: Logo is the language in which you do turtle graphics; Alice, Scratch, and Phogrom are languages in which you do simple 3D- or sprite-based graphics and sound. Simple graphics-oriented API’s are a great idea, especially for teaching programming to younger students, yet there’s something wrong when the API is presented as indistinct from the language proper. In real languages, everything useful aside from basic arithmetic and logic is punted into libraries, and this is how it should be. First, multi-module programming brings with it the essential concept of namespaces. Second, without the barrier between core and domain-specific functionality, the distinction gets blurred in students’ minds.

Here there be dragons

Some topics should simply be avoided in a learner’s first exposure to programming, whatever the language. These topics include:

1) Inheritance and other static-y object-oriented features

Java and C++ support OOP the way they do for the sake of compile-time checks and efficiency (relative to languages with dynamic object systems, at least). These features introduce a plethora of rules in those languages, the sum complexity of which distracts from the essential ideas in OOP: encapsulation, polymorphism, and inheritance. In a dynamic language, these ideas can be conveyed by convention using regular functions and dictionaries: plain functions are used as constructors and methods; dictionaries are used as the objects; methods take the object as their first parameter; and constructors return a dictionary with the right instance and method members. For example, a method call ends up looking like:

obj.foo(obj, 3)  // call the foo method of obj with an argument 3

This may be more verbose, but it’s explicitly constructed out of already existing mechanisms. Inheritance relationships would be ‘faked’ by simply giving one type all the same methods as another, e.g. A is a subtype of B because it has all the right fields and methods. Duck typing, essentially. A student introduced to OOP as a set of conventions rather than new language mechanisms has a better chance of seeing the point of the whole thing and so is less likely to fall into common newbie traps, such as using inheritance for the sake of using inheritance.

2) Shells

Shells seem to be problematic in programming education. For one thing, every shell tutorial I’ve ever seen outright fails to make clear the distinction between shell code and program invocations in the code. For example:

ls -a > foo

While “> foo” is part of Bash syntax, Bash doesn’t see “ls -a” as anything but a command it doesn’t recognize, so it looks for a program named “ls” in its search path, and it loads and executes ls, passing the string “-a” as an element of the argument array (the argv parameter of main()) of the ls process.

Got that? Well, if you don’t understand this, you don’t understand the first thing about shells. But it’s REALLY FREAKING HARD to start learners off by explaining all this. So we don’t explain it. Rather than really teaching Bash or other shells, we just give students a few example shell commands, wave our hands, and trust that students will eventually figure out what’s really going on. Few students catch on quickly, and many never do, including some that go on to program professionally.
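
To make the division of labor concrete, here is a minimal, hypothetical sketch in Java (the class name and output are invented for illustration): when the shell runs a line like java MyLs -a > foo, the PATH lookup and the redirection are the shell’s business, while the program itself only ever sees its argument strings.

public class MyLs {
    public static void main(String[] args) {
        // the shell handled '> foo'; all this program receives is the string "-a" (in args[0])
        for (String arg : args) {
            System.out.println("got argument: " + arg);
        }
    }
}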

The solution to this situation is to formally cover shell languages as topics in their own right rather than as incidental to instructing students in how to use a compiler. At the very least, then, shells should not be introduced at all until it’s time to give them proper treatment.

(Not only are shells conceptually tricky to explain, the complexity of Bash syntax distracts learners from the essential concepts. The long term solution is to replace Bash as the default shell with something with a proper expression-based syntax—something like Python or Ruby, perhaps—rather than a command-based syntax, even if it means some common tasks would be a bit more verbose to type. Bash is a horribly twisted pile of historical happenstance that is just no longer worth its few minor typing-efficiency advantages.)

3) OS concepts

Key pieces of every language’s standard library deal with operating system matters like files, processes, and threads. Like with shells, these are complicated subjects in their own right: by introducing these OS concepts intertwined with a language, you’re muddling the clear presentation of both.

a = new Hope()

Enough about how to do programming education wrong. What about getting it right? Well, I have a lot of opinions on that front too. So many, in fact, I created my own language, Pigeon. The design philosophy behind Pigeon can be summed up by a few slogans:

  • Get in, get out. Pigeon is designed to be fully learnable in 3-4 hours. Once the student fully understands every rule of Pigeon, they should write trivial programs in it for maybe 4-8 days, and then they should move on to learning a real language (especially Python). Pigeon is simple enough that it would be ridiculous to offer a full term course devoted to it, even at a high school or middle school. (At the very least, you should be embarrassed to attend an institution that offers a programming course with Pigeon in the title.)
  • The explanation is the design. No other consideration is as important as bare simplicity—both syntactical and semantic—and simplicity is best measured by asking the question, ‘How must this be explained?’ The shorter and clearer the explanation, the simpler the language*. Currently, the Pigeon tutorial is just over 20 pages; I don’t foresee it ever growing past 30.

(*Of course, the simplest possible language would be a minimal notation for a Turing machine, e.g. Brainfuck. While that might actually have educational value, the goal here is to set learners on the path to learning real-world languages.)

Pigeon sticks to the features common to almost all modern languages:

  • expressions
  • functions
  • local and global variables
  • branching and looping
  • recursion
  • arrays and associative arrays
  • modularization across files

(Global variables and recursion are not themselves essential at this point in the learner’s education, but they help establish the concept of local scope.)

The Pigeon program is a bare-bones text-editor window with a sub-window for printing standard output. Clicking ‘Run’ translates the code currently in the text-editor window into a Python module and then executes it. This is all done in about 1,000 SLOC using Python 2.5 with wxWidgets.

Currently, the only output accessible in Pigeon is the standard output window, and the only input is a bare bones pop-up dialog. As explained above, the decision to omit file handling is deliberate. However, one thing I would like to add is a simple interface to wxWidgets’s canvas to allow learners to play with simple 2D drawing. If you’re interested in contributing code to the project, that—or some other kind of simplified input/output mechanism—would be a good place to start.

Another big way to help the project is to concoct simple programming exercises, as I’ve neglected this important area.

If you’d like to contribute, ask for project member status in the comments here so you can edit the wiki. (You must have a Gmail account for this.)

Learn more about Pigeon:

The current Pigeon download is not usable, so don’t bother with it yet. Something apparently got buggered when I packaged the zip together (hey, it worked on my machine!). I’m putting off a fix until I get the time to add cross-platform filepath support (I developed on Windows without bothering about Unix) and also the time to update the language to match the documentation (I’ve neglected the code since I posted 0.1 in October, for I’ve been focusing on writing docs instead, over which time I changed my mind about a few things). Aside from those few things, I’m sure some things in the code will look hideous from the perspective of a three month hiatus and will demand refactoring. I also haven’t gotten around to putting my svn repository on Google Code, so that will be the first thing I do when I tie up these loose ends sometime this February.

Poorly explained aspects of Java explained not so poorly (part 1)

20 Oct

Most Java instruction materials fail to make certain basic things as clear as they could be, so here’s a FAQ-like rundown.

What are the types of values in Java?

Java divides its types into what it calls ‘primitive’ and ‘reference’ types (this terminology is unique to Java):

  • The primitive value types consist of five integer types of different sizes (int, long, char, short, byte), the floating-point types of different sizes (float and double), and the boolean type.
  • A reference value is an instance of a class.

(A reference value might also be an instance of an enum—a class-like enumeration. I won’t discuss enums as they were added late to the language and most programmers get by without using them.)

Confusingly, the terms value type and value variable are sometimes used as synonyms for primitive type and primitive-type variable, respectively. Less surprisingly, reference variable is used to mean a reference-type variable.
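
For instance, a minimal illustration (the variable names are arbitrary):

int count = 3;           // a primitive-type ('value') variable: the int is held directly
String greeting = "moo"; // a reference variable: it holds a reference to a String object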

What’s a literal?

Values of some types can be expressed as ‘literals’, i.e. literal representations of particular values:

  • 35 : a literal int value (all integer literals are of type int by default)
  • -12.51 : a literal double value (all floating-point literals are of type double by default)
  • ‘b’ : a literal of type char (the ASCII value of ‘b’ is 98, so if used in an arithmetic expression, it will act as that integer value, e.g. (‘b’ + 2) is 100)
  • true : the reserved words true and false are literals of the two boolean values
  • “Aye carumba!” : a String literal

Notice that the only non-primitive type of literal is a string literal (a string is an object, an instance of the class String in the package java.lang).

What’s an expression?

An expression is one of two things:

  • a value
  • an operation

A value is either a literal or a variable. A literal obviously evaluates into the value which it represents, while a variable expression evaluates into the value it holds at the time it is evaluated.

An operation consists of an operator and operands and evaluates into a value. For instance:

3 + 2      // the operation + has operands 3 and 2 and evaluates into the value 5

Note that the operands are themselves expressions. In this case, the operands are values, but they could be any kind of expression as long as those expressions evaluate into the right type of value, e.g.:

3 + (9 - 2)     // the first operand of + is 3 and the second operand is the expression (9 - 2)

Also note that, in all cases, an operation evaluates down into a value—we can say that the operation ‘returns’ a value—and that value has some particular type. In Java, it’s an important feature of the language that the type of value returned by an operator expression is always known from the operator and the types of its operands, e.g.:

('b' + 3)      // the + operator with a char operand and an int operand will return an int value

The language is designed such that the compiler always knows the type of each expression, i.e. the type of each value and the type returned by each operation. For the language to know this, you must always declare the type of each variable, the type of each parameter, and the return type of each function. This is the essence of what it means for Java to be a statically-typed language.

Each operator has its own rules about how many operands it takes, their types, and the type of the value it returns. Some operators change what type of value they return depending upon the number and type of their operands. For instance, the + operator will return a String rather than an int when used with a String operand:

"Johnny" + 5    // a + operation with a String and int operand will convert the int into a string and return a concatenation of the two strings as a string: "Johnny5"

There are a few dozen operators in Java:

  • a few are unary (taking one operand, e.g. the ! operator)
  • most are binary (taking two operands)
  • one rarely used operator is ternary (taking three operands), the ? : operator

Parentheses and the rules of precedence are used to determine which operations are the operands to which other operations:

3 + 7 * 9    // * has higher precedence than +, so (7 * 9) is evaluated first and is an operand to the + operation
(3 + 7) * 9     // the parentheses override the usual precedence, so (3 + 7) is evaluated first and is an operand to the * operation

(Note that the whole point of operator precedence is so lazy mathematicians don’t have to put each individual operation in its own set of parentheses.)

The operators and their precedences are listed here.

Not all operations are denoted by operators, however, for a method call can be thought of as an operation: as determined in its definition, a method takes some number of operands of particular types and returns a value of some particular type. (Some think of the parentheses of a method call as an operator such that the name of the method and the arguments are the operands, but I prefer thinking of the method name itself as the operator and just the arguments as the operands.)
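
For example, here’s a minimal sketch (the method name is invented): a method declared to take one int and return an int can be used in an expression anywhere an int-returning operation could be.

static int square(int n) {      // one 'operand' of type int; 'return type' int
    return n * n;
}

int y = square(3) + 1;          // the call square(3) is an expression that evaluates into the int 9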

What’s an expression statement?

An expression statement simply has this form:

expression;

In Java, an expression statement must be either an assignment operation or a method call, e.g.:

x = 3 + foo;     // an assignment statement
cow.moo();    // a call statement

In some other languages with similar syntax, such as C, many compilers won’t complain if you have an expression statement like foo; (which says, ‘evaluate the value of variable foo and do nothing with it’) even though such statements are pointless. Java will complain if you write such do-nothing expression statements.

(Of course, it’s quite possible a method call expression might not do anything useful, but the Java compiler can’t know that; it only objects to expression statements it knows couldn’t possibly be useful, such as 3 + 5;)

What’s a declaration statement?

A declaration statement has the form:

type name;

For instance:

int x;     // declare a variable named 'x' of type int

Cat c;     // declare a variable named 'c' of type Cat

For convenience, you can declare multiple variables of one type in one declaration statement using commas to separate the names, e.g.:

int x, y, z;  // declare three ints: x, y, and z

Also for convenience, you can assign a value to a variable as you declare it using this form:

type name = expression;

…where the expression evaluates into the value assigned to the variable.

In a multiple declaration, assigning values to the variables looks like this:

int x = 3, y, z = 2;

…which is no different from writing the same thing as three successive statements:

int x = 3;
int y;
int z = 2;

What’s a control statement?

A control statement is one of the statements having to do with flow of execution: if, while, for, break, continue, try-catch, return, and a few others. The rules and meaning of these statements are particular to each kind.
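
For instance, a minimal sketch of two of them (the variable is invented for illustration):

int n = 3;
if (n > 0) {                    // the if statement runs its block only when the condition is true
    System.out.println("positive");
}
while (n > 0) {                 // the while statement repeats its block as long as the condition holds
    n = n - 1;
}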

What’s the difference between a value variable and a reference variable?

Variables of primitive types are value variables, meaning they directly hold a primitive value:

int x = 3;
int y = x;
x = 4;  // though y got its value from x, modifying x afterwards has no effect on y
 System.out.print(y); // print 3

Here, x and y represent two locations in memory where an integer value is directly held. After the second assignment, the memory locations of x and y each hold separate copies of the value 3.

Variables of reference types are reference variables, meaning they hold a reference (address) to an object, not the object itself:

Cat x = new Cat(); // a new Cat is created, and memory location x now holds the address of that object
x.name = "Fluffy";
 Cat y = x;   // the expression x evaluates into the reference (the address) held by x, and so the new Cat reference named y is assigned the address of the very same object referenced by x
 x.name = "Mittens";    // modify the property of the cat object referenced by x
 System.out.print(y.name); // prints "Mittens" because y referenced the very same object as x when we modified the object's 'name' property via the reference x

Here, x and y represent two locations in memory where addresses are held. The actual Cat object is elsewhere in memory. Both x and y are assigned the same Cat object address, so modifying the object referenced by x is the same thing as modifying the object held in y—they’re the same Cat.

What are the primitive types?

The integer primitive types are:

  • byte
  • char
  • short
  • int
  • long

The floating-point primitive types are:

  • float
  • double

For all these types, see here for their exact sizes. If your code requires floating-point results that are exactly reproducible across platforms, you should read up on the strictfp modifier.

Finally, there’s the boolean primitive type, which consists of two special values, true and false. In other languages, such as C, numbers are used to mean true or false in special contexts—usually zero represents false while all other values represent true—so why have a unique type for true and false? Well, the thinking is that this protects you against accidentally using a number when you meant to use a true/false value and vice versa and helps clarify the intent of code when it is read.
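
As a minimal sketch of the protection this buys you (the variable name is arbitrary):

int count = 5;
// if (count) { }                                     // compile error in Java: an if condition must be a boolean, not an int
if (count != 0) { System.out.println("non-zero"); }   // fine: the != operator returns a boolean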

Is = really an operator?

In Java, yes. However, the = operator may seem like something of a special case, and that’s because it is: the = operator’s left operand is not a value (and hence not an expression) but rather a target, i.e. a variable to assign to. Consider:

foo = 2;

It wouldn’t make sense for foo to evaluate into its value here because you can’t assign a new value to a value—that just doesn’t make any sense. Rather, we are assigning a value to the variable itself.

Despite this unique difference, = is still an operation and does return a value, which is the value assigned to the target:

3 + (x = 4)       // first 4 is assigned to x, then 3 is added to 4

Because = operations are right-to-left associative, we can chain assignments:

x = y = z = 5;

This is equivalent to:

x = (y = (z = 5));

A common typo is to type = when you meant to type ==. In C, this creates a problem in such cases as:

if (x = 3)  { ... }

…because, in C, an integer can be used as a true/false value, so the value 3 returned by (x = 3) is accepted as the condition value even though the programmer certainly meant to use == instead. In Java, in contrast, a condition must be a boolean value, so the compiler will complain here about an invalid if condition.

Some other languages think it’s a bad idea to allow assignments to occur in unexpected places, so they make = usable only as the outer operation of an expression statement. In general, you should follow this convention, as code is hard to read when assignment operations occur in unexpected places. E.g., instead of:

 foo(x = 3);

…do this:

x = 3;
foo(x);

What’s the order of the modifiers?

Java contains some keywords which modify reference-type declarations (classes, interfaces, and enums), local variables, fields, and methods. Some of these modifiers can’t be used with other modifiers, but Java doesn’t care about the order in which you write the modifiers as long as they all go before the thing they modify, like so:

  • class: modifiers class Name {}
  • variable (local or field): modifiers type name;
  • method: modifiers return_type name(parameters) { }

So, for instance, you could write a field as :

static final int x = 3;

…or:

final static int x = 3;

…but not:

final int static x = 3;  // modifier 'static' must go before the type ('int')

Why is the syntax for creating objects so verbose? E.g. Foo foo = new Foo();

The reason it’s so verbose is that two independent things are really going on in such a statement. If we have a class named Foo, then this:

Foo foo;

…declares a reference of type Foo. No object has yet been assigned to foo, so it holds the default value of null, the special value representing a reference to nothing. To actually create a Foo object, we use the new operator followed by a call to the constructor:

new Foo()

Understand that, like all other calls to functions, this is an expression! It just looks funny because ‘new’ (which is a unary operator just like ‘!’) is always separated by whitespace from its operand (the call to the Foo constructor). The new operator must be used when calling a constructor to create a new object. In the most common use of new, the newly created object is immediately assigned to a reference, like so:

Foo foo = new Foo();

…but a new operator expression is really an expression like any other, so you can create a new object of a type anywhere an object of that type can be used without having to assign it to a reference:

funky(new Foo());  // the method funky taking a new Foo object as its argument
 new Foo().bla();  // create a new Foo object and then call its bla() method

In this last example, the new Foo object is not assigned to a reference and so is lost after the statement. This is not the most common thing to do, but it is not all that rare in real code.

An initially confusing thing about the syntax of new is that it has a higher precedence than the dot operator, so the last statement of the last example could equivalently be written:

(new Foo()).bla();

For clarity, it might help to always imagine parentheses around every use of new and its constructor call operand like above.

What’s the deal with calling one constructor from another?

Within a constructor, we might call another constructor of the same class to continue the job of constructing the object. To do this, you call the other constructor as this(...) rather than prefixing a call with new, and such a call must be the first statement of the constructor. (The compiler rejects a constructor that invokes itself recursively.) Conceivably—if rarely—you might wish to create a new separate instance of the same class inside a constructor (e.g. inside a Cat constructor, you might wish to instantiate a new Cat that is independent of the Cat being constructed), in which case you would use new.
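
Here’s a minimal sketch (the Cat field and constructors are invented for illustration):

class Cat {
    String name;

    Cat() {
        this("Unnamed");   // delegate to the other constructor: no 'new', and this call must come first
    }

    Cat(String name) {
        this.name = name;
        // by contrast, writing new Cat("Shadow") here would create a second, independent Cat
    }
}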

Is new necessary?

Arguably, if you got rid of new, the compiler could simply infer the creation of a new instance from the fact that you are calling a constructor, e.g.:

Cat c = Cat();    // not proper Java (assuming Cat() is a constructor call)

However, (as discussed in the previous section) the new operator helps distinguish between delegating from one constructor to another and creating a separate object of the same type. For instance, inside the Cat constructor, new Cat() would create a new separate Cat, while a bare Cat() would presumably delegate to another Cat constructor for the purpose of constructing the current object, not a new separate Cat.

Arguably a better solution could have been used to make this distinction, allowing us to avoid typing new so much, but in any case, I find having new is nice because it makes object instantiations stand out in code.

What is “the stack”?

When loaded, your program is allotted a contiguous piece of memory called ‘the stack’, where all of its local variables are stored. It works like this:

  • The stack starts out empty.
  • As a function executes, its local variables are pushed (stored) on the top of the stack.
  • When the currently executing function call returns, the local variables it created are popped (removed) from the stack before execution returns to the function which called it.

The set of locals created by a function call is called a stack frame.

Notice that, AT ALL TIMES, the frame at the top of the stack belongs to the currently executing function, and the frame below it belongs to the function which called the currently executing function, and the second frame down belongs to the function which called the function which called the currently executing function…and so on, until you get all the way to the bottom frame, which belongs to the main() function of your program. When main() returns, the program ends and the stack is empty.

Such stack-based execution is the dominant model of execution in programming.

If your program’s stack outgrows the space allotted to it (each thread’s stack gets a fixed size when the thread starts, adjustable with the JVM’s -Xss option), the Java runtime throws a StackOverflowError in your program, crashing your program if you don’t catch it. (This is a good example of an error you shouldn’t catch, as there’s generally nothing you can do to recover from running out of stack space; at most, you should catch it, try to do some clean-up work and preserve data you don’t want to lose, then terminate the program.) This kind of error is called a stack overflow because your stack needed to ‘overflow’ its space to continue execution. The default stack size is generous enough that it’s hard to imagine a well-designed program needing more stack space than it can have, so an overflow should generally be seen as a bug or design flaw in your code, not a vexing system limitation. By far, the most common cause of stack overflows is accidentally allowing a function to make too many recursive calls.
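
For instance, a minimal sketch of the most common cause (the method name is invented):

static int countDown(int n) {
    return countDown(n - 1);   // no base case, so every call pushes another frame until the stack is exhausted
}
// calling countDown(10) eventually throws java.lang.StackOverflowError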

(Programs may actually use more than one stack: as I’ll discuss later, each ‘thread’ of a program has its own stack. However, all programs start with one thread, and many programs never need more than one, so we can ignore multi-threading for now.)

What is “the heap”?

Aside from the stack, the ‘heap’ is the other part of memory used by your program. Remember that local variables in Java consist of only primitives and references to objects, not any objects themselves, so you won’t find any objects on the stack; rather, all objects are stored in the heap.

No matter how many threads you have, there is always only one heap for your program.

While the stack is a strictly organized piece of memory that preserves the local variables of each function call, the heap, as its name suggests, is a disorderly place. Much intelligence goes into keeping track of which areas of the heap are free, deciding which spots to place new objects in, and avoiding wasteful gaps between objects, but you needn’t worry about all that, for it is the job of the Java runtime to manage it.

When the heap needs to grow, the Java runtime will request more memory from the operating system; if the request can’t be satisfied (most likely because there isn’t enough memory to be had), an OutOfMemoryError is thrown.

Classes are objects too!

When you start a Java program, each class that is used in the course of the program is loaded as an object of the special class java.lang.Class. Among other things, this object is where the static fields of a class are stored (in case you were wondering.)

Yes, it is confusing to have a class named Class. For one thing, if all classes are represented by a Class object, what about Class itself? Is there an instance of Class representing the Class class? Which came first: the Class class or the Class instance? There can’t simply be code that says new Class() because the Class class would need to be loaded as a Class instance first before the Class constructor could be used. (In fact, Class has no constructors.) The answer is that Class requires special treatment by the Java runtime at load time.

You can get the Class object of a class using the static forName method of Class:

Class stringClass = Class.forName("java.lang.String"); // get a Class object representing the String class

There’s not much you’d normally want to do with Class objects, but they make some meta-programming techniques possible that otherwise wouldn’t be.

What’s a local variable?

Any variable you declare inside a function—including the parameter variables—is a local variable. A local variable is created when its declaration statement is reached in the execution of a function. The local variables of a function call are discarded when the call returns: they don’t actually get erased on the spot, but the memory they occupy on the stack becomes free game to be overwritten by the creation of local variables in later function calls, so they’re as good as dead.

Actually, if a local variable is declared within a control block, such as the block of an if, for, try, or catch, then it is said to be local to that block rather than to the whole function, and it will be discarded from the stack when the block is finished, not when the whole function call is finished. Therefore, you can use a variable only in the block in which it is declared or in sub-blocks thereof.

Note that, unlike C, Java does not let you declare a local in a nested block with the same name as another local or parameter of the enclosing method; the compiler complains that the variable is already defined. The one kind of shadowing Java does allow is of a field: if you declare a local with the same name as a field of the class, then within the local’s scope the name refers to the local:

class ShadowDemo {
    static int x = 3;                       // a field named x

    static void demo() {
        System.out.println(x);              // prints 3: no local x has been declared yet, so x refers to the field
        int x = 5;                          // a local x now shadows the field
        System.out.println(x);              // prints 5
        System.out.println(ShadowDemo.x);   // prints 3: the field is still reachable through its qualified name
    }
}

What is overloading good for?

Method overloading (not to be confused with method overriding) occurs when a class is given multiple methods of the same name; this is allowed as long as no two of the methods have the same number and types of parameters. When you call a method of that name, the compiler can tell which method you mean to call based upon the number and types of the arguments.

So just like some operators vary their effect and type of returned value based upon the type and number of their operands, an overloaded method can vary its effect and type of returned value based upon the type and number of its operands (the parameters).

Understand that overloaded methods are really entirely independent methods that just happen to share a name. The reason Java allows overloading is because it is often simply desirable to have a method which is callable in different ways without having to come up with distinct names for each variation. In general, a set of methods in a class that share the same name should all perform something like the same purpose as each other; if not, you are probably just confusing the users of your class. (Just like having the + operator used for both addition and String concatenation is confusing for learners of Java.)
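
A minimal sketch (the class and method names are invented for illustration):

class Logger {
    void log(int n)    { System.out.println("int: " + n); }
    void log(String s) { System.out.println("String: " + s); }   // same name, different parameter type: overloading

    void demo() {
        log(3);         // the compiler picks log(int) from the argument's type
        log("three");   // the compiler picks log(String)
    }
}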

What’s the difference between “overriding” and “overloading”?

If your class inherits a method foo but you write a method of the same name and same number and types of parameters, then this is considered overriding. If you add your own variants of foo with the same name but different numbers and/or types of parameters, that is overloading because the new variants don’t replace the inherited foo.
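
To make the contrast concrete, here’s a minimal sketch (the class and method names are invented):

class Animal {
    void speak() { System.out.println("..."); }
}

class Dog extends Animal {
    void speak() {                // overriding: same name and parameters as the inherited method, so it replaces it
        System.out.println("Woof");
    }
    void speak(int times) {       // overloading: same name, different parameters, so it's an additional, independent method
        for (int i = 0; i < times; i++) { speak(); }
    }
}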