The problems with fexprs: a review.

This is the second article in a series about the design of a programming language. The first article in the series articulated the design goal and some of the problems with that goal. This one is about the only strategy capable of achieving that design goal, and some of the problems with that strategy.

 

In a previous article, I wrote about a “programming language with the structure of water.”  The basic idea was that anything expressible in any tractable programming language ought to be expressible in this new language using an abstract syntax tree isomorphic to that of the original program. Paradoxically, that could also make it entirely possible for this programming language to be used in completely intractable ways. I hope to provide enough shortcuts for doing things in nice ways, and warnings about the pitfalls, to encourage well-structured code, but the language certainly will not force any particular structure.

 

The idea is that this should make it possible to translate programs written in other languages into the new language without lowering their level of abstraction.  Not lowering the level of abstraction is the huge win in this crazy plan.  If you can do the translation without lowering the level of abstraction, then you wind up with a program in the new language that is as readable and structured as the original program, and therefore with a program that is maintainable and extensible in the new language.

 

So how does one do that?  It requires a level of control over syntax and structure that can only be achieved by the use of fexprs.(*1) Fexprs are a peculiar construction familiar from some very early Lisps, now deeply obscure because the Lisp community mostly decided around 1980 that macros were better.  And honestly, Lisp macros are astoundingly good.  Because they match on and manipulate the abstract syntax tree of the program directly, they are drastically more expressive than text macros (as in C) and much easier to work with than template macros (as in C++).
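
To make that concrete, here is about the smallest possible example, in Common Lisp. The macro name, infix, is mine, invented for illustration:

    ;; A macro receives its arguments as unevaluated s-expressions
    ;; (lists and symbols) and returns a new s-expression that gets
    ;; compiled in place of the call.
    (defmacro infix (a op b)
      (list op a b))      ; rearrange the caller's syntax tree

    (infix 2 + 3)         ; expands to (+ 2 3), which evaluates to 5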

 

Macros and fexprs are examples of what’s called first-order code — that is, code that transforms the structure of other code.  Macros do this by introducing phased computation: the macros are run before the code that they transform is run.  Fexprs do this by manipulating the structure of their own subexpressions at runtime.  The ability to write first-order code is what distinguishes Lisps from other languages.  In fact, once you have the ability to write first-order code, whatever language you are working with becomes arguably a Lisp dialect. Not all Lisps have the traditional love-it-or-hate-it parenthesized notation; it’s just popular because it makes it easier to write and understand the kind of macros that make a language into a Lisp. Dylan, for example, was a Lisp with a much more mainstream syntax, and writing macros in Dylan was harder than in other Lisps precisely because of that less-uniform syntax.
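
You can make the phase distinction visible with a side effect, as in this little Common Lisp sketch (noisy-add is an invented name):

    ;; The macro body runs at expansion time. The PRINT below fires
    ;; when the call is expanded (typically once, while F is being
    ;; compiled), not each time the expansion itself runs.
    (defmacro noisy-add (a b)
      (print "expanding noisy-add")   ; expansion-time side effect
      `(+ ,a ,b))                     ; this part runs at runtime

    (defun f (x) (noisy-add x 1))     ; prints while F is defined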

 

Fexprs are, strictly speaking, even more expressive (*2) than Lisp macros, because fexprs are also first-class code.  A routine defined using a fexpr is a first-class object, like any other value in the program.  It can be passed as an argument or returned from function calls, it can be applied using the APPLY form, it can be stored in structures, and so on, just like any other routine.
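
Standard Common Lisp has no fexprs, but you can fake enough of the flavor to see what first-class means here. In this sketch (my-if is an invented name), my-if is an ordinary function that receives quoted forms and decides at runtime which of them to evaluate:

    ;; A fexpr-like operator simulated as a plain function over
    ;; quoted forms. Unlike a macro, it is an ordinary value.
    (defun my-if (test-form then-form else-form)
      (if (eval test-form)
          (eval then-form)
          (eval else-form)))

    ;; First-class: you can APPLY it, pass it, or store it, none of
    ;; which is possible with a macro.
    (apply #'my-if (list '(> 3 2) ''yes ''no))   ; => YES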

 

So why didn’t fexprs win out over macros?  Well, first of all, because both fexprs and macros were initially implemented in what is, in retrospect, an astonishingly hamfisted way, on dynamically scoped systems with poor hygiene. Fexprs made the inherent problems of that environment much worse, while macros made them only a little worse.  “Poor hygiene” in this case means that expressions or operands were frequently evaluated in environments other than the one where they originally appeared.  This seemed okay at the time, because programs were smallish and fairly simple, and good programmers really could keep all the interactions in their heads. But it just doesn’t scale, and building any kind of large program in a dynamically scoped Lisp required nearly superhuman attention to consistently refer to the definitions you wanted to be referring to.

 

With macros, you do all of your macroexpansion, and then the environments used for that expansion go away and don’t contribute to confusion in the runtime environments. Similarly, the runtime environments don’t exist yet when macros are expanded, so they can’t interfere with the macroexpansion environments. Finally, the expanded code is injected at the call site, where the argument subexpressions can be evaluated in the same environment where they appear. The one remaining hygiene problem for macros is that the rest of the expanded code — everything except the argument expressions — was written at the macro definition site but winds up being evaluated in the environment at the call site.

 

With fexprs, you have code that is accepting and injecting argument subexpressions from the caller environment into the callee environment at runtime, and that makes for more opportunities to get things wrong. The environments in which fexprs run exist at runtime, which means you can get those environments crossed up with each other in ways that are just not possible with macros.

 

Dynamic scope is also a problem here, because it sort of blends the caller and callee environments.  Dynamic scope means that if you don’t find a binding for a name in the local (callee) environment, you look at the environment of your caller for a binding.  But the person who defines a function can’t possibly have knowledge of the environments at all the eventual call sites; after all, most of them don’t even exist yet. This is what results in the ‘Bad Hygiene’ problem mentioned above.  Formally, it’s called ‘Name Capture.’  It happens when the callee uses a name intending to refer to some nonlocal definition, but because the caller is in an environment where there is a different variable with the same name, purely by accident the callee winds up using, or even mutating, a different definition than the one intended.  It’s hard to control, especially as programs get larger in scale, because good programmers often have similar ideas about what makes a good variable name, and more code means more possible name conflicts.

Modern Lisps use static scope instead, which means that when you don’t have a local definition of something, you look at the environment where the function was defined rather than the environment of the caller. This is a lot less confusing and scales better, because the scopes of all the ordinary identifiers are fixed and easy to understand just by looking at the code, and the person who writes a function can see exactly which definitions it will be affected by.
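
You can recreate the failure in modern Common Lisp, whose special variables are dynamically scoped on purpose. All the names here are invented for illustration:

    ;; The definer of ITEM-COUNT means for *ITEMS* to refer to this:
    (defvar *items* '(1 2 3))

    (defun item-count ()
      (length *items*))        ; free reference, resolved dynamically

    (defun caller ()
      (let ((*items* '(a b)))  ; caller happens to reuse the name...
        (item-count)))         ; ...and captures the callee's reference

    (item-count)               ; => 3
    (caller)                   ; => 2, not 3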

 

Finally, we come to the problem that the Lisps of the day used a hamfisted implementation of fexprs to start with.  In those early systems, fexprs were given the abstract syntax trees of their argument expressions, just like macros.  Macros manipulate the argument expressions, coming up with another abstract syntax tree that is plugged back in at the call site in place of the macro call and evaluated in the environment there.  Fexprs, on the other hand, are supposed to do the job they’re called for at runtime, rather than producing, ahead of runtime, code that will do the job at runtime. Fexprs therefore don’t (usually) return code; instead they return a value, like any function.  If you’ve been paying attention, you already know what problem this exposed with fexprs.  When a fexpr received its arguments, it didn’t receive any information about what environment to evaluate those arguments in.  Once a fexpr starts passing one or more of its arguments’ syntax trees along to another fexpr, you quickly create situations in which it is literally impossible at the callee site to tell how to evaluate something, or in which different arguments in the same call need to be evaluated relative to different environments in order to function correctly.
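
Common Lisp’s EVAL gives a taste of this, because it evaluates a form in the null lexical environment, with no record of the environment the form came from (fexpr-like is an invented name):

    (defun fexpr-like (form)
      (eval form))             ; in which environment should FORM's
                               ; names resolve? EVAL has no idea.

    (let ((x 42))
      (fexpr-like '(+ x 1)))   ; signals an error: X is unbound,
                               ; because the LET binding is invisible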

 

Before I go on to write about how I intend to fix some of these problems, I’m going to provide some links to papers written by a couple of guys who are awfully smart.  Kent Pitman, at the 1980 Conference on Lisp and Functional Programming, presented a paper entitled Special Forms in Lisp. He compared different special forms and means of defining them, and concluded that fexprs had some serious problems that macros avoided. One of the people who read his paper was Mitchell Wand. Wand thought hard about why fexprs had such problems and published his own paper, entitled The Theory of Fexprs Is Trivial. Wand pointed out, quite correctly, that because fexprs as implemented in those early systems had direct access to the syntax of their arguments, they could treat calls differently based on any difference in syntax; therefore the only expressions guaranteed to have the same semantics when provided as arguments to fexprs were those expressions which were absolutely textually identical.

 

To understand how important those two papers were, you have to know that both of these guys are brilliant, and well known to be brilliant, and have impeccable academic credentials as well as ‘street cred’ among people who actively use the languages they write about (as regards these particular papers, among people who actively use the descendants of the languages they were writing about).  And, considering fexprs as they knew them and the programming environments they were working with, they were absolutely right. Pitman’s paper in particular was highly influential: over the following few years, as more Lisp variants came out, fexprs were quietly abandoned as having been a bad idea to start with. Wand’s paper, which came much later, was widely taken as the theoretical nail in the coffin.

 

Macros still had some problems too.  The ‘Name Capture’ or ‘Hygiene’ issue that I’ve already explained was the main one.

 

But there was a simple invention which, used in a consistent and disciplined way, could solve this problem, and its name was gensym, short for ‘generate symbol.’  Gensym created variable names guaranteed not to collide with the name bound to any definition in the current program, including names created by other uses of gensym.  So, if you were careful and methodical, you could guarantee that the code you injected at the call site never used any names that might be captured. You could use, without substitution, the names (or other expressions) you were given as arguments; but if you used gensym to rename absolutely everything else that you were going to inject at the call site, that would guarantee that nothing you injected would be mistaken for anything else.
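
Here is the discipline in action, using an invented my-or macro as the victim. The first version can be captured; the second cannot:

    ;; Capture-prone: if the caller's second argument mentions TMP,
    ;; it sees the macro's TMP instead of its own.
    (defmacro my-or-bad (a b)
      `(let ((tmp ,a))
         (if tmp tmp ,b)))

    (let ((tmp 5))
      (my-or-bad nil tmp))   ; => NIL, not 5: TMP was captured

    ;; The gensym discipline: every name the macro introduces is fresh.
    (defmacro my-or (a b)
      (let ((val (gensym "VAL")))
        `(let ((,val ,a))
           (if ,val ,val ,b))))

    (let ((tmp 5))
      (my-or nil tmp))       ; => 5, as intended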

 

But some more really smart guys noticed something: the process of using gensym to avoid name captures was hard work, filled with details, and it made your macro definitions long-winded and horribly difficult to understand. Yet it was unambiguous work, requiring no real creativity, to derive that long-winded, obfuscated code from a relatively simple, clean definition of the macro that completely ignored the issue.  All you had to do was keep track of which names were introduced by arguments (because you wanted to refer to the same things that the caller of the macro was talking about) and rename absolutely everything else, because you just couldn’t know which of the other things might have a different definition at the call site.  In fact, it turned out that you could write a macro which transformed ordinary macro definitions into definitions of macros that used gensym to achieve perfect immunity from name capture. Once they’d worked out and documented a clear algorithm for it, they called the renaming technique ‘alpha renaming,’ borrowing the term from the lambda calculus. This finally allowed Lisps to have simple, clear macro definitions, and also avoid the problems of name capture.
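
You can see the shape of that automation in miniature in with-gensyms, a folk macro that appears, in one form or another, in many Common Lisp utility libraries:

    ;; WITH-GENSYMS does the renaming bookkeeping for the names you
    ;; list, so the macro definition below reads almost like the
    ;; naive, capture-oblivious version.
    (defmacro with-gensyms ((&rest names) &body body)
      `(let ,(mapcar (lambda (n) `(,n (gensym ,(symbol-name n))))
                     names)
         ,@body))

    (defmacro my-or (a b)
      (with-gensyms (val)
        `(let ((,val ,a))
           (if ,val ,val ,b))))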

 

So these guys (whose names are Kohlbecker, Friedman, Felleisen(*3), and Duba) published a paper that explained how to do it.  That paper was Technical Report 194, which is better known by its subtitle, Hygienic Macro Expansion. Speculation is that the term ‘bad hygiene’ for the programming problem otherwise known as ‘accidental name capture’ probably came from Kohlbecker, since it’s also the term used in mathematics for a related kind of category error, and Kohlbecker was the one who had the most contact with the kind of mathematicians who would be using the term.

 

By this time the dominant Lisps were Scheme and Common Lisp, which are two of the three Lisps that are still dominant today, along with Clojure. Scheme adopted hygienic macroexpansion as the default, mandated behavior for macro expansion in its very next standard and has continued to require it ever since. The Common Lisp community has been more conservative about change, and there’s a fair amount of existing Common Lisp code that relies on Intentional Name Capture (*4). So hygienic macro expansion is available to Common Lisp as a library, but it hasn’t been adopted as a default behavior or a required part of the standard.

 

I see that I’m over two thousand words again; it’s probably best to continue this story in another post.

 


*1 Just pretend there’s a schwa vowel between the p and the r, and you can pronounce it. “Fexper,” right? Think of it as one more breakdown in the already strained relationship between phonology and orthography in English. As usual, it happened because the orthography unashamedly stole vocabulary from another language. This time it happens to be a programming language, which only goes to show you that English orthography is a thief with poor taste. In fact, if our orthography keeps stealing from other languages, our phonology may eventually have to file for divorce, shack up with a new orthography, and get a restraining order.(*5)

*2 I’m using Matthias Felleisen’s definition of expressiveness again.

*3 Yes, the same Felleisen whose definition of expressiveness I found so compelling and referred to in the first article of this series. He’s done a lot of good work.

*4 Intentional Name Capture is just like Accidental Name Capture except for one thing: the programmer did it on purpose! This is like a deliberate automobile collision performed by a stunt driver or an insurance fraudster; it looks just like an accident, but it isn’t. It allows code returned from macros, which use ‘gensym’ to protect all the other names it uses, to refer to or affect definitions in scope at the call site even when the names of those definitions are not given in the arguments to the macro. This is something you cannot do with hygienic macros. Therefore standard macros as created with defmacro in Common Lisp (plus gensym) are strictly more expressive, in Felleisen’s sense, than hygienic macros. The code to define such macros, however, puts the “Ugh!” in “Ugly!”, at least according to my own programming tastes. Conditioning on, or mutating, local variables at the call site feels wrong to me, because their association with the macro usage is not visible in the caller’s code. It just seems a little too wild and magical, I think, to be a good plan. But that may be because I got used to hygienic macros in Scheme before I learned Common Lisp.
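
The classic example is the anaphoric if from Paul Graham’s On Lisp, which captures the name it quite deliberately:

    ;; AIF binds IT at the call site on purpose, so the caller's
    ;; branches can name the value of the test without passing a
    ;; name in. Note that IT is deliberately NOT gensym'd.
    (defmacro aif (test then &optional else)
      `(let ((it ,test))
         (if it ,then ,else)))

    (aif (find 3 '(1 2 3 4))
         (format t "found ~a~%" it)   ; IT refers to the macro's binding
         (format t "not found~%"))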

*5 This process is known as “spelling reform.” It doesn’t look terribly likely just yet, but wait a century or two, and if English orthography keeps up its nefarious tradition of theft, conversations on the topic may become very serious.
