
Revisiting fexprs; avoiding some problems.

 

This is the third article in a series. In the first article, I introduced a design goal straight out of mad science, which is to create a programming language expressive enough to encompass the structures of nearly all other programming languages. In the second article, I identified fexprs as the only known means of actually achieving this, and articulated some of the problems that have been discovered with fexprs. And that brings us to the current article, which is about my efforts and those of a few others to rehabilitate fexprs.

 

Fexprs allow one to define first-class functions that control when, whether, and by what rules their arguments are evaluated, or that construct and execute arbitrary expressions at runtime, using the arguments as subexpressions. That much can be done while retaining a sane model of computation and presenting a reasonably tractable theory. In the past, fexprs have been implemented as routines that took their arguments unevaluated and purely in syntactic form. In that form, fexprs are a deeply problematic construct in programming language design, as Mitchell Wand pointed out. His observation was that efforts to analyze the semantics or theory of a language depend on the ability to show when and how different programming language expressions have identical semantics. Because a fexpr that receives its arguments in purely syntactic form can behave differently given any difference at all in the forms of the argument expressions supplied to it, the only expressions that have identical semantics in all contexts, in a language with syntactic-argument fexprs, are those which are lexically identical.
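
To make the distinction concrete, here is a minimal sketch in Python rather than in the dialect under discussion; expressions are nested tuples, environments are plain dictionaries, and the names evaluate, applicative_first, and operative_if are invented for the example. An ordinary function sees only the values of its arguments, while the fexpr-like operative sees the argument expressions and the environment, and decides for itself what to evaluate.

```python
# Illustrative sketch only: expressions are nested tuples like ("+", 1, 2),
# and environments are plain dicts.  This is not the dialect discussed above.

def evaluate(expr, env):
    """A toy evaluator for numbers, variables, and ("+", a, b) forms."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, *args = expr
    if op == "+":
        return sum(evaluate(a, env) for a in args)
    raise ValueError(f"unknown form: {expr!r}")

def applicative_first(*values):
    # An ordinary function: its arguments were evaluated before the call.
    return values[0]

def operative_if(test, then_expr, else_expr, env):
    # A fexpr-like operative: it receives the argument *expressions* and the
    # environment, and controls when, whether, and which ones get evaluated.
    if evaluate(test, env):
        return evaluate(then_expr, env)
    else:
        return evaluate(else_expr, env)

env = {"x": 10}
print(applicative_first(1, 2))   # 1 (both arguments were already evaluated)
print(operative_if(("+", "x", -10), ("+", "undefined", 1), ("+", 1, 1), env))
# 2; the "then" expression is never evaluated, so its unbound variable never causes an error
```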

 

This is … true, darn it. If I fully embrace syntax-receiving fexprs, I will be creating a language whose semantics are intractable in a very deep way. Further, finding semantic equivalences is a very important subpart of this entire project’s objective and one I don’t want to subvert. While not quite as entirely hopeless as Wand made it sound, it would mean that equivalence of code expressions cannot be fully decided by any merely syntactic analysis, nor in fact by any analysis known to halt in finite time. You could find answers for a high percentage of programs and of contexts within those programs, but you’d have to treat each individually, and there would always be cases for which you simply cannot find an answer.
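
A toy illustration of Wand's point, again in Python and with the argument deliberately handed over as raw source text: once a context can see the literal syntax of an expression, two expressions that always compute the same value are no longer interchangeable in every context. The name source_length is invented for the example.

```python
# Illustrative only: a "syntactic fexpr" that receives its argument as source text.
def source_length(arg_text):
    # Behaves differently for any two argument spellings, even equivalent ones.
    return len(arg_text)

# (+ 1 2) and 3 always evaluate to the same value, but this context tells them apart:
print(source_length("(+ 1 2)"))  # 7
print(source_length("3"))        # 1
```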

 

While a translation-target dialect must make syntax-receiving fexprs possible, as the only means of emulating certain constructs in intractable languages, they are not needed for any “reasonable” language. On the other hand, evaluation-controlling and expression-building fexprs are key to a general ability to mimic the syntactic forms and the control forms of nearly any tractable language.

 

So I sought some limitation on fexprs which would preserve the benefits and the tractable parts of the semantics. And, unlike most language designers, I also sought a way to unlock the full abstraction-destroying power of syntax-receiving fexprs, although in a way that puts up a simple barrier to overcome, complete with flashing warning signs, which will serve as a clear bright line between sanity and insanity. I believe I have found both, but before I explain, I need to mention some other design challenges, because the solution is rooted in the requirements for solving those.

 

Another problem with fexprs is hygiene. To solve it, I need to ensure that expressions are not accidentally evaluated relative to environments other than the one in which they appear. The alpha-renaming methodology of macros is not sufficient in the case of fexprs. At first I thought that I would pass a reference to the calling environment to the fexpr along with the arguments, allowing it to evaluate arguments in that environment while using its own environment otherwise, essentially permitting alpha-renaming. But macros and fexprs operate in different phases: while alpha-renaming is adequate for a pre-runtime phase of computation, fexprs are runtime constructs capable of being composed, and therefore of referring to multiple runtime environments. This happens when a routine passes arguments it got from its caller on to routines that it calls, along with other arguments that refer to bindings visible in its own environment. The second callee is then looking at arguments from different environments, and alpha-renaming them all to use its caller’s environment wouldn’t give the correct result. The implication is that each expression must be passed along with a reference to the environment in which it appeared.
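
Here is a hedged Python sketch of that situation, with arguments carried as expression-environment pairs and expressions reduced to bare variable names: the middle routine forwards an argument it received from its caller alongside one that refers to its own binding, so the innermost callee must evaluate each in the environment it came from. The names lookup, inner, and middle are invented for the example.

```python
# Illustrative sketch: an "argument" is carried as (expression, environment),
# where expressions are just variable names and environments are dicts.

def lookup(arg):
    expr, env = arg
    return env[expr]

def inner(a, b):
    # 'a' came from the outermost caller; 'b' came from the middle routine.
    # Each must be evaluated in the environment it was written in.
    return lookup(a) + lookup(b)

def middle(a):
    middle_env = {"y": 100}
    # Forward the caller's argument untouched, add one referring to our own binding.
    return inner(a, ("y", middle_env))

outer_env = {"x": 1, "y": 999}   # note: 'y' also exists here, with a different value
print(middle(("x", outer_env)))  # 101, not 1000: each expression used its own environment
```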

 

The combination of an expression and an environment is the basic structure of what is called a ‘promise’ in computer science. Promises allow the evaluations they represent to be invoked if and when needed rather than immediately, which of course is one of the things that fexprs have to be able to do. Using promises often saves a lot of work, or allows a simpler program structure, because it lets you choose the order of two tasks: determining how to do something, and determining whether you actually need to do it. This is how arguments get passed in ‘lazy’ languages, that is, languages where argument evaluation is under the control of the called function. And, as was fairly obvious, if I include lazy languages among the translation sources for this mad-science project, I would need to be passing arguments as promises anyway.
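
As a rough illustration (in Python, not in the dialect being designed), a promise can be modeled as an expression paired with its environment, with evaluation deferred until someone asks for the value; the expression is shown as a callable of the environment purely to keep the sketch short.

```python
# A rough sketch only: the expression half of a promise is shown as a callable
# of the environment, to keep the example short; environments are plain dicts.

class Promise:
    def __init__(self, expression, environment):
        self.expression = expression      # how to compute the value
        self.environment = environment    # where its free variables are bound
        self._forced = False
        self._value = None

    def force(self):
        # Evaluate on first demand, then remember the result.
        if not self._forced:
            self._value = self.expression(self.environment)
            self._forced = True
        return self._value

env = {"x": 2}
p = Promise(lambda e: e["x"] * 10, env)
# Nothing has been computed yet; we can decide later whether we even need it.
print(p.force())   # 20
env["x"] = 5
print(p.force())   # still 20: the first result is remembered
```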

 

Supporting various argument-passing and evaluation disciplines is at the heart of providing a translation-target dialect, and lazy versus eager is not the only important distinction among argument-passing disciplines. Algol, for example, has call-by-name semantics that allow for repeated evaluation. And because the translation target has to be able to define procedures that mimic the structure of loop syntax defined in other languages, it also needs to provide for repeated evaluation of its arguments. Although repeated evaluation isn’t typically an operation defined on promises, promises contain all the information necessary to do it.
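
For instance, here is a hedged sketch of a user-defined while loop in Python, with re-evaluable thunks standing in for promises that can be evaluated repeatedly; the names my_while and step are invented for the example.

```python
# Illustrative sketch: with arguments passed as re-evaluable thunks (standing in
# for promises that can be evaluated more than once), a plain function can act
# like loop syntax, re-running its argument expressions as many times as needed.

def my_while(condition, body):
    while condition():   # each call re-evaluates the argument expression
        body()

state = {"i": 0, "total": 0}

def step():
    state["total"] += state["i"]
    state["i"] += 1

my_while(lambda: state["i"] < 4, step)
print(state["total"])  # 6, i.e. 0 + 1 + 2 + 3
```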

 

Putting these things together to get the requirements for the argument-passing semantics of the translation-target language, all the machinery needed to use fexprs in non-problematic ways is already present. There is a small set of primitives which handles all the uses that can be had without invoking Wand’s curse. The first of these is force, which evaluates the promise once and replaces it with the value returned from that evaluation. The second is evaluate, which evaluates the promise and leaves it in place so that it can be evaluated again. The third is define!, which extends the promise’s environment with a binding for the expression and optionally initializes it with a value, or changes the value in such a binding. And the fourth is assign!, which changes the value bound to the promise’s expression in the promise’s environment. These primitives allow the syntactic forms of most languages to be defined, allow them to be defined as first-class routines, and do so without exposing the syntax of the argument to the called function. With such primitives I can do everything that I would ever want to do with fexprs in any sane language.
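
Under similar assumptions to the sketches above, here is a minimal Python rendering of what these four primitives might look like over a promise. The expression half of a promise is either a variable name or a callable of an environment, a simplification to avoid writing a full evaluator, and define_ and assign_ are Python-legal stand-ins for define! and assign!.

```python
# A rough sketch only: expressions are either a variable name (a string) or a
# callable of an environment, and environments are plain dicts.

def eval_expr(expr, env):
    # Toy evaluation: a string is a variable reference, a callable computes from env.
    return env[expr] if isinstance(expr, str) else expr(env)

class Promise:
    def __init__(self, expression, environment):
        self.expression = expression
        self.environment = environment
        self._forced = False
        self._value = None

def force(p):
    # Evaluate once; afterwards the promise just holds the resulting value.
    if not p._forced:
        p._value = eval_expr(p.expression, p.environment)
        p._forced = True
    return p._value

def evaluate(p):
    # Evaluate without consuming the promise, so it can be evaluated again.
    return eval_expr(p.expression, p.environment)

def define_(p, value=None):
    # Extend the promise's environment with a binding for its expression
    # (a variable name here), optionally initialized with a value.
    p.environment[p.expression] = value

def assign_(p, value):
    # Change the value bound to the promise's expression in its environment.
    # (With flat dicts this looks identical to define_; with chained environment
    # frames, define_ would add to the innermost frame while assign_ would update
    # the existing binding wherever it lives.)
    p.environment[p.expression] = value

env = {}
q = Promise("count", env)
define_(q, 0)            # bind 'count' in q's environment
assign_(q, 41)           # change the value bound to 'count'
print(evaluate(q) + 1)   # 42, and q can still be evaluated again later
```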

 

Alas, some languages, such as the one I’m working on, are designed by folk who are completely raving bonkers. If I want the whole expressive power of fexprs, to do anything at all with them up to and including directly translating code from languages with trivial theory, mistaken design, and intractable use, I need a fifth primitive. Its name is break, and it makes available separately both the environment and the literal syntax of the argument. break is used to define all of the “sane” primitives, but any use of it in other code should be seen as a sign that the code is wrong. It is among the things that you might possibly need to emulate some languages, or to demonstrate how some semantic errors can happen in languages whose design made a particular set of mistakes, but the other forms are entirely sufficient for any use that isn’t a mistake.
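
A correspondingly tiny sketch, hedged the same way as the ones above: break simply hands back the two halves of a promise separately, which is exactly the abstraction-destroying access the rest of this article is trying to fence off. The trailing underscore avoids the Python keyword.

```python
from collections import namedtuple

# Illustrative sketch, reusing the idea above: a promise is an
# (expression, environment) pair, and 'break' hands both halves back separately.
Promise = namedtuple("Promise", ["expression", "environment"])

def break_(p):
    return p.expression, p.environment

expr, env = break_(Promise("count", {"count": 41}))
print(expr, env)  # 'count' {'count': 41}: full access to syntax and environment
```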

 

Therefore, I feel entirely justified in putting a few simple restrictions on it so that people don’t use it casually, so that it doesn’t wind up in libraries people download and use without reviewing, and so that even nonprogrammers in senior management will know better than to allow its use in major projects. First, to get an environment with a binding for break at all, you might need to set a couple of flags in the interpreter source code and recompile the interpreter. And the flags might be named something like #I_AM_A_COMPLETE_IDIOT_AND_WANT_TO_USE_BREAK and #IA!_IA!_CTHULHU_FHTAGN!. Second, break will only work when a specific library is imported, and the library’s name could be something like I_AM_INSANE_AND_WANT_TO_BREAK_EVERY_ABSTRACTION_THERE_IS. Third, to prevent its use in some obscure downloaded library from infecting a program without a warning to everyone working on that program, no code that doesn’t import that library will ever link to any code that does. I think that ought to cover it.

 

Of course working out the argument passing discipline is necessary but not sufficient for working out the whole calling discipline.

 

Firstly, there are questions of scope. I’m opting to support static scope directly and by default, but I must be able to transparently emulate languages with dynamic scope. Such languages allow a called function to refer to things visible in the caller’s environment even when those things weren’t explicitly passed as arguments. That implies that the function call semantics, in order to be capable of mimicking both the function calls and the syntax of many other languages, will need to pass an explicit reference to the caller’s environment regardless of whether it also passes argument environments. Global scope can be considered the base case of both static and dynamic scope. Many languages have other scoping mechanisms such as file scope, module scope, and so on. Many of these are special cases of static or dynamic scope, but a few will require first-class environments to emulate fully and naturally. Several languages, including Common Lisp, support variables with static scope and variables with dynamic scope in the same language. Scheme does this too with things such as current-input-port et cetera, but doesn’t allow users to define any new dynamically scoped variables.
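
As a toy illustration of the difference, again in Python and with environments reduced to dictionaries: under static scope a free variable is resolved in the environment where the function was defined, while under dynamic scope it is resolved in the caller’s environment, which is why that environment has to be passed along. The names scaled and limit are invented for the example.

```python
# Illustrative sketch: the same call resolves a free variable differently
# under static scope (use the environment where the function was defined)
# and under dynamic scope (use the caller's environment, passed explicitly).

definition_env = {"limit": 10}

def scaled(x, caller_env, *, dynamic):
    env = caller_env if dynamic else definition_env
    return x * env["limit"]    # 'limit' is a free variable of this function

caller_env = {"limit": 1000}
print(scaled(3, caller_env, dynamic=False))  # 30: static scope
print(scaled(3, caller_env, dynamic=True))   # 3000: dynamic scope
```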

 

There are also languages, including several important ones, that support named arguments, pattern matching on argument sequences, and procedure definitions that permit varying numbers of arguments. There are languages where calls may have multiple return values or no return value. And finally, there are languages with first-class undelimited continuations (another construct that poses nearly as many problems as break, and which I will probably treat similarly), which means that we do not necessarily even get to use a stack structure for our call frames; in general, call frames form a tree when continuations are in use.

 

Next there are questions of types. Many languages, including most modern ones, support what is known as ‘type dispatch’ – that is, intentional types (types that express an intended semantics) rather than descriptive types (types that merely describe a value’s representation) are used to select what function to call. I’m opting to support implicit descriptive types with simple semantics and a type erasure property directly and by default, but explicit intentional types with complex user-defined semantics, subtyping, and static analysis properties are a part of many of the language designs I need to be able to support in translation. That implies that the translation-target dialect has to provide ways to build all the type constructors and type semantics used by other languages, and that means that types will have to be treated as first-class values.

 

There are several fundamentally different paradigms of type dispatch. One of the most general is that embodied by generic functions in Common Lisp or function overloading in a few other languages, where the types of every argument are considered in selecting a function to dispatch to. One of the most common is that embodied by Java, in which only the type of the first argument (the receiver) is used in selecting a function to dispatch to. And one of the most confusing is the possible presence or absence of implementation inheritance, wherein some types are considered special cases of others, and dispatch may be directed by the most precise type of an argument or by some supertype of it, selected by rules that range from reasonable and straightforward to arcane and straitjacketing in different languages. This means that some very complex higher-order operations and relationships on types will need to be definable, and that function dispatch must be smart enough to dispatch depending on the results of type expressions. That will be both ugly and tricky.
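
To make the contrast concrete, here is a small Python sketch of a Common-Lisp-style generic function, with types acting as ordinary first-class values in a dispatch table. It deliberately ignores inheritance and supertype selection, which is exactly where the arcane parts come in; the class Generic and the function combine are invented for the example.

```python
# Illustrative sketch: a tiny generic function where the types of *all* arguments
# pick the method, and types are ordinary values stored in a dispatch table.
# (Java-style single dispatch would key on the first argument's type only.)

class Generic:
    def __init__(self, name):
        self.name = name
        self.methods = {}                 # maps a tuple of types to a function

    def define_method(self, types, fn):
        self.methods[types] = fn

    def __call__(self, *args):
        key = tuple(type(a) for a in args)
        if key in self.methods:
            return self.methods[key](*args)
        raise TypeError(f"no method on {self.name} for {key}")

combine = Generic("combine")
combine.define_method((int, int), lambda a, b: a + b)
combine.define_method((str, str), lambda a, b: a + " " + b)
combine.define_method((str, int), lambda a, b: a * b)

print(combine(2, 3))             # 5
print(combine("so", "general"))  # 'so general'
print(combine("ha", 3))          # 'hahaha'
```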

 

Finally, function return and function call are duals in several models of computation. Allowing for continuation-passing style, we therefore need to be able to do anything on a function return that we can do in a function call, including type dispatch and passing arguments as unevaluated promises.
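
A small Python sketch of the idea, with invented names: in continuation-passing style the “return” is itself a call to a continuation, so it can do anything a call can do, including choosing a continuation based on the type of the result.

```python
# Illustrative sketch: in continuation-passing style a function never returns;
# it calls a continuation with its result.  The "return" is thereby just another
# call, and can do anything a call can do, such as dispatch on the result's type.

def parse_number(text, on_int, on_float):
    # Two continuations stand in for "return", selected by the result's type.
    value = float(text)
    if value.is_integer():
        on_int(int(value))
    else:
        on_float(value)

parse_number("7",   on_int=lambda n: print("integer:", n),
                    on_float=lambda x: print("float:", x))   # integer: 7
parse_number("2.5", on_int=lambda n: print("integer:", n),
                    on_float=lambda x: print("float:", x))   # float: 2.5
```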

 

Taken together, those things will make the function calling discipline heavy, but very general. In fact, the calling discipline is probably the most central feature of designing for semantic adaptability, so that should be expected.

 

By the way, mine is not the only solution. John Shutt has also done a lot to rehabilitate fexprs: he has developed a credible model of computation called the $vau calculus, which uses fexprs in a different but disciplined way, and he has implemented a language based on it, called Kernel.

Here is Shutt’s thesis, on the $vau calculus.

This is a blog post by John Shutt on fexprs.

Here is the homepage of the language Kernel, which is a Lisp dialect based on John Shutt’s $vau calculus.