Room 101

A place to be (re)educated in Newspeak

Saturday, October 06, 2018

Reified Generics: The Search for the Cure

Many have argued that run time access to generic type information is very important. A very bitter debate about this ensued when we added generics to Java. The topic recurs whenever one designs a statically typed object oriented language. Should one reify generic types, or erase them? Java chose erasure, .Net and Dart chose reification, and all three solutions are in my mind unsatisfactory for various reasons, including but not limited to the handling of erasure or its presumed alter ego, reification.

Pedantic note 1: Throughout this post, I will use the terms erasure and reification as shorthand for erasure and reification of generic type information.

In a well designed object-oriented language, erasure and reification are not contradictory at all. This statement might bear some explanation, so here we go ...

A while back, I discussed the problem of shadow language constructs. I gave examples of shadow constructs such as Standard ML modules, traditional imports etc. Here is another: reified generics.

Generics introduce a form of shadow parameterization. Programming languages all have a perfectly good mechanism for declaring parameterized constructs and invoking them. You may have heard of it - it is widely known by the name function, and it goes back to the 17th century.

Pedantic note 2: Yes, programming language functions are usually not mathematical functions. The parameterization mechanism is however, essentially the same.

Generics introduce a different form of formal and actual parameters. There is a purpose to that: static analysis. However, when languages try to provide run time access to these parameters (i.e., reification of generics), we are creating a lobotomized twin of the existing runtime parameter passing system. A new, redundant, confusing and costly set of mechanisms is added to the run time in order to declare, pass, store and access these parameters.

The first guiding principle of any solution is to avoid shadow constructs. We already have parameterization support, let's use it.


Generics are functions from types to types, typically classes to classes.

Pedantic note 3: If your language is prototype based, generics might be considered functions between prototypes. If your language has primitive types - well, you're up the creek without a paddle anyway. There is no justification for primitive types in an object oriented language.

If classes are expressions, we can write reified generics as ordinary functions. Here's some sample pseudo-code. It's given in a quasi-standard syntax, so I don't waste time explaining Newspeak syntax.

public var List = (function(T) {
   return class {
       var hd, tl;
       class Link {
           public datum;
           public next;
       }
       public elementType() { return T }
       public add(e) {
           var tmp := Link.new();
           tmp.datum := e;
           tmp.next := hd;
           if (hd == null) { tl := tmp; }  // the first element added is also the last
           hd := tmp;
           return e;
       }
   }
}).memoize();

 Here's a summary of what the above means:
  • We declare a variable named List, initialized to a closure.
  • The closure takes a class T as a parameter and returns a class as its result.
  • The result class is specified via a class expression, which implements a linked list.
  • The class expression includes a nested class declaration, Link.
  • The method memoize() is called on the closure to, well, memoize it; memoize() returns a memoized version of its receiver.

Each call to List() returns a list class specialized to the actual parameter of List(). We can create a list of integers by saying

var lst := List(Integer).new();

and we can dynamically check what type of elements lst holds

lst.elementType(); // returns Integer, the class object reifying the integer type.

The reified element type is shared among all instances of a given list class, because it is stored in the closure surrounding the class. We avoid duplicating classes with the same parameters - this is just function memoization (and I assume a memoize() method on closures for this purpose). All this works independent of any static types. We are just using standard runtime mechanisms like closures, memoized functions, objects representing classes and yes, class expressions. Smalltalk had these, in essence, over 40 years ago.
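
For concreteness, here is a minimal sketch of such a memoize, written as a standalone function in TypeScript rather than as a method on closures (the Map-based cache is just one possible representation):

// A sketch of memoization for one-argument functions over classes.
// The cache is keyed by the argument itself (here, a class object),
// so List(Integer) always returns the very same class - and hence
// all integer lists share one reified element type.
function memoize<A, R>(f: (arg: A) => R): (arg: A) => R {
  const cache = new Map<A, R>();
  return (arg: A) => {
    if (!cache.has(arg)) {
      cache.set(arg, f(arg));
    }
    return cache.get(arg)!;
  };
}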

What if I don't have class expressions? Well, don't you know that everything should be an expression? Besides, this works fine if you have the ability to define classes reflectively like Smalltalk, or have properly defined nested classes like Newspeak, though it may be a bit more verbose and require more library support to be palatable.

Now let's add types. In the code below, type annotations are completely optional, and have absolutely no runtime effect. They are shown in blue.

public var List = (function(T : Class) {
   return class {
       var hd, tl : Link;
       class Link {
           public datum : T;
           public next : Link;
       }
       public elementType() : Class { return T }
       public add(e : T) : T {
           var tmp : Link := Link.new();
           tmp.datum := e;
           tmp.next := hd;
           if (hd == null) { tl := tmp; }  // the first element added is also the last
           hd := tmp;
           return e;
       }
   }
}).memoize();

You may notice one odd thing - we use the name of the formal parameter to the closure, T, as a type. This is justified by the following rule.

Rule 1: In any method or closure m, a formal parameter T of type Class (or any subtype thereof) implicitly defines a type variable T which is in scope in type expressions from the point T is declared to the end of the method/closure.

Next, we need to be able to use the information given by the declaration of List when we write types like List[Integer]. We use the following rule.

Rule 2: If e is an expression of function type with parameter(s) of type Class and return type Class, e's name can be used inside a type expression as a type function; an invocation of the type function e[T1, ..., Tn] denotes the type of the instances of the class returned by the value expression
e(T1, ..., Tn).

We can then write, and typecheck

var lst : List[Integer] := List(Integer).new();
var i : Integer := lst.add(3) + 4;

Oh, and we can still do this:

lst.elementType(); // returns Integer, the class object reifying the integer type.

Similarly, if you wish to create new instances of the incoming type parameters, you should be able to do that in the above regime - though you will have to confront the fact that different subtypes may have different constructors and plan around that explicitly - say, by defining a common construction interface for these types.
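
One way that might look, as a sketch (Instantiable, make() and Pool are names invented for this example, not part of any actual design):

// A common construction interface for type parameters whose instances
// we want to create at run time. Classes passed as parameters are
// expected to conform to it, sidestepping constructor mismatches.
interface Instantiable<T> {
  make(): T;
}

function Pool<T>(elementType: Instantiable<T>) {
  return {
    // Create a fresh instance via the agreed-upon interface, rather
    // than guessing at the class's own constructor.
    fresh(): T { return elementType.make(); }
  };
}

const IntBox: Instantiable<number> = { make: () => 0 };
const p = Pool(IntBox);
p.fresh();  // 0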

The beauty of this scheme is that no runtimes were harmed in the making of this reified generic type system. The type system is completely optional. And this is my point: reification was there all along. The typechecker simply needs to understand this fact and leverage it. The basic approach would work with any language with types reified as values, regardless of whether it has generics.

Interestingly, we now have reification of generics, and erasure, at the same time. The two are not in conflict. Reification is supported by the normal runtime mechanisms, independent of types, which are optional and always erased, carrying no runtime cost or semantics.


Reification of generics is now a choice for library implementers. If they think it is worthwhile to pay the costs, so that, for example, someone can cheaply test if a collection is a collection of integers or a collection of strings, they are free to do so.

If they don't want to pay a price for reification but still want to typecheck generics, they can do that too. Nothing prevents one from explicitly declaring type parameters (as opposed to the implicit ones derived from the class-valued value parameters used for reification).
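
TypeScript happens to illustrate the erased half of this choice directly, so here is a sketch of the contrast in that notation (the reified variant mirrors the pseudo-code above):

// Erased: T is purely static. Nothing is passed, stored or checkable
// at run time, and nothing is paid for at run time either.
class ErasedList<T> {
  private items: T[] = [];
  add(e: T): T { this.items.push(e); return e; }
}

// Reified: the element type travels as an ordinary value parameter,
// so a cheap runtime test becomes possible - at the cost of carrying it.
function reifiedList<T>(elementType: { new (): T }) {
  const items: T[] = [];
  return {
    elementType: () => elementType,
    add: (e: T) => { items.push(e); return e; }
  };
}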

Tangent, TL;DR: Now is the time to mention that traditional reification of generics - that is, runtime support for a shadow parameterization mechanism - is a disaster. It hurts performance in both space and time; just ask the brave VM engineers who struggled with these issues on the Dart VM. Mitigating that introduces enormous complexity into the runtime and requires a huge effort, which would be better spent doing something good and useful instead.

In systems designed to support multiple programming languages, reification brings a different problem. All languages must deal with the complexity of reification; worse, they must conform to the expectations of the reified generic type system of the "master language" (C# or Java, for example).

Consider .Net, the poster child of generic reification. Originally, .Net was intended to be a multi-language system, but dynamic language support there has suffered, in no small part due to reification. Visual Basic was a huge success until .Net came along and made it conform to C#. And what Iron Ruby/Python programmer ever enjoyed being forced to feed type arguments (whatever those might be) into a collection they are creating?

In contrast, the JVM was conceived as a monolingual system. Sun management deluded themselves that Java was the ultimate programming language (though yours truly did try to hint, ever so gently, that further progress in PL was at least conceivable). And yet, the JVM has become home to a wide variety of languages. This is due to multiple factors, invokedynamic not the least among them. But erasure plays a crucial and underappreciated role here as well. If not for erasure, the JVM would have the above-mentioned problems wrt dynamic languages, just like .Net.

Of course, generics have many issues that are independent of reification. The great difficulties with generics come up when they interact with subtyping. All the problems of variance, as well as inference, are rooted in that interaction. If you are happy with any existing approach, you should be able to incorporate it into the above reification strategy - but I am not aware of any pre-existing generic design that I would consider satisfactory.
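
The classic instance of that interaction, sketched in TypeScript (whose arrays are, unsoundly, covariant - which is what makes the example expressible):

// If List[Integer] were a subtype of List[Object], writing through
// the more general type would corrupt the more specific one.
const ints: number[] = [1, 2, 3];
const objs: unknown[] = ints;   // covariance admits this assignment
objs.push('not a number');      // ints now contains a string at run time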

I think I may now have a plausible approach to the typing issues, but the margins of this blog are too narrow to contain it.  A follow up post will either make it all clear, or confess that it hasn't worked out. The above comments on reification stand on their own in any case.

Monday, May 29, 2017

Dead Program's Society

In my last post I discussed live literate programming. I concluded the post by noting that the approach I had discussed had one glaringly obvious flaw. No one seems to have pointed out that flaw, so I was forced to point it out myself, in my Programming 17 keynote. The recording of the talk was a bit deficient, in that the camera operator focused on me rather than on the screen. Alas, I am not nearly as interesting as the screen; the screen was where all the action, demos etc. took place. To rectify that, this post will include a few video snippets that should be very close to the demos given in the talk.

But first, what of the glaring flaw? The flaw is that the mechanism I described for creating literate programs using Madoko and Ampleforth was not compositional. It allowed for embedding live widgets inside rich text, but only at one level. The widgets themselves would contain text, but there wasn't a way to include widgets in that embedded text recursively. If only the rich text editor could treat itself as a widget that it could self-embed, the system would compose to arbitrary depth.

Sadly, non-compositional text editors are the norm. Most widget sets have some sort of text widget, but that widget typically traffics in text only. Over twenty years ago, the Strongtalk system addressed this problem (and we were by no means the first to do so). I demo'ed this in my talk: here's a brief video recreating the main points of that demo.

The demo shows the embedding of live IDE components in both the ordinary text editors used to edit code, and in rich text generated from markup. One open question is when to use either approach, or how to integrate them. Another point I highlight is the failure of liveness in instance methods, something I've discussed in previous posts. Having raised that problem, I moved on to showing my approach to solving it by demoing Newspeak's exemplar mode.

The Strongtalk demo emphasized literate programming issues; the Newspeak demo focused on liveness. Ampleforth is aimed at addressing both of these, and so I went on to demonstrate Ampleforth, the system I described in my prior post. In the talk, I showed Ampleforth embedded in the presentation itself. Here's a recording of essentially the same thing.

Obviously, one cannot embed live programs in conventional presentation tools like PowerPoint or Keynote (or Prezi, for that matter). Instead, the presentation was built using Lounge, a system being developed by Bill Burdick. Bill shares a very similar vision for live literate programming, which he calls Illuminated programming. He uses Lounge to run Leisure, a purely functional, lazy language. Lounge certainly lacks the visual polish of commercial tools, but unlike those tools, its fundamental architecture is sound. A Lounge document (such as a presentation) is defined using emacs's Org mode format. Some adjustments were necessary to embed Ampleforth into Lounge - basically setting things up to run within an IFrame; these tweaks should make it easier to embed Ampleforth into other web based tools as well.

What of our glaring flaw? The flaw is not in Ampleforth itself. Rather, it lies in the text editors in which it is embedded and those which are embedded in the widgets we use. Fortunately, the DOM actually does better in this fundamental respect, and so on the web, this is fixable. Unfortunately, the web's basic text editing facility, content-editable text, is all but unusable. Lounge itself has a text editor that doesn't suffer from this weakness. In principle, we could access the Lounge editor from Newspeak, but in this case we embedded Ampleforth with its rudimentary editor (based on content-editable text). I'm planning to modify Newspeak's web environment to use CodeMirror as its default editor, which should help as well.

An important question that came up in the Q&A was whether liveness was actually desirable. After all, the code is supposed to be correct for all possible inputs, and relying on examples rather than abstract reasoning and proofs might be dangerous. My response was that liveness takes nothing away - you can go prove invariants to your heart's content. Liveness can be abused, but overall it makes people more productive by reducing the cycle between specifying intent and measuring actual results.

Dave Ungar later gave another, even better answer: that invariants themselves should be reified in the environment, so we can  remember them all, communicate them to others, ask them to monitor the program to see if they are violated etc.

To Life, Literacy and the Pursuit of Happiness!


Sunday, November 20, 2016

Illiterate Programming

I have long been a fan of literate programming, especially live literate programming. I wrote a brief note about the topic a while ago, but for various reasons did not distribute it. Recently, the early release of Eve (very nice work) has injected some new life in this area.  So I decided to belatedly post my musings on the subject.

Ironically, posting live programming content is difficult on many web publishing venues, such as this blog, or Medium.  So if you actually want the substance of this post, you'll have to follow this link.

Friday, November 28, 2014

A DSL with a View

In a previous post, I promised to explain how one might define UIs using an internal DSL. Using an internal DSL would allow us to capitalize on the full power of a general purpose programming language and avoid having to reinvent everything from if-statements to imports to inheritance.

Hopscotch, Newspeak's UI framework, employs such an internal DSL.  Hopscotch has been discussed in a paper and in a talk.  It will take more than one post to describe Hopscotch; here we will focus on its DSL, which is based on the notion of fragment combinators.

Fragments describe pieces of the UI; they may describe individual widgets, or views constructed from multiple pieces, each of which is in turn a fragment. A fragment combinator is then a method that produces a fragment, possibly from other fragments.

One of the simplest fragment combinators would be label:, which takes a string. The expression

label: 'Hello Brave New World' 

would be used to put up the string "Hello Brave New World" on the screen.  Another example might be

button: 'Press me' action: [shrink]

which will display a button



that will call the method shrink when invoked.  The combinator button:action: takes two arguments - the first being a string that serves as the label of the button, and the second being a closure that defines the action taken when the button is pressed. Closures in Newspeak are delimited with square brackets, and need not provide a parameter list if no parameters are required.  This is the most lightweight syntax for literal functions you will find. Along with the method invocation syntax, where method names embed colons to indicate where arguments should be placed, this gives a very readable notation for many DSLs.

Further examples:

row: {
  button: 'Press me' action: [shrink]. 
  button: 'No, press me' action: [grow].
}

The row: combinator takes a tuple of fragments (tuples in Newspeak are delimited by curly braces, and their elements are separated by dots) as its argument and lays out the elements of the tuple horizontally:


The column: combinator is similar, except that it lays things out vertically

column: {
  button: 'Press me' action: [shrink]. 
  button: 'No, press me' action: [grow]
}

produces:



In mainstream syntax (Dart, in this case) the example could be written as

column(
  [button('Press me', ()=> shrink), 
   button('No, press me', () => grow)]
 )

The Newspeak syntax is remarkably readable though, and its advantage over mainstream notation becomes more pronounced as examples grow. Of course none of this works at all if your language doesn't support both closures and literal lists/arrays.

So far, this is very standard stuff, much like building a tree of views in most systems.  In most UI frameworks, we'd write something like

new Column([new Button('Press me', ()=> shrink), 
            new Button('No, press me', () => grow)]
    )

which is less readable and more verbose.  Since allocating an instance is more verbose than calling a method in most languages, the fact that fragment combinators are represented via methods, which act as factories for various kinds of views, helps make things more concise. It's a simple trick, but worth noting.

The advantage of thinking in terms of fragments becomes clearer once you consider  less obvious fragments such as draggable:subject:image:, which takes a fragment and allows it to be dragged and dropped. Along with the fragment, it takes a subject (what you might call a controller) and an image to use during the drag. Making drag-and-drop a combinator means everything is potentially draggable. Conventional designs would make this a special attribute of certain things only, losing the compositionality that combinators provide.

Presenters are a specific kind of fragment that is especially important. Presenters provide user-defined views in the Model-View-Controller sense.  To define your own view, you subclass Presenter. Because presenters are fragments, any user defined view can be part of a predefined compound fragment like column: or draggable:subject:image:.

A presenter has a method named definition, which computes a fragment tree used to render the presenter. The fragment DSL we discussed is used inside of presenters. All the combinators are methods of Presenter, so they are inherited by any class implementing a view, and are therefore in scope inside a presenter class.  In particular, combinators are available in the definition method.

To see how all this works, imagine implementing the well known todoMVC example.  We'll define a subclass of Presenter called todoMVCPresenter  to represent the todoMVC UI.  The UI is supposed to present a list of todo items. It consists of a column with:


  1. A header in large text saying "todos" 
  2. An input  zone where new todos are added.
  3. A list of todos, which can be filtered using controls given in (4). 
  4. A footer, that is empty if there are no todos at all. It materializes as a set of controls once there are todos.

We can translate these requirements directly:

definition = (
     ^column: {
      (label: 'todos') hugeFont.
      inputZone.
      todoList.
      footer.
     }
)

More notes on syntax:  methods are defined by following their header with an equal sign and a body delimited by parentheses; ^ signifies return; method invocations that take no parameters list no arguments, e.g., inputZone, not inputZone(); chained method invocation does not require a dot - so it's 
(label: 'todos') hugeFont rather than label('todos').hugeFont.

We haven't yet specified what inputZone, todoList and footer do. They are all going to be defined as methods of todoMVCPresenter.  We can define the UI in such a top down fashion  because we are working with a language that supports procedural abstraction. You get it for free in an internal DSL.  

We can then define the pieces, such as

footer = (
 ^subject todos isEmpty 
 ifTrue: [nothing]
 ifFalse: [controls]
)

Here, we use conditionals to determine what view to produce, depending on the state of the application.  The application logic is embodied in the controller, subject, which we query for the todos list. The nothing combinator does exactly what it says; controls is a method we would have to define in todoMVCPresenter, detailing what should appear in the footer if it is visible.  Again, the code corresponds closely to the natural language description in bullet (4) above.

To elaborate todoList we'll need a loop or recursion or something of that nature; in fact, we'll use the higher order method collect:, which is Newspeak's version of map

todoList = (^list:[subject todos collect: [:todo | todo presenter]])

The list: combinator packages a list of fragments into a list view. We pass list: a closure that computes the list of todo items.

Aside: We could have passed it the list itself, computed eagerly. Often, fragment combinators take either suitable fragment(s) or a closure that would compute them.

To compute the fragment list we compute a presenter for each individual todo item by mapping over the original list of todos.
The closure we pass to collect: takes a single parameter, todo. Formal parameters to closures are introduced prefixed by a colon, and separated from the closure body by a vertical bar.

What are the odds that higher order functions (HOFs) were part of your external DSL? Even if they were, one would have to define a suite of useful  HOFs. One should factor the cost of defining useful libraries into any comparison of internal and external DSLs.

The Hopscotch DSL has other potential advantages. Because fragment combinators are methods, you can override them to customize their behavior. We believe we can leverage this to customize the appearance of things, a bit like CSS. To make this systematic, we expect to define whole groups of overrides in mixins. I'm not showing examples where Hopscotch is used this way because we have done very little in that space (and this post is already too long anyway). And we haven't spoken about the other advantages of Hopscotch, such as its navigation model, lack of modality and very clean embodiment of MVC.
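
Still, to make the combinator-override idea concrete, here is a sketch in mainstream syntax, in the spirit of the Dart translation above (Fragment, ButtonFragment and the style field are invented stand-ins for Hopscotch's actual classes):

// Combinators are ordinary methods on Presenter, hence overridable.
type Style = { color?: string };

class Fragment {
  style: Style = {};
}

class ButtonFragment extends Fragment {
  constructor(public label: string, public action: () => void) { super(); }
}

class Presenter {
  button(label: string, action: () => void): Fragment {
    return new ButtonFragment(label, action);
  }
}

// Overriding the combinator restyles every button this presenter
// builds, without touching any of the code that calls button().
class RedButtonPresenter extends Presenter {
  button(label: string, action: () => void): Fragment {
    const b = super.button(label, action);
    b.style = { color: 'red' };
    return b;
  }
}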

Ok, now it's time for the caveats.


  1. First and foremost,  Hopscotch currently lacks a good story for reactive binding.  In our example, that means you'd have to put explicit logic to refresh the display in some of the controls.  This makes things less declarative and harder to use. We always planned to solve that problem; I hope to address it in a later post.  But the high order bit is that we have code in a general purpose language that gives a very readable, declarative description of the UI. It corresponds directly to the natural language description of the requirements.
  2. Hopscotch lacks functionality needed to support richer UIs, but the design is naturally extensible: one adds more fragment combinators.
  3. We also want more ports, especially to mobile/touch platforms. However, Hopscotch has already proven quite portable: it runs on native Win32, on Squeak's Morphic and on HTML (the latter port is still partial, but that is just an issue of engineering resources).   More ports would help us deal with another controversial goal - defining a UI platform that works well across OS's and devices.


Regardless of the current engineering limitations, the point here is simply to show the advantages of a well-designed internal DSL for UI.  The lessons of Newspeak and Hopscotch apply to other languages and systems, albeit in an attenuated fashion.

Monday, September 29, 2014

A DOMain of Shadows

One of the advantages of an internal DSL  over an external one is that you can leverage the full power of a general purpose programming language. If you create an external DSL, you may need to reinvent a slew of mechanisms that a good general purpose language would have provided you: things like modularity, inheritance, control flow and procedural abstraction.

In practice, it is unlikely that the designer of the DSL has the resources or the expertise to reinvent and reimplement all these, so the DSL is likely to be somewhat lobotomized. It may lack the facilities above entirely, or it may have very restricted versions of some of them. These restricted versions are mere shadows of the real thing; you could say that the DSL designer has created a shadow world.
I discussed this phenomenon as part of a talk I gave at Onward in 2013. This post focuses on a small part of that talk.

Here are three examples that might not always be thought of as DSLs at all, but definitely introduce a shadow world.

Shadow World 1: The module system of Standard ML.

ML modules contain type definitions. To avoid the undecidable horrors of a type of types, ML is stratified.  There is the stratum of values, which is essentially a sugared lambda calculus. Then there is the stratum of modules and types. Modules are called structures, and are just records of values and types. They are really shadow records, because at this level, by design, you can no longer perform general purpose computation. Of course, being a statically typed language, one wants to describe the types of structures. ML defines signatures for this purpose. These are shadow record types. You cannot use them to describe the types of ordinary variables.

It turns out one still wants to abstract over structures, much as one would over ordinary values. This is necessary when one wants to define parameterized modules.  However, you can’t do that with ordinary functions. ML addresses this by introducing functors, which are shadow functions. Functors can take and return structures, typed as signatures. However, functors cannot take or return functors, nor can they be recursive, directly or indirectly (otherwise we’d be back to the potentially non-terminating compiler the designers of ML were trying so hard to avoid in the first place).

This means that modules can never be mutually recursive, which is unfortunate since this turns out to be a primary requirement for modularity. It isn’t a coincidence that we use circuits for electrical systems and communication systems, to name two prominent examples.  

It also means that we can’t use the power of higher order functions to structure our modules. Given that the whole language is predicated on higher order functions as the main structuring device, this is oddly ironic.

There is a lot of published research on overcoming these limitations. There are papers about supporting restricted forms of mutual recursion among ML modules.  There are papers about allowing higher-order functors. There are papers about combining them. These papers are extremely ingenious and the people who wrote them are absolutely brilliant. But these papers are also mind-bogglingly complex.  

I believe it would be much better to simply treat modules as ordinary values. Then, either forego types as module elements entirely (as in Newspeak)  or live with the potential of an infinite loop in the compiler. As a practical matter, you can set a time or depth limit in the compiler rather than insist on decidability.  I see this as a pretty clear cut case for first class values rather than shadow worlds.
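
As a sketch of the first-class alternative (in TypeScript notation; the record-of-functions representation and all the names here are mine, for illustration only):

// A module as an ordinary record value, and a "functor" as an ordinary
// function over such records. Higher-order composition then comes for
// free from the host language, with no shadow stratum in sight.
type OrderedModule<T> = { compare: (a: T, b: T) => number };
type SetModule<T> = { empty: T[]; insert: (s: T[], x: T) => T[] };

// An ordinary function plays the role of an ML functor.
function MakeSet<T>(ord: OrderedModule<T>): SetModule<T> {
  return {
    empty: [],
    insert: (s, x) =>
      s.some(y => ord.compare(x, y) === 0) ? s : [...s, x],
  };
}

const IntOrder: OrderedModule<number> = { compare: (a, b) => a - b };
const IntSet = MakeSet(IntOrder);  // no shadow functor required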

Shadow World 2: Polymer

Polymer is an emerging web standard that aims to bring a modicum of solace to those poor mistreated souls known as web programmers. In particular, it aims to allow them to use component based UIs in a standardized way.

In the Polymer world, one can follow a clean MVC style separation for views from controllers. The views are defined in HTML, while the controllers are defined in an actual programming language - typically Javascript, but one can also use Dart and there will no doubt be others. All this represents a big step forward for HTML, but it remains deeply unsatisfactory from a programming language viewpoint.

The thing is, you can’t really write arbitrary views in HTML. For example, maybe your view has to decide whether to show a UI element based on program logic or state. Hence you need a conditional construct. You may have heard of these: things like if statements or the ?: operator. So we have to add shadow conditionals.

<template if="{{usingForm}}">

is how you’d express  

if (usingForm) someComponent;

In a world where programmers cry havoc over having to type a semicolon, it’s interesting how people accept this. However, it isn’t the verbose, noisy syntax that is the main issue.

The conditional construct doesn’t come with an else or elsif clause, nor is there a switch or case. So if you have a series of alternatives such as

if (cond1) {ui1}
else if (cond2) {ui2}
        else {ui3}

You have to write


<template if = "{{cond1}}">
<ui1>
</template>
<template if = "{{cond2 && !cond1}}">
<ui2>
</template>
<template if = "{{!cond1 && !cond2}}">
<ui3>
</template>


A UI might have to display a varying number of elements, depending on the size of a data structure in the underlying program. Maybe it needs to repeat the display of a row in a database N times, depending on the amount of data. We use loops for this in real programming. So we now need shadow loops.


<template repeat = "{{task in current}}">

There’s also a for loop


<template repeat= "{{ foo, i in foos }}">
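
For contrast, the real-language counterpart is just an ordinary map (renderRow and the task list are invented for illustration):

// The same repetition, expressed with an ordinary higher-order function.
declare function renderRow(task: string): HTMLElement;

const tasks = ['buy milk', 'write post'];
const rows = tasks.map(renderRow);                  // the repeat template
const indexed = tasks.map((t, i) => `${i}: ${t}`);  // the foo, i variant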

Of course one needs to access the underlying data from the controller or model, and so we need a way to reference variables. So we have shadow variables like

{{usingForm}} 

and shadow property access.

{{current.length}}

Given that we are building components, we need to use components built by others, and the conventional solution to this is imports. And so we add shadow imports.


<link rel = "import" href = "...">

UI components are a classic use case for inheritance, and Polymer components can be derived from each other, starting with the predefined elements of the DOM, via shadow inheritance.  It is only a matter of time before someone realizes they would like to reuse properties from other components in different hierarchies via shadow mixins.

By now we’ve defined a whole shadow language, represented as a series of ad hoc constructions embedded in string-valued attributes of HTML.  A key strength of HTML is supposed to be ease-of-use for non-programmers (this is often described by the meaningless phrase declarative). Once you have added all this machinery, you’ve lost that alleged ease of use - but you don’t have a real programming language either. 

Shadow World 3: Imports

Imports themselves are a kind of shadow language even in a real programming language. Of course imports have other flaws, as I’ve discussed  here and here, but that is not my focus today. Whenever you have imports, you find demands for conditional imports, for an aliasing mechanism (import-as) for a form of iteration (wildcards).  All these mechanisms already exist in the underlying language and yet they are typically unavailable because imports are second-class constructs.
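
A sketch of the contrast, with modules as plain values (loadModule and its string keys are invented stand-ins for whatever a first-class system would provide):

// "Conditional import" becomes an if/ternary; "import-as" a variable
// binding; "wildcard import" ordinary iteration. No shadow machinery.
declare function loadModule(name: string): Record<string, unknown>;
declare const useMock: boolean;

const network = useMock ? loadModule('mockNetwork') : loadModule('network');
const m = loadModule('aRatherLongModuleName');  // aliasing, via const
for (const [name, value] of Object.entries(loadModule('utils'))) {
  // bind or inspect each exported member as needed
}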

Beyond Criticism

It is very easy to criticize other people’s work. To quote Mark Twain:

I believe that the trade of critic, in literature, music, and the drama, is the most degraded of all trades, and that it has no real value 

So I had better offer some constructive alternative to these shadow languages. With respect to modularity, Newspeak is my answer. With respect to UI, something along the lines of the Hopscotch UI framework is how I’d like to tackle the problem. In that area, we still have significant work to do on data binding, which is one of the greatest strengths of polymer. In any case, I plan to devote a separate post to show how one can build an internal DSL for UI inside a clean programming language. 

The point of this post is to highlight the inherent cost of going the shadow route. Shadow worlds come into being in various ways. One way is when we introduce second class constructs because we are reluctant to face up to the price of making something a real value. This is the case in the module and import scenarios above. Another way is when one defines an external DSL (as in the HTML/Polymer example). In all these cases, one will always find that the shadows are lacking.


Let’s try and do better.