Existential Types are the flip side of generics

Generic types, as can now be seen in all the major programming languages, have a flip side that has yet to be widely appreciated: existential types.

Values of a generic type may not be inspected within a generic function (or class): they can be kept in variables and they can be passed to other functions (provided those too have been supplied to the generic function), but otherwise they are opaque. Conversely, when a generic function (or class) is used, the actual type binding for the generic must be supplied; that type may itself be generic, in which case the enclosing entity must also be generic.
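In Java, for example, a generic method can keep and pass values of its type parameter but cannot invoke anything specific to it. A minimal sketch (the method name is mine):

```java
import java.util.ArrayList;
import java.util.List;

public class Opaque {
    // T is opaque here: we can store T values and hand them around,
    // but we cannot invoke any T-specific operation on them.
    public static <T> List<T> duplicate(T value) {
        List<T> pair = new ArrayList<>();
        pair.add(value);   // keeping it in a variable/collection: allowed
        pair.add(value);   // passing it along: allowed
        // value.length() would not compile: T's realization is unknown
        return pair;
    }
}
```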

Existential types are often motivated by modules. A module can be seen as equivalent to a record containing its functions, except that modules typically encapsulate types as well. Abstract data types are a closely related topic that also connects naturally to existential types (there is an old but still very relevant and readable article on the subject, Abstract types have existential type).

One way that I like to visualize existential types is as a box with a type inside it. As a programmer using the box, you know there is a type in there: it has a name and a set of functions that reference it, but you know nothing about its actual realization.


Inside the box, you know what type foo is, and it can be any type so long as the other functions in the contract are implemented appropriately.

Outside the box you know nothing about the foo type. What this means is that any program text that uses the type from the box is strictly limited in what it can do: it is limited essentially to passing foo values around and to invoking other functions from the same box.
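In Java terms the box is roughly an interface: clients see a name and a set of operations, but never the representation. A sketch, with invented names:

```java
// The 'box': outside code knows a Counter exists and what can be
// done with one, but not how its state is realized.
public interface Counter {
    Counter increment();
    int read();

    static Counter fresh() { return new Impl(0); }

    // Inside the box: the realization happens to be a plain int,
    // but no client may depend on that.
    class Impl implements Counter {
        private final int n;
        Impl(int n) { this.n = n; }
        public Counter increment() { return new Impl(n + 1); }
        public int read() { return n; }
    }
}
```

Outside code can only call fresh, increment, and read; it can never observe that the state is an int.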

Thus, existentials and universals have complementary roles in programming: generics are used when the entity being defined must be general purpose; existentials should be used when the uses of the entity must remain general.

An excellent example of where this might be useful is in frameworks such as testing frameworks.


Most test frameworks are pretty lame: you define a bunch of void functions with embedded assertions. A full test framework with existentials could be designed to test arbitrary functions and/or classes. All that the programmer would need to do to use such a framework to test her programs would be to invoke it!
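A sketch of the idea in Java, with invented names: the runner treats each check existentially, knowing only that it can be evaluated, and nothing about what it tests:

```java
import java.util.List;
import java.util.function.Supplier;

public class Runner {
    // The framework sees each test existentially: it knows only that
    // a check yields a boolean, not what the check is about.
    public static int failures(List<Supplier<Boolean>> checks) {
        int failed = 0;
        for (Supplier<Boolean> check : checks)
            if (!check.get()) failed++;
        return failed;
    }
}
```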

A final note about existentials: they represent a neat way of characterizing system or built-in types. For example, the string type can be viewed as an existentially quantified type whose implementation belongs to the compiler. The difference here is that there is typically only one privileged instantiation of string; however, it would be very convenient for a language system to be able to support multiple kinds of string with different implementation characteristics.

Concept Oriented Markup

I have long been frustrated with all the different text markup languages and word processors that I have used. There are many reasons for this; but the biggest issue is that markups (including very powerful ones like TeX) are not targeted at the kind of stuff I write.

Nowadays, it seems archaic to still be thinking in terms of sections and chapters. The world is linked and that applies to the kind of technical writing that I do.

I believe that the issue is fundamental. A concept like “section” is inherently about the structure of a document. But, what I want to focus on are concepts like “example”, “definition”, and “function type”.

A second problem is that, in a complex environment, the range of documentation that is available to an individual reader is actually composed of multiple sources. Javadoc exemplifies this: an individual library may be documented using Javadoc into a single HTML tree. However, most programmers require access to multiple libraries, each with its own Javadoc. What would be nice would be seamless access to all relevant documentation from a single portal.

Hence the introduction of a new kind of document markup language: concept oriented markup. This is (will be) structured around a series of technologies:

  • The fundamental unit of document is the concept.
  • There is an ontology of concept types; my current list includes topic, theme, example, code, api, see, and definition, amongst others.
  • A graph is a network of concepts that are related for any reason. A graph is typically assembled from a combination of markup processing and source text processing.
  • A textual markup language that allows concepts to be written by hand.
  • A series of rendering engines that process graphs into HTML/PDF/LaTeX etc.
  • Software tools that make editing graphs easier.

So far, I have designed a simple markup language together with the meta-architecture for representing concepts as a graph. I don’t claim that this markup is ‘pretty’; it is not simple enough yet.

Here is an example:

@id topic
A {@link topic} is a description of a topic of interest.
All topics have an {@link id}. If one is not given explicitly, then it will be
generated automatically.
@end topic

As might be obvious, this is from a cm document that describes the markup language for concept oriented markup.

Comments Should be Meaningless

This is something of a counterintuitive idea:

Comments should be meaningless

What, I hear you ask, are you talking about? Comments should communicate to the reader! At least that is the received conventional wisdom handed down over the last few centuries (decades at least).

Well, certainly, if you are programming in Assembler, or C, then yes, comments should convey meaning, because the programming language cannot.

So, conversely, as a comment on the programming language itself: any time the programmer feels compelled to write a meaningful comment, it is because the language is not able to convey the intent of the programmer.

I have already noticed that I write far fewer comments in my Java programs than in my C programs.  That is because Java is able to capture more of my meaning and comments would be superfluous.

So, if a language were able to capture all of my intentions, I would never need to write a comment.

Hence the title of this blog.

Robotic Wisdom

It seems to me that one of the basic questions that haunt AI researchers is ‘what have we missed?’ Assuming that the goal of AI is to create intelligence with similar performance to natural intelligence; what are the key ingredients to such a capability?

There is an old saw

It takes 10,000 hours to master a skill

There is a lot of truth to that; it effectively amounts to 10 years of more-or-less full-time focus. This has been demonstrated for many fields of activity, from learning an instrument, to learning a language, to learning to program.

But it does not take 10,000 hours to figure out if it is raining outside, and to decide to carry an umbrella. What is the difference?

One informal way of distinguishing the two forms of learning is to categorize one as ‘muscle memory’ and the other as ‘declarative memory’. Typically, skills take a lot of practice to acquire, whereas declarative learning is instant. Skills are more permanent too: you tend not to forget a skill; but it is easy to forget where one left one’s keys.

Another way of viewing the difference between skills and declarative knowledge is that skills are oriented towards action and declarative knowledge is oriented towards reflection and analysis. Deciding to carry an umbrella depends on being able to ruminate on the observed world; it has little to do with skills (normally).

Today, most machine learning is based on techniques that have more in common with skill learning than with declarative learning.

Anyway, let us assume that both forms of learning are important. Is that enough for high performance? One factor that is obviously also necessary is action.

The issues with action are complementary to those of learning: there are many possible actions that an agent can perform but most of those are either useless or have negative consequences. Early robots did not perform very well because researchers believed that the same mechanisms needed to plan were also needed to act. That is the moral equivalent of planning the movements of one’s leg muscles in order to walk to the front door.

I think that it may be useful to think of emotions as a mechanism that helps animals and people to act. (This is not an original idea of course.) In this view, emotions drive goals; which in turn drive the actions that animals and people perform. The connection is direct; in much the same way that skills are directly encoded in muscle.

For our purposes the exact basis of emotions is not relevant. However, the field of affective computing has used a bio-chemical response/decay model for modeling how emotions arise and fade.

What, then, is wisdom? If emotions provide a way of rapidly and fluidly motivating action, the declarative dimension accounts for reflection on emotions: in the same way that declarative memory allows for reflection on perception.

It seems to me that, if this is right, we should be able to build a wise robot: by allowing it to reflect on its emotions. For example, a robot might decide that acting too quickly when it encounters a threat situation may not always be conducive to its own survival; much in the same way that it concludes that it is raining when it gets wet.

A Tale of Three Loops

This one has been cooking for a very long time. Like many professional programmers I have often wondered what it is about programming that is just hard. Too hard, in fact.

My intuition has led me in the direction of Turing completeness: as soon as a language becomes Turing complete, it also gathers to itself a level of complexity and difficulty that results in crossed eyes. Still, it has been difficult to pinpoint exactly what is going on.

A Simple Loop

Imagine that your task is to add up a list of numbers. Simple enough.

If you are a hard-boiled programmer, then you will write a loop that looks a bit like:

int total = 0;
for (Integer ix : table)
  total += ix;

Simple, but full of pitfalls. For one thing we have a lot of extra detail in this code that represents additional commitment:

  • We have had to fix on the type of the number being totaled.
  • We have had to know about Java’s boxed vs. unboxed types.
  • We have had to sequentialize the process of adding up the numbers.

While one loop is not going to hurt anyone, real code is stuffed with them. There have been occasions (not many, thankfully) where I have written loops nested to a depth of 7 or 8. Such code really does become impossible to follow, let alone to write.

A Functional Loop

In a functional programming language, there are two ways to accomplish the task. The apprentice’s approach might be to write a recursion:

total(nil) is 0;
total(cons(E,L)) is total(L)+E;

While workmanlike, in many instances a smarter way is to use a fold:


Apart from being more concise, the fold is higher-level: it abstracts away the machinery of the loop itself, and it is also independent of the representation of the collection of numbers.

(That is assuming that you have a functional language with overloading).
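For comparison, the fold version can be sketched in Java itself, where reduce plays the role of fold:

```java
import java.util.List;

public class FoldTotal {
    // reduce is Java's fold: the looping machinery is abstracted away,
    // and the code is independent of the collection's representation.
    public static int total(List<Integer> table) {
        return table.stream().reduce(0, Integer::sum);
    }
}
```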

What is really interesting in relation to my original thesis is that the fold expression is closer to a problem-solving representation of the task.

However, ask any functional programmer about their use of fold and you will likely encounter a fairly procedural interpretation of how it works and how it should be used. (Something about how it successively applies the + function to each element of the list accumulating the answer as it goes.)

I.e., fold is better than for; but is not good enough.

A Totalization Query

My third version of this would be familiar to any SQL programmer:

total X where X in L

I.e., if you want to add up the elements of the list, then say so!

This query — which is based on notation in the Star programming language — declaratively states what is required. Although its form is a little too specific, a more realistic variant would be:

fold X with (+) where X in L

I argue that either of these queries is better than the previous solutions because it comes closest to the original description and makes the fewest assumptions about the nature of the collection or the arithmetic.

It is also much closer to a problem oriented way of thinking about the world. I would argue that more people — especially non-programmers — would be able to follow and even to write such a query than either of the earlier formulations.

Why is that?

The Homunculus

Traditional programming is often taught in terms of programs being sequences of steps that must be followed. What does that imply for the programmer? It means that the programmer has to be able to imagine what it is like to be a computer following instructions.

It is like imagining a little person — a homunculus — in the machine that is listening to your instructions and following them literally. You the programmer have to imagine yourself in the position of the homunculus if you want to write effective programs.

Not everyone finds such feats of imagination easy. It is certainly often tedious to do so.

The true role of domain specific languages

It is easy to be confused by the term domain specific language. It sounds like a fancy term for jargon. It is often interpreted to mean some form of specialized language. I would like to explore another role for them: as vehicles for policy statements.

In mathematics there are many examples of instances where it is easier to attack a problem by solving a more general, more uniform, problem and then specializing the result to get the desired answer.

It is very similar in programming: most programs take the form of a general mechanism paired with a policy that controls the machine. Taken seriously, you can see this effect down to the smallest example:
fact(n) where n>0 is n*fact(n-1);
fact(0) is 1

is a general machine for computing factorial; and the expression fact(10) is a policy ‘assertion’ that specifies which particular use of the factorial machine is intended.
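The same split can be seen in Java: the method is the mechanism, and each call site is a policy statement. A sketch:

```java
public class Factorial {
    // The mechanism: a general machine for computing any factorial.
    public static long fact(long n) {
        return n > 0 ? n * fact(n - 1) : 1;
    }
}
```

A call such as Factorial.fact(10) is then the policy: it selects the particular use of the machine that is intended.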

One important aspect of policies is that they need to be intelligible to the owner of the machine: unlike the machine itself which only needs to be trusted by the owner.

So, one critical role for a DSL is to be the policy language for the user of a mechanism.

Viewed in this light, one reaches some interesting conclusions. In particular, DSLs should be ubiquitous, not rare; DSLs support the role that abstractions play in programming by layering an appropriate syntax on top of the expression of the abstraction. It also means that programming languages should be designed to make it easy to construct and use DSLs within systems as well as externally.

A simple example: the query notation in Star, as well as in formalisms such as LINQ, may be better viewed as simple DSLs where the user is the programmer. The difference between these and more traditional DSLs is that the DSL expressions are embedded in the program rather than being separated from the code.

I think that embracing the DSL in this way should make it easier for a programming language to survive the evolution of programming itself. With an effective DSL mechanism, a language ‘extension’ encoding a new language concept (for example, queries over C# or objects over C) can be done without invalidating the existing language. (The mechanisms in Star go further: it is possible to construct a DSL in Star that either augments the base language or even replaces it. We have used both approaches.)

It also explains why LISP’s macro facilities have allowed it to survive today more-or-less unchanged (nearly 60 years after being invented) whereas languages like Java and C++ have had to undergo painful transitions in order to stay relevant.

Single Inheritance and Other Modeling Conundrums

Sometimes a restriction in a programming language makes sense and no sense at all — all at the same time.

Modeling the real world

Think about the Java restrictions on the modeling of classes: a given class can only have one supertype and a given object’s class is fixed for its lifetime.

From a programming language perspective these restrictions make a good deal of sense: all kinds of ambiguities are possible with multiple inheritance, and the very idea of allowing an object to be ‘rebased’ fills the compiler writer with horror. (Though Smalltalk allows it.)

The problem is that, in real life, these things do happen. A ‘natural’ domain model is quite likely to come up with situations involving multiple inheritance and dynamic rebasing.

For example, a person can go from being a customer, to an employee, to a manager to being retired. A given person might be both an employee and a customer simultaneously (someone else may not be).

Given a domain as flexible as this one, one is forced to ‘simulate’ it in Java. I.e., one cannot use a Java class called Customer to represent a customer, because Java’s idea of class is not rich enough to model the domain.

At the same time, the modeling is not random and a good architect will try to ensure some discipline in the application.

The logical conclusion is that large applications tend to contain a variant of ‘the type system’ where the domain model is represented. Java is used to implement the meta model, not the domain model.

This dynamic type system may or may not be based on a well founded model (such as that of description logic); but in any case the programming language is not helping as much as it should.

What is a language to do?

On the face of it, it seems that the logical thing is to make a programming language’s type system sufficiently flexible to actually model real world scenarios.

However, there is a difficulty with that: it is not the case that any one modeling system is best suited to all applications. In addition, a modeling system that is well-suited to modeling domain knowledge is not guaranteed to be equally well suited to regular programming tasks.

A better approach is to embrace diversity. A combination of DSLs and libraries enable one to build out a particular modeling system and to support the programmer with direct appropriate syntax.

For example, this pseudo-code example:

  customer isa person 
  customer has account
  person has name
  C instance of customer
  if overdrawn(C's account) then

shows one example of a modeled customer. The ‘actual’ code implied by this fragment might look like:

  C : object;
  if overdrawn(findAttribute(C,"account")) then 

The principal point here is that the syntactic sugar offered by a DSL is not mere syntactic sugar: it can help the application programmer to use a language that is appropriate for her needs while at the same time enforcing sanity checks implied by the particular modeling language.

At the same time, there is no implied permanent commitment to one particular way of modeling with the host language.
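The kind of application-level ‘dynamic type system’ described above can be sketched in Java as a small metamodel (the names here are invented): the host language implements the metamodel, kinds such as customer are data, and so an object can acquire and shed kinds over its lifetime:

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of a domain metamodel: Java implements the metamodel,
// while domain kinds like "customer" live as data, so an object can
// gain and lose kinds over its lifetime and hold several at once.
public class Model {
    private final Map<String, Object> attributes = new HashMap<>();
    private final Map<String, Boolean> kinds = new HashMap<>();

    public void becomes(String kind) { kinds.put(kind, true); }
    public boolean isa(String kind) { return kinds.getOrDefault(kind, false); }
    public void set(String attr, Object value) { attributes.put(attr, value); }
    public Object findAttribute(String attr) { return attributes.get(attr); }
}
```

Here a person can be both a customer and an employee at once, and later retire, without any change to the Java class involved.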