
A Tale of Three Loops

This one has been cooking for a very long time. Like many professional programmers, I have often wondered what it is about programming that is just hard. Too hard, in fact.

My intuition has led me in the direction of Turing completeness: as soon as a language becomes Turing complete, it also gathers to itself a level of complexity and difficulty that results in crossed eyes. Still, it has been difficult to pinpoint exactly what is going on.

A Simple Loop


Imagine that your task is to add up a list of numbers. Simple enough.

If you are a hard-boiled programmer, then you will write a loop that looks a bit like:

int total = 0;
for (Integer ix : table) {
    total += ix;
}

Simple, but full of pitfalls. For one thing, we have a lot of extra detail in this code that represents additional commitment:

  • We have had to fix on the type of the number being totaled (the sketch after this list makes this concrete).

  • We have had to know about Java's boxed vs. unboxed types.

  • We have had to sequentialize the process of adding up the numbers.
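
To make the first of those commitments concrete, here is a hedged sketch of the same loop if the numbers turned out to be doubles rather than integers: the loop itself has to be rewritten even though the task has not changed (the collection name table is simply carried over from the snippet above).

double total = 0.0;
for (Double dx : table) {
    total += dx;
}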


While one loop is not going to hurt anyone, real code is stuffed with them. There have been occasions (not many, thankfully) where I have written loops nested to a depth of 7 or 8. Such code really does become impossible to follow, let alone to write.

A Functional Loop


In a functional programming language, there are two ways to accomplish the task. The apprentice's approach might be to write a recursion:

total(nil) is 0;
total(cons(E,L)) is total(L)+E;

While workmanlike, in many instances a smarter way is to use a fold:

fold((+),0,L)

Apart from being more concise, the fold is higher-level: it abstracts away the machinery of the loop itself, and it is also independent of the representation of the collection of numbers.

(That is assuming that you have a functional language with overloading).
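
For readers who live in Java rather than a functional language, roughly the same fold can be sketched with the streams library. This is only a loose analogue of the fold expression above, and the list name numbers is an assumption:

import java.util.List;

public class FoldTotal {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5);   // assumed sample data

        // fold((+), 0, numbers): start from 0 and combine each element with +
        int total = numbers.stream().reduce(0, Integer::sum);

        System.out.println(total);   // prints 15
    }
}

The reduce call names only the combining operation and the starting value; the iteration order and the accumulator variable are no longer the programmer's direct concern.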

What is really interesting in relation to my original thesis is that the fold expression is closer to a problem-solving representation of the task.

However, ask any functional programmer about their use of fold and you will likely encounter a fairly procedural interpretation of how it works and how it should be used. (Something about how it successively applies the + function to each element of the list, accumulating the answer as it goes.)

I.e., fold is better than for, but it is not good enough.

A Totalization Query


My third version of this would be familiar to any SQL programmer:


total X where X in L


I.e., if you want to add up the elements of the list, then say so!

This query, which is based on notation in the Star programming language, declaratively states what is required. Although its form is a little too specific, a more realistic variant would be:


fold X with (+) where X in L


I argue that either of these queries is better than either of the previous solutions because it comes closest to the original description and makes the fewest assumptions about the nature of the collections or the arithmetic.
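
For comparison, and very much as a sketch rather than the Star notation itself, the nearest "just say so" phrasing in Java's standard library is probably something like the following (again assuming a List<Integer> called numbers):

import java.util.List;

public class TotalQuery {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5);   // assumed sample data

        // Ask for the sum directly: no loop, no accumulator, no fold machinery in view.
        int total = numbers.stream().mapToInt(Integer::intValue).sum();

        System.out.println(total);   // prints 15
    }
}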

The query is also much closer to a problem-oriented way of thinking about the world. I would argue that more people, especially non-programmers, would be able to follow and even write such a query than either of the earlier formulations.

Why is that?

The Homunculus


Traditional programming is often taught in terms of programs being sequences of steps that must be followed. What does that imply for the programmer? It means that the programmer has to be able to imagine what it is like to be a computer following instructions.

It is like imagining a little person, a homunculus, in the machine that is listening to your instructions and following them literally. You, the programmer, have to imagine yourself in the position of the homunculus if you want to write effective programs.

Not everyone finds such feats of imagination easy. It is certainly often tedious to do so.
