About the right tools for the job

Some time ago I was involved in a running debate about whether we should be using Ruby on Rails rather than the Java stack (junkyard?) that we were using. At the time, I did not really participate in the discussion except to note that everything seemed to be at least five times too difficult. I had a strong intuition that the sheer number of moving parts was the problem; the application itself was not really that hard. My assertions really ticked off some of my colleagues, for which I apologize; sort of.

I guess that I come from a tradition of high-level programming languages; by high level, I mean that I would consider LISP to be a medium-level language, and Prolog slightly better. It is a pretty common theme of my career that I end up having to defend the position of using high-level tools. I have heard a number of arguments against them, ranging from "it will not be efficient enough" to "how do you expect to find enough XX programmers?". I used to try to answer these questions, because I thought they were raised in good faith. Most of them, with the possible exception of the last, have been made all but moot by progress in silicon and compiler technology.

Anyway, afterwards, I decided to take a more serious look at RoR. I picked up a book on it and followed along. At the end of three days, I had managed to replicate perhaps 60-70% of the functionality of the site I had been working on, and I became furious.

If we had used RoR at the beginning, I began to think, it is entirely possible that I would still have a share in a company that was going to go places. Not that RoR is perfect; far from it. For example, when something goes wrong with your Ruby program, a neophyte has very little support. And Ruby is a pretty weird language. But to replicate 60% of an application that had taken five man-years of developer effort in three days really pissed me off.
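Part of that speed is Ruby's concision. As a minimal sketch in plain Ruby (no Rails; the data and names here are invented for illustration), here is the kind of one-liner aggregation that would have taken a page of boilerplate in the Java of that era:

```ruby
# Hypothetical order records, purely for illustration.
orders = [
  { customer: "alice", total: 30 },
  { customer: "bob",   total: 20 },
  { customer: "alice", total: 50 },
]

# Sum the order totals per customer in one expression:
# a default-zero hash accumulates each customer's running total.
totals = orders.each_with_object(Hash.new(0)) do |order, sums|
  sums[order[:customer]] += order[:total]
end

puts totals.inspect
```

In Java 1.4-vintage code, the same job means declaring a map, testing for missing keys, boxing and unboxing integers, and so on; in Ruby it is three lines. That, multiplied across an entire web application, is where the days-versus-man-years gap comes from.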

You see, one key reason that everything fell apart was that we had a competitor; a competitor who got out into the market before we did. It is hard to be sure, but we did not even get the feeling that they had started before us. What they did do was use a technology that was much easier to get going with (PHP). So maybe PHP does not scale; but so what? The first to market can gain enough time to re-implement, should the idea prove sufficiently interesting.

So, the next time someone says that they can't find programmers, or offers some other reason for not using advanced techniques, my response is likely to be more robust. If we need to train people, then so be it. Using technology that lets you get going quickly can make the difference between life and death for a startup.

I may even start pushing some of the languages that I have been involved in developing...
